Dataset schema (field name, type, observed range):

  id              string   length 9 to 10
  submitter       string   length 1 to 64
  authors         string   length 4 to 20.7k
  title           string   length 4 to 246
  comments        string   length 1 to 523
  journal-ref     string   length 4 to 404
  doi             string   length 11 to 153
  report-no       string   length 2 to 254
  categories      string   length 5 to 98
  license         string   9 distinct values
  orig_abstract   string   length 14 to 3.35k
  versions        list     1 to 60 entries
  update_date     string   length 10 (fixed)
  authors_parsed  list     1 to 1.35k entries
  abstract        string   length 11 to 3.34k

Each record below gives its field values in this order, one field per line.
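To make the schema concrete, the sketch below shows one way to load and inspect a dataset with these columns using the Hugging Face `datasets` library; the repository path `user/arxiv-metadata` is a placeholder, not the actual dataset name.

```python
from datasets import load_dataset

# "user/arxiv-metadata" is a placeholder path; substitute the real dataset repository.
ds = load_dataset("user/arxiv-metadata", split="train")

print(ds.column_names)           # id, submitter, authors, ..., orig_abstract, ..., abstract
record = ds[0]                   # a single row, returned as a dict of field values
print(record["id"], "-", record["title"])
print(record["abstract"][:200])  # first 200 characters of the cleaned abstract
```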
2212.04690
Mingu Kang
Mingu Kang, Heon Song, Seonwook Park, Donggeun Yoo, S\'ergio Pereira
Benchmarking Self-Supervised Learning on Diverse Pathology Datasets
Accepted to CVPR 2023
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational pathology can lead to saving human lives, but models are annotation hungry and pathology images are notoriously expensive to annotate. Self-supervised learning (SSL) has been shown to be an effective method for utilizing unlabeled data, and its application to pathology could greatly benefit downstream tasks. Yet, there are no principled studies that compare SSL methods and discuss how to adapt them for pathology. To address this need, we execute the largest-scale study of SSL pre-training on pathology image data to date. Our study is conducted using 4 representative SSL methods on diverse downstream tasks. We establish that large-scale domain-aligned pre-training in pathology consistently outperforms ImageNet pre-training in standard SSL settings such as linear and fine-tuning evaluations, as well as in low-label regimes. Moreover, we propose a set of domain-specific techniques that we experimentally show lead to a performance boost. Lastly, for the first time, we apply SSL to the challenging task of nuclei instance segmentation and show large and consistent performance improvements under diverse settings.
[ { "created": "Fri, 9 Dec 2022 06:38:34 GMT", "version": "v1" }, { "created": "Tue, 18 Apr 2023 15:07:46 GMT", "version": "v2" } ]
2023-04-19
[ [ "Kang", "Mingu", "" ], [ "Song", "Heon", "" ], [ "Park", "Seonwook", "" ], [ "Yoo", "Donggeun", "" ], [ "Pereira", "Sérgio", "" ] ]
Computational pathology can lead to saving human lives, but models are annotation hungry and pathology images are notoriously expensive to annotate. Self-supervised learning (SSL) has been shown to be an effective method for utilizing unlabeled data, and its application to pathology could greatly benefit downstream tasks. Yet, there are no principled studies that compare SSL methods and discuss how to adapt them for pathology. To address this need, we execute the largest-scale study of SSL pre-training on pathology image data to date. Our study is conducted using 4 representative SSL methods on diverse downstream tasks. We establish that large-scale domain-aligned pre-training in pathology consistently outperforms ImageNet pre-training in standard SSL settings such as linear and fine-tuning evaluations, as well as in low-label regimes. Moreover, we propose a set of domain-specific techniques that we experimentally show lead to a performance boost. Lastly, for the first time, we apply SSL to the challenging task of nuclei instance segmentation and show large and consistent performance improvements under diverse settings.
2206.07080
Carl Corea
Carl Corea, John Grant, Matthias Thimm
Measuring Inconsistency in Declarative Process Specifications
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
We address the problem of measuring inconsistency in declarative process specifications, with an emphasis on linear temporal logic on fixed traces (LTLff). As we will show, existing inconsistency measures for classical logic cannot provide a meaningful assessment of inconsistency in LTL in general, as they cannot adequately handle the temporal operators. We therefore propose a novel paraconsistent semantics as a framework for inconsistency measurement. We then present two new inconsistency measures based on these semantics and show that they satisfy important desirable properties. We show how these measures can be applied to declarative process models and investigate the computational complexity of the introduced approach.
[ { "created": "Tue, 14 Jun 2022 18:08:49 GMT", "version": "v1" } ]
2022-06-16
[ [ "Corea", "Carl", "" ], [ "Grant", "John", "" ], [ "Thimm", "Matthias", "" ] ]
We address the problem of measuring inconsistency in declarative process specifications, with an emphasis on linear temporal logic on fixed traces (LTLff). As we will show, existing inconsistency measures for classical logic cannot provide a meaningful assessment of inconsistency in LTL in general, as they cannot adequately handle the temporal operators. We therefore propose a novel paraconsistent semantics as a framework for inconsistency measurement. We then present two new inconsistency measures based on these semantics and show that they satisfy important desirable properties. We show how these measures can be applied to declarative process models and investigate the computational complexity of the introduced approach.
2206.00113
Daniel Hernandez Dr
Daniel Hernandez, Hendrik Baier, Michael Kaisers
BRExIt: On Opponent Modelling in Expert Iteration
null
null
null
null
cs.AI cs.GT
http://creativecommons.org/publicdomain/zero/1.0/
Finding a best response policy is a central objective in game theory and multi-agent learning, with modern population-based training approaches employing reinforcement learning algorithms as best-response oracles to improve play against candidate opponents (typically previously learnt policies). We propose Best Response Expert Iteration (BRExIt), which accelerates learning in games by incorporating opponent models into the state-of-the-art learning algorithm Expert Iteration (ExIt). BRExIt aims to (1) improve feature shaping in the apprentice, with a policy head predicting opponent policies as an auxiliary task, and (2) bias opponent moves in planning towards the given or learnt opponent model, to generate apprentice targets that better approximate a best response. In an empirical ablation on BRExIt's algorithmic variants against a set of fixed test agents, we provide statistical evidence that BRExIt learns better performing policies than ExIt.
[ { "created": "Tue, 31 May 2022 20:49:10 GMT", "version": "v1" }, { "created": "Tue, 25 Apr 2023 15:31:34 GMT", "version": "v2" } ]
2023-04-26
[ [ "Hernandez", "Daniel", "" ], [ "Baier", "Hendrik", "" ], [ "Kaisers", "Michael", "" ] ]
Finding a best response policy is a central objective in game theory and multi-agent learning, with modern population-based training approaches employing reinforcement learning algorithms as best-response oracles to improve play against candidate opponents (typically previously learnt policies). We propose Best Response Expert Iteration (BRExIt), which accelerates learning in games by incorporating opponent models into the state-of-the-art learning algorithm Expert Iteration (ExIt). BRExIt aims to (1) improve feature shaping in the apprentice, with a policy head predicting opponent policies as an auxiliary task, and (2) bias opponent moves in planning towards the given or learnt opponent model, to generate apprentice targets that better approximate a best response. In an empirical ablation on BRExIt's algorithmic variants against a set of fixed test agents, we provide statistical evidence that BRExIt learns better performing policies than ExIt.
1006.5299
Yao Sun
Yao Sun and Dingkang Wang
The F5 Algorithm in Buchberger's Style
null
null
null
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The famous F5 algorithm for computing Gr\"obner bases was presented by Faug\`ere in 2002. The original version of F5 is given in programming code, so it is somewhat difficult to understand. In this paper, the F5 algorithm is simplified as F5B in Buchberger's style, so that it is easy to understand and implement. In order to describe F5B, we introduce F5-reduction, which keeps the signature of labeled polynomials unchanged after reduction. The equivalence between F5 and F5B is also shown. Finally, some versions of the F5 algorithm are illustrated.
[ { "created": "Mon, 28 Jun 2010 09:23:32 GMT", "version": "v1" }, { "created": "Wed, 29 Dec 2010 01:47:14 GMT", "version": "v2" } ]
2010-12-30
[ [ "Sun", "Yao", "" ], [ "Wang", "Dingkang", "" ] ]
The famous F5 algorithm for computing Gr\"obner bases was presented by Faug\`ere in 2002. The original version of F5 is given in programming code, so it is somewhat difficult to understand. In this paper, the F5 algorithm is simplified as F5B in Buchberger's style, so that it is easy to understand and implement. In order to describe F5B, we introduce F5-reduction, which keeps the signature of labeled polynomials unchanged after reduction. The equivalence between F5 and F5B is also shown. Finally, some versions of the F5 algorithm are illustrated.
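As a point of reference for the object these algorithms compute (this is not an implementation of F5 or F5B), a Gröbner basis can be obtained with an off-the-shelf routine; the sketch below uses SymPy's `groebner` function.

```python
from sympy import groebner, symbols

x, y = symbols('x y')
# Reduced Groebner basis of the ideal <x**2 + y**2 - 1, x - y> under lexicographic order (x > y).
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
print(list(G))  # expected reduced basis: [x - y, y**2 - 1/2]
```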
1901.02720
Rasmus Vestergaard
Rasmus Vestergaard, Qi Zhang, Daniel E. Lucani
Generalized Deduplication: Bounds, Convergence, and Asymptotic Properties
15 pages, 4 figures. This is the full version of a paper accepted for GLOBECOM 2019
null
10.1109/GLOBECOM38437.2019.9014012
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a generalization of deduplication, which enables lossless deduplication of highly similar data and show that standard deduplication with fixed chunk length is a special case. We provide bounds on the expected length of coded sequences for generalized deduplication and show that the coding has asymptotic near-entropy cost under the proposed source model. More importantly, we show that generalized deduplication allows for multiple orders of magnitude faster convergence than standard deduplication. This means that generalized deduplication can provide compression benefits much earlier than standard deduplication, which is key in practical systems. Numerical examples demonstrate our results, showing that our lower bounds are achievable, and illustrating the potential gain of using the generalization over standard deduplication. In fact, we show that even for a simple case of generalized deduplication, the gain in convergence speed is linear with the size of the data chunks.
[ { "created": "Wed, 9 Jan 2019 13:17:06 GMT", "version": "v1" }, { "created": "Thu, 17 Jan 2019 10:14:50 GMT", "version": "v2" }, { "created": "Mon, 29 Apr 2019 06:23:02 GMT", "version": "v3" }, { "created": "Wed, 7 Aug 2019 09:44:47 GMT", "version": "v4" } ]
2020-03-04
[ [ "Vestergaard", "Rasmus", "" ], [ "Zhang", "Qi", "" ], [ "Lucani", "Daniel E.", "" ] ]
We study a generalization of deduplication, which enables lossless deduplication of highly similar data and show that standard deduplication with fixed chunk length is a special case. We provide bounds on the expected length of coded sequences for generalized deduplication and show that the coding has asymptotic near-entropy cost under the proposed source model. More importantly, we show that generalized deduplication allows for multiple orders of magnitude faster convergence than standard deduplication. This means that generalized deduplication can provide compression benefits much earlier than standard deduplication, which is key in practical systems. Numerical examples demonstrate our results, showing that our lower bounds are achievable, and illustrating the potential gain of using the generalization over standard deduplication. In fact, we show that even for a simple case of generalized deduplication, the gain in convergence speed is linear with the size of the data chunks.
2101.07541
Malisa Vucinic
Jelena Kova\v{c} (UCG), Jovan Crnogorac (UCG), Enis Ko\v{c}an (UCG), Malisa Vucinic (EVA)
Sniffing Multi-hop Multi-channel Wireless Sensor Networks
2020 28th Telecommunications Forum (TELFOR), Nov 2020, Belgrade, Serbia
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As wireless sensor networks grow larger and more complex and their role becomes more significant, it becomes necessary to have insight into the network traffic. For this purpose, sniffers play an irreplaceable role. Since a sniffer is a device of limited range, covering a multi-hop network requires the deployment of multiple sniffers. This motivates research on the optimal number and placement of sniffers in the network. We present a solution based on a minimal dominating set from graph theory. We evaluate the proposed solution and implement it as an extension of the 6TiSCH simulator. Our evaluation assumes a 50-node scenario, deployed in a 2x2 km outdoor area, with 10% packet drops over all channels, when 10 sniffers are used.
[ { "created": "Tue, 19 Jan 2021 10:02:11 GMT", "version": "v1" } ]
2021-01-20
[ [ "Kovač", "Jelena", "", "UCG" ], [ "Crnogorac", "Jovan", "", "UCG" ], [ "Kočan", "Enis", "", "UCG" ], [ "Vucinic", "Malisa", "", "EVA" ] ]
As wireless sensor networks grow larger and more complex and their role becomes more significant, it becomes necessary to have insight into the network traffic. For this purpose, sniffers play an irreplaceable role. Since a sniffer is a device of limited range, covering a multi-hop network requires the deployment of multiple sniffers. This motivates research on the optimal number and placement of sniffers in the network. We present a solution based on a minimal dominating set from graph theory. We evaluate the proposed solution and implement it as an extension of the 6TiSCH simulator. Our evaluation assumes a 50-node scenario, deployed in a 2x2 km outdoor area, with 10% packet drops over all channels, when 10 sniffers are used.
2108.13927
Michiel de Bondt
Stijn Cambie, Michiel de Bondt, and Henk Don
Extremal Binary PFAs with Small Number of States
Even more extended than the IJFCS publication referenced below, which is an extended version of a publication in the proceedings of DLT 2021 titled 'Extremal Binary PFAs in a Cerny Family'
International Journal of Foundations of Computer Science, Vol. 34, No. 02n03, pp. 85-115 (2023)
10.1142/S0129054122440038
null
cs.FL
http://creativecommons.org/licenses/by/4.0/
The largest known reset thresholds for DFAs are equal to $(n-1)^2$, where $n$ is the number of states. This is conjectured to be the maximum possible. PFAs (with partial transition function) can have exponentially large reset thresholds. This is still true if we restrict to binary PFAs. However, asymptotics do not give conclusions for fixed $n$. We prove that the maximal reset threshold for binary PFAs is strictly greater than $(n-1)^2$ if and only if $n\geq 6$. These results are mostly based on the analysis of synchronizing word lengths for a certain family of binary PFAs. This family has the following properties: it contains the well-known \v{C}ern\'y automata; for $n\leq 10$ it contains a binary PFA with maximal possible reset threshold; for all $n\geq 6$ it contains a PFA with reset threshold larger than the maximum known for DFAs. Analysis of this family reveals remarkable patterns involving the Fibonacci numbers and related sequences such as the Padovan sequence. We derive explicit formulas for the reset thresholds in terms of these recurrent sequences. Asymptotically the \v{C}ern\'y family gives reset thresholds of polynomial order. We prove that PFAs in the family are not extremal for $n\geq 41$. For that purpose, we present an improvement of Martyugin's prime number construction of binary PFAs.
[ { "created": "Wed, 25 Aug 2021 07:37:51 GMT", "version": "v1" }, { "created": "Thu, 6 Jan 2022 08:53:55 GMT", "version": "v2" }, { "created": "Sat, 15 Apr 2023 17:31:21 GMT", "version": "v3" } ]
2023-04-18
[ [ "Cambie", "Stijn", "" ], [ "de Bondt", "Michiel", "" ], [ "Don", "Henk", "" ] ]
The largest known reset thresholds for DFAs are equal to $(n-1)^2$, where $n$ is the number of states. This is conjectured to be the maximum possible. PFAs (with partial transition function) can have exponentially large reset thresholds. This is still true if we restrict to binary PFAs. However, asymptotics do not give conclusions for fixed $n$. We prove that the maximal reset threshold for binary PFAs is strictly greater than $(n-1)^2$ if and only if $n\geq 6$. These results are mostly based on the analysis of synchronizing word lengths for a certain family of binary PFAs. This family has the following properties: it contains the well-known \v{C}ern\'y automata; for $n\leq 10$ it contains a binary PFA with maximal possible reset threshold; for all $n\geq 6$ it contains a PFA with reset threshold larger than the maximum known for DFAs. Analysis of this family reveals remarkable patterns involving the Fibonacci numbers and related sequences such as the Padovan sequence. We derive explicit formulas for the reset thresholds in terms of these recurrent sequences. Asymptotically the \v{C}ern\'y family gives reset thresholds of polynomial order. We prove that PFAs in the family are not extremal for $n\geq 41$. For that purpose, we present an improvement of Martyugin's prime number construction of binary PFAs.
cs/0306106
Joseph Y. Halpern
Joseph Y. Halpern
Lexicographic probability, conditional probability, and nonstandard probability
A preliminary version appears in Proceedings of the Eighth Conference on Theoretical Aspects of Rationality and Knowledge, 2001, pp. 17--30. The final version will appear in Games and Economic Behavior
null
null
null
cs.GT cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The relationship between Popper spaces (conditional probability spaces that satisfy some regularity conditions), lexicographic probability systems (LPS's), and nonstandard probability spaces (NPS's) is considered. If countable additivity is assumed, Popper spaces and a subclass of LPS's are equivalent; without the assumption of countable additivity, the equivalence no longer holds. If the state space is finite, LPS's are equivalent to NPS's. However, if the state space is infinite, NPS's are shown to be more general than LPS's.
[ { "created": "Tue, 17 Jun 2003 22:11:36 GMT", "version": "v1" }, { "created": "Wed, 22 Apr 2009 11:32:53 GMT", "version": "v2" } ]
2009-04-22
[ [ "Halpern", "Joseph Y.", "" ] ]
The relationship between Popper spaces (conditional probability spaces that satisfy some regularity conditions), lexicographic probability systems (LPS's), and nonstandard probability spaces (NPS's) is considered. If countable additivity is assumed, Popper spaces and a subclass of LPS's are equivalent; without the assumption of countable additivity, the equivalence no longer holds. If the state space is finite, LPS's are equivalent to NPS's. However, if the state space is infinite, NPS's are shown to be more general than LPS's.
cs/0504105
G Gordon Worley III
G Gordon Worley III
Wikis in Tuple Spaces
To appear at WMSCI 2005
null
null
null
cs.DC cs.MM
null
We consider storing the pages of a wiki in a tuple space and the effects this might have on the wiki experience. In particular, wiki pages are stored in tuples with a few identifying values such as title, author, revision date, content, etc., and pages are retrieved by sending templates to the tuple space, such as one that gives the title but nothing else, leaving the tuple space to resolve to a single tuple. We use a tuple space wiki to avoid deadlocks, infinite loops, and wasted effort when page edit contention arises, and examine how a tuple space wiki changes the wiki experience.
[ { "created": "Wed, 27 Apr 2005 23:04:35 GMT", "version": "v1" } ]
2007-05-23
[ [ "Worley", "G Gordon", "III" ] ]
We consider storing the pages of a wiki in a tuple space and the effects this might have on the wiki experience. In particular, wiki pages are stored in tuples with a few identifying values such as title, author, revision date, content, etc., and pages are retrieved by sending templates to the tuple space, such as one that gives the title but nothing else, leaving the tuple space to resolve to a single tuple. We use a tuple space wiki to avoid deadlocks, infinite loops, and wasted effort when page edit contention arises, and examine how a tuple space wiki changes the wiki experience.
2208.10322
Shanshan Zhong
Shanshan Zhong, Wushao Wen, Jinghui Qin
Mix-Pooling Strategy for Attention Mechanism
Work in progress
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, many effective attention modules have been proposed to boost model performance by exploiting the internal information of convolutional neural networks in computer vision. In general, many previous works ignore the design of the pooling strategy of the attention mechanism, since they take global average pooling for granted, which hinders further improvement of the performance of the attention mechanism. However, we empirically find and verify that a simple linear combination of global max-pooling and global min-pooling can produce pooling strategies that match or exceed the performance of global average pooling. Based on this empirical observation, we propose a simple yet effective attention module, SPEM, which adopts a self-adaptive pooling strategy based on global max-pooling and global min-pooling and a lightweight module for producing the attention map. The effectiveness of SPEM is demonstrated by extensive experiments on widely used benchmark datasets and popular attention networks.
[ { "created": "Mon, 22 Aug 2022 14:01:42 GMT", "version": "v1" }, { "created": "Sun, 23 Oct 2022 03:01:03 GMT", "version": "v2" } ]
2022-10-25
[ [ "Zhong", "Shanshan", "" ], [ "Wen", "Wushao", "" ], [ "Qin", "Jinghui", "" ] ]
Recently, many effective attention modules have been proposed to boost model performance by exploiting the internal information of convolutional neural networks in computer vision. In general, many previous works ignore the design of the pooling strategy of the attention mechanism, since they take global average pooling for granted, which hinders further improvement of the performance of the attention mechanism. However, we empirically find and verify that a simple linear combination of global max-pooling and global min-pooling can produce pooling strategies that match or exceed the performance of global average pooling. Based on this empirical observation, we propose a simple yet effective attention module, SPEM, which adopts a self-adaptive pooling strategy based on global max-pooling and global min-pooling and a lightweight module for producing the attention map. The effectiveness of SPEM is demonstrated by extensive experiments on widely used benchmark datasets and popular attention networks.
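The following is a minimal sketch of a channel-attention gate that mixes global max- and min-pooling with a learnable coefficient, in the spirit of the abstract above; the module name, the single mixing weight, and the bottleneck design are illustrative assumptions, not the authors' SPEM implementation.

```python
import torch
import torch.nn as nn

class MixPoolAttention(nn.Module):
    """Channel attention from a learnable mix of global max- and min-pooling (illustrative sketch)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learnable mixing weight between max and min pooling
        self.fc = nn.Sequential(                      # lightweight module producing the attention map
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gmax = x.amax(dim=(2, 3), keepdim=True)       # global max-pooling
        gmin = x.amin(dim=(2, 3), keepdim=True)       # global min-pooling
        pooled = self.alpha * gmax + (1.0 - self.alpha) * gmin
        attn = torch.sigmoid(self.fc(pooled))         # per-channel attention map in (0, 1)
        return x * attn
```

For example, `MixPoolAttention(64)(torch.randn(2, 64, 32, 32))` returns a re-weighted feature map of the same shape.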
2406.01439
Bart Cox
Yuncong Zuo, Bart Cox, Lydia Y. Chen, J\'er\'emie Decouchant
Asynchronous Multi-Server Federated Learning for Geo-Distributed Clients
null
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by-sa/4.0/
Federated learning (FL) systems enable multiple clients to train a machine learning model iteratively through synchronously exchanging the intermediate model weights with a single server. The scalability of such FL systems can be limited by two factors: server idle time due to synchronous communication and the risk of a single server becoming the bottleneck. In this paper, we propose a new FL architecture, to our knowledge, the first multi-server FL system that is entirely asynchronous, and therefore addresses these two limitations simultaneously. Our solution keeps both servers and clients continuously active. As in previous multi-server methods, clients interact solely with their nearest server, ensuring efficient update integration into the model. Differently, however, servers also periodically update each other asynchronously, and never postpone interactions with clients. We compare our solution to three representative baselines - FedAvg, FedAsync and HierFAVG - on the MNIST and CIFAR-10 image classification datasets and on the WikiText-2 language modeling dataset. Our solution converges to similar or higher accuracy levels than previous baselines and requires 61% less time to do so in geo-distributed settings.
[ { "created": "Mon, 3 Jun 2024 15:29:46 GMT", "version": "v1" }, { "created": "Thu, 20 Jun 2024 12:04:28 GMT", "version": "v2" } ]
2024-06-21
[ [ "Zuo", "Yuncong", "" ], [ "Cox", "Bart", "" ], [ "Chen", "Lydia Y.", "" ], [ "Decouchant", "Jérémie", "" ] ]
Federated learning (FL) systems enable multiple clients to train a machine learning model iteratively through synchronously exchanging the intermediate model weights with a single server. The scalability of such FL systems can be limited by two factors: server idle time due to synchronous communication and the risk of a single server becoming the bottleneck. In this paper, we propose a new FL architecture, to our knowledge, the first multi-server FL system that is entirely asynchronous, and therefore addresses these two limitations simultaneously. Our solution keeps both servers and clients continuously active. As in previous multi-server methods, clients interact solely with their nearest server, ensuring efficient update integration into the model. Differently, however, servers also periodically update each other asynchronously, and never postpone interactions with clients. We compare our solution to three representative baselines - FedAvg, FedAsync and HierFAVG - on the MNIST and CIFAR-10 image classification datasets and on the WikiText-2 language modeling dataset. Our solution converges to similar or higher accuracy levels than previous baselines and requires 61% less time to do so in geo-distributed settings.
2001.00594
Changwei Hu
Yao Zhan, Changwei Hu, Yifan Hu, Tejaswi Kasturi, Shanmugam Ramasamy, Matt Gillingham, Keith Yamamoto
Large-scale Gender/Age Prediction of Tumblr Users
null
IEEE ICMLA 2019
null
null
cs.LG cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tumblr, as a leading content provider and social media platform, attracts 371 million monthly visits, 280 million blogs and 53.3 million daily posts. The popularity of Tumblr provides great opportunities for advertisers to promote their products through sponsored posts. However, it is a challenging task to target specific demographic groups for ads, since Tumblr does not require user information such as gender and age during registration. Hence, to promote ad targeting, it is essential to predict users' demographics using rich content such as posts, images and social connections. In this paper, we propose graph-based and deep learning models for age and gender prediction, which take into account user activities and content features. For the graph-based models, we develop two approaches, network embedding and label propagation, to generate connection features as well as directly infer users' demographics. For the deep learning models, we leverage convolutional neural networks (CNN) and multilayer perceptrons (MLP) to predict users' age and gender. Experimental results on a real Tumblr daily dataset, with hundreds of millions of active users and billions of following relations, demonstrate that our approaches significantly outperform the baseline model, improving the accuracy relatively by 81% for age, and the AUC and accuracy by 5% for gender.
[ { "created": "Thu, 2 Jan 2020 19:01:45 GMT", "version": "v1" } ]
2020-01-06
[ [ "Zhan", "Yao", "" ], [ "Hu", "Changwei", "" ], [ "Hu", "Yifan", "" ], [ "Kasturi", "Tejaswi", "" ], [ "Ramasamy", "Shanmugam", "" ], [ "Gillingham", "Matt", "" ], [ "Yamamoto", "Keith", "" ] ]
Tumblr, as a leading content provider and social media platform, attracts 371 million monthly visits, 280 million blogs and 53.3 million daily posts. The popularity of Tumblr provides great opportunities for advertisers to promote their products through sponsored posts. However, it is a challenging task to target specific demographic groups for ads, since Tumblr does not require user information such as gender and age during registration. Hence, to promote ad targeting, it is essential to predict users' demographics using rich content such as posts, images and social connections. In this paper, we propose graph-based and deep learning models for age and gender prediction, which take into account user activities and content features. For the graph-based models, we develop two approaches, network embedding and label propagation, to generate connection features as well as directly infer users' demographics. For the deep learning models, we leverage convolutional neural networks (CNN) and multilayer perceptrons (MLP) to predict users' age and gender. Experimental results on a real Tumblr daily dataset, with hundreds of millions of active users and billions of following relations, demonstrate that our approaches significantly outperform the baseline model, improving the accuracy relatively by 81% for age, and the AUC and accuracy by 5% for gender.
1701.08256
Philipp Kemkes
Nattiya Kanhabua, Philipp Kemkes, Wolfgang Nejdl, Tu Ngoc Nguyen, Felipe Reis, Nam Khanh Tran
How to Search the Internet Archive Without Indexing It
null
20th International Conference on Theory and Practice of Digital Libraries, TPDL 2016, Proceedings, pp 147-160
10.1007/978-3-319-43997-6_12
null
cs.DL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Significant parts of our cultural heritage have been produced on the web during the last decades. While easy access to the current web is a good baseline, optimal access to the past web faces several challenges. These include dealing with large-scale web archive collections and the lack of usage logs that contain the implicit human feedback most relevant for today's web search. In this paper, we propose an entity-oriented search system to support retrieval and analytics on the Internet Archive. We use Bing to retrieve a ranked list of results from the current web. In addition, we link retrieved results to the Wayback Machine, thus allowing keyword search on the Internet Archive without processing and indexing its raw archived content. Our search system complements existing web archive search tools through a user-friendly interface, which comes close to the functionality of modern web search engines (e.g., keyword search, query auto-completion and related query suggestions), and provides the great benefit of taking user feedback on the current web into account also for web archive search. Through extensive experiments, we conduct quantitative and qualitative analyses in order to provide insights that enable further research on, and practical applications of, web archives.
[ { "created": "Sat, 28 Jan 2017 05:46:46 GMT", "version": "v1" } ]
2017-01-31
[ [ "Kanhabua", "Nattiya", "" ], [ "Kemkes", "Philipp", "" ], [ "Nejdl", "Wolfgang", "" ], [ "Nguyen", "Tu Ngoc", "" ], [ "Reis", "Felipe", "" ], [ "Tran", "Nam Khanh", "" ] ]
Significant parts of our cultural heritage have been produced on the web during the last decades. While easy access to the current web is a good baseline, optimal access to the past web faces several challenges. These include dealing with large-scale web archive collections and the lack of usage logs that contain the implicit human feedback most relevant for today's web search. In this paper, we propose an entity-oriented search system to support retrieval and analytics on the Internet Archive. We use Bing to retrieve a ranked list of results from the current web. In addition, we link retrieved results to the Wayback Machine, thus allowing keyword search on the Internet Archive without processing and indexing its raw archived content. Our search system complements existing web archive search tools through a user-friendly interface, which comes close to the functionality of modern web search engines (e.g., keyword search, query auto-completion and related query suggestions), and provides the great benefit of taking user feedback on the current web into account also for web archive search. Through extensive experiments, we conduct quantitative and qualitative analyses in order to provide insights that enable further research on, and practical applications of, web archives.
1612.04790
Vishnu Narayan
Vishnu V. Narayan
A 17/12-Approximation Algorithm for 2-Vertex-Connected Spanning Subgraphs on Graphs with Minimum Degree At Least 3
Revised Lemma 1 and Theorem 2, results unchanged
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We obtain a polynomial-time 17/12-approximation algorithm for the minimum-cost 2-vertex-connected spanning subgraph problem, restricted to graphs of minimum degree at least 3. Our algorithm uses the framework of ear-decompositions for approximating connectivity problems, which was previously used in algorithms for finding the smallest 2-edge-connected spanning subgraph by Cheriyan, Seb\H{o} and Szigeti (SIAM J.Discrete Math. 2001) who gave a 17/12-approximation algorithm for this problem, and by Seb\H{o} and Vygen (Combinatorica 2014), who improved the approximation ratio to 4/3.
[ { "created": "Wed, 14 Dec 2016 20:07:40 GMT", "version": "v1" }, { "created": "Tue, 17 Jan 2017 18:25:21 GMT", "version": "v2" } ]
2017-01-18
[ [ "Narayan", "Vishnu V.", "" ] ]
We obtain a polynomial-time 17/12-approximation algorithm for the minimum-cost 2-vertex-connected spanning subgraph problem, restricted to graphs of minimum degree at least 3. Our algorithm uses the framework of ear-decompositions for approximating connectivity problems, which was previously used in algorithms for finding the smallest 2-edge-connected spanning subgraph by Cheriyan, Seb\H{o} and Szigeti (SIAM J.Discrete Math. 2001) who gave a 17/12-approximation algorithm for this problem, and by Seb\H{o} and Vygen (Combinatorica 2014), who improved the approximation ratio to 4/3.
2401.11430
Zhongqi Yue
Zhongqi Yue, Jiankun Wang, Qianru Sun, Lei Ji, Eric I-Chao Chang, Hanwang Zhang
Exploring Diffusion Time-steps for Unsupervised Representation Learning
Accepted by ICLR 2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Representation learning is all about discovering the hidden modular attributes that generate the data faithfully. We explore the potential of Denoising Diffusion Probabilistic Model (DM) in unsupervised learning of the modular attributes. We build a theoretical framework that connects the diffusion time-steps and the hidden attributes, which serves as an effective inductive bias for unsupervised learning. Specifically, the forward diffusion process incrementally adds Gaussian noise to samples at each time-step, which essentially collapses different samples into similar ones by losing attributes, e.g., fine-grained attributes such as texture are lost with less noise added (i.e., early time-steps), while coarse-grained ones such as shape are lost by adding more noise (i.e., late time-steps). To disentangle the modular attributes, at each time-step t, we learn a t-specific feature to compensate for the newly lost attribute, and the set of all 1,...,t-specific features, corresponding to the cumulative set of lost attributes, are trained to make up for the reconstruction error of a pre-trained DM at time-step t. On CelebA, FFHQ, and Bedroom datasets, the learned feature significantly improves attribute classification and enables faithful counterfactual generation, e.g., interpolating only one specified attribute between two images, validating the disentanglement quality. Codes are in https://github.com/yue-zhongqi/diti.
[ { "created": "Sun, 21 Jan 2024 08:35:25 GMT", "version": "v1" } ]
2024-01-23
[ [ "Yue", "Zhongqi", "" ], [ "Wang", "Jiankun", "" ], [ "Sun", "Qianru", "" ], [ "Ji", "Lei", "" ], [ "Chang", "Eric I-Chao", "" ], [ "Zhang", "Hanwang", "" ] ]
Representation learning is all about discovering the hidden modular attributes that generate the data faithfully. We explore the potential of Denoising Diffusion Probabilistic Model (DM) in unsupervised learning of the modular attributes. We build a theoretical framework that connects the diffusion time-steps and the hidden attributes, which serves as an effective inductive bias for unsupervised learning. Specifically, the forward diffusion process incrementally adds Gaussian noise to samples at each time-step, which essentially collapses different samples into similar ones by losing attributes, e.g., fine-grained attributes such as texture are lost with less noise added (i.e., early time-steps), while coarse-grained ones such as shape are lost by adding more noise (i.e., late time-steps). To disentangle the modular attributes, at each time-step t, we learn a t-specific feature to compensate for the newly lost attribute, and the set of all 1,...,t-specific features, corresponding to the cumulative set of lost attributes, are trained to make up for the reconstruction error of a pre-trained DM at time-step t. On CelebA, FFHQ, and Bedroom datasets, the learned feature significantly improves attribute classification and enables faithful counterfactual generation, e.g., interpolating only one specified attribute between two images, validating the disentanglement quality. Codes are in https://github.com/yue-zhongqi/diti.
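The forward diffusion process mentioned above is the standard DDPM corruption; for reference, with noise schedule $\{\beta_t\}$ and $\bar{\alpha}_t = \prod_{s=1}^{t}(1-\beta_s)$, a noisy sample at time-step $t$ is

$x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon, \quad \epsilon \sim \mathcal{N}(0, I)$,

so a larger $t$ destroys more of the attributes of $x_0$, which is the inductive bias exploited in the abstract above.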
1803.00188
Graham Neubig
Graham Neubig, Matthias Sperber, Xinyi Wang, Matthieu Felix, Austin Matthews, Sarguna Padmanabhan, Ye Qi, Devendra Singh Sachan, Philip Arthur, Pierre Godard, John Hewitt, Rachid Riad, Liming Wang
XNMT: The eXtensible Neural Machine Translation Toolkit
To be presented at AMTA 2018 Open Source Software Showcase
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes XNMT, the eXtensible Neural Machine Translation toolkit. XNMT distinguishes itself from other open-source NMT toolkits by its focus on modular code design, with the purpose of enabling fast iteration in research and replicable, reliable results. In this paper we describe the design of XNMT and its experiment configuration system, and demonstrate its utility on the tasks of machine translation, speech recognition, and multi-tasked machine translation/parsing. XNMT is available open-source at https://github.com/neulab/xnmt
[ { "created": "Thu, 1 Mar 2018 03:14:54 GMT", "version": "v1" } ]
2018-03-02
[ [ "Neubig", "Graham", "" ], [ "Sperber", "Matthias", "" ], [ "Wang", "Xinyi", "" ], [ "Felix", "Matthieu", "" ], [ "Matthews", "Austin", "" ], [ "Padmanabhan", "Sarguna", "" ], [ "Qi", "Ye", "" ], [ "Sachan", "Devendra Singh", "" ], [ "Arthur", "Philip", "" ], [ "Godard", "Pierre", "" ], [ "Hewitt", "John", "" ], [ "Riad", "Rachid", "" ], [ "Wang", "Liming", "" ] ]
This paper describes XNMT, the eXtensible Neural Machine Translation toolkit. XNMT distinguishes itself from other open-source NMT toolkits by its focus on modular code design, with the purpose of enabling fast iteration in research and replicable, reliable results. In this paper we describe the design of XNMT and its experiment configuration system, and demonstrate its utility on the tasks of machine translation, speech recognition, and multi-tasked machine translation/parsing. XNMT is available open-source at https://github.com/neulab/xnmt
2201.04100
Yang Li
Gang Li, Gilles Baechler, Manuel Tragut, Yang Li
Learning to Denoise Raw Mobile UI Layouts for Improving Datasets at Scale
Accepted to ACM CHI 2022
null
null
null
cs.HC cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The layout of a mobile screen is a critical data source for UI design research and semantic understanding of the screen. However, UI layouts in existing datasets are often noisy, have mismatches with their visual representation, or consist of generic or app-specific types that are difficult to analyze and model. In this paper, we propose the CLAY pipeline that uses a deep learning approach for denoising UI layouts, allowing us to automatically improve existing mobile UI layout datasets at scale. Our pipeline takes both the screenshot and the raw UI layout, and annotates the raw layout by removing incorrect nodes and assigning a semantically meaningful type to each node. To experiment with our data-cleaning pipeline, we create the CLAY dataset of 59,555 human-annotated screen layouts, based on screenshots and raw layouts from Rico, a public mobile UI corpus. Our deep models achieve high accuracy, with F1 scores of 82.7% for detecting layout objects that do not have a valid visual representation and 85.9% for recognizing object types, which significantly outperforms a heuristic baseline. Our work lays a foundation for creating large-scale, high-quality UI layout datasets for data-driven mobile UI research and reduces the need for manual labeling efforts that are prohibitively expensive.
[ { "created": "Tue, 11 Jan 2022 17:52:40 GMT", "version": "v1" }, { "created": "Thu, 13 Jan 2022 17:53:31 GMT", "version": "v2" } ]
2022-01-14
[ [ "Li", "Gang", "" ], [ "Baechler", "Gilles", "" ], [ "Tragut", "Manuel", "" ], [ "Li", "Yang", "" ] ]
The layout of a mobile screen is a critical data source for UI design research and semantic understanding of the screen. However, UI layouts in existing datasets are often noisy, have mismatches with their visual representation, or consist of generic or app-specific types that are difficult to analyze and model. In this paper, we propose the CLAY pipeline that uses a deep learning approach for denoising UI layouts, allowing us to automatically improve existing mobile UI layout datasets at scale. Our pipeline takes both the screenshot and the raw UI layout, and annotates the raw layout by removing incorrect nodes and assigning a semantically meaningful type to each node. To experiment with our data-cleaning pipeline, we create the CLAY dataset of 59,555 human-annotated screen layouts, based on screenshots and raw layouts from Rico, a public mobile UI corpus. Our deep models achieve high accuracy, with F1 scores of 82.7% for detecting layout objects that do not have a valid visual representation and 85.9% for recognizing object types, which significantly outperforms a heuristic baseline. Our work lays a foundation for creating large-scale, high-quality UI layout datasets for data-driven mobile UI research and reduces the need for manual labeling efforts that are prohibitively expensive.
2403.19456
Yu Xu
Yu Xu, Fan Tang, Juan Cao, Yuxin Zhang, Oliver Deussen, Weiming Dong, Jintao Li, Tong-Yee Lee
Break-for-Make: Modular Low-Rank Adaptations for Composable Content-Style Customization
null
null
null
null
cs.CV cs.GR cs.MM
http://creativecommons.org/licenses/by/4.0/
Personalized generation paradigms empower designers to customize visual intellectual properties with the help of textual descriptions by tuning or adapting pre-trained text-to-image models on a few images. Recent works explore approaches for concurrently customizing both content and detailed visual style appearance. However, these existing approaches often generate images where the content and style are entangled. In this study, we reconsider the customization of content and style concepts from the perspective of parameter space construction. Unlike existing methods that utilize a shared parameter space for content and style, we propose a learning framework that separates the parameter space to facilitate individual learning of content and style, thereby enabling disentangled content and style. To achieve this goal, we introduce "partly learnable projection" (PLP) matrices to separate the original adapters into divided sub-parameter spaces. We propose "break-for-make" customization learning pipeline based on PLP, which is simple yet effective. We break the original adapters into "up projection" and "down projection", train content and style PLPs individually with the guidance of corresponding textual prompts in the separate adapters, and maintain generalization by employing a multi-correspondence projection learning strategy. Based on the adapters broken apart for separate training content and style, we then make the entity parameter space by reconstructing the content and style PLPs matrices, followed by fine-tuning the combined adapter to generate the target object with the desired appearance. Experiments on various styles, including textures, materials, and artistic style, show that our method outperforms state-of-the-art single/multiple concept learning pipelines in terms of content-style-prompt alignment.
[ { "created": "Thu, 28 Mar 2024 14:27:36 GMT", "version": "v1" }, { "created": "Sun, 31 Mar 2024 13:26:11 GMT", "version": "v2" } ]
2024-04-02
[ [ "Xu", "Yu", "" ], [ "Tang", "Fan", "" ], [ "Cao", "Juan", "" ], [ "Zhang", "Yuxin", "" ], [ "Deussen", "Oliver", "" ], [ "Dong", "Weiming", "" ], [ "Li", "Jintao", "" ], [ "Lee", "Tong-Yee", "" ] ]
Personalized generation paradigms empower designers to customize visual intellectual properties with the help of textual descriptions by tuning or adapting pre-trained text-to-image models on a few images. Recent works explore approaches for concurrently customizing both content and detailed visual style appearance. However, these existing approaches often generate images where the content and style are entangled. In this study, we reconsider the customization of content and style concepts from the perspective of parameter space construction. Unlike existing methods that utilize a shared parameter space for content and style, we propose a learning framework that separates the parameter space to facilitate individual learning of content and style, thereby enabling disentangled content and style. To achieve this goal, we introduce "partly learnable projection" (PLP) matrices to separate the original adapters into divided sub-parameter spaces. We propose "break-for-make" customization learning pipeline based on PLP, which is simple yet effective. We break the original adapters into "up projection" and "down projection", train content and style PLPs individually with the guidance of corresponding textual prompts in the separate adapters, and maintain generalization by employing a multi-correspondence projection learning strategy. Based on the adapters broken apart for separate training content and style, we then make the entity parameter space by reconstructing the content and style PLPs matrices, followed by fine-tuning the combined adapter to generate the target object with the desired appearance. Experiments on various styles, including textures, materials, and artistic style, show that our method outperforms state-of-the-art single/multiple concept learning pipelines in terms of content-style-prompt alignment.
2101.04388
Akshayaa Magesh
Meghana Bande, Akshayaa Magesh, Venugopal V. Veeravalli
Dynamic Spectrum Access using Stochastic Multi-User Bandits
null
null
null
null
cs.IT math.IT stat.ML
http://creativecommons.org/licenses/by/4.0/
A stochastic multi-user multi-armed bandit framework is used to develop algorithms for uncoordinated spectrum access. In contrast to prior work, it is assumed that rewards can be non-zero even under collisions, thus allowing for the number of users to be greater than the number of channels. The proposed algorithm consists of an estimation phase and an allocation phase. It is shown that if every user adopts the algorithm, the system wide regret is order-optimal of order $O(\log T)$ over a time-horizon of duration $T$. The regret guarantees hold for both the cases where the number of users is greater than or less than the number of channels. The algorithm is extended to the dynamic case where the number of users in the system evolves over time, and is shown to lead to sub-linear regret.
[ { "created": "Tue, 12 Jan 2021 10:29:57 GMT", "version": "v1" } ]
2021-01-13
[ [ "Bande", "Meghana", "" ], [ "Magesh", "Akshayaa", "" ], [ "Veeravalli", "Venugopal V.", "" ] ]
A stochastic multi-user multi-armed bandit framework is used to develop algorithms for uncoordinated spectrum access. In contrast to prior work, it is assumed that rewards can be non-zero even under collisions, thus allowing for the number of users to be greater than the number of channels. The proposed algorithm consists of an estimation phase and an allocation phase. It is shown that if every user adopts the algorithm, the system wide regret is order-optimal of order $O(\log T)$ over a time-horizon of duration $T$. The regret guarantees hold for both the cases where the number of users is greater than or less than the number of channels. The algorithm is extended to the dynamic case where the number of users in the system evolves over time, and is shown to lead to sub-linear regret.
2008.06651
Lichang Chen
Lichang Chen, Guosheng Lin, Shijie Wang, Qingyao Wu
Graph Edit Distance Reward: Learning to Edit Scene Graph
14 pages, 6 figures, ECCV camera ready version
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The scene graph, as a vital tool to bridge the gap between the language domain and the image domain, has been widely adopted in cross-modality tasks like VQA. In this paper, we propose a new method to edit the scene graph according to user instructions, which has never been explored. Specifically, in order to learn to edit scene graphs according to the semantics given by texts, we propose a Graph Edit Distance Reward, based on policy gradient and a graph matching algorithm, to optimize a neural symbolic model. In the context of text-editing image retrieval, we validate the effectiveness of our method on the CSS and CRIR datasets. Besides, CRIR is a new synthetic dataset generated by us, which we will publish soon for future use.
[ { "created": "Sat, 15 Aug 2020 04:52:16 GMT", "version": "v1" } ]
2020-08-18
[ [ "Chen", "Lichang", "" ], [ "Lin", "Guosheng", "" ], [ "Wang", "Shijie", "" ], [ "Wu", "Qingyao", "" ] ]
The scene graph, as a vital tool to bridge the gap between the language domain and the image domain, has been widely adopted in cross-modality tasks like VQA. In this paper, we propose a new method to edit the scene graph according to user instructions, which has never been explored. Specifically, in order to learn to edit scene graphs according to the semantics given by texts, we propose a Graph Edit Distance Reward, based on policy gradient and a graph matching algorithm, to optimize a neural symbolic model. In the context of text-editing image retrieval, we validate the effectiveness of our method on the CSS and CRIR datasets. Besides, CRIR is a new synthetic dataset generated by us, which we will publish soon for future use.
2206.08182
Adrian Pfleiderer
Adrian Pfleiderer, Dominik M\"uller, Frank Kramer
Nucleus Segmentation and Analysis in Breast Cancer with the MIScnn Framework
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The NuCLS dataset contains over 220,000 annotations of cell nuclei in breast cancer. We show how to use these data to create a multi-rater model with the MIScnn framework to automate the analysis of cell nuclei. For the model creation, we use the widespread U-Net approach embedded in a pipeline. Besides the high-performance convolutional neural network, this pipeline provides several preprocessing techniques and an extended data exploration. The final model is tested in the evaluation phase using a wide variety of metrics, with a subsequent visualization. Finally, the results are compared with, and interpreted in light of, the results of the NuCLS study. As an outlook, we give indications of what is important for the future development of models in the context of cell nuclei.
[ { "created": "Thu, 16 Jun 2022 13:51:19 GMT", "version": "v1" }, { "created": "Wed, 23 Nov 2022 09:51:07 GMT", "version": "v2" }, { "created": "Wed, 1 Feb 2023 16:53:32 GMT", "version": "v3" } ]
2023-02-02
[ [ "Pfleiderer", "Adrian", "" ], [ "Müller", "Dominik", "" ], [ "Kramer", "Frank", "" ] ]
The NuCLS dataset contains over 220,000 annotations of cell nuclei in breast cancer. We show how to use these data to create a multi-rater model with the MIScnn framework to automate the analysis of cell nuclei. For the model creation, we use the widespread U-Net approach embedded in a pipeline. Besides the high-performance convolutional neural network, this pipeline provides several preprocessing techniques and an extended data exploration. The final model is tested in the evaluation phase using a wide variety of metrics, with a subsequent visualization. Finally, the results are compared with, and interpreted in light of, the results of the NuCLS study. As an outlook, we give indications of what is important for the future development of models in the context of cell nuclei.
2204.07897
Naba Rizvi
Naba Rizvi, Harshini Ramaswamy, Reggie Casanova-Perez, Andrea Hartzler, Nadir Weibel
Making Hidden Bias Visible: Designing a Feedback Ecosystem for Primary Care Providers
6 pages, 2 figures, 2 tables, CHI 2022 Workshop Publication (Complex Health Ecosystems)
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Implicit bias may perpetuate healthcare disparities for marginalized patient populations. Such bias is expressed in communication between patients and their providers. We design an ecosystem with guidance from providers to make this bias explicit in patient-provider communication. Our end users are providers seeking to improve their quality of care for patients who are Black, Indigenous, People of Color (BIPOC) and/or Lesbian, Gay, Bisexual, Transgender, and Queer (LGBTQ). We present wireframes displaying communication metrics that negatively impact patient-centered care divided into the following categories: digital nudge, dashboard, and guided reflection. Our wireframes provide quantitative, real-time, and conversational feedback promoting provider reflection on their interactions with patients. This is the first design iteration toward the development of a tool to raise providers' awareness of their own implicit biases.
[ { "created": "Sun, 17 Apr 2022 01:32:19 GMT", "version": "v1" } ]
2022-04-19
[ [ "Rizvi", "Naba", "" ], [ "Ramaswamy", "Harshini", "" ], [ "Casanova-Perez", "Reggie", "" ], [ "Hartzler", "Andrea", "" ], [ "Weibel", "Nadir", "" ] ]
Implicit bias may perpetuate healthcare disparities for marginalized patient populations. Such bias is expressed in communication between patients and their providers. We design an ecosystem with guidance from providers to make this bias explicit in patient-provider communication. Our end users are providers seeking to improve their quality of care for patients who are Black, Indigenous, People of Color (BIPOC) and/or Lesbian, Gay, Bisexual, Transgender, and Queer (LGBTQ). We present wireframes displaying communication metrics that negatively impact patient-centered care divided into the following categories: digital nudge, dashboard, and guided reflection. Our wireframes provide quantitative, real-time, and conversational feedback promoting provider reflection on their interactions with patients. This is the first design iteration toward the development of a tool to raise providers' awareness of their own implicit biases.
2204.07363
James Caddy
James Caddy, Markus Wagner, Christoph Treude, Earl T. Barr, Miltiadis Allamanis
Is Surprisal in Issue Trackers Actionable?
8 pages, 1 figure. Submitted to 2022 International Conference on Mining Software Repositories Registered Reports track
null
null
null
cs.CL cs.SE
http://creativecommons.org/licenses/by/4.0/
Background. From information theory, surprisal is a measure of how unexpected an event is. Statistical language models provide a probabilistic approximation of natural languages, and because surprisal is constructed from the probability of an event occurring, it is possible to determine the surprisal associated with English sentences. The issues and pull requests of software repository issue trackers give insight into the development process and likely contain the surprising events of this process. Objective. Prior works have identified that unusual events in software repositories are of interest to developers, and use simple code-metrics-based methods for detecting them. In this study we will propose a new method for unusual event detection in software repositories using surprisal. With the ability to find surprising issues and pull requests, we intend to further analyse them to determine whether they actually hold importance in a repository, or whether they pose a significant challenge to address. If it is possible to find bad surprises early, or before they cause additional trouble, it is plausible that effort, cost and time will be saved as a result. Method. After extracting the issues and pull requests from 5000 of the most popular software repositories on GitHub, we will train a language model to represent these issues. We will measure their perceived importance in the repository, measure their resolution difficulty using several analogues, measure the surprisal of each, and finally generate inferential statistics to describe any correlations.
[ { "created": "Fri, 15 Apr 2022 07:49:40 GMT", "version": "v1" } ]
2022-04-18
[ [ "Caddy", "James", "" ], [ "Wagner", "Markus", "" ], [ "Treude", "Christoph", "" ], [ "Barr", "Earl T.", "" ], [ "Allamanis", "Miltiadis", "" ] ]
Background. From information theory, surprisal is a measurement of how unexpected an event is. Statistical language models provide a probabilistic approximation of natural languages, and because surprisal is constructed with the probability of an event occurring, it is therefore possible to determine the surprisal associated with English sentences. The issues and pull requests of software repository issue trackers give insight into the development process and likely contain the surprising events of this process. Objective. Prior works have identified that unusual events in software repositories are of interest to developers, and use simple code metrics-based methods for detecting them. In this study we will propose a new method for unusual event detection in software repositories using surprisal. With the ability to find surprising issues and pull requests, we intend to further analyse them to determine if they actually hold importance in a repository, or if they pose a significant challenge to address. If it is possible to find bad surprises early, or before they cause additional troubles, it is plausible that effort, cost and time will be saved as a result. Method. After extracting the issues and pull requests from 5000 of the most popular software repositories on GitHub, we will train a language model to represent these issues. We will measure their perceived importance in the repository, measure their resolution difficulty using several analogues, measure the surprisal of each, and finally generate inferential statistics to describe any correlations.
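A note on the computation described above: the surprisal of a sentence under a language model is simply the summed negative log-probability of its tokens. The sketch below is a minimal illustration of that definition; `token_probability` is a hypothetical stand-in for whichever language model the study ends up training.

```python
import math

def sentence_surprisal(tokens, token_probability):
    """Total surprisal (in bits) of a token sequence under a language model.

    `token_probability(context, token)` is a hypothetical callback returning
    P(token | context) from whatever model is used (e.g. an n-gram or neural LM).
    """
    total = 0.0
    for i, token in enumerate(tokens):
        p = token_probability(tokens[:i], token)
        total += -math.log2(p)  # rarer tokens contribute more surprisal
    return total

# Toy usage with a uniform model over a 1000-word vocabulary:
uniform = lambda context, token: 1.0 / 1000
print(sentence_surprisal("fix flaky test in ci pipeline".split(), uniform))
```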
0906.5114
Hal Daum\'e III
Hal Daum\'e III
Non-Parametric Bayesian Areal Linguistics
null
Proceedings of the Conference of the North American Association for Computational Linguistics, 2009
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a statistical model over linguistic areas and phylogeny. Our model recovers known areas and identifies a plausible hierarchy of areal features. The use of areas improves genetic reconstruction of languages both qualitatively and quantitatively according to a variety of metrics. We model linguistic areas by a Pitman-Yor process and linguistic phylogeny by Kingman's coalescent.
[ { "created": "Sun, 28 Jun 2009 02:32:53 GMT", "version": "v1" } ]
2009-06-30
[ [ "Daumé", "Hal", "III" ] ]
We describe a statistical model over linguistic areas and phylogeny. Our model recovers known areas and identifies a plausible hierarchy of areal features. The use of areas improves genetic reconstruction of languages both qualitatively and quantitatively according to a variety of metrics. We model linguistic areas by a Pitman-Yor process and linguistic phylogeny by Kingman's coalescent.
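For illustration, the Pitman-Yor process used above to model linguistic areas can be simulated with its generalized Chinese-restaurant seating scheme. The sketch below shows that generic sampling rule only; it is not the authors' inference code, and the discount/strength values are arbitrary.

```python
import random

def pitman_yor_partition(n, discount=0.5, strength=1.0, seed=0):
    """Sample a random partition of n items from a Pitman-Yor process
    via the generalized Chinese restaurant seating scheme."""
    rng = random.Random(seed)
    tables = []  # tables[k] = number of items assigned to cluster k
    for i in range(n):
        r = rng.random() * (strength + i)
        cum = 0.0
        # join existing table k with probability (count_k - discount) / (strength + i)
        for k, count in enumerate(tables):
            cum += count - discount
            if r < cum:
                tables[k] += 1
                break
        else:
            # open a new table with probability (strength + discount * #tables) / (strength + i)
            tables.append(1)
    return tables

print(pitman_yor_partition(100))  # typically a few large clusters and a long tail
```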
1405.0109
Vaibhav Jha
Vaibhav Jha, Mohit Jha, GK Sharma
Estimation of Optimized Energy and Latency Constraints for Task Allocation in 3d Network on Chip
20 pages, 17 figures, International Journal of Computer Science & Information Technology. arXiv admin note: substantial text overlap with arXiv:1404.2512
International Journal of Computer Science & Information Technology, Vol 6, No 2, pp 67-86, April 2014
10.5121/ijcsit.2014.6205
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Network on Chip (NoC) based systems, energy consumption is affected by task scheduling and allocation schemes, which in turn affect system performance. In this paper we test pre-existing algorithms and introduce a new energy-efficient algorithm for 3D NoC architectures. Efficient dynamic and cluster-based approaches are proposed, along with optimization using a bio-inspired algorithm. The proposed algorithm has been implemented and evaluated on randomly generated benchmarks and real-life applications such as MMS, Telecom and VOPD. It has also been tested with the E3S benchmark and compared with the existing mapping algorithms Spiral and Crinkle, showing a larger reduction in communication energy consumption and an improvement in system performance. Experimental analysis of the proposed algorithm shows an average reduction in energy consumption of 49%, a reduction in communication cost of 48%, and a reduction in average latency of 34%. The cluster-based approach is mapped onto the NoC using the Dynamic Diagonal Mapping (DDMap), Crinkle and Spiral algorithms, and DDMap is found to provide improved results. On analysis and comparison of cluster mapping using the DDMap approach, the average energy reduction is 14% and 9% relative to Crinkle and Spiral, respectively.
[ { "created": "Thu, 1 May 2014 07:38:46 GMT", "version": "v1" } ]
2014-05-02
[ [ "Jha", "Vaibhav", "" ], [ "Jha", "Mohit", "" ], [ "Sharma", "GK", "" ] ]
In Network on Chip (NoC) based systems, energy consumption is affected by task scheduling and allocation schemes, which in turn affect system performance. In this paper we test pre-existing algorithms and introduce a new energy-efficient algorithm for 3D NoC architectures. Efficient dynamic and cluster-based approaches are proposed, along with optimization using a bio-inspired algorithm. The proposed algorithm has been implemented and evaluated on randomly generated benchmarks and real-life applications such as MMS, Telecom and VOPD. It has also been tested with the E3S benchmark and compared with the existing mapping algorithms Spiral and Crinkle, showing a larger reduction in communication energy consumption and an improvement in system performance. Experimental analysis of the proposed algorithm shows an average reduction in energy consumption of 49%, a reduction in communication cost of 48%, and a reduction in average latency of 34%. The cluster-based approach is mapped onto the NoC using the Dynamic Diagonal Mapping (DDMap), Crinkle and Spiral algorithms, and DDMap is found to provide improved results. On analysis and comparison of cluster mapping using the DDMap approach, the average energy reduction is 14% and 9% relative to Crinkle and Spiral, respectively.
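As a rough illustration of the cost model behind such mapping comparisons, a task-to-core mapping is commonly scored by weighting each communicating task pair's traffic volume by the hop distance between the assigned cores. The sketch below shows that generic metric for a 3D mesh; the per-hop energy constant and the example numbers are made up, not taken from the paper.

```python
def hops_3d(a, b):
    """Manhattan hop distance between two (x, y, z) mesh coordinates."""
    return sum(abs(p - q) for p, q in zip(a, b))

def communication_energy(traffic, mapping, e_per_flit_hop=1.0):
    """Energy of a mapping: sum over task pairs of volume * hops * per-hop energy.

    traffic: dict {(src_task, dst_task): flits}
    mapping: dict {task: (x, y, z) core coordinate in the 3D mesh}
    """
    return sum(volume * hops_3d(mapping[s], mapping[d]) * e_per_flit_hop
               for (s, d), volume in traffic.items())

# Toy example: three tasks on a 2x2x2 mesh (illustrative numbers only).
traffic = {("t0", "t1"): 120, ("t1", "t2"): 30}
mapping = {"t0": (0, 0, 0), "t1": (0, 0, 1), "t2": (1, 1, 1)}
print(communication_energy(traffic, mapping))  # 120*1 + 30*2 = 180
```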
1610.03612
Xiaodong Zhuang
Xiaodong Zhuang, N. E. Mastorakis
The Analysis of Local Motion and Deformation in Image Sequences Inspired by Physical Electromagnetic Interaction
15 pages, 23 figures. arXiv admin note: substantial text overlap with arXiv:1610.03615, arXiv:1610.02762
WSEAS TRANSACTIONS on COMPUTERS, pp. 231-245, Volume 14, 2015
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to analyze the motion and deformation of objects in an image sequence, a novel method, inspired by physical electromagnetic interaction, is presented to analyze the local changes of object edges between two related images (such as two adjacent frames in a video sequence). The edge changes between adjacent frames are analyzed by simulating a virtual current interaction, which can reflect changes in an object's position or shape. A virtual current along the main edge line is defined based on significant edge extraction. The virtual interaction between the current elements in the two related images is then studied by imitating the interaction between physical current-carrying wires. The experimental results show that the distribution of magnetic forces that the current elements in one image exert on those in the other can reflect the local change of edge lines from one image to the other, which is important for further analysis.
[ { "created": "Wed, 12 Oct 2016 06:39:15 GMT", "version": "v1" } ]
2016-10-13
[ [ "Zhuang", "Xiaodong", "" ], [ "Mastorakis", "N. E.", "" ] ]
In order to analyze the motion and deformation of objects in an image sequence, a novel method, inspired by physical electromagnetic interaction, is presented to analyze the local changes of object edges between two related images (such as two adjacent frames in a video sequence). The edge changes between adjacent frames are analyzed by simulating a virtual current interaction, which can reflect changes in an object's position or shape. A virtual current along the main edge line is defined based on significant edge extraction. The virtual interaction between the current elements in the two related images is then studied by imitating the interaction between physical current-carrying wires. The experimental results show that the distribution of magnetic forces that the current elements in one image exert on those in the other can reflect the local change of edge lines from one image to the other, which is important for further analysis.
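The physical interaction being imitated is the force between current elements, which follows a Biot-Savart-style law dF ∝ dl1 × (dl2 × r̂) / |r|². The numpy sketch below illustrates that pairwise force between two discretized edge segments; it is a generic illustration, not the paper's exact formulation or constants.

```python
import numpy as np

def element_force(p1, dl1, p2, dl2, k=1.0):
    """Force on current element (p1, dl1) exerted by element (p2, dl2),
    following the Biot-Savart-style law dF ~ dl1 x (dl2 x r_hat) / |r|^2."""
    r = np.asarray(p1, float) - np.asarray(p2, float)
    dist = np.linalg.norm(r)
    r_hat = r / dist
    return k * np.cross(dl1, np.cross(dl2, r_hat)) / dist**2

def total_force(points1, dirs1, points2, dirs2):
    """Net force on each element of edge 1 due to all elements of edge 2."""
    return np.array([
        sum(element_force(p1, d1, p2, d2) for p2, d2 in zip(points2, dirs2))
        for p1, d1 in zip(points1, dirs1)
    ])

# Toy example: two short parallel edge segments in the image plane (z = 0).
pts1 = [np.array([x, 0.0, 0.0]) for x in range(3)]
pts2 = [np.array([x, 1.0, 0.0]) for x in range(3)]
dirs = [np.array([1.0, 0.0, 0.0])] * 3
print(total_force(pts1, dirs, pts2, dirs))
```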
1710.00269
Kristina Lerman
Kristina Lerman, Nathan Hodas, Hao Wu
Bounded Rationality in Scholarly Knowledge Discovery
null
null
null
null
cs.DL cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In an information-rich world, people's time and attention must be divided among rapidly changing information sources and the diverse tasks demanded of them. How people decide which of the many sources, such as scientific articles or patents, to read and use in their own work affects dissemination of scholarly knowledge and adoption of innovation. We analyze the choices people make about what information to propagate on the citation networks of Physical Review journals, US patents and legal opinions. We observe regularities in behavior consistent with human bounded rationality: rather than evaluate all available choices, people rely on simple cognitive heuristics to decide what information to attend to. We demonstrate that these heuristics bias choices, so that people preferentially propagate information that is easier to discover, often because it is newer or more popular. However, we do not find evidence that popular sources help to amplify the spread of information beyond making it more salient. Our paper provides novel evidence of the critical role that bounded rationality plays in the decisions to allocate attention in social communication.
[ { "created": "Sat, 30 Sep 2017 22:54:13 GMT", "version": "v1" } ]
2017-10-03
[ [ "Lerman", "Kristina", "" ], [ "Hodas", "Nathan", "" ], [ "Wu", "Hao", "" ] ]
In an information-rich world, people's time and attention must be divided among rapidly changing information sources and the diverse tasks demanded of them. How people decide which of the many sources, such as scientific articles or patents, to read and use in their own work affects dissemination of scholarly knowledge and adoption of innovation. We analyze the choices people make about what information to propagate on the citation networks of Physical Review journals, US patents and legal opinions. We observe regularities in behavior consistent with human bounded rationality: rather than evaluate all available choices, people rely on simple cognitive heuristics to decide what information to attend to. We demonstrate that these heuristics bias choices, so that people preferentially propagate information that is easier to discover, often because it is newer or more popular. However, we do not find evidence that popular sources help to amplify the spread of information beyond making it more salient. Our paper provides novel evidence of the critical role that bounded rationality plays in the decisions to allocate attention in social communication.
1607.00575
Jie Gong
Jie Gong, Sheng Zhou, Zhenyu Zhou
Networked MIMO with Fractional Joint Transmission in Energy Harvesting Systems
33 pages, 7 figures, accepted by IEEE Transactions on Communications
null
10.1109/TCOMM.2016.2589267
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers two base stations (BSs) powered by renewable energy serving two users cooperatively. With different BS energy arrival rates, a fractional joint transmission (JT) strategy is proposed, which divides each transmission frame into two subframes. In the first subframe, one BS keeps silent to store energy while the other transmits data, and then they perform zero-forcing JT (ZF-JT) in the second subframe. We consider the average sum-rate maximization problem by optimizing the energy allocation and the time fraction of ZF-JT in two steps. Firstly, the sum-rate maximization for a given energy budget in each frame is analyzed. We prove that the optimal transmit power can be derived in closed form, and the optimal time fraction can be found via bisection search. Secondly, an approximate dynamic programming (DP) algorithm is introduced to determine the energy allocation among frames. We adopt a linear approximation with the features associated with system states, and determine the weights of the features by simulation. We also run the approximation several times with random initial policies, which we call policy exploration, to broaden the policy search range. Numerical results show that the proposed fractional JT greatly improves the performance. Also, appropriate policy exploration is shown to perform close to the optimum.
[ { "created": "Sun, 3 Jul 2016 01:57:07 GMT", "version": "v1" } ]
2016-11-18
[ [ "Gong", "Jie", "" ], [ "Zhou", "Sheng", "" ], [ "Zhou", "Zhenyu", "" ] ]
This paper considers two base stations (BSs) powered by renewable energy serving two users cooperatively. With different BS energy arrival rates, a fractional joint transmission (JT) strategy is proposed, which divides each transmission frame into two subframes. In the first subframe, one BS keeps silent to store energy while the other transmits data, and then they perform zero-forcing JT (ZF-JT) in the second subframe. We consider the average sum-rate maximization problem by optimizing the energy allocation and the time fraction of ZF-JT in two steps. Firstly, the sum-rate maximization for a given energy budget in each frame is analyzed. We prove that the optimal transmit power can be derived in closed form, and the optimal time fraction can be found via bisection search. Secondly, an approximate dynamic programming (DP) algorithm is introduced to determine the energy allocation among frames. We adopt a linear approximation with the features associated with system states, and determine the weights of the features by simulation. We also run the approximation several times with random initial policies, which we call policy exploration, to broaden the policy search range. Numerical results show that the proposed fractional JT greatly improves the performance. Also, appropriate policy exploration is shown to perform close to the optimum.
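The bisection search mentioned above can be illustrated generically: for a concave single-variable objective, bisecting on the sign of the derivative converges to the optimal time fraction. In the sketch below, the `rate` function is an invented concave trade-off standing in for the paper's actual sum-rate expression.

```python
import math

def bisect_argmax(f, lo=0.0, hi=1.0, tol=1e-6):
    """Maximize a concave f on [lo, hi] by bisecting on the sign of a
    numerical derivative (a stand-in for the closed-form condition)."""
    eps = 1e-7
    while hi - lo > tol:
        mid = (lo + hi) / 2
        slope = (f(mid + eps) - f(mid - eps)) / (2 * eps)
        if slope > 0:
            lo = mid  # still increasing: optimum lies to the right
        else:
            hi = mid  # decreasing: optimum lies to the left
    return (lo + hi) / 2

# Invented concave trade-off between single-BS and joint-transmission rates:
rate = lambda tau: (1 - tau) * math.log2(1 + 10.0) + tau * math.log2(1 + 2.0 / max(tau, 1e-9))
print(round(bisect_argmax(rate), 4))
```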
1805.10410
Ross Hartley
Ross Hartley, Maani Ghaffari Jadidi, Jessy W. Grizzle, and Ryan M. Eustice
Contact-Aided Invariant Extended Kalman Filtering for Legged Robot State Estimation
Published in the proceedings of Robotics: Science and Systems 2018
null
10.15607/RSS.2018.XIV.050
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper derives a contact-aided inertial navigation observer for a 3D bipedal robot using the theory of invariant observer design. Aided inertial navigation is fundamentally a nonlinear observer design problem; thus, current solutions are based on approximations of the system dynamics, such as an Extended Kalman Filter (EKF), which uses a system's Jacobian linearization along the current best estimate of its trajectory. On the basis of the theory of invariant observer design by Barrau and Bonnabel, and in particular, the Invariant EKF (InEKF), we show that the error dynamics of the point contact-inertial system follows a log-linear autonomous differential equation; hence, the observable state variables can be rendered convergent with a domain of attraction that is independent of the system's trajectory. Due to the log-linear form of the error dynamics, it is not necessary to perform a nonlinear observability analysis to show that when using an Inertial Measurement Unit (IMU) and contact sensors, the absolute position of the robot and a rotation about the gravity vector (yaw) are unobservable. We further augment the state of the developed InEKF with IMU biases, as the online estimation of these parameters has a crucial impact on system performance. We evaluate the convergence of the proposed system with the commonly used quaternion-based EKF observer using a Monte-Carlo simulation. In addition, our experimental evaluation using a Cassie-series bipedal robot shows that the contact-aided InEKF provides better performance in comparison with the quaternion-based EKF as a result of exploiting symmetries present in the system dynamics.
[ { "created": "Sat, 26 May 2018 01:58:02 GMT", "version": "v1" } ]
2019-05-22
[ [ "Hartley", "Ross", "" ], [ "Jadidi", "Maani Ghaffari", "" ], [ "Grizzle", "Jessy W.", "" ], [ "Eustice", "Ryan M.", "" ] ]
This paper derives a contact-aided inertial navigation observer for a 3D bipedal robot using the theory of invariant observer design. Aided inertial navigation is fundamentally a nonlinear observer design problem; thus, current solutions are based on approximations of the system dynamics, such as an Extended Kalman Filter (EKF), which uses a system's Jacobian linearization along the current best estimate of its trajectory. On the basis of the theory of invariant observer design by Barrau and Bonnabel, and in particular, the Invariant EKF (InEKF), we show that the error dynamics of the point contact-inertial system follows a log-linear autonomous differential equation; hence, the observable state variables can be rendered convergent with a domain of attraction that is independent of the system's trajectory. Due to the log-linear form of the error dynamics, it is not necessary to perform a nonlinear observability analysis to show that when using an Inertial Measurement Unit (IMU) and contact sensors, the absolute position of the robot and a rotation about the gravity vector (yaw) are unobservable. We further augment the state of the developed InEKF with IMU biases, as the online estimation of these parameters has a crucial impact on system performance. We evaluate the convergence of the proposed system with the commonly used quaternion-based EKF observer using a Monte-Carlo simulation. In addition, our experimental evaluation using a Cassie-series bipedal robot shows that the contact-aided InEKF provides better performance in comparison with the quaternion-based EKF as a result of exploiting symmetries present in the system dynamics.
2305.07006
Yiheng Shen
Siddhartha Banerjee, Kamesh Munagala, Yiheng Shen, Kangning Wang
Fair Price Discrimination
null
null
null
null
cs.GT cs.DS econ.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A seller is pricing identical copies of a good to a stream of unit-demand buyers. Each buyer has a value on the good as his private information. The seller only knows the empirical value distribution of the buyer population and chooses the revenue-optimal price. We consider a widely studied third-degree price discrimination model where an information intermediary with perfect knowledge of the arriving buyer's value sends a signal to the seller, hence changing the seller's posterior and inducing the seller to set a personalized posted price. Prior work of Bergemann, Brooks, and Morris (American Economic Review, 2015) has shown the existence of a signaling scheme that preserves seller revenue, while always selling the item, hence maximizing consumer surplus. In a departure from prior work, we ask whether the consumer surplus generated is fairly distributed among buyers with different values. To this end, we aim to maximize welfare functions that reward more balanced surplus allocations. Our main result is the surprising existence of a novel signaling scheme that simultaneously $8$-approximates all welfare functions that are non-negative, monotonically increasing, symmetric, and concave, compared with any other signaling scheme. Classical examples of such welfare functions include the utilitarian social welfare, the Nash welfare, and the max-min welfare. Such a guarantee cannot be given by any consumer-surplus-maximizing scheme -- which are the ones typically studied in the literature. In addition, our scheme is socially efficient, and has the fairness property that buyers with higher values enjoy higher expected surplus, which is not always the case for existing schemes.
[ { "created": "Thu, 11 May 2023 17:45:06 GMT", "version": "v1" } ]
2023-05-12
[ [ "Banerjee", "Siddhartha", "" ], [ "Munagala", "Kamesh", "" ], [ "Shen", "Yiheng", "" ], [ "Wang", "Kangning", "" ] ]
A seller is pricing identical copies of a good to a stream of unit-demand buyers. Each buyer has a value on the good as his private information. The seller only knows the empirical value distribution of the buyer population and chooses the revenue-optimal price. We consider a widely studied third-degree price discrimination model where an information intermediary with perfect knowledge of the arriving buyer's value sends a signal to the seller, hence changing the seller's posterior and inducing the seller to set a personalized posted price. Prior work of Bergemann, Brooks, and Morris (American Economic Review, 2015) has shown the existence of a signaling scheme that preserves seller revenue, while always selling the item, hence maximizing consumer surplus. In a departure from prior work, we ask whether the consumer surplus generated is fairly distributed among buyers with different values. To this end, we aim to maximize welfare functions that reward more balanced surplus allocations. Our main result is the surprising existence of a novel signaling scheme that simultaneously $8$-approximates all welfare functions that are non-negative, monotonically increasing, symmetric, and concave, compared with any other signaling scheme. Classical examples of such welfare functions include the utilitarian social welfare, the Nash welfare, and the max-min welfare. Such a guarantee cannot be given by any consumer-surplus-maximizing scheme -- which are the ones typically studied in the literature. In addition, our scheme is socially efficient, and has the fairness property that buyers with higher values enjoy higher expected surplus, which is not always the case for existing schemes.
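The welfare functions targeted above (utilitarian, Nash and max-min welfare are the classical examples) are symmetric concave aggregators of the per-buyer surplus vector. A small sketch of how two hypothetical surplus allocations score under each, purely for illustration:

```python
import math

def utilitarian(surplus):
    return sum(surplus)

def nash_welfare(surplus):
    # Geometric mean; zero surplus for any buyer drives Nash welfare to zero.
    return math.prod(surplus) ** (1.0 / len(surplus))

def max_min(surplus):
    return min(surplus)

# Two hypothetical allocations of the same total consumer surplus:
skewed = [9.0, 0.5, 0.5]
balanced = [4.0, 3.0, 3.0]
for w in (utilitarian, nash_welfare, max_min):
    print(w.__name__, round(w(skewed), 3), round(w(balanced), 3))
```

The balanced allocation scores the same utilitarian welfare but much higher Nash and max-min welfare, which is exactly the sense in which these functions reward more balanced surplus distributions.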
1210.2457
EPTCS
Daniel Neider (RWTH Aachen University), Roman Rabinovich (RWTH Aachen University), Martin Zimmermann (RWTH Aachen University and University of Warsaw)
Down the Borel Hierarchy: Solving Muller Games via Safety Games
In Proceedings GandALF 2012, arXiv:1210.2028
EPTCS 96, 2012, pp. 169-182
10.4204/EPTCS.96.13
null
cs.LO cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We transform a Muller game with n vertices into a safety game with (n!)^3 vertices whose solution allows one to determine the winning regions of the Muller game and to compute a finite-state winning strategy for one player. This yields a novel antichain-based memory structure and a natural notion of permissive strategies for Muller games. Moreover, we generalize our construction by presenting a new type of game reduction from infinite games to safety games and show its applicability to several other winning conditions.
[ { "created": "Tue, 9 Oct 2012 00:54:33 GMT", "version": "v1" } ]
2012-10-10
[ [ "Neider", "Daniel", "", "RWTH Aachen University" ], [ "Rabinovich", "Roman", "", "RWTH Aachen\n University" ], [ "Zimmermann", "Martin", "", "RWTH Aachen University and University of\n Warsaw" ] ]
We transform a Muller game with n vertices into a safety game with (n!)^3 vertices whose solution allows one to determine the winning regions of the Muller game and to compute a finite-state winning strategy for one player. This yields a novel antichain-based memory structure and a natural notion of permissive strategies for Muller games. Moreover, we generalize our construction by presenting a new type of game reduction from infinite games to safety games and show its applicability to several other winning conditions.
2406.02148
Qingkai Min
Qingkai Min, Qipeng Guo, Xiangkun Hu, Songfang Huang, Zheng Zhang, Yue Zhang
Synergetic Event Understanding: A Collaborative Approach to Cross-Document Event Coreference Resolution with Large Language Models
Accepted to ACL-24 Main
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Cross-document event coreference resolution (CDECR) involves clustering event mentions across multiple documents that refer to the same real-world events. Existing approaches utilize fine-tuning of small language models (SLMs) like BERT to address the compatibility among the contexts of event mentions. However, due to the complexity and diversity of contexts, these models are prone to learning simple co-occurrences. Recently, large language models (LLMs) like ChatGPT have demonstrated impressive contextual understanding, yet they encounter challenges in adapting to specific information extraction (IE) tasks. In this paper, we propose a collaborative approach for CDECR, leveraging the capabilities of both a universally capable LLM and a task-specific SLM. The collaborative strategy begins with the LLM accurately and comprehensively summarizing events through prompting. Then, the SLM refines its learning of event representations based on these insights during fine-tuning. Experimental results demonstrate that our approach surpasses the performance of both the large and small language models individually, forming a complementary advantage. Across various datasets, our approach achieves state-of-the-art performance, underscoring its effectiveness in diverse scenarios.
[ { "created": "Tue, 4 Jun 2024 09:35:47 GMT", "version": "v1" } ]
2024-06-05
[ [ "Min", "Qingkai", "" ], [ "Guo", "Qipeng", "" ], [ "Hu", "Xiangkun", "" ], [ "Huang", "Songfang", "" ], [ "Zhang", "Zheng", "" ], [ "Zhang", "Yue", "" ] ]
Cross-document event coreference resolution (CDECR) involves clustering event mentions across multiple documents that refer to the same real-world events. Existing approaches utilize fine-tuning of small language models (SLMs) like BERT to address the compatibility among the contexts of event mentions. However, due to the complexity and diversity of contexts, these models are prone to learning simple co-occurrences. Recently, large language models (LLMs) like ChatGPT have demonstrated impressive contextual understanding, yet they encounter challenges in adapting to specific information extraction (IE) tasks. In this paper, we propose a collaborative approach for CDECR, leveraging the capabilities of both a universally capable LLM and a task-specific SLM. The collaborative strategy begins with the LLM accurately and comprehensively summarizing events through prompting. Then, the SLM refines its learning of event representations based on these insights during fine-tuning. Experimental results demonstrate that our approach surpasses the performance of both the large and small language models individually, forming a complementary advantage. Across various datasets, our approach achieves state-of-the-art performance, underscoring its effectiveness in diverse scenarios.
1612.08274
Jinho Lee
Jinho Lee, Brian Kenji Iwana, Shouta Ide, Seiichi Uchida
Globally Optimal Object Tracking with Fully Convolutional Networks
6 pages, 8 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tracking is one of the most important but still difficult tasks in computer vision and pattern recognition. The main difficulties in the tracking field are appearance variation and occlusion. Most traditional tracking methods set the parameters or templates to track target objects in advance and should be modified accordingly. Thus, we propose a new and robust tracking method using a Fully Convolutional Network (FCN) to obtain an object probability map and Dynamic Programming (DP) to seek the globally optimal path through all frames of video. Our proposed method solves the object appearance variation problem with the use of a FCN and deals with occlusion by DP. We show that our method is effective in tracking various single objects through video frames.
[ { "created": "Sun, 25 Dec 2016 16:00:40 GMT", "version": "v1" } ]
2016-12-28
[ [ "Lee", "Jinho", "" ], [ "Iwana", "Brian Kenji", "" ], [ "Ide", "Shouta", "" ], [ "Uchida", "Seiichi", "" ] ]
Tracking is one of the most important but still difficult tasks in computer vision and pattern recognition. The main difficulties in the tracking field are appearance variation and occlusion. Most traditional tracking methods set the parameters or templates to track target objects in advance and should be modified accordingly. Thus, we propose a new and robust tracking method using a Fully Convolutional Network (FCN) to obtain an object probability map and Dynamic Programming (DP) to seek the globally optimal path through all frames of video. Our proposed method solves the object appearance variation problem with the use of a FCN and deals with occlusion by DP. We show that our method is effective in tracking various single objects through video frames.
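The dynamic programming step can be illustrated with a Viterbi-style recursion that maximizes accumulated object probability minus a motion penalty across frames. The sketch below is a 1-D simplification (one coordinate per frame) with an invented penalty weight, not the paper's implementation:

```python
import numpy as np

def best_path(prob, motion_penalty=0.1):
    """Globally optimal track through T frames of per-position scores.

    prob: (T, N) array, prob[t, x] = object probability at position x in frame t
          (e.g. one row of an FCN output map). Returns the position per frame.
    """
    T, N = prob.shape
    score = prob[0].copy()
    back = np.zeros((T, N), dtype=int)
    positions = np.arange(N)
    for t in range(1, T):
        # transition cost grows with how far the object would have to move
        move = motion_penalty * np.abs(positions[None, :] - positions[:, None])
        total = score[:, None] - move + prob[t][None, :]
        back[t] = np.argmax(total, axis=0)
        score = total[back[t], positions]
    # backtrack from the best final position
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

toy = np.array([[0.1, 0.9, 0.2], [0.2, 0.8, 0.3], [0.1, 0.3, 0.9]])
print(best_path(toy))  # [1, 1, 2]
```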
1708.00634
Junnan Li Mr
Junnan Li, Yongkang Wong, Qi Zhao, Mohan S. Kankanhalli
Dual-Glance Model for Deciphering Social Relationships
IEEE International Conference on Computer Vision (ICCV), 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since the beginning of early civilizations, social relationships derived from each individual fundamentally form the basis of social structure in our daily life. In the computer vision literature, much progress has been made in scene understanding, such as object detection and scene parsing. Recent research focuses on the relationships between objects based on their functionality and geometrical relations. In this work, we aim to study the problem of social relationship recognition in still images. We propose a dual-glance model for social relationship recognition, where the first glance fixates on the individual pair of interest and the second glance deploys an attention mechanism to explore contextual cues. We have also collected a new large-scale People in Social Context (PISC) dataset, which comprises 22,670 images and 76,568 annotated samples from 9 types of social relationship. We provide benchmark results on the PISC dataset, and qualitatively demonstrate the efficacy of the proposed model.
[ { "created": "Wed, 2 Aug 2017 08:13:28 GMT", "version": "v1" } ]
2017-08-03
[ [ "Li", "Junnan", "" ], [ "Wong", "Yongkang", "" ], [ "Zhao", "Qi", "" ], [ "Kankanhalli", "Mohan S.", "" ] ]
Since the beginning of early civilizations, social relationships derived from each individual fundamentally form the basis of social structure in our daily life. In the computer vision literature, much progress has been made in scene understanding, such as object detection and scene parsing. Recent research focuses on the relationships between objects based on their functionality and geometrical relations. In this work, we aim to study the problem of social relationship recognition in still images. We propose a dual-glance model for social relationship recognition, where the first glance fixates on the individual pair of interest and the second glance deploys an attention mechanism to explore contextual cues. We have also collected a new large-scale People in Social Context (PISC) dataset, which comprises 22,670 images and 76,568 annotated samples from 9 types of social relationship. We provide benchmark results on the PISC dataset, and qualitatively demonstrate the efficacy of the proposed model.
2306.06385
Arian Prabowo
Arian Prabowo, Kaixuan Chen, Hao Xue, Subbu Sethuvenkatraman, Flora D. Salim
Continually learning out-of-distribution spatiotemporal data for robust energy forecasting
15 pages, 3 figures, ECML PKDD ADS 2023. 2023-09-09 edit: fixed a repeated column in Table 3 in the previous version
null
10.1007/978-3-031-43430-3_1
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Forecasting building energy usage is essential for promoting sustainability and reducing waste, as it enables building managers to optimize energy consumption and reduce costs. This importance is magnified during anomalous periods, such as the COVID-19 pandemic, which have disrupted occupancy patterns and made accurate forecasting more challenging. Forecasting energy usage during anomalous periods is difficult due to changes in occupancy patterns and energy usage behavior. One of the primary reasons for this is the shift in distribution of occupancy patterns, with many people working or learning from home. This has created a need for new forecasting methods that can adapt to changing occupancy patterns. Online learning has emerged as a promising solution to this challenge, as it enables building managers to adapt to changes in occupancy patterns and adjust energy usage accordingly. With online learning, models can be updated incrementally with each new data point, allowing them to learn and adapt in real-time. Another solution is to use human mobility data as a proxy for occupancy, leveraging the prevalence of mobile devices to track movement patterns and infer occupancy levels. Human mobility data can be useful in this context as it provides a way to monitor occupancy patterns without relying on traditional sensors or manual data collection methods. We have conducted extensive experiments using data from six buildings to test the efficacy of these approaches. However, deploying these methods in the real world presents several challenges.
[ { "created": "Sat, 10 Jun 2023 09:12:10 GMT", "version": "v1" }, { "created": "Sat, 9 Sep 2023 13:37:12 GMT", "version": "v2" } ]
2023-10-05
[ [ "Prabowo", "Arian", "" ], [ "Chen", "Kaixuan", "" ], [ "Xue", "Hao", "" ], [ "Sethuvenkatraman", "Subbu", "" ], [ "Salim", "Flora D.", "" ] ]
Forecasting building energy usage is essential for promoting sustainability and reducing waste, as it enables building managers to optimize energy consumption and reduce costs. This importance is magnified during anomalous periods, such as the COVID-19 pandemic, which have disrupted occupancy patterns and made accurate forecasting more challenging. Forecasting energy usage during anomalous periods is difficult due to changes in occupancy patterns and energy usage behavior. One of the primary reasons for this is the shift in distribution of occupancy patterns, with many people working or learning from home. This has created a need for new forecasting methods that can adapt to changing occupancy patterns. Online learning has emerged as a promising solution to this challenge, as it enables building managers to adapt to changes in occupancy patterns and adjust energy usage accordingly. With online learning, models can be updated incrementally with each new data point, allowing them to learn and adapt in real-time. Another solution is to use human mobility data as a proxy for occupancy, leveraging the prevalence of mobile devices to track movement patterns and infer occupancy levels. Human mobility data can be useful in this context as it provides a way to monitor occupancy patterns without relying on traditional sensors or manual data collection methods. We have conducted extensive experiments using data from six buildings to test the efficacy of these approaches. However, deploying these methods in the real world presents several challenges.
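Online learning in this setting means updating the forecaster incrementally as each new observation arrives instead of retraining from scratch. Below is a minimal sketch with scikit-learn's `SGDRegressor.partial_fit` on a synthetic stream with a mid-stream occupancy shift; the features and the data-generating process are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(learning_rate="constant", eta0=0.01, random_state=0)

def stream(n_steps=500, seed=0):
    """Fake stream of (features, energy) pairs with a mid-stream occupancy shift."""
    rng = np.random.default_rng(seed)
    for t in range(n_steps):
        occupancy = rng.uniform(0.6, 1.0) if t < n_steps // 2 else rng.uniform(0.0, 0.3)
        temp = rng.uniform(15, 30)
        x = np.array([[occupancy, temp]])
        y = np.array([50 * occupancy + 2 * temp + rng.normal(0, 1)])
        yield x, y

errors = []
for x, y in stream():
    if hasattr(model, "coef_"):
        errors.append(abs(float(model.predict(x)[0] - y[0])))
    model.partial_fit(x, y)  # incremental update with the newly observed value

print("mean absolute one-step error:", round(float(np.mean(errors)), 2))
```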
2004.14491
Shruti Agarwal
Shruti Agarwal (1), Tarek El-Gaaly (2), Hany Farid (1), Ser-Nam Lim (2) ((1) University of California, Berkeley, Berkeley, CA, USA, (2) Facebook Research, New York, NY, USA)
Detecting Deep-Fake Videos from Appearance and Behavior
null
IEEE Workshop on Image Forensics and Security, 2020
null
null
cs.CV cs.LG cs.MM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synthetically generated audio and video -- so-called deep fakes -- continue to capture the imagination of the computer-graphics and computer-vision communities. At the same time, the democratization of access to technology that can create sophisticated manipulated video of anybody saying anything continues to be of concern because of its power to disrupt democratic elections, commit small to large-scale fraud, fuel disinformation campaigns, and create non-consensual pornography. We describe a biometric-based forensic technique for detecting face-swap deep fakes. This technique combines a static biometric based on facial recognition with a temporal, behavioral biometric based on facial expressions and head movements, where the behavioral embedding is learned using a CNN with a metric-learning objective function. We show the efficacy of this approach across several large-scale video datasets, as well as on in-the-wild deep fakes.
[ { "created": "Wed, 29 Apr 2020 21:38:22 GMT", "version": "v1" } ]
2021-01-29
[ [ "Agarwal", "Shruti", "" ], [ "El-Gaaly", "Tarek", "" ], [ "Farid", "Hany", "" ], [ "Lim", "Ser-Nam", "" ] ]
Synthetically generated audio and video -- so-called deep fakes -- continue to capture the imagination of the computer-graphics and computer-vision communities. At the same time, the democratization of access to technology that can create sophisticated manipulated video of anybody saying anything continues to be of concern because of its power to disrupt democratic elections, commit small to large-scale fraud, fuel disinformation campaigns, and create non-consensual pornography. We describe a biometric-based forensic technique for detecting face-swap deep fakes. This technique combines a static biometric based on facial recognition with a temporal, behavioral biometric based on facial expressions and head movements, where the behavioral embedding is learned using a CNN with a metric-learning objective function. We show the efficacy of this approach across several large-scale video datasets, as well as on in-the-wild deep fakes.
2208.08417
Giridhar Kaushik Ramachandran
Giridhar Kaushik Ramachandran, Kevin Lybarger, Yaya Liu, Diwakar Mahajan, Jennifer J. Liang, Ching-Huei Tsou, Meliha Yetisgen, \"Ozlem Uzuner
Extracting Medication Changes in Clinical Narratives using Pre-trained Language Models
null
Journal of Biomedical Informatics, Vol. 139, 2023, 104302, ISSN 1532-0464
10.1016/j.jbi.2023.104302
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
An accurate and detailed account of patient medications, including medication changes within the patient timeline, is essential for healthcare providers to provide appropriate patient care. Healthcare providers or the patients themselves may initiate changes to patient medication. Medication changes take many forms, including prescribed medication and associated dosage modification. These changes provide information about the overall health of the patient and the rationale that led to the current care. Future care can then build on the resulting state of the patient. This work explores the automatic extraction of medication change information from free-text clinical notes. The Contextual Medication Event Dataset (CMED) is a corpus of clinical notes with annotations that characterize medication changes through multiple change-related attributes, including the type of change (start, stop, increase, etc.), initiator of the change, temporality, change likelihood, and negation. Using CMED, we identify medication mentions in clinical text and propose three novel high-performing BERT-based systems that resolve the annotated medication change characteristics. We demonstrate that our proposed systems improve medication change classification performance over the initial work exploring CMED.
[ { "created": "Wed, 17 Aug 2022 17:22:48 GMT", "version": "v1" }, { "created": "Thu, 12 Jan 2023 21:23:54 GMT", "version": "v2" } ]
2023-06-13
[ [ "Ramachandran", "Giridhar Kaushik", "" ], [ "Lybarger", "Kevin", "" ], [ "Liu", "Yaya", "" ], [ "Mahajan", "Diwakar", "" ], [ "Liang", "Jennifer J.", "" ], [ "Tsou", "Ching-Huei", "" ], [ "Yetisgen", "Meliha", "" ], [ "Uzuner", "Özlem", "" ] ]
An accurate and detailed account of patient medications, including medication changes within the patient timeline, is essential for healthcare providers to provide appropriate patient care. Healthcare providers or the patients themselves may initiate changes to patient medication. Medication changes take many forms, including prescribed medication and associated dosage modification. These changes provide information about the overall health of the patient and the rationale that led to the current care. Future care can then build on the resulting state of the patient. This work explores the automatic extraction of medication change information from free-text clinical notes. The Contextual Medication Event Dataset (CMED) is a corpus of clinical notes with annotations that characterize medication changes through multiple change-related attributes, including the type of change (start, stop, increase, etc.), initiator of the change, temporality, change likelihood, and negation. Using CMED, we identify medication mentions in clinical text and propose three novel high-performing BERT-based systems that resolve the annotated medication change characteristics. We demonstrate that our proposed systems improve medication change classification performance over the initial work exploring CMED.
1804.04212
Hugo Caselles-Dupr\'e
Hugo Caselles-Dupr\'e, Florian Lesaint, Jimena Royo-Letelier
Word2Vec applied to Recommendation: Hyperparameters Matter
This paper is published in the proceedings of the 12th ACM Conference on Recommender Systems, Vancouver, Canada, 2nd-7th October 2018
null
null
null
cs.IR cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Skip-gram with negative sampling, a popular variant of Word2vec originally designed and tuned to create word embeddings for Natural Language Processing, has been used to create item embeddings with successful applications in recommendation. While these fields neither share the same type of data nor evaluate on the same tasks, recommendation applications tend to use the same already-tuned hyperparameter values, even though optimal hyperparameter values are often known to be data and task dependent. We thus investigate the marginal importance of each hyperparameter in a recommendation setting through large hyperparameter grid searches on various datasets. Results reveal that optimizing neglected hyperparameters, namely the negative sampling distribution, number of epochs, subsampling parameter and window size, significantly improves performance on a recommendation task, and can increase it by an order of magnitude. Importantly, we find that optimal hyperparameter configurations for Natural Language Processing tasks and Recommendation tasks are noticeably different.
[ { "created": "Wed, 11 Apr 2018 20:37:35 GMT", "version": "v1" }, { "created": "Tue, 8 May 2018 19:30:18 GMT", "version": "v2" }, { "created": "Wed, 29 Aug 2018 15:16:08 GMT", "version": "v3" } ]
2018-08-30
[ [ "Caselles-Dupré", "Hugo", "" ], [ "Lesaint", "Florian", "" ], [ "Royo-Letelier", "Jimena", "" ] ]
Skip-gram with negative sampling, a popular variant of Word2vec originally designed and tuned to create word embeddings for Natural Language Processing, has been used to create item embeddings with successful applications in recommendation. While these fields neither share the same type of data nor evaluate on the same tasks, recommendation applications tend to use the same already-tuned hyperparameter values, even though optimal hyperparameter values are often known to be data and task dependent. We thus investigate the marginal importance of each hyperparameter in a recommendation setting through large hyperparameter grid searches on various datasets. Results reveal that optimizing neglected hyperparameters, namely the negative sampling distribution, number of epochs, subsampling parameter and window size, significantly improves performance on a recommendation task, and can increase it by an order of magnitude. Importantly, we find that optimal hyperparameter configurations for Natural Language Processing tasks and Recommendation tasks are noticeably different.
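In gensim terms, the neglected hyperparameters named above correspond to `ns_exponent` (negative-sampling distribution), `epochs`, `sample` (subsampling) and `window`. The sketch below runs a small grid over them on toy item sequences; the value ranges and the toy data are illustrative and are not the grid used in the paper.

```python
from itertools import product
from gensim.models import Word2Vec

# Toy "sessions" of item ids standing in for user interaction sequences.
sessions = [["item1", "item2", "item3"], ["item2", "item3", "item4"],
            ["item1", "item4", "item5"]] * 50

grid = {
    "ns_exponent": [0.75, 0.0, -0.5],  # shape of the negative-sampling distribution
    "window": [3, 5, 10],
    "sample": [0.0, 1e-3],             # subsampling of frequent items
    "epochs": [5, 30],
}

for ns, win, samp, ep in product(*grid.values()):
    model = Word2Vec(sessions, sg=1, negative=5, vector_size=32, min_count=1,
                     ns_exponent=ns, window=win, sample=samp, epochs=ep, seed=1)
    # In a real study, evaluate on a held-out next-item recommendation task here;
    # we just print the nearest neighbours of one item as a sanity check.
    print(ns, win, samp, ep, model.wv.most_similar("item2", topn=2))
```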
1810.02003
Govind Ramnarayan
Ran Canetti, Aloni Cohen, Nishanth Dikkala, Govind Ramnarayan, Sarah Scheffler, Adam Smith
From Soft Classifiers to Hard Decisions: How fair can we be?
null
null
null
null
cs.LG cs.CY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A popular methodology for building binary decision-making classifiers in the presence of imperfect information is to first construct a non-binary "scoring" classifier that is calibrated over all protected groups, and then to post-process this score to obtain a binary decision. We study the feasibility of achieving various fairness properties by post-processing calibrated scores, and then show that deferring post-processors allow for more fairness conditions to hold on the final decision. Specifically, we show: 1. There does not exist a general way to post-process a calibrated classifier to equalize protected groups' positive or negative predictive value (PPV or NPV). For certain "nice" calibrated classifiers, either PPV or NPV can be equalized when the post-processor uses different thresholds across protected groups, though there exist distributions of calibrated scores for which the two measures cannot be both equalized. When the post-processing consists of a single global threshold across all groups, natural fairness properties, such as equalizing PPV in a nontrivial way, do not hold even for "nice" classifiers. 2. When the post-processing is allowed to `defer' on some decisions (that is, to avoid making a decision by handing off some examples to a separate process), then for the non-deferred decisions, the resulting classifier can be made to equalize PPV, NPV, false positive rate (FPR) and false negative rate (FNR) across the protected groups. This suggests a way to partially evade the impossibility results of Chouldechova and Kleinberg et al., which preclude equalizing all of these measures simultaneously. We also present different deferring strategies and show how they affect the fairness properties of the overall system. We evaluate our post-processing techniques using the COMPAS data set from 2016.
[ { "created": "Wed, 3 Oct 2018 23:16:09 GMT", "version": "v1" }, { "created": "Mon, 21 Jan 2019 16:36:11 GMT", "version": "v2" } ]
2019-01-23
[ [ "Canetti", "Ran", "" ], [ "Cohen", "Aloni", "" ], [ "Dikkala", "Nishanth", "" ], [ "Ramnarayan", "Govind", "" ], [ "Scheffler", "Sarah", "" ], [ "Smith", "Adam", "" ] ]
A popular methodology for building binary decision-making classifiers in the presence of imperfect information is to first construct a non-binary "scoring" classifier that is calibrated over all protected groups, and then to post-process this score to obtain a binary decision. We study the feasibility of achieving various fairness properties by post-processing calibrated scores, and then show that deferring post-processors allow for more fairness conditions to hold on the final decision. Specifically, we show: 1. There does not exist a general way to post-process a calibrated classifier to equalize protected groups' positive or negative predictive value (PPV or NPV). For certain "nice" calibrated classifiers, either PPV or NPV can be equalized when the post-processor uses different thresholds across protected groups, though there exist distributions of calibrated scores for which the two measures cannot be both equalized. When the post-processing consists of a single global threshold across all groups, natural fairness properties, such as equalizing PPV in a nontrivial way, do not hold even for "nice" classifiers. 2. When the post-processing is allowed to `defer' on some decisions (that is, to avoid making a decision by handing off some examples to a separate process), then for the non-deferred decisions, the resulting classifier can be made to equalize PPV, NPV, false positive rate (FPR) and false negative rate (FNR) across the protected groups. This suggests a way to partially evade the impossibility results of Chouldechova and Kleinberg et al., which preclude equalizing all of these measures simultaneously. We also present different deferring strategies and show how they affect the fairness properties of the overall system. We evaluate our post-processing techniques using the COMPAS data set from 2016.
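The quantities discussed above (PPV, NPV, FPR, FNR) can be computed per protected group from a calibrated score, a group-specific threshold and an optional deferral band around the threshold. The sketch below is a plain illustration of those definitions, with toy scores and thresholds.

```python
def group_metrics(scores, labels, threshold, defer_band=0.0):
    """PPV, NPV, FPR, FNR on the non-deferred examples of one group.

    Examples whose score falls within `defer_band` of the threshold are
    handed off (deferred) and excluded from the metrics.
    """
    tp = fp = tn = fn = 0
    for s, y in zip(scores, labels):
        if abs(s - threshold) < defer_band:
            continue  # deferred decision
        pred = 1 if s >= threshold else 0
        if pred and y: tp += 1
        elif pred and not y: fp += 1
        elif not pred and not y: tn += 1
        else: fn += 1
    safe = lambda a, b: a / b if b else float("nan")
    return {"PPV": safe(tp, tp + fp), "NPV": safe(tn, tn + fn),
            "FPR": safe(fp, fp + tn), "FNR": safe(fn, fn + tp)}

# Toy calibrated scores for two groups with group-specific thresholds:
print(group_metrics([0.9, 0.7, 0.4, 0.2], [1, 1, 0, 0], threshold=0.5))
print(group_metrics([0.8, 0.6, 0.55, 0.3], [1, 0, 1, 0], threshold=0.6, defer_band=0.1))
```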
2102.00696
Selim Furkan Tekin
Selim Furkan Tekin, Arda Fazla and Suleyman Serdar Kozat
Numerical Weather Forecasting using Convolutional-LSTM with Attention and Context Matcher Mechanisms
- In our journal submission, we removed the section on the integration of observational data since it was not used in the experiments, and accordingly removed the authors who were responsible for that section. - In the second version, we also performed an experiment on WeatherBench and compare our results with physical weather forecasting models
null
null
null
cs.LG cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Numerical weather forecasting using high-resolution physical models often requires extensive computational resources on supercomputers, which diminishes their wide usage in most real-life applications. As a remedy, applying deep learning methods has revealed innovative solutions within this field. To this end, we introduce a novel deep learning architecture for forecasting high-resolution spatio-temporal weather data. Our approach extends the conventional encoder-decoder structure by integrating Convolutional Long Short-Term Memory and Convolutional Neural Networks. In addition, we incorporate attention and context matcher mechanisms into the model architecture. Our Weather Model achieves significant performance improvements compared to baseline deep learning models, including ConvLSTM, TrajGRU, and U-Net. Our experimental evaluation involves large-scale, real-world benchmark numerical weather datasets, namely the ERA5 hourly dataset on pressure levels and WeatherBench. Our results demonstrate substantial improvements in identifying spatial and temporal correlations with attention matrices focusing on distinct parts of the input series to model atmospheric circulations. We also compare our model with high-resolution physical models using the benchmark metrics and show that our Weather Model is accurate and easy to interpret.
[ { "created": "Mon, 1 Feb 2021 08:30:42 GMT", "version": "v1" }, { "created": "Wed, 4 Oct 2023 18:56:52 GMT", "version": "v2" } ]
2023-10-06
[ [ "Tekin", "Selim Furkan", "" ], [ "Fazla", "Arda", "" ], [ "Kozat", "Suleyman Serdar", "" ] ]
Numerical weather forecasting using high-resolution physical models often requires extensive computational resources on supercomputers, which diminishes their wide usage in most real-life applications. As a remedy, applying deep learning methods has revealed innovative solutions within this field. To this end, we introduce a novel deep learning architecture for forecasting high-resolution spatio-temporal weather data. Our approach extends the conventional encoder-decoder structure by integrating Convolutional Long Short-Term Memory and Convolutional Neural Networks. In addition, we incorporate attention and context matcher mechanisms into the model architecture. Our Weather Model achieves significant performance improvements compared to baseline deep learning models, including ConvLSTM, TrajGRU, and U-Net. Our experimental evaluation involves large-scale, real-world benchmark numerical weather datasets, namely the ERA5 hourly dataset on pressure levels and WeatherBench. Our results demonstrate substantial improvements in identifying spatial and temporal correlations with attention matrices focusing on distinct parts of the input series to model atmospheric circulations. We also compare our model with high-resolution physical models using the benchmark metrics and show that our Weather Model is accurate and easy to interpret.
1304.6274
Rohan Padhye
Rohan Padhye and Uday P. Khedker
Interprocedural Data Flow Analysis in Soot using Value Contexts
SOAP 2013 Final Version
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An interprocedural analysis is precise if it is flow sensitive and fully context-sensitive even in the presence of recursion. Many methods of interprocedural analysis sacrifice precision for scalability while some are precise but limited to only a certain class of problems. Soot currently supports interprocedural analysis of Java programs using graph reachability. However, this approach is restricted to IFDS/IDE problems, and is not suitable for general data flow frameworks such as heap reference analysis and points-to analysis which have non-distributive flow functions. We describe a general-purpose interprocedural analysis framework for Soot using data flow values for context-sensitivity. This framework is not restricted to problems with distributive flow functions, although the lattice must be finite. It combines the key ideas of the tabulation method of the functional approach and the technique of value-based termination of call string construction. The efficiency and precision of interprocedural analyses is heavily affected by the precision of the underlying call graph. This is especially important for object-oriented languages like Java where virtual method invocations cause an explosion of spurious call edges if the call graph is constructed naively. We have instantiated our framework with a flow and context-sensitive points-to analysis in Soot, which enables the construction of call graphs that are far more precise than those constructed by Soot's SPARK engine.
[ { "created": "Tue, 23 Apr 2013 13:02:09 GMT", "version": "v1" }, { "created": "Mon, 29 Jul 2013 06:48:43 GMT", "version": "v2" } ]
2013-07-30
[ [ "Padhye", "Rohan", "" ], [ "Khedker", "Uday P.", "" ] ]
An interprocedural analysis is precise if it is flow sensitive and fully context-sensitive even in the presence of recursion. Many methods of interprocedural analysis sacrifice precision for scalability while some are precise but limited to only a certain class of problems. Soot currently supports interprocedural analysis of Java programs using graph reachability. However, this approach is restricted to IFDS/IDE problems, and is not suitable for general data flow frameworks such as heap reference analysis and points-to analysis which have non-distributive flow functions. We describe a general-purpose interprocedural analysis framework for Soot using data flow values for context-sensitivity. This framework is not restricted to problems with distributive flow functions, although the lattice must be finite. It combines the key ideas of the tabulation method of the functional approach and the technique of value-based termination of call string construction. The efficiency and precision of interprocedural analyses is heavily affected by the precision of the underlying call graph. This is especially important for object-oriented languages like Java where virtual method invocations cause an explosion of spurious call edges if the call graph is constructed naively. We have instantiated our framework with a flow and context-sensitive points-to analysis in Soot, which enables the construction of call graphs that are far more precise than those constructed by Soot's SPARK engine.
1509.06016
Lei Deng
Lei Deng, Siyuan Huang, Yueqi Duan, Baohua Chen, Jie Zhou
Image Set Querying Based Localization
VCIP2015, 4 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional single image based localization methods usually fail to localize a querying image when there exist large variations between the querying image and the pre-built scene. To address this, we propose an image-set querying based localization approach. When the localization by a single image fails to work, the system will ask the user to capture more auxiliary images. First, a local 3D model is established for the querying image set. Then, the pose of the querying image set is estimated by solving a nonlinear optimization problem, which aims to match the local 3D model against the pre-built scene. Experiments have shown the effectiveness and feasibility of the proposed approach.
[ { "created": "Sun, 20 Sep 2015 13:49:30 GMT", "version": "v1" } ]
2015-09-22
[ [ "Deng", "Lei", "" ], [ "Huang", "Siyuan", "" ], [ "Duan", "Yueqi", "" ], [ "Chen", "Baohua", "" ], [ "Zhou", "Jie", "" ] ]
Conventional single image based localization methods usually fail to localize a querying image when there exist large variations between the querying image and the pre-built scene. To address this, we propose an image-set querying based localization approach. When the localization by a single image fails to work, the system will ask the user to capture more auxiliary images. First, a local 3D model is established for the querying image set. Then, the pose of the querying image set is estimated by solving a nonlinear optimization problem, which aims to match the local 3D model against the pre-built scene. Experiments have shown the effectiveness and feasibility of the proposed approach.
1905.05053
Guoxian Yu
Shixing Yao, Guoxian Yu, Jun Wang, Carlotta Domeniconi and Xiangliang Zhang
Multi-View Multiple Clustering
7 pages, 5 figures, uses ijcai19.sty
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple clustering aims at exploring alternative clusterings to organize the data into meaningful groups from different perspectives. Existing multiple clustering algorithms are designed for single-view data. We assume that the individuality and commonality of multi-view data can be leveraged to generate high-quality and diverse clusterings. To this end, we propose a novel multi-view multiple clustering (MVMC) algorithm. MVMC first adapts multi-view self-representation learning to explore the individuality encoding matrices and the shared commonality matrix of multi-view data. It additionally reduces the redundancy (i.e., enhancing the individuality) among the matrices using the Hilbert-Schmidt Independence Criterion (HSIC), and collects shared information by forcing the shared matrix to be smooth across all views. It then uses matrix factorization on the individual matrices, along with the shared matrix, to generate diverse, high-quality clusterings. We further extend multiple co-clustering to multi-view data and propose a solution called multi-view multiple co-clustering (MVMCC). Our empirical study shows that MVMC (MVMCC) can exploit multi-view data to generate multiple high-quality and diverse clusterings (co-clusterings), with superior performance to the state-of-the-art methods.
[ { "created": "Mon, 13 May 2019 14:20:44 GMT", "version": "v1" } ]
2019-05-16
[ [ "Yao", "Shixing", "" ], [ "Yu", "Guoxian", "" ], [ "Wang", "Jun", "" ], [ "Domeniconi", "Carlotta", "" ], [ "Zhang", "Xiangliang", "" ] ]
Multiple clustering aims at exploring alternative clusterings to organize the data into meaningful groups from different perspectives. Existing multiple clustering algorithms are designed for single-view data. We assume that the individuality and commonality of multi-view data can be leveraged to generate high-quality and diverse clusterings. To this end, we propose a novel multi-view multiple clustering (MVMC) algorithm. MVMC first adapts multi-view self-representation learning to explore the individuality encoding matrices and the shared commonality matrix of multi-view data. It additionally reduces the redundancy (i.e., enhancing the individuality) among the matrices using the Hilbert-Schmidt Independence Criterion (HSIC), and collects shared information by forcing the shared matrix to be smooth across all views. It then uses matrix factorization on the individual matrices, along with the shared matrix, to generate diverse, high-quality clusterings. We further extend multiple co-clustering to multi-view data and propose a solution called multi-view multiple co-clustering (MVMCC). Our empirical study shows that MVMC (MVMCC) can exploit multi-view data to generate multiple high-quality and diverse clusterings (co-clusterings), with superior performance to the state-of-the-art methods.
2002.11474
Siyue Wang
Peiyan Dong, Siyue Wang, Wei Niu, Chengming Zhang, Sheng Lin, Zhengang Li, Yifan Gong, Bin Ren, Xue Lin, Yanzhi Wang, and Dingwen Tao
RTMobile: Beyond Real-Time Mobile Acceleration of RNNs for Speech Recognition
null
null
null
null
cs.SD cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic speech recognition based on recurrent neural networks (RNNs) has become prevalent on mobile devices such as smartphones. However, previous RNN compression techniques either suffer from hardware performance overhead due to irregularity or significant accuracy loss due to the preserved regularity for hardware friendliness. In this work, we propose RTMobile that leverages both a novel block-based pruning approach and compiler optimizations to accelerate RNN inference on mobile devices. Our proposed RTMobile is the first work that can achieve real-time RNN inference on mobile platforms. Experimental results demonstrate that RTMobile can significantly outperform existing RNN hardware acceleration methods in terms of inference accuracy and time. Compared with prior work on FPGA, RTMobile using the Adreno 640 embedded GPU on GRU can improve energy efficiency by about 40$\times$ while maintaining the same inference time.
[ { "created": "Wed, 19 Feb 2020 00:07:32 GMT", "version": "v1" } ]
2020-02-28
[ [ "Dong", "Peiyan", "" ], [ "Wang", "Siyue", "" ], [ "Niu", "Wei", "" ], [ "Zhang", "Chengming", "" ], [ "Lin", "Sheng", "" ], [ "Li", "Zhengang", "" ], [ "Gong", "Yifan", "" ], [ "Ren", "Bin", "" ], [ "Lin", "Xue", "" ], [ "Wang", "Yanzhi", "" ], [ "Tao", "Dingwen", "" ] ]
Automatic speech recognition based on recurrent neural networks (RNNs) has become prevalent on mobile devices such as smartphones. However, previous RNN compression techniques either suffer from hardware performance overhead due to irregularity or significant accuracy loss due to the preserved regularity for hardware friendliness. In this work, we propose RTMobile that leverages both a novel block-based pruning approach and compiler optimizations to accelerate RNN inference on mobile devices. Our proposed RTMobile is the first work that can achieve real-time RNN inference on mobile platforms. Experimental results demonstrate that RTMobile can significantly outperform existing RNN hardware acceleration methods in terms of inference accuracy and time. Compared with prior work on FPGA, RTMobile using the Adreno 640 embedded GPU on GRU can improve energy efficiency by about 40$\times$ while maintaining the same inference time.
1802.01481
Jeffrey Lienert
Jeffrey Lienert, Laura Koehly, Felix Reed-Tsochas, Christopher Steven Marcum
An efficient counting method for the colored triad census
null
Social Networks 59 (2019) 136-142
10.1016/j.socnet.2019.04.003
null
cs.DS cs.SI
http://creativecommons.org/publicdomain/zero/1.0/
The triad census is an important approach to understand local structure in network science, providing comprehensive assessments of the observed relational configurations between triples of actors in a network. However, researchers are often interested in combinations of relational and categorical nodal attributes. In this case, it is desirable to account for the label, or color, of the nodes in the triad census. In this paper, we describe an efficient algorithm for constructing the colored triad census, based, in part, on existing methods for the classic triad census. We evaluate the performance of the algorithm using empirical and simulated data for both undirected and directed graphs. The results of the simulation demonstrate that the proposed algorithm reduces computational time many-fold over the naive approach. We also apply the colored triad census to the Zachary karate club network dataset. We simultaneously show the efficiency of the algorithm, and a way to conduct a statistical test on the census by forming a null distribution from 1,000 realizations of a mixing-matrix conditioned graph and comparing the observed colored triad counts to the expected. From this, we demonstrate the method's utility in our discussion of results about homophily, heterophily, and bridging, simultaneously gained via the colored triad census. In sum, the proposed algorithm for the colored triad census brings novel utility to social network analysis in an efficient package.
[ { "created": "Mon, 5 Feb 2018 15:56:30 GMT", "version": "v1" }, { "created": "Fri, 18 May 2018 15:21:29 GMT", "version": "v2" } ]
2019-05-02
[ [ "Lienert", "Jeffrey", "" ], [ "Koehly", "Laura", "" ], [ "Reed-Tsochas", "Felix", "" ], [ "Marcum", "Christopher Steven", "" ] ]
The triad census is an important approach to understand local structure in network science, providing comprehensive assessments of the observed relational configurations between triples of actors in a network. However, researchers are often interested in combinations of relational and categorical nodal attributes. In this case, it is desirable to account for the label, or color, of the nodes in the triad census. In this paper, we describe an efficient algorithm for constructing the colored triad census, based, in part, on existing methods for the classic triad census. We evaluate the performance of the algorithm using empirical and simulated data for both undirected and directed graphs. The results of the simulation demonstrate that the proposed algorithm reduces computational time many-fold over the naive approach. We also apply the colored triad census to the Zachary karate club network dataset. We simultaneously show the efficiency of the algorithm, and a way to conduct a statistical test on the census by forming a null distribution from 1,000 realizations of a mixing-matrix conditioned graph and comparing the observed colored triad counts to the expected. From this, we demonstrate the method's utility in our discussion of results about homophily, heterophily, and bridging, simultaneously gained via the colored triad census. In sum, the proposed algorithm for the colored triad census brings novel utility to social network analysis in an efficient package.
2302.06279
Gorka Abad
Gorka Abad, Oguzhan Ersoy, Stjepan Picek, Aitor Urbieta
Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data
To appear in Network and Distributed System Security (NDSS) Symposium 2024
NDSS Symposium 2024
10.14722/ndss.2024.24334
null
cs.CR cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Deep neural networks (DNNs) have demonstrated remarkable performance across various tasks, including image and speech recognition. However, maximizing the effectiveness of DNNs requires meticulous optimization of numerous hyperparameters and network parameters through training. Moreover, high-performance DNNs entail many parameters, which consume significant energy during training. In order to overcome these challenges, researchers have turned to spiking neural networks (SNNs), which offer enhanced energy efficiency and biologically plausible data processing capabilities, rendering them highly suitable for sensory data tasks, particularly in neuromorphic data. Despite their advantages, SNNs, like DNNs, are susceptible to various threats, including adversarial examples and backdoor attacks. Yet, the field of SNNs still needs to be explored in terms of understanding and countering these attacks. This paper delves into backdoor attacks in SNNs using neuromorphic datasets and diverse triggers. Specifically, we explore backdoor triggers within neuromorphic data that can manipulate their position and color, providing a broader scope of possibilities than conventional triggers in domains like images. We present various attack strategies, achieving an attack success rate of up to 100% while maintaining a negligible impact on clean accuracy. Furthermore, we assess these attacks' stealthiness, revealing that our most potent attacks possess significant stealth capabilities. Lastly, we adapt several state-of-the-art defenses from the image domain, evaluating their efficacy on neuromorphic data and uncovering instances where they fall short, leading to compromised performance.
[ { "created": "Mon, 13 Feb 2023 11:34:17 GMT", "version": "v1" }, { "created": "Mon, 3 Jul 2023 07:03:22 GMT", "version": "v2" }, { "created": "Mon, 5 Feb 2024 11:09:00 GMT", "version": "v3" } ]
2024-06-14
[ [ "Abad", "Gorka", "" ], [ "Ersoy", "Oguzhan", "" ], [ "Picek", "Stjepan", "" ], [ "Urbieta", "Aitor", "" ] ]
Deep neural networks (DNNs) have demonstrated remarkable performance across various tasks, including image and speech recognition. However, maximizing the effectiveness of DNNs requires meticulous optimization of numerous hyperparameters and network parameters through training. Moreover, high-performance DNNs entail many parameters, which consume significant energy during training. In order to overcome these challenges, researchers have turned to spiking neural networks (SNNs), which offer enhanced energy efficiency and biologically plausible data processing capabilities, rendering them highly suitable for sensory data tasks, particularly in neuromorphic data. Despite their advantages, SNNs, like DNNs, are susceptible to various threats, including adversarial examples and backdoor attacks. Yet, the field of SNNs still needs to be explored in terms of understanding and countering these attacks. This paper delves into backdoor attacks in SNNs using neuromorphic datasets and diverse triggers. Specifically, we explore backdoor triggers within neuromorphic data that can manipulate their position and color, providing a broader scope of possibilities than conventional triggers in domains like images. We present various attack strategies, achieving an attack success rate of up to 100% while maintaining a negligible impact on clean accuracy. Furthermore, we assess these attacks' stealthiness, revealing that our most potent attacks possess significant stealth capabilities. Lastly, we adapt several state-of-the-art defenses from the image domain, evaluating their efficacy on neuromorphic data and uncovering instances where they fall short, leading to compromised performance.
2205.01685
Sajal Saha
Sajal Saha, Anwar Haque, and Greg Sidebottom
Deep Sequence Modeling for Anomalous ISP Traffic Prediction
6 pages, 6 images, To appear in the Proceedings of IEEE International Conference on Communications, Seoul, South Korea, 2022. arXiv admin note: substantial text overlap with arXiv:2205.01300
null
null
null
cs.LG cs.NI eess.SP
http://creativecommons.org/licenses/by-sa/4.0/
Internet traffic in the real world is susceptible to various external and internal factors which may abruptly change the normal traffic flow. Those unexpected changes are considered outliers in traffic. Deep sequence models have been used to predict complex IP traffic, but their comparative performance for anomalous traffic has not been studied extensively. In this paper, we investigated and evaluated the performance of different deep sequence models for anomalous traffic prediction. Several deep sequence models were implemented to predict real traffic with and without outliers, showing the significance of outlier detection in real-world traffic prediction. First, two outlier detection techniques, the Three-Sigma rule and Isolation Forest, were applied to identify anomalies. Second, we adjusted those abnormal data points using the Backward Filling technique before training the model. Finally, the performance of different models was compared for abnormal and adjusted traffic. LSTM_Encoder_Decoder (LSTM_En_De) is the best prediction model in our experiment, reducing the deviation between actual and predicted traffic by more than 11% after adjusting the outliers. All other models, including Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), LSTM_En_De with Attention layer (LSTM_En_De_Atn), and Gated Recurrent Unit (GRU), show better prediction after replacing the outliers, decreasing prediction error by more than 29%, 24%, 19%, and 10%, respectively. Our experimental results indicate that the outliers in the data can significantly impact the quality of the prediction. Thus, outlier detection and mitigation assist the deep sequence model in learning the general trend and making better predictions.
[ { "created": "Tue, 3 May 2022 17:01:45 GMT", "version": "v1" } ]
2022-05-05
[ [ "Saha", "Sajal", "" ], [ "Haque", "Anwar", "" ], [ "Sidebottom", "Greg", "" ] ]
Internet traffic in the real world is susceptible to various external and internal factors which may abruptly change the normal traffic flow. Those unexpected changes are considered outliers in traffic. Deep sequence models have been used to predict complex IP traffic, but their comparative performance for anomalous traffic has not been studied extensively. In this paper, we investigated and evaluated the performance of different deep sequence models for anomalous traffic prediction. Several deep sequence models were implemented to predict real traffic with and without outliers, showing the significance of outlier detection in real-world traffic prediction. First, two outlier detection techniques, the Three-Sigma rule and Isolation Forest, were applied to identify anomalies. Second, we adjusted those abnormal data points using the Backward Filling technique before training the model. Finally, the performance of different models was compared for abnormal and adjusted traffic. LSTM_Encoder_Decoder (LSTM_En_De) is the best prediction model in our experiment, reducing the deviation between actual and predicted traffic by more than 11% after adjusting the outliers. All other models, including Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), LSTM_En_De with Attention layer (LSTM_En_De_Atn), and Gated Recurrent Unit (GRU), show better prediction after replacing the outliers, decreasing prediction error by more than 29%, 24%, 19%, and 10%, respectively. Our experimental results indicate that the outliers in the data can significantly impact the quality of the prediction. Thus, outlier detection and mitigation assist the deep sequence model in learning the general trend and making better predictions.
2405.06749
Dimitrios Kollias
Vasileios Karampinis, Anastasios Arsenos, Orfeas Filippopoulos, Evangelos Petrongonas, Christos Skliros, Dimitrios Kollias, Stefanos Kollias and Athanasios Voulodimos
Ensuring UAV Safety: A Vision-only and Real-time Framework for Collision Avoidance Through Object Detection, Tracking, and Distance Estimation
accepted at ICUAS 2024
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
In the last twenty years, unmanned aerial vehicles (UAVs) have garnered growing interest due to their expanding applications in both military and civilian domains. Detecting non-cooperative aerial vehicles with efficiency and estimating collisions accurately are pivotal for achieving fully autonomous aircraft and facilitating Advanced Air Mobility (AAM). This paper presents a deep-learning framework that utilizes optical sensors for the detection, tracking, and distance estimation of non-cooperative aerial vehicles. In implementing this comprehensive sensing framework, the availability of depth information is essential for enabling autonomous aerial vehicles to perceive and navigate around obstacles. In this work, we propose a method for estimating the distance information of a detected aerial object in real time using only the input of a monocular camera. In order to train our deep learning components for the object detection, tracking and depth estimation tasks we utilize the Amazon Airborne Object Tracking (AOT) Dataset. In contrast to previous approaches that integrate the depth estimation module into the object detector, our method formulates the problem as image-to-image translation. We employ a separate lightweight encoder-decoder network for efficient and robust depth estimation. In a nutshell, the object detection module identifies and localizes obstacles, conveying this information to both the tracking module for monitoring obstacle movement and the depth estimation module for calculating distances. Our approach is evaluated on the Airborne Object Tracking (AOT) dataset which is the largest (to the best of our knowledge) air-to-air airborne object dataset.
[ { "created": "Fri, 10 May 2024 18:06:41 GMT", "version": "v1" }, { "created": "Thu, 16 May 2024 14:24:37 GMT", "version": "v2" } ]
2024-05-17
[ [ "Karampinis", "Vasileios", "" ], [ "Arsenos", "Anastasios", "" ], [ "Filippopoulos", "Orfeas", "" ], [ "Petrongonas", "Evangelos", "" ], [ "Skliros", "Christos", "" ], [ "Kollias", "Dimitrios", "" ], [ "Kollias", "Stefanos", "" ], [ "Voulodimos", "Athanasios", "" ] ]
In the last twenty years, unmanned aerial vehicles (UAVs) have garnered growing interest due to their expanding applications in both military and civilian domains. Detecting non-cooperative aerial vehicles with efficiency and estimating collisions accurately are pivotal for achieving fully autonomous aircraft and facilitating Advanced Air Mobility (AAM). This paper presents a deep-learning framework that utilizes optical sensors for the detection, tracking, and distance estimation of non-cooperative aerial vehicles. In implementing this comprehensive sensing framework, the availability of depth information is essential for enabling autonomous aerial vehicles to perceive and navigate around obstacles. In this work, we propose a method for estimating the distance information of a detected aerial object in real time using only the input of a monocular camera. In order to train our deep learning components for the object detection, tracking and depth estimation tasks we utilize the Amazon Airborne Object Tracking (AOT) Dataset. In contrast to previous approaches that integrate the depth estimation module into the object detector, our method formulates the problem as image-to-image translation. We employ a separate lightweight encoder-decoder network for efficient and robust depth estimation. In a nutshell, the object detection module identifies and localizes obstacles, conveying this information to both the tracking module for monitoring obstacle movement and the depth estimation module for calculating distances. Our approach is evaluated on the Airborne Object Tracking (AOT) dataset which is the largest (to the best of our knowledge) air-to-air airborne object dataset.
1608.03819
Chenyou Fan
Chenyou Fan, David J. Crandall
DeepDiary: Automatic Caption Generation for Lifelogging Image Streams
This is an expanded preprint of a paper appearing at the ECCV International Workshop on Egocentric Perception, Interaction, and Computing
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lifelogging cameras capture everyday life from a first-person perspective, but generate so much data that it is hard for users to browse and organize their image collections effectively. In this paper, we propose to use automatic image captioning algorithms to generate textual representations of these collections. We develop and explore novel techniques based on deep learning to generate captions for both individual images and image streams, using temporal consistency constraints to create summaries that are both more compact and less noisy. We evaluate our techniques with quantitative and qualitative results, and apply captioning to an image retrieval application for finding potentially private images. Our results suggest that our automatic captioning algorithms, while imperfect, may work well enough to help users manage lifelogging photo collections.
[ { "created": "Fri, 12 Aug 2016 15:17:33 GMT", "version": "v1" } ]
2016-08-15
[ [ "Fan", "Chenyou", "" ], [ "Crandall", "David J.", "" ] ]
Lifelogging cameras capture everyday life from a first-person perspective, but generate so much data that it is hard for users to browse and organize their image collections effectively. In this paper, we propose to use automatic image captioning algorithms to generate textual representations of these collections. We develop and explore novel techniques based on deep learning to generate captions for both individual images and image streams, using temporal consistency constraints to create summaries that are both more compact and less noisy. We evaluate our techniques with quantitative and qualitative results, and apply captioning to an image retrieval application for finding potentially private images. Our results suggest that our automatic captioning algorithms, while imperfect, may work well enough to help users manage lifelogging photo collections.
2406.05374
Tao He
Tao He, Lizi Liao, Yixin Cao, Yuanxing Liu, Ming Liu, Zerui Chen, Bing Qin
Planning Like Human: A Dual-process Framework for Dialogue Planning
24 pages, 5 figures, ACL 2024 main conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In proactive dialogue, the challenge lies not just in generating responses but in steering conversations toward predetermined goals, a task where Large Language Models (LLMs) typically struggle due to their reactive nature. Traditional approaches to enhance dialogue planning in LLMs, ranging from elaborate prompt engineering to the integration of policy networks, either face efficiency issues or deliver suboptimal performance. Inspired by the dual-process theory in psychology, which identifies two distinct modes of thinking - intuitive (fast) and analytical (slow), we propose the Dual-Process Dialogue Planning (DPDP) framework. DPDP embodies this theory through two complementary planning systems: an instinctive policy model for familiar contexts and a deliberative Monte Carlo Tree Search (MCTS) mechanism for complex, novel scenarios. This dual strategy is further coupled with a novel two-stage training regimen: offline Reinforcement Learning for robust initial policy model formation followed by MCTS-enhanced on-the-fly learning, which ensures a dynamic balance between efficiency and strategic depth. Our empirical evaluations across diverse dialogue tasks affirm DPDP's superiority in achieving both high-quality dialogues and operational efficiency, outpacing existing methods.
[ { "created": "Sat, 8 Jun 2024 06:52:47 GMT", "version": "v1" } ]
2024-06-11
[ [ "He", "Tao", "" ], [ "Liao", "Lizi", "" ], [ "Cao", "Yixin", "" ], [ "Liu", "Yuanxing", "" ], [ "Liu", "Ming", "" ], [ "Chen", "Zerui", "" ], [ "Qin", "Bing", "" ] ]
In proactive dialogue, the challenge lies not just in generating responses but in steering conversations toward predetermined goals, a task where Large Language Models (LLMs) typically struggle due to their reactive nature. Traditional approaches to enhance dialogue planning in LLMs, ranging from elaborate prompt engineering to the integration of policy networks, either face efficiency issues or deliver suboptimal performance. Inspired by the dual-process theory in psychology, which identifies two distinct modes of thinking - intuitive (fast) and analytical (slow), we propose the Dual-Process Dialogue Planning (DPDP) framework. DPDP embodies this theory through two complementary planning systems: an instinctive policy model for familiar contexts and a deliberative Monte Carlo Tree Search (MCTS) mechanism for complex, novel scenarios. This dual strategy is further coupled with a novel two-stage training regimen: offline Reinforcement Learning for robust initial policy model formation followed by MCTS-enhanced on-the-fly learning, which ensures a dynamic balance between efficiency and strategic depth. Our empirical evaluations across diverse dialogue tasks affirm DPDP's superiority in achieving both high-quality dialogues and operational efficiency, outpacing existing methods.
2312.03051
Isaac Liao
Isaac Liao, Ziming Liu, Max Tegmark
Generating Interpretable Networks using Hypernetworks
15 pages, 7 figures
null
null
null
cs.LG cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An essential goal in mechanistic interpretability is to decode a network, i.e., to convert a neural network's raw weights to an interpretable algorithm. Given the difficulty of the decoding problem, progress has been made to understand the easier encoding problem, i.e., to convert an interpretable algorithm into network weights. Previous works focus on encoding existing algorithms into networks, which are interpretable by definition. However, focusing on encoding limits the possibility of discovering new algorithms that humans have never stumbled upon, but that are nevertheless interpretable. In this work, we explore the possibility of using hypernetworks to generate interpretable networks whose underlying algorithms are not yet known. The hypernetwork is carefully designed such that it can control network complexity, leading to a diverse family of interpretable algorithms ranked by their complexity. All of them are interpretable in hindsight, although some of them are less intuitive to humans, hence providing new insights regarding how to "think" like a neural network. For the task of computing L1 norms, hypernetworks find three algorithms: (a) the double-sided algorithm, (b) the convexity algorithm, (c) the pudding algorithm, although only the first algorithm was expected by the authors before experiments. We automatically classify these algorithms and analyze how these algorithmic phases develop during training, as well as how they are affected by complexity control. Furthermore, we show that a trained hypernetwork can correctly construct models for input dimensions not seen in training, demonstrating systematic generalization.
[ { "created": "Tue, 5 Dec 2023 18:55:32 GMT", "version": "v1" } ]
2023-12-07
[ [ "Liao", "Isaac", "" ], [ "Liu", "Ziming", "" ], [ "Tegmark", "Max", "" ] ]
An essential goal in mechanistic interpretability is to decode a network, i.e., to convert a neural network's raw weights to an interpretable algorithm. Given the difficulty of the decoding problem, progress has been made to understand the easier encoding problem, i.e., to convert an interpretable algorithm into network weights. Previous works focus on encoding existing algorithms into networks, which are interpretable by definition. However, focusing on encoding limits the possibility of discovering new algorithms that humans have never stumbled upon, but that are nevertheless interpretable. In this work, we explore the possibility of using hypernetworks to generate interpretable networks whose underlying algorithms are not yet known. The hypernetwork is carefully designed such that it can control network complexity, leading to a diverse family of interpretable algorithms ranked by their complexity. All of them are interpretable in hindsight, although some of them are less intuitive to humans, hence providing new insights regarding how to "think" like a neural network. For the task of computing L1 norms, hypernetworks find three algorithms: (a) the double-sided algorithm, (b) the convexity algorithm, (c) the pudding algorithm, although only the first algorithm was expected by the authors before experiments. We automatically classify these algorithms and analyze how these algorithmic phases develop during training, as well as how they are affected by complexity control. Furthermore, we show that a trained hypernetwork can correctly construct models for input dimensions not seen in training, demonstrating systematic generalization.
1606.03662
Haishan Wu
Mengwen Xu, Tianyi Wang, Zhengwei Wu, Jingbo Zhou, Jian Li, Haishan Wu
Store Location Selection via Mining Search Query Logs of Baidu Maps
null
null
null
null
cs.AI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Choosing a good location when opening a new store is crucial for the future success of a business. Traditional methods include offline manual survey, which is very time consuming, and analytic models based on census data, which are unable to adapt to the dynamic market. The rapid increase of the availability of big data from various types of mobile devices, such as online query data and offline positioning data, provides us with the possibility to develop automatic and accurate data-driven prediction models for business store placement. In this paper, we propose a Demand Distribution Driven Store Placement (D3SP) framework for business store placement by mining search query data from Baidu Maps. D3SP first detects the spatial-temporal distributions of customer demands on different business services via query data from Baidu Maps, the largest online map search engine in China, and detects the gaps between demand and supply. Then we determine candidate locations via clustering such gaps. In the final stage, we solve the location optimization problem by predicting and ranking the number of customers. We not only deploy supervised regression models to predict the number of customers, but also learning-to-rank models to directly rank the locations. We evaluate our framework on various types of businesses in real-world cases, and the experimental results demonstrate the effectiveness of our methods. D3SP as the core function for store placement has already been implemented as a core component of our business analytics platform and could be potentially used by chain store merchants on Baidu Nuomi.
[ { "created": "Sun, 12 Jun 2016 03:42:10 GMT", "version": "v1" } ]
2016-06-14
[ [ "Xu", "Mengwen", "" ], [ "Wang", "Tianyi", "" ], [ "Wu", "Zhengwei", "" ], [ "Zhou", "Jingbo", "" ], [ "Li", "Jian", "" ], [ "Wu", "Haishan", "" ] ]
Choosing a good location when opening a new store is crucial for the future success of a business. Traditional methods include offline manual survey, which is very time consuming, and analytic models based on census data, which are unable to adapt to the dynamic market. The rapid increase of the availability of big data from various types of mobile devices, such as online query data and offline positioning data, provides us with the possibility to develop automatic and accurate data-driven prediction models for business store placement. In this paper, we propose a Demand Distribution Driven Store Placement (D3SP) framework for business store placement by mining search query data from Baidu Maps. D3SP first detects the spatial-temporal distributions of customer demands on different business services via query data from Baidu Maps, the largest online map search engine in China, and detects the gaps between demand and supply. Then we determine candidate locations via clustering such gaps. In the final stage, we solve the location optimization problem by predicting and ranking the number of customers. We not only deploy supervised regression models to predict the number of customers, but also learning-to-rank models to directly rank the locations. We evaluate our framework on various types of businesses in real-world cases, and the experimental results demonstrate the effectiveness of our methods. D3SP as the core function for store placement has already been implemented as a core component of our business analytics platform and could be potentially used by chain store merchants on Baidu Nuomi.
1904.00378
Giuseppe Silano
Giuseppe Silano, Luigi Iannelli
MAT-Fly: An Educational Platform for Simulating Unmanned Aerial Vehicles Aimed to Detect and Track Moving Objects
11 pages, 15 figures, journal paper
IEEE Access, 2021
10.1109/ACCESS.2021.3064758
null
cs.RO cs.SY
http://creativecommons.org/licenses/by/4.0/
The main motivation of this work is to propose a simulation approach for a specific task within the Unmanned Aerial Vehicle (UAV) field, i.e., the visual detection and tracking of arbitrary moving objects. In particular, we describe MAT-Fly, a numerical simulation platform for multi-rotor aircraft characterized by its ease of use and control development. The platform is based on Matlab and the MathWorks Virtual Reality (VR) and Computer Vision System (CVS) toolboxes that work together to simulate the behavior of a quad-rotor while tracking a car that moves along a nontrivial path. The VR toolbox has been chosen due to the familiarity that students have with Matlab and because it does not require a notable effort by the user for the learning and development phase thanks to its simple structure. The overall architecture is quite modular so that each block can be easily replaced with others, simplifying code reuse and platform customization. Some simple testbeds are presented to show the validity of the approach and how the platform works. The simulator is released as open-source, making it possible to go through any part of the system, and is available for educational purposes.
[ { "created": "Sun, 31 Mar 2019 10:56:47 GMT", "version": "v1" }, { "created": "Wed, 21 Aug 2019 15:25:19 GMT", "version": "v2" }, { "created": "Fri, 17 Jan 2020 15:56:56 GMT", "version": "v3" }, { "created": "Wed, 10 Mar 2021 13:47:33 GMT", "version": "v4" } ]
2021-03-11
[ [ "Silano", "Giuseppe", "" ], [ "Iannelli", "Luigi", "" ] ]
The main motivation of this work is to propose a simulation approach for a specific task within the Unmanned Aerial Vehicle (UAV) field, i.e., the visual detection and tracking of arbitrary moving objects. In particular, we describe MAT-Fly, a numerical simulation platform for multi-rotor aircraft characterized by its ease of use and control development. The platform is based on Matlab and the MathWorks Virtual Reality (VR) and Computer Vision System (CVS) toolboxes that work together to simulate the behavior of a quad-rotor while tracking a car that moves along a nontrivial path. The VR toolbox has been chosen due to the familiarity that students have with Matlab and because it does not require a notable effort by the user for the learning and development phase thanks to its simple structure. The overall architecture is quite modular so that each block can be easily replaced with others, simplifying code reuse and platform customization. Some simple testbeds are presented to show the validity of the approach and how the platform works. The simulator is released as open-source, making it possible to go through any part of the system, and is available for educational purposes.
2401.14743
Takanori Ugai
Takanori Ugai, Shusaku Egami, Swe Nwe Nwe Htun, Kouji Kozaki, Takahiro Kawamura, Ken Fukuda
Synthetic Multimodal Dataset for Empowering Safety and Well-being in Home Environments
7 pages, 2 figures,4 tables
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper presents a synthetic multimodal dataset of daily activities that fuses video data from a 3D virtual space simulator with knowledge graphs depicting the spatiotemporal context of the activities. The dataset is developed for the Knowledge Graph Reasoning Challenge for Social Issues (KGRC4SI), which focuses on identifying and addressing hazardous situations in the home environment. The dataset is available to the public as a valuable resource for researchers and practitioners developing innovative solutions that recognize human behaviors to enhance safety and well-being in home environments.
[ { "created": "Fri, 26 Jan 2024 10:05:41 GMT", "version": "v1" } ]
2024-01-29
[ [ "Ugai", "Takanori", "" ], [ "Egami", "Shusaku", "" ], [ "Htun", "Swe Nwe Nwe", "" ], [ "Kozaki", "Kouji", "" ], [ "Kawamura", "Takahiro", "" ], [ "Fukuda", "Ken", "" ] ]
This paper presents a synthetic multimodal dataset of daily activities that fuses video data from a 3D virtual space simulator with knowledge graphs depicting the spatiotemporal context of the activities. The dataset is developed for the Knowledge Graph Reasoning Challenge for Social Issues (KGRC4SI), which focuses on identifying and addressing hazardous situations in the home environment. The dataset is available to the public as a valuable resource for researchers and practitioners developing innovative solutions that recognize human behaviors to enhance safety and well-being in home environments.
2301.04733
Weihua Zhou
Chen Zhao, Zhihui Xu, Jingfeng Jiang, Michele Esposito, Drew Pienta, Guang-Uei Hung, Weihua Zhou
AGMN: Association Graph-based Graph Matching Network for Coronary Artery Semantic Labeling on Invasive Coronary Angiograms
26 pages, 7 figures
null
10.1016/j.patcog.2023.109789
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Semantic labeling of coronary arterial segments in invasive coronary angiography (ICA) is important for automated assessment and report generation of coronary artery stenosis in the computer-aided diagnosis of coronary artery disease (CAD). Inspired by the training procedure of interventional cardiologists for interpreting the structure of coronary arteries, we propose an association graph-based graph matching network (AGMN) for coronary arterial semantic labeling. We first extract the vascular tree from invasive coronary angiography (ICA) and convert it into multiple individual graphs. Then, an association graph is constructed from two individual graphs where each vertex represents the relationship between two arterial segments. Using the association graph, the AGMN extracts the vertex features by the embedding module, aggregates the features from adjacent vertices and edges by graph convolution network, and decodes the features to generate the semantic mappings between arteries. By learning the mapping of arterial branches between two individual graphs, the unlabeled arterial segments are classified by the labeled segments to achieve semantic labeling. A dataset containing 263 ICAs was employed to train and validate the proposed model, and a five-fold cross-validation scheme was performed. Our AGMN model achieved an average accuracy of 0.8264, an average precision of 0.8276, an average recall of 0.8264, and an average F1-score of 0.8262, which significantly outperformed existing coronary artery semantic labeling methods. In conclusion, we have developed and validated a new algorithm with high accuracy, interpretability, and robustness for coronary artery semantic labeling on ICAs.
[ { "created": "Wed, 11 Jan 2023 21:54:28 GMT", "version": "v1" } ]
2023-12-22
[ [ "Zhao", "Chen", "" ], [ "Xu", "Zhihui", "" ], [ "Jiang", "Jingfeng", "" ], [ "Esposito", "Michele", "" ], [ "Pienta", "Drew", "" ], [ "Hung", "Guang-Uei", "" ], [ "Zhou", "Weihua", "" ] ]
Semantic labeling of coronary arterial segments in invasive coronary angiography (ICA) is important for automated assessment and report generation of coronary artery stenosis in the computer-aided diagnosis of coronary artery disease (CAD). Inspired by the training procedure of interventional cardiologists for interpreting the structure of coronary arteries, we propose an association graph-based graph matching network (AGMN) for coronary arterial semantic labeling. We first extract the vascular tree from invasive coronary angiography (ICA) and convert it into multiple individual graphs. Then, an association graph is constructed from two individual graphs where each vertex represents the relationship between two arterial segments. Using the association graph, the AGMN extracts the vertex features by the embedding module, aggregates the features from adjacent vertices and edges by graph convolution network, and decodes the features to generate the semantic mappings between arteries. By learning the mapping of arterial branches between two individual graphs, the unlabeled arterial segments are classified by the labeled segments to achieve semantic labeling. A dataset containing 263 ICAs was employed to train and validate the proposed model, and a five-fold cross-validation scheme was performed. Our AGMN model achieved an average accuracy of 0.8264, an average precision of 0.8276, an average recall of 0.8264, and an average F1-score of 0.8262, which significantly outperformed existing coronary artery semantic labeling methods. In conclusion, we have developed and validated a new algorithm with high accuracy, interpretability, and robustness for coronary artery semantic labeling on ICAs.
2104.03904
Bilal Abu-Salih
Bilal Abu-Salih, Pornpit Wongthongtham, Dengya Zhu, Kit Yan Chan, Amit Rudra
Social Big Data: An Overview and Applications
null
null
10.1007/978-981-33-6652-7_1
null
cs.SI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The emergence of online social media services has made a qualitative leap and brought profound changes to various aspects of human, cultural, intellectual, and social life. These significant Big data tributaries have further transformed business processes by establishing convergent and transparent dialogues between businesses and their customers. Therefore, analysing the flow of social data content is necessary in order to enhance business practices, to augment brand awareness, to develop insights on target markets, to detect and identify positive and negative customer sentiments, etc., thereby achieving the hoped-for added value. This chapter presents an overview of the term Social Big Data and its definition. This chapter also lays the foundation for several applications and analytics that are broadly discussed in this book.
[ { "created": "Thu, 1 Apr 2021 14:39:23 GMT", "version": "v1" } ]
2021-04-09
[ [ "Abu-Salih", "Bilal", "" ], [ "Wongthongtham", "Pornpit", "" ], [ "Zhu", "Dengya", "" ], [ "Chan", "Kit Yan", "" ], [ "Rudra", "Amit", "" ] ]
The emergence of online social media services has made a qualitative leap and brought profound changes to various aspects of human, cultural, intellectual, and social life. These significant Big data tributaries have further transformed business processes by establishing convergent and transparent dialogues between businesses and their customers. Therefore, analysing the flow of social data content is necessary in order to enhance business practices, to augment brand awareness, to develop insights on target markets, to detect and identify positive and negative customer sentiments, etc., thereby achieving the hoped-for added value. This chapter presents an overview of the term Social Big Data and its definition. This chapter also lays the foundation for several applications and analytics that are broadly discussed in this book.
2308.15768
Michelle S Lam
Michelle S. Lam, Ayush Pandit, Colin H. Kalicki, Rachit Gupta, Poonam Sahoo, Dana\"e Metaxa
Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising
To appear at CSCW 2023
null
10.1145/3610209
null
cs.HC cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Algorithm audits are powerful tools for studying black-box systems. While very effective in examining technical components, the method stops short of a sociotechnical frame, which would also consider users as an integral and dynamic part of the system. Addressing this gap, we propose the concept of sociotechnical auditing: auditing methods that evaluate algorithmic systems at the sociotechnical level, focusing on the interplay between algorithms and users as each impacts the other. Just as algorithm audits probe an algorithm with varied inputs and observe outputs, a sociotechnical audit (STA) additionally probes users, exposing them to different algorithmic behavior and measuring resulting attitudes and behaviors. To instantiate this method, we develop Intervenr, a platform for conducting browser-based, longitudinal sociotechnical audits with consenting, compensated participants. Intervenr investigates the algorithmic content users encounter online and coordinates systematic client-side interventions to understand how users change in response. As a case study, we deploy Intervenr in a two-week sociotechnical audit of online advertising (N=244) to investigate the central premise that personalized ad targeting is more effective on users. In the first week, we collect all browser ads delivered to users, and in the second, we deploy an ablation-style intervention that disrupts normal targeting by randomly pairing participants and swapping all their ads. We collect user-oriented metrics (self-reported ad interest and feeling of representation) and advertiser-oriented metrics (ad views, clicks, and recognition) throughout, along with a total of over 500,000 ads. Our STA finds that targeted ads indeed perform better with users, but also that users begin to acclimate to different ads in only a week, casting doubt on the primacy of personalized ad targeting given the impact of repeated exposure.
[ { "created": "Wed, 30 Aug 2023 05:26:47 GMT", "version": "v1" } ]
2023-08-31
[ [ "Lam", "Michelle S.", "" ], [ "Pandit", "Ayush", "" ], [ "Kalicki", "Colin H.", "" ], [ "Gupta", "Rachit", "" ], [ "Sahoo", "Poonam", "" ], [ "Metaxa", "Danaë", "" ] ]
Algorithm audits are powerful tools for studying black-box systems. While very effective in examining technical components, the method stops short of a sociotechnical frame, which would also consider users as an integral and dynamic part of the system. Addressing this gap, we propose the concept of sociotechnical auditing: auditing methods that evaluate algorithmic systems at the sociotechnical level, focusing on the interplay between algorithms and users as each impacts the other. Just as algorithm audits probe an algorithm with varied inputs and observe outputs, a sociotechnical audit (STA) additionally probes users, exposing them to different algorithmic behavior and measuring resulting attitudes and behaviors. To instantiate this method, we develop Intervenr, a platform for conducting browser-based, longitudinal sociotechnical audits with consenting, compensated participants. Intervenr investigates the algorithmic content users encounter online and coordinates systematic client-side interventions to understand how users change in response. As a case study, we deploy Intervenr in a two-week sociotechnical audit of online advertising (N=244) to investigate the central premise that personalized ad targeting is more effective on users. In the first week, we collect all browser ads delivered to users, and in the second, we deploy an ablation-style intervention that disrupts normal targeting by randomly pairing participants and swapping all their ads. We collect user-oriented metrics (self-reported ad interest and feeling of representation) and advertiser-oriented metrics (ad views, clicks, and recognition) throughout, along with a total of over 500,000 ads. Our STA finds that targeted ads indeed perform better with users, but also that users begin to acclimate to different ads in only a week, casting doubt on the primacy of personalized ad targeting given the impact of repeated exposure.
2204.13997
W B Langdon
William B. Langdon
Failed Disruption Propagation in Integer Genetic Programming
Long version of GECCO 2022 poster
null
10.1145/3520304.3528878
null
cs.NE cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
We inject a random value into the evaluation of highly evolved deep integer GP trees 9,743,720 times and find that 99.7 percent of these disruptions fail to propagate to the program's output, suggesting crossover and mutation's impact is dissipated and seldom propagates outside the program. Indeed only errors near the root node have impact and disruption falls exponentially with depth at between exp(-depth/3) and exp(-depth/5) for recursive Fibonacci GP trees, allowing five to seven levels of nesting between the runtime perturbation and an optimal test oracle for it to detect most errors. Information theory explains that this locally flat fitness landscape is due to FDP (failed disruption propagation). Overflow is not important and instead, integer GP, like deep symbolic regression floating point GP and software in general, is not fragile, is robust, is not chaotic and suffers little from Lorenz' butterfly. Keywords: genetic algorithms, genetic programming, SBSE, information loss, information funnels, entropy, evolvability, mutational robustness, optimal test oracle placement, neutral networks, software robustness, correctness attraction, diversity, software testing, theory of bloat, introns
[ { "created": "Mon, 4 Apr 2022 07:20:52 GMT", "version": "v1" } ]
2022-05-02
[ [ "Langdon", "William B.", "" ] ]
We inject a random value into the evaluation of highly evolved deep integer GP trees 9,743,720 times and find that 99.7 percent of these disruptions fail to propagate to the program's output, suggesting crossover and mutation's impact is dissipated and seldom propagates outside the program. Indeed only errors near the root node have impact and disruption falls exponentially with depth at between exp(-depth/3) and exp(-depth/5) for recursive Fibonacci GP trees, allowing five to seven levels of nesting between the runtime perturbation and an optimal test oracle for it to detect most errors. Information theory explains that this locally flat fitness landscape is due to FDP (failed disruption propagation). Overflow is not important and instead, integer GP, like deep symbolic regression floating point GP and software in general, is not fragile, is robust, is not chaotic and suffers little from Lorenz' butterfly. Keywords: genetic algorithms, genetic programming, SBSE, information loss, information funnels, entropy, evolvability, mutational robustness, optimal test oracle placement, neutral networks, software robustness, correctness attraction, diversity, software testing, theory of bloat, introns
1612.03316
Omar Alonso
Omar Alonso
Label Visualization and Exploration in IR
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is a renaissance in visual analytics systems for data analysis and sharing, in particular, in the current wave of big data applications. We introduce RAVE, a prototype that automates the generation of an interface that uses facets and visualization techniques for exploring and analyzing relevance assessments data sets collected via crowdsourcing. We present a technical description of the main components and demonstrate its use.
[ { "created": "Sat, 10 Dec 2016 16:33:06 GMT", "version": "v1" } ]
2016-12-13
[ [ "Alonso", "Omar", "" ] ]
There is a renaissance in visual analytics systems for data analysis and sharing, in particular, in the current wave of big data applications. We introduce RAVE, a prototype that automates the generation of an interface that uses facets and visualization techniques for exploring and analyzing relevance assessments data sets collected via crowdsourcing. We present a technical description of the main components and demonstrate its use.
2407.07046
Yangmin Li
Yangmin Li, Ruiqi Zhu, Wengen Li
CorMulT: A Semi-supervised Modality Correlation-aware Multimodal Transformer for Sentiment Analysis
null
null
null
null
cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal sentiment analysis is an active research area that combines multiple data modalities, e.g., text, image and audio, to analyze human emotions and benefits a variety of applications. Existing multimodal sentiment analysis methods can be classified as modality interaction-based methods, modality transformation-based methods and modality similarity-based methods. However, most of these methods highly rely on the strong correlations between modalities, and cannot fully uncover and utilize the correlations between modalities to enhance sentiment analysis. Therefore, these methods usually perform poorly when identifying the sentiment of multimodal data with weak correlations. To address this issue, we propose a two-stage semi-supervised model termed Correlation-aware Multimodal Transformer (CorMulT), which consists of a pre-training stage and a prediction stage. At the pre-training stage, a modality correlation contrastive learning module is designed to efficiently learn modality correlation coefficients between different modalities. At the prediction stage, the learned correlation coefficients are fused with modality representations to make the sentiment prediction. According to the experiments on the popular multimodal dataset CMU-MOSEI, CorMulT clearly surpasses state-of-the-art multimodal sentiment analysis methods.
[ { "created": "Tue, 9 Jul 2024 17:07:29 GMT", "version": "v1" } ]
2024-07-10
[ [ "Li", "Yangmin", "" ], [ "Zhu", "Ruiqi", "" ], [ "Li", "Wengen", "" ] ]
Multimodal sentiment analysis is an active research area that combines multiple data modalities, e.g., text, image and audio, to analyze human emotions and benefits a variety of applications. Existing multimodal sentiment analysis methods can be classified as modality interaction-based methods, modality transformation-based methods and modality similarity-based methods. However, most of these methods highly rely on the strong correlations between modalities, and cannot fully uncover and utilize the correlations between modalities to enhance sentiment analysis. Therefore, these methods usually perform poorly when identifying the sentiment of multimodal data with weak correlations. To address this issue, we propose a two-stage semi-supervised model termed Correlation-aware Multimodal Transformer (CorMulT), which consists of a pre-training stage and a prediction stage. At the pre-training stage, a modality correlation contrastive learning module is designed to efficiently learn modality correlation coefficients between different modalities. At the prediction stage, the learned correlation coefficients are fused with modality representations to make the sentiment prediction. According to the experiments on the popular multimodal dataset CMU-MOSEI, CorMulT clearly surpasses state-of-the-art multimodal sentiment analysis methods.
2407.14417
HamidReza Imani
HamidReza Imani, Abdolah Amirany, and Tarek El-Ghazawi
Mixture of Experts with Mixture of Precisions for Tuning Quality of Service
null
null
null
null
cs.DC cs.AI cs.LG cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing demand for deploying large Mixture-of-Experts (MoE) models in resource-constrained environments necessitates efficient approaches to address the challenges of their high memory and computational requirements. Moreover, given that tasks come in different user-defined constraints and the available resources change over time in multi-tenant environments, it is necessary to design an approach which provides a flexible configuration space. This paper presents an adaptive serving approach for the efficient deployment of MoE models, capitalizing on partial quantization of the experts. By dynamically determining the number of quantized experts and their distribution across CPU and GPU, our approach explores the Pareto frontier and offers a fine-grained range of configurations for tuning throughput and model quality. Our evaluation on an NVIDIA A100 GPU using a Mixtral 8x7B MoE model for three language modelling benchmarks demonstrates that the throughput of token generation can be adjusted from 0.63 to 13.00 tokens per second. This enhancement comes with a marginal perplexity increase of 2.62 to 2.80, 6.48 to 7.24, and 3.24 to 3.53 for the WikiText2, PTB, and C4 datasets, respectively, under maximum quantization. These results highlight the practical applicability of our approach in dynamic and accuracy-sensitive applications where both memory usage and output quality are important.
[ { "created": "Fri, 19 Jul 2024 15:42:49 GMT", "version": "v1" } ]
2024-07-22
[ [ "Imani", "HamidReza", "" ], [ "Amirany", "Abdolah", "" ], [ "El-Ghazawi", "Tarek", "" ] ]
The increasing demand for deploying large Mixture-of-Experts (MoE) models in resource-constrained environments necessitates efficient approaches to address their high memory and computational requirements. Moreover, given that tasks come with different user-defined constraints and the available resources change over time in multi-tenant environments, it is necessary to design an approach which provides a flexible configuration space. This paper presents an adaptive serving approach for the efficient deployment of MoE models, capitalizing on partial quantization of the experts. By dynamically determining the number of quantized experts and their distribution across CPU and GPU, our approach explores the Pareto frontier and offers a fine-grained range of configurations for tuning throughput and model quality. Our evaluation on an NVIDIA A100 GPU using a Mixtral 8x7B MoE model for three language modelling benchmarks demonstrates that the throughput of token generation can be adjusted from 0.63 to 13.00 tokens per second. This enhancement comes with a marginal perplexity increase of 2.62 to 2.80, 6.48 to 7.24, and 3.24 to 3.53 for the WikiText2, PTB, and C4 datasets, respectively, under maximum quantization. These results highlight the practical applicability of our approach in dynamic and accuracy-sensitive applications where both memory usage and output quality are important.
1811.04199
Amir Ashouri
Amir H. Ashouri, Tarek S. Abdelrahman, Alwyn Dos Remedios
Fast On-the-fly Retraining-free Sparsification of Convolutional Neural Networks
Extended Version of Our Accepted Paper in NIPS 2018, CDNNRIA Workshop: (https://nips.cc/Conferences/2018/Schedule?showEvent=10941)- Reviews are available at OpenReview (https://openreview.net/forum?id=rkz1YD0vjm)
Elsevier Neurocomputing, 2019
10.1016/j.neucom.2019.08.063
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern Convolutional Neural Networks (CNNs) are complex, encompassing millions of parameters. Their deployment exerts computational, storage and energy demands, particularly on embedded platforms. Existing approaches to prune or sparsify CNNs require retraining to maintain inference accuracy. Such retraining is not feasible in some contexts. In this paper, we explore the sparsification of CNNs by proposing three model-independent methods. Our methods are applied on-the-fly and require no retraining. We show that the state-of-the-art models' weights can be reduced by up to 73% (compression factor of 3.7x) without incurring more than 5% loss in Top-5 accuracy. Additional fine-tuning gains only 8% in sparsity, which indicates that our fast on-the-fly methods are effective.
[ { "created": "Sat, 10 Nov 2018 05:43:36 GMT", "version": "v1" }, { "created": "Tue, 13 Nov 2018 23:54:25 GMT", "version": "v2" }, { "created": "Sun, 8 Sep 2019 17:03:08 GMT", "version": "v3" } ]
2019-09-10
[ [ "Ashouri", "Amir H.", "" ], [ "Abdelrahman", "Tarek S.", "" ], [ "Remedios", "Alwyn Dos", "" ] ]
Modern Convolutional Neural Networks (CNNs) are complex, encompassing millions of parameters. Their deployment exerts computational, storage and energy demands, particularly on embedded platforms. Existing approaches to prune or sparsify CNNs require retraining to maintain inference accuracy. Such retraining is not feasible in some contexts. In this paper, we explore the sparsification of CNNs by proposing three model-independent methods. Our methods are applied on-the-fly and require no retraining. We show that the state-of-the-art models' weights can be reduced by up to 73% (compression factor of 3.7x) without incurring more than 5% loss in Top-5 accuracy. Additional fine-tuning gains only 8% in sparsity, which indicates that our fast on-the-fly methods are effective.
2404.14637
Larissa Salerno
Larissa Salerno, Christoph Treude, Patanamon Thongtatunam
Open Source Software Development Tool Installation: Challenges and Strategies For Novice Developers
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by-sa/4.0/
As the world of technology advances, so do the tools that software developers use to create new programs. In recent years, software development tools have become more popular, allowing developers to work more efficiently and produce higher-quality software. Still, installing such tools can be challenging for novice developers at the early stage of their careers, as they may face issues such as compatibility problems (e.g., with operating systems). Therefore, this work aims to investigate the challenges novice developers face when installing software development tools. To investigate these challenges, we analyzed 24 live software installation sessions to observe the challenges developers encounter and to understand their actions, the strategies they apply, and the types of information sources they consult when facing difficulties. Our findings show that unclear documentation, such as installation instructions, and inadequate feedback during the installation process are common challenges faced by novice developers. Moreover, reformulating search queries and relying on non-official documentation were some of the strategies employed to overcome challenges. Based on our findings, we provide practical recommendations for tool vendors, tool users, and researchers.
[ { "created": "Tue, 23 Apr 2024 00:25:57 GMT", "version": "v1" } ]
2024-04-24
[ [ "Salerno", "Larissa", "" ], [ "Treude", "Christoph", "" ], [ "Thongtatunam", "Patanamon", "" ] ]
As the world of technology advances, so do the tools that software developers use to create new programs. In recent years, software development tools have become more popular, allowing developers to work more efficiently and produce higher-quality software. Still, installing such tools can be challenging for novice developers at the early stage of their careers, as they may face issues such as compatibility problems (e.g., with operating systems). Therefore, this work aims to investigate the challenges novice developers face when installing software development tools. To investigate these challenges, we analyzed 24 live software installation sessions to observe the challenges developers encounter and to understand their actions, the strategies they apply, and the types of information sources they consult when facing difficulties. Our findings show that unclear documentation, such as installation instructions, and inadequate feedback during the installation process are common challenges faced by novice developers. Moreover, reformulating search queries and relying on non-official documentation were some of the strategies employed to overcome challenges. Based on our findings, we provide practical recommendations for tool vendors, tool users, and researchers.
2304.07926
Bruno Tafur
Bruno Tafur and Advait Sarkar
User Perceptions of Automatic Fake News Detection: Can Algorithms Fight Online Misinformation?
null
null
null
null
cs.HC cs.CY
http://creativecommons.org/licenses/by/4.0/
Fake news detection algorithms apply machine learning to various news attributes and their relationships. However, their success is usually evaluated based on how the algorithm performs on a static benchmark, independent of real users. On the other hand, studies of user trust in fake news have identified relevant factors such as the user's previous beliefs, the article format, and the source's reputation. We present a user study (n=40) evaluating how warnings issued by fake news detection algorithms affect the user's ability to detect misinformation. We find that such warnings strongly influence users' perception of the truth, that even a moderately accurate classifier can improve overall user accuracy, and that users tend to be biased towards agreeing with the algorithm, even when it is incorrect.
[ { "created": "Mon, 17 Apr 2023 00:37:53 GMT", "version": "v1" } ]
2023-04-18
[ [ "Tafur", "Bruno", "" ], [ "Sarkar", "Advait", "" ] ]
Fake news detection algorithms apply machine learning to various news attributes and their relationships. However, their success is usually evaluated based on how the algorithm performs on a static benchmark, independent of real users. On the other hand, studies of user trust in fake news have identified relevant factors such as the user's previous beliefs, the article format, and the source's reputation. We present a user study (n=40) evaluating how warnings issued by fake news detection algorithms affect the user's ability to detect misinformation. We find that such warnings strongly influence users' perception of the truth, that even a moderately accurate classifier can improve overall user accuracy, and that users tend to be biased towards agreeing with the algorithm, even when it is incorrect.
2301.10352
Kenneth Clarkson
Kenneth L. Clarkson, Shashanka Ubaru, Elizabeth Yang
Capacity Analysis of Vector Symbolic Architectures
null
null
null
null
cs.LG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hyperdimensional computing (HDC) is a biologically-inspired framework which represents symbols with high-dimensional vectors, and uses vector operations to manipulate them. The ensemble of a particular vector space and a prescribed set of vector operations (including one addition-like for "bundling" and one outer-product-like for "binding") form a *vector symbolic architecture* (VSA). While VSAs have been employed in numerous applications and have been studied empirically, many theoretical questions about VSAs remain open. We analyze the *representation capacities* of four common VSAs: MAP-I, MAP-B, and two VSAs based on sparse binary vectors. "Representation capacity' here refers to bounds on the dimensions of the VSA vectors required to perform certain symbolic tasks, such as testing for set membership $i \in S$ and estimating set intersection sizes $|X \cap Y|$ for two sets of symbols $X$ and $Y$, to a given degree of accuracy. We also analyze the ability of a novel variant of a Hopfield network (a simple model of associative memory) to perform some of the same tasks that are typically asked of VSAs. In addition to providing new bounds on VSA capacities, our analyses establish and leverage connections between VSAs, "sketching" (dimensionality reduction) algorithms, and Bloom filters.
[ { "created": "Tue, 24 Jan 2023 23:43:25 GMT", "version": "v1" }, { "created": "Tue, 14 Feb 2023 20:50:04 GMT", "version": "v2" } ]
2023-02-16
[ [ "Clarkson", "Kenneth L.", "" ], [ "Ubaru", "Shashanka", "" ], [ "Yang", "Elizabeth", "" ] ]
Hyperdimensional computing (HDC) is a biologically-inspired framework which represents symbols with high-dimensional vectors, and uses vector operations to manipulate them. The ensemble of a particular vector space and a prescribed set of vector operations (including one addition-like for "bundling" and one outer-product-like for "binding") form a *vector symbolic architecture* (VSA). While VSAs have been employed in numerous applications and have been studied empirically, many theoretical questions about VSAs remain open. We analyze the *representation capacities* of four common VSAs: MAP-I, MAP-B, and two VSAs based on sparse binary vectors. "Representation capacity' here refers to bounds on the dimensions of the VSA vectors required to perform certain symbolic tasks, such as testing for set membership $i \in S$ and estimating set intersection sizes $|X \cap Y|$ for two sets of symbols $X$ and $Y$, to a given degree of accuracy. We also analyze the ability of a novel variant of a Hopfield network (a simple model of associative memory) to perform some of the same tasks that are typically asked of VSAs. In addition to providing new bounds on VSA capacities, our analyses establish and leverage connections between VSAs, "sketching" (dimensionality reduction) algorithms, and Bloom filters.
1706.06806
Rakesh Venkat
Yuval Rabani, Rakesh Venkat
Approximating Sparsest Cut in Low Rank Graphs via Embeddings from Approximately Low-Dimensional Spaces
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of embedding a finite set of points $\{x_1, \ldots, x_n\} \in \mathbb{R}^d$ that satisfy $\ell_2^2$ triangle inequalities into $\ell_1$, when the points are approximately low-dimensional. Goemans (unpublished, appears in a work of [Magen and Moharammi, 2008]) showed that such points residing in \emph{exactly} $d$ dimensions can be embedded into $\ell_1$ with distortion at most $\sqrt{d}$. We prove the following robust analogue of this statement: if there exists a $r$-dimensional subspace $\Pi$ such that the projections onto this subspace satisfy $\sum_{i,j \in [n]}\Vert \Pi x_i - \Pi x_j \Vert _2^2 \geq \Omega(1) \sum_{i,j \in [n]}\Vert x_i - x_j \Vert _2^2$, then there is an embedding of the points into $\ell_1$ with $O(\sqrt{r})$ average distortion. A consequence of this result is that the integrality gap of the well-known Goemans-Linial SDP relaxation for the Uniform Sparsest Cut problem is $O(\sqrt{r})$ on graphs $G$ whose $r$-th smallest normalized eigenvalue of the Laplacian satisfies $\lambda_r(G)/n \geq \Omega(1)\Phi_{SDP} (G)$. Our result improves upon the previously known bound of $O(r)$ on the average distortion, and the integrality gap of the Goemans-Linial SDP under the same preconditions, proven in the previous works of [Deshpande and Venkat, 2014] and [Deshpande, Harsha and Venkat, 2016].
[ { "created": "Wed, 21 Jun 2017 09:38:37 GMT", "version": "v1" } ]
2017-06-22
[ [ "Rabani", "Yuval", "" ], [ "Venkat", "Rakesh", "" ] ]
We consider the problem of embedding a finite set of points $\{x_1, \ldots, x_n\} \in \mathbb{R}^d$ that satisfy $\ell_2^2$ triangle inequalities into $\ell_1$, when the points are approximately low-dimensional. Goemans (unpublished, appears in a work of [Magen and Moharammi, 2008]) showed that such points residing in \emph{exactly} $d$ dimensions can be embedded into $\ell_1$ with distortion at most $\sqrt{d}$. We prove the following robust analogue of this statement: if there exists a $r$-dimensional subspace $\Pi$ such that the projections onto this subspace satisfy $\sum_{i,j \in [n]}\Vert \Pi x_i - \Pi x_j \Vert _2^2 \geq \Omega(1) \sum_{i,j \in [n]}\Vert x_i - x_j \Vert _2^2$, then there is an embedding of the points into $\ell_1$ with $O(\sqrt{r})$ average distortion. A consequence of this result is that the integrality gap of the well-known Goemans-Linial SDP relaxation for the Uniform Sparsest Cut problem is $O(\sqrt{r})$ on graphs $G$ whose $r$-th smallest normalized eigenvalue of the Laplacian satisfies $\lambda_r(G)/n \geq \Omega(1)\Phi_{SDP} (G)$. Our result improves upon the previously known bound of $O(r)$ on the average distortion, and the integrality gap of the Goemans-Linial SDP under the same preconditions, proven in the previous works of [Deshpande and Venkat, 2014] and [Deshpande, Harsha and Venkat, 2016].
1501.00512
Massimiliano Dal Mas
Massimiliano Dal Mas
Function of Forgetfulness for the Tedium of Oblivion on Liquidity of Ontology Matching
4 pages, 1 figure; for details see: http://www.maxdalmas.com
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The shallow and fragile knowledge on the Web does not examine things in depth: it treats them lightly. The conditions created by the Web make our attention labile and especially fickle; we are unable to concentrate for long, as we are trained to "surf" without ever going into depth. The Web also brings with it the advantage of nearly infinite available knowledge, but it leads to a loss of our ability to retain and evaluate that knowledge, increasing forgetfulness of knowledge. In this paper we show how the "function of forgetfulness" appears linked to the tedium and oblivion of knowledge through the liquidity of ontology matching.
[ { "created": "Fri, 2 Jan 2015 23:19:01 GMT", "version": "v1" } ]
2015-01-06
[ [ "Mas", "Massimiliano Dal", "" ] ]
The shallow and fragile knowledge on the Web does not examine things in depth: it treats them lightly. The conditions created by the Web make our attention labile and especially fickle; we are unable to concentrate for long, as we are trained to "surf" without ever going into depth. The Web also brings with it the advantage of nearly infinite available knowledge, but it leads to a loss of our ability to retain and evaluate that knowledge, increasing forgetfulness of knowledge. In this paper we show how the "function of forgetfulness" appears linked to the tedium and oblivion of knowledge through the liquidity of ontology matching.
2108.08712
Matias Valdenegro-Toro
Matias Valdenegro-Toro
Teaching Uncertainty Quantification in Machine Learning through Use Cases
2nd Teaching in Machine Learning Workshop, Camera Ready, 5 pages, 3 figures
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Uncertainty in machine learning is not generally taught as part of Machine Learning course curricula. In this paper we propose a short curriculum for a course about uncertainty in machine learning, and complement the course with a selection of use cases aimed at triggering discussion and letting students play with the concepts of uncertainty in a programming setting. Our use cases cover the concept of output uncertainty, Bayesian neural networks and weight distributions, sources of uncertainty, and out-of-distribution detection. We expect that this curriculum and set of use cases will motivate the community to adopt these important concepts into courses for safety in AI.
[ { "created": "Thu, 19 Aug 2021 14:22:17 GMT", "version": "v1" } ]
2021-08-20
[ [ "Valdenegro-Toro", "Matias", "" ] ]
Uncertainty in machine learning is not generally taught as part of Machine Learning course curricula. In this paper we propose a short curriculum for a course about uncertainty in machine learning, and complement the course with a selection of use cases aimed at triggering discussion and letting students play with the concepts of uncertainty in a programming setting. Our use cases cover the concept of output uncertainty, Bayesian neural networks and weight distributions, sources of uncertainty, and out-of-distribution detection. We expect that this curriculum and set of use cases will motivate the community to adopt these important concepts into courses for safety in AI.
1911.05870
Kushagra Mahajan
Kushagra Mahajan, Monika Sharma, Lovekesh Vig
Character Keypoint-based Homography Estimation in Scanned Documents for Efficient Information Extraction
6 pages, 4 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Precise homography estimation between multiple images is a pre-requisite for many computer vision applications. One application that is particularly relevant in today's digital era is the alignment of scanned or camera-captured document images such as insurance claim forms for information extraction. Traditional learning based approaches perform poorly due to the absence of an appropriate gradient. Feature based keypoint extraction techniques for homography estimation in real scene images either detect an extremely large number of inconsistent keypoints due to sharp textual edges, or produce inaccurate keypoint correspondences due to variations in illumination and viewpoint differences between document images. In this paper, we propose a novel algorithm for aligning scanned or camera-captured document images using character based keypoints and a reference template. The algorithm is both fast and accurate and utilizes a standard Optical character recognition (OCR) engine such as Tesseract to find character based unambiguous keypoints, which are utilized to identify precise keypoint correspondences between two images. Finally, the keypoints are used to compute the homography mapping between a test document and a template. We evaluated the proposed approach for information extraction on two real world anonymized datasets comprised of health insurance claim forms and the results support the viability of the proposed technique.
[ { "created": "Thu, 14 Nov 2019 00:44:55 GMT", "version": "v1" } ]
2019-11-15
[ [ "Mahajan", "Kushagra", "" ], [ "Sharma", "Monika", "" ], [ "Vig", "Lovekesh", "" ] ]
Precise homography estimation between multiple images is a pre-requisite for many computer vision applications. One application that is particularly relevant in today's digital era is the alignment of scanned or camera-captured document images such as insurance claim forms for information extraction. Traditional learning based approaches perform poorly due to the absence of an appropriate gradient. Feature based keypoint extraction techniques for homography estimation in real scene images either detect an extremely large number of inconsistent keypoints due to sharp textual edges, or produce inaccurate keypoint correspondences due to variations in illumination and viewpoint differences between document images. In this paper, we propose a novel algorithm for aligning scanned or camera-captured document images using character based keypoints and a reference template. The algorithm is both fast and accurate and utilizes a standard Optical character recognition (OCR) engine such as Tesseract to find character based unambiguous keypoints, which are utilized to identify precise keypoint correspondences between two images. Finally, the keypoints are used to compute the homography mapping between a test document and a template. We evaluated the proposed approach for information extraction on two real world anonymized datasets comprised of health insurance claim forms and the results support the viability of the proposed technique.
2006.00165
Ashraf Tantawy
Ashraf Tantawy, Sherif Abdelwahed, and Abdelkarim Erradi
Cyber LOPA: An Integrated Approach for the Design of Dependable and Secure Cyber Physical Systems
Preprint version of the published paper
IEEE Transactions on Reliability, VOL. 71, NO. 2, JUNE 2022
10.1109/TR.2022.3163652
null
cs.CR cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Safety risk assessment is an essential process to ensure a dependable Cyber-Physical System (CPS) design. Traditional risk assessment considers only physical failures. For modern CPS, failures caused by cyber attacks are on the rise. The focus of the latest research efforts is on safety-security lifecycle integration and the expansion of modeling formalisms for risk assessment to incorporate security failures. The interaction between safety and security lifecycles and its impact on the overall system design, as well as the reliability loss resulting from ignoring security failures, are some of the overlooked research questions. This paper addresses these research questions by presenting a new safety design method named Cyber Layer Of Protection Analysis (CLOPA) that extends the existing LOPA framework to include failures caused by cyber attacks. The proposed method provides a rigorous mathematical formulation that expresses quantitatively the trade-off between designing a highly-reliable versus a highly-secure CPS. We further propose a co-design lifecycle process that integrates the safety and security risk assessment processes. We evaluate the proposed CLOPA approach and the integrated lifecycle on a practical case study of a process reactor controlled by an industrial control testbed, and provide a comparison between the proposed CLOPA and current LOPA risk assessment practice.
[ { "created": "Sat, 30 May 2020 03:53:18 GMT", "version": "v1" }, { "created": "Sat, 6 Jun 2020 18:53:26 GMT", "version": "v2" }, { "created": "Thu, 15 Jul 2021 12:32:14 GMT", "version": "v3" }, { "created": "Wed, 17 Aug 2022 16:05:51 GMT", "version": "v4" } ]
2022-08-18
[ [ "Tantawy", "Ashraf", "" ], [ "Abdelwahed", "Sherif", "" ], [ "Erradi", "Abdelkarim", "" ] ]
Safety risk assessment is an essential process to ensure a dependable Cyber-Physical System (CPS) design. Traditional risk assessment considers only physical failures. For modern CPS, failures caused by cyber attacks are on the rise. The focus of the latest research efforts is on safety-security lifecycle integration and the expansion of modeling formalisms for risk assessment to incorporate security failures. The interaction between safety and security lifecycles and its impact on the overall system design, as well as the reliability loss resulting from ignoring security failures, are some of the overlooked research questions. This paper addresses these research questions by presenting a new safety design method named Cyber Layer Of Protection Analysis (CLOPA) that extends the existing LOPA framework to include failures caused by cyber attacks. The proposed method provides a rigorous mathematical formulation that expresses quantitatively the trade-off between designing a highly-reliable versus a highly-secure CPS. We further propose a co-design lifecycle process that integrates the safety and security risk assessment processes. We evaluate the proposed CLOPA approach and the integrated lifecycle on a practical case study of a process reactor controlled by an industrial control testbed, and provide a comparison between the proposed CLOPA and current LOPA risk assessment practice.
2402.12377
Christian Reiser
Christian Reiser, Stephan Garbin, Pratul P. Srinivasan, Dor Verbin, Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman, Andreas Geiger
Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis
Project page at https://binary-opacity-grid.github.io
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While surface-based view synthesis algorithms are appealing due to their low computational requirements, they often struggle to reproduce thin structures. In contrast, more expensive methods that model the scene's geometry as a volumetric density field (e.g. NeRF) excel at reconstructing fine geometric detail. However, density fields often represent geometry in a "fuzzy" manner, which hinders exact localization of the surface. In this work, we modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures. First, we employ a discrete opacity grid representation instead of a continuous density field, which allows opacity values to discontinuously transition from zero to one at the surface. Second, we anti-alias by casting multiple rays per pixel, which allows occlusion boundaries and subpixel structures to be modelled without using semi-transparent voxels. Third, we minimize the binary entropy of the opacity values, which facilitates the extraction of surface geometry by encouraging opacity values to binarize towards the end of training. Lastly, we develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting. The compact meshes produced by our model can be rendered in real-time on mobile devices and achieve significantly higher view synthesis quality compared to existing mesh-based approaches.
[ { "created": "Mon, 19 Feb 2024 18:59:41 GMT", "version": "v1" } ]
2024-02-20
[ [ "Reiser", "Christian", "" ], [ "Garbin", "Stephan", "" ], [ "Srinivasan", "Pratul P.", "" ], [ "Verbin", "Dor", "" ], [ "Szeliski", "Richard", "" ], [ "Mildenhall", "Ben", "" ], [ "Barron", "Jonathan T.", "" ], [ "Hedman", "Peter", "" ], [ "Geiger", "Andreas", "" ] ]
While surface-based view synthesis algorithms are appealing due to their low computational requirements, they often struggle to reproduce thin structures. In contrast, more expensive methods that model the scene's geometry as a volumetric density field (e.g. NeRF) excel at reconstructing fine geometric detail. However, density fields often represent geometry in a "fuzzy" manner, which hinders exact localization of the surface. In this work, we modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures. First, we employ a discrete opacity grid representation instead of a continuous density field, which allows opacity values to discontinuously transition from zero to one at the surface. Second, we anti-alias by casting multiple rays per pixel, which allows occlusion boundaries and subpixel structures to be modelled without using semi-transparent voxels. Third, we minimize the binary entropy of the opacity values, which facilitates the extraction of surface geometry by encouraging opacity values to binarize towards the end of training. Lastly, we develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting. The compact meshes produced by our model can be rendered in real-time on mobile devices and achieve significantly higher view synthesis quality compared to existing mesh-based approaches.
1609.07053
Johannes Bjerva
Johannes Bjerva and Barbara Plank and Johan Bos
Semantic Tagging with Deep Residual Networks
COLING 2016, camera ready version
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We propose a novel semantic tagging task, sem-tagging, tailored for the purpose of multilingual semantic parsing, and present the first tagger using deep residual networks (ResNets). Our tagger uses both word and character representations and includes a novel residual bypass architecture. We evaluate the tagset both intrinsically on the new task of semantic tagging, as well as on Part-of-Speech (POS) tagging. Our system, consisting of a ResNet and an auxiliary loss function predicting our semantic tags, significantly outperforms prior results on English Universal Dependencies POS tagging (95.71% accuracy on UD v1.2 and 95.67% accuracy on UD v1.3).
[ { "created": "Thu, 22 Sep 2016 16:34:00 GMT", "version": "v1" }, { "created": "Mon, 31 Oct 2016 18:33:13 GMT", "version": "v2" } ]
2016-11-01
[ [ "Bjerva", "Johannes", "" ], [ "Plank", "Barbara", "" ], [ "Bos", "Johan", "" ] ]
We propose a novel semantic tagging task, sem-tagging, tailored for the purpose of multilingual semantic parsing, and present the first tagger using deep residual networks (ResNets). Our tagger uses both word and character representations and includes a novel residual bypass architecture. We evaluate the tagset both intrinsically on the new task of semantic tagging, as well as on Part-of-Speech (POS) tagging. Our system, consisting of a ResNet and an auxiliary loss function predicting our semantic tags, significantly outperforms prior results on English Universal Dependencies POS tagging (95.71% accuracy on UD v1.2 and 95.67% accuracy on UD v1.3).
2104.03532
Pieter van Goor
Pieter van Goor and Robert Mahony
An Equivariant Filter for Visual Inertial Odometry
11 pages, 3 figures, to be published as {van Goor, P., Mahony, R. (2021). An Equivariant Filter for Visual Inertial Odometry. 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020.}
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual Inertial Odometry (VIO) is of great interest due to the ubiquity of devices equipped with both a monocular camera and an Inertial Measurement Unit (IMU). Methods based on the extended Kalman Filter remain popular in VIO due to their low memory requirements, CPU usage, and processing time when compared to optimisation-based methods. In this paper, we analyse the VIO problem from a geometric perspective and propose a novel formulation on a smooth quotient manifold where the equivalence relationship is the well-known invariance of VIO to the choice of reference frame. We propose a novel Lie group that acts transitively on this manifold and is compatible with the visual measurements. This structure allows for the application of Equivariant Filter (EqF) design, leading to a novel filter for the VIO problem. Combined with a very simple vision processing front-end, the proposed filter demonstrates state-of-the-art performance on the EuRoC dataset compared to other EKF-based VIO algorithms.
[ { "created": "Thu, 8 Apr 2021 06:28:12 GMT", "version": "v1" } ]
2021-04-09
[ [ "van Goor", "Pieter", "" ], [ "Mahony", "Robert", "" ] ]
Visual Inertial Odometry (VIO) is of great interest due to the ubiquity of devices equipped with both a monocular camera and an Inertial Measurement Unit (IMU). Methods based on the extended Kalman Filter remain popular in VIO due to their low memory requirements, CPU usage, and processing time when compared to optimisation-based methods. In this paper, we analyse the VIO problem from a geometric perspective and propose a novel formulation on a smooth quotient manifold where the equivalence relationship is the well-known invariance of VIO to the choice of reference frame. We propose a novel Lie group that acts transitively on this manifold and is compatible with the visual measurements. This structure allows for the application of Equivariant Filter (EqF) design, leading to a novel filter for the VIO problem. Combined with a very simple vision processing front-end, the proposed filter demonstrates state-of-the-art performance on the EuRoC dataset compared to other EKF-based VIO algorithms.
1705.08759
Stefan Lee
Qing Sun, Stefan Lee, Dhruv Batra
Bidirectional Beam Search: Forward-Backward Inference in Neural Sequence Models for Fill-in-the-Blank Image Captioning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop the first approximate inference algorithm for 1-Best (and M-Best) decoding in bidirectional neural sequence models by extending Beam Search (BS) to reason about both forward and backward time dependencies. Beam Search (BS) is a widely used approximate inference algorithm for decoding sequences from unidirectional neural sequence models. Interestingly, approximate inference in bidirectional models remains an open problem, despite their significant advantage in modeling information from both the past and future. To enable the use of bidirectional models, we present Bidirectional Beam Search (BiBS), an efficient algorithm for approximate bidirectional inference. To evaluate our method and as an interesting problem in its own right, we introduce a novel Fill-in-the-Blank Image Captioning task which requires reasoning about both past and future sentence structure to reconstruct sensible image descriptions. We use this task as well as the Visual Madlibs dataset to demonstrate the effectiveness of our approach, consistently outperforming all baseline methods.
[ { "created": "Wed, 24 May 2017 13:42:47 GMT", "version": "v1" } ]
2017-05-25
[ [ "Sun", "Qing", "" ], [ "Lee", "Stefan", "" ], [ "Batra", "Dhruv", "" ] ]
We develop the first approximate inference algorithm for 1-Best (and M-Best) decoding in bidirectional neural sequence models by extending Beam Search (BS) to reason about both forward and backward time dependencies. Beam Search (BS) is a widely used approximate inference algorithm for decoding sequences from unidirectional neural sequence models. Interestingly, approximate inference in bidirectional models remains an open problem, despite their significant advantage in modeling information from both the past and future. To enable the use of bidirectional models, we present Bidirectional Beam Search (BiBS), an efficient algorithm for approximate bidirectional inference. To evaluate our method and as an interesting problem in its own right, we introduce a novel Fill-in-the-Blank Image Captioning task which requires reasoning about both past and future sentence structure to reconstruct sensible image descriptions. We use this task as well as the Visual Madlibs dataset to demonstrate the effectiveness of our approach, consistently outperforming all baseline methods.
2201.11369
Ting-Chun Lin
Ting-Chun Lin, Min-Hsiu Hsieh
$c^3$-Locally Testable Codes from Lossless Expanders
null
null
null
null
cs.IT cs.CC math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A locally testable code (LTC) is an error correcting code with a property tester. The tester tests if a word is a codeword by reading a constant number of random bits, and rejects the word with probability proportional to the distance from the word to the closest codeword. An important open question until recently is whether there exist $c^3$-LTCs, which are LTCs with constant rate, constant relative distance and constant locality. In this work, we construct a new LTC family using 1-sided lossless expanders and balanced products.
[ { "created": "Thu, 27 Jan 2022 08:10:13 GMT", "version": "v1" }, { "created": "Fri, 28 Jan 2022 07:33:50 GMT", "version": "v2" } ]
2022-01-31
[ [ "Lin", "Ting-Chun", "" ], [ "Hsieh", "Min-Hsiu", "" ] ]
A locally testable code (LTC) is an error correcting code with a property tester. The tester tests if a word is a codeword by reading a constant number of random bits, and rejects the word with probability proportional to the distance from the word to the closest codeword. An important open question until recently is whether there exist $c^3$-LTCs, which are LTCs with constant rate, constant relative distance and constant locality. In this work, we construct a new LTC family using 1-sided lossless expanders and balanced products.
1608.02201
Hussein Al-Barazanchi
Hussein A. Al-Barazanchi, Hussam Qassim, Abhishek Verma
Residual CNDS
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Convolutional Neural Networks are nowadays of tremendous importance for any image classification system. One of the most investigated methods to increase the accuracy of a CNN is to increase its depth. Increasing the depth by stacking more layers also increases the difficulty of training, besides making it computationally expensive. Some research found that adding auxiliary forks after intermediate layers increases the accuracy. Specifying which intermediate layers should have the fork was only addressed recently, where a simple rule was used to detect the position of the intermediate layers that need the auxiliary supervision fork. This technique is known as convolutional neural networks with deep supervision (CNDS). It enhanced the accuracy of classification over the straightforward CNN used on the MIT Places dataset and ImageNet. On the other side, Residual Learning is another technique that emerged recently to ease the training of very deep CNNs. The Residual Learning framework changed the learning of layers from unreferenced functions to learning residual functions with regard to the layer's input. Residual Learning achieved state-of-the-art results in the ImageNet 2015 and COCO competitions. In this paper, we study the effect of adding residual connections to the CNDS network. Our experimental results show an increase in accuracy over using CNDS only.
[ { "created": "Sun, 7 Aug 2016 10:34:02 GMT", "version": "v1" } ]
2016-08-09
[ [ "Al-Barazanchi", "Hussein A.", "" ], [ "Qassim", "Hussam", "" ], [ "Verma", "Abhishek", "" ] ]
Convolutional Neural Networks are nowadays of tremendous importance for any image classification system. One of the most investigated methods to increase the accuracy of a CNN is to increase its depth. Increasing the depth by stacking more layers also increases the difficulty of training, besides making it computationally expensive. Some research found that adding auxiliary forks after intermediate layers increases the accuracy. Specifying which intermediate layers should have the fork was only addressed recently, where a simple rule was used to detect the position of the intermediate layers that need the auxiliary supervision fork. This technique is known as convolutional neural networks with deep supervision (CNDS). It enhanced the accuracy of classification over the straightforward CNN used on the MIT Places dataset and ImageNet. On the other side, Residual Learning is another technique that emerged recently to ease the training of very deep CNNs. The Residual Learning framework changed the learning of layers from unreferenced functions to learning residual functions with regard to the layer's input. Residual Learning achieved state-of-the-art results in the ImageNet 2015 and COCO competitions. In this paper, we study the effect of adding residual connections to the CNDS network. Our experimental results show an increase in accuracy over using CNDS only.
1401.3493
Uzi Zahavi
Uzi Zahavi, Ariel Felner, Neil Burch, Robert C. Holte
Predicting the Performance of IDA* using Conditional Distributions
null
Journal Of Artificial Intelligence Research, Volume 37, pages 41-83, 2010
10.1613/jair.2890
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Korf, Reid, and Edelkamp introduced a formula to predict the number of nodes IDA* will expand on a single iteration for a given consistent heuristic, and experimentally demonstrated that it could make very accurate predictions. In this paper we show that, in addition to requiring the heuristic to be consistent, their formula's predictions are accurate only at levels of the brute-force search tree where the heuristic values obey the unconditional distribution that they defined and then used in their formula. We then propose a new formula that works well without these requirements, i.e., it can make accurate predictions of IDA*'s performance for inconsistent heuristics and if the heuristic values in any level do not obey the unconditional distribution. In order to achieve this we introduce the conditional distribution of heuristic values, which is a generalization of their unconditional heuristic distribution. We also provide extensions of our formula that handle individual start states and the augmentation of IDA* with bidirectional pathmax (BPMX), a technique for propagating heuristic values when inconsistent heuristics are used. Experimental results demonstrate the accuracy of our new method and all its variations.
[ { "created": "Wed, 15 Jan 2014 05:41:44 GMT", "version": "v1" } ]
2014-01-16
[ [ "Zahavi", "Uzi", "" ], [ "Felner", "Ariel", "" ], [ "Burch", "Neil", "" ], [ "Holte", "Robert C.", "" ] ]
Korf, Reid, and Edelkamp introduced a formula to predict the number of nodes IDA* will expand on a single iteration for a given consistent heuristic, and experimentally demonstrated that it could make very accurate predictions. In this paper we show that, in addition to requiring the heuristic to be consistent, their formula's predictions are accurate only at levels of the brute-force search tree where the heuristic values obey the unconditional distribution that they defined and then used in their formula. We then propose a new formula that works well without these requirements, i.e., it can make accurate predictions of IDA*'s performance for inconsistent heuristics and if the heuristic values in any level do not obey the unconditional distribution. In order to achieve this we introduce the conditional distribution of heuristic values, which is a generalization of their unconditional heuristic distribution. We also provide extensions of our formula that handle individual start states and the augmentation of IDA* with bidirectional pathmax (BPMX), a technique for propagating heuristic values when inconsistent heuristics are used. Experimental results demonstrate the accuracy of our new method and all its variations.
1103.4513
Jeffrey Shallit
Erik D. Demaine, Sarah Eisenstat, Jeffrey Shallit, David A. Wilson
Remarks on separating words
null
null
null
null
cs.FL cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The separating words problem asks for the size of the smallest DFA needed to distinguish between two words of length <= n (by accepting one and rejecting the other). In this paper we survey what is known and unknown about the problem, consider some variations, and prove several new results.
[ { "created": "Wed, 23 Mar 2011 13:31:37 GMT", "version": "v1" } ]
2011-03-24
[ [ "Demaine", "Erik D.", "" ], [ "Eisenstat", "Sarah", "" ], [ "Shallit", "Jeffrey", "" ], [ "Wilson", "David A.", "" ] ]
The separating words problem asks for the size of the smallest DFA needed to distinguish between two words of length <= n (by accepting one and rejecting the other). In this paper we survey what is known and unknown about the problem, consider some variations, and prove several new results.
2309.02297
Kiran Karra
Axel Cortes-Cubero, Juan P. Madrigal-Cianci, Kiran Karra, Zixuan Zhang
Smoothening block rewards: How much should miners pay for mining pools?
15 pages, 1 figure
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
The rewards a blockchain miner earns vary with time. Most of the time is spent mining without receiving any rewards, and only occasionally the miner wins a block and earns a reward. Mining pools smoothen the stochastic flow of rewards, and in the ideal case, provide a steady flow of rewards over time. Smooth block rewards allow miners to choose an optimal mining power growth strategy that will result in a higher reward yield for a given investment. We quantify the economic advantage for a given miner of having smooth rewards, and use this to define a maximum percentage of rewards that a miner should be willing to pay for the mining pool services.
[ { "created": "Tue, 5 Sep 2023 14:59:01 GMT", "version": "v1" } ]
2023-09-06
[ [ "Cortes-Cubero", "Axel", "" ], [ "Madrigal-Cianci", "Juan P.", "" ], [ "Karra", "Kiran", "" ], [ "Zhang", "Zixuan", "" ] ]
The rewards a blockchain miner earns vary with time. Most of the time is spent mining without receiving any rewards, and only occasionally the miner wins a block and earns a reward. Mining pools smoothen the stochastic flow of rewards, and in the ideal case, provide a steady flow of rewards over time. Smooth block rewards allow miners to choose an optimal mining power growth strategy that will result in a higher reward yield for a given investment. We quantify the economic advantage for a given miner of having smooth rewards, and use this to define a maximum percentage of rewards that a miner should be willing to pay for the mining pool services.
2306.03189
Margot Madina
Margot Madina, Itziar Gonzalez-Dios, Melanie Siegel
Easy-to-Read in Germany: A Survey on its Current State and Available Resources
10th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Easy-to-Read Language (E2R) is a controlled language variant that makes any written text more accessible through the use of clear, direct and simple language. It is mainly aimed at people with cognitive or intellectual disabilities, among other target users. Plain Language (PL), on the other hand, is a variant of a given language, which aims to promote the use of simple language to communicate information. German has Leichte Sprache (LS), its version of E2R, and Einfache Sprache (ES), its version of PL. In recent years, important developments have taken place in the field of LS. This paper offers an updated overview of the existing Natural Language Processing (NLP) tools and resources for LS. It also aims to set out the situation with regard to LS and ES in Germany.
[ { "created": "Mon, 5 Jun 2023 19:00:25 GMT", "version": "v1" } ]
2023-06-07
[ [ "Madina", "Margot", "" ], [ "Gonzalez-Dios", "Itziar", "" ], [ "Siegel", "Melanie", "" ] ]
Easy-to-Read Language (E2R) is a controlled language variant that makes any written text more accessible through the use of clear, direct and simple language. It is mainly aimed at people with cognitive or intellectual disabilities, among other target users. Plain Language (PL), on the other hand, is a variant of a given language, which aims to promote the use of simple language to communicate information. German has Leichte Sprache (LS), its version of E2R, and Einfache Sprache (ES), its version of PL. In recent years, important developments have taken place in the field of LS. This paper offers an updated overview of the existing Natural Language Processing (NLP) tools and resources for LS. It also aims to set out the situation with regard to LS and ES in Germany.
2110.00937
Xuequan Lu
Sheldon Fung, Xuequan Lu, Mantas Mykolaitis, Gediminas Kostkevicius, Domantas Ozerenskis
Anatomical Landmarks Localization for 3D Foot Point Clouds
submitted for review
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D anatomical landmarks play an important role in health research. Their automated prediction/localization thus becomes a vital task. In this paper, we introduce a deformation method for 3D anatomical landmarks prediction. It utilizes a source model with anatomical landmarks which are annotated by clinicians, and deforms this model non-rigidly to match the target model. Two constraints are introduced in the optimization, which are responsible for alignment and smoothness, respectively. Experiments are performed on our dataset and the results demonstrate the robustness of our method, and show that it yields better performance than the state-of-the-art techniques in most cases.
[ { "created": "Sun, 3 Oct 2021 06:24:40 GMT", "version": "v1" } ]
2021-10-05
[ [ "Fung", "Sheldon", "" ], [ "Lu", "Xuequan", "" ], [ "Mykolaitis", "Mantas", "" ], [ "Kostkevicius", "Gediminas", "" ], [ "Ozerenskis", "Domantas", "" ] ]
3D anatomical landmarks play an important role in health research. Their automated prediction/localization thus becomes a vital task. In this paper, we introduce a deformation method for 3D anatomical landmarks prediction. It utilizes a source model with anatomical landmarks which are annotated by clinicians, and deforms this model non-rigidly to match the target model. Two constraints are introduced in the optimization, which are responsible for alignment and smoothness, respectively. Experiments are performed on our dataset and the results demonstrate the robustness of our method, and show that it yields better performance than the state-of-the-art techniques in most cases.
1412.1254
Philip Bille
Philip Bille, Pawel Gawrychowski, Inge Li Goertz, Gad M. Landau, and Oren Weimann
Longest Common Extensions in Trees
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The longest common extension (LCE) of two indices in a string is the length of the longest identical substrings starting at these two indices. The LCE problem asks to preprocess a string into a compact data structure that supports fast LCE queries. In this paper we generalize the LCE problem to trees and suggest a few applications of LCE in trees to tries and XML databases. Given a labeled and rooted tree $T$ of size $n$, the goal is to preprocess $T$ into a compact data structure that supports the following LCE queries between subpaths and subtrees in $T$. Let $v_1$, $v_2$, $w_1$, and $w_2$ be nodes of $T$ such that $w_1$ and $w_2$ are descendants of $v_1$ and $v_2$ respectively. \begin{itemize} \item $\LCEPP(v_1, w_1, v_2, w_2)$: (path-path $\LCE$) return the longest common prefix of the paths $v_1 \leadsto w_1$ and $v_2 \leadsto w_2$. \item $\LCEPT(v_1, w_1, v_2)$: (path-tree $\LCE$) return the maximal path-path LCE of the path $v_1 \leadsto w_1$ and any path from $v_2$ to a descendant leaf. \item $\LCETT(v_1, v_2)$: (tree-tree $\LCE$) return a maximal path-path LCE of any pair of paths from $v_1$ and $v_2$ to descendant leaves. \end{itemize} We present the first non-trivial bounds for supporting these queries. For $\LCEPP$ queries, we present a linear-space solution with $O(\log^{*} n)$ query time. For $\LCEPT$ queries, we present a linear-space solution with $O((\log\log n)^{2})$ query time, and complement this with a lower bound showing that any path-tree LCE structure of size $O(n \polylog(n))$ must necessarily use $\Omega(\log\log n)$ time to answer queries. For $\LCETT$ queries, we present a time-space trade-off, that given any parameter $\tau$, $1 \leq \tau \leq n$, leads to an $O(n\tau)$ space and $O(n/\tau)$ query-time solution. This is complemented with a reduction to the set intersection problem implying that a fast linear space solution is not likely to exist.
[ { "created": "Wed, 3 Dec 2014 10:02:47 GMT", "version": "v1" }, { "created": "Tue, 31 Mar 2015 18:06:22 GMT", "version": "v2" }, { "created": "Thu, 9 Jul 2015 14:55:22 GMT", "version": "v3" } ]
2015-07-10
[ [ "Bille", "Philip", "" ], [ "Gawrychowski", "Pawel", "" ], [ "Goertz", "Inge Li", "" ], [ "Landau", "Gad M.", "" ], [ "Weimann", "Oren", "" ] ]
The longest common extension (LCE) of two indices in a string is the length of the longest identical substrings starting at these two indices. The LCE problem asks to preprocess a string into a compact data structure that supports fast LCE queries. In this paper we generalize the LCE problem to trees and suggest a few applications of LCE in trees to tries and XML databases. Given a labeled and rooted tree $T$ of size $n$, the goal is to preprocess $T$ into a compact data structure that supports the following LCE queries between subpaths and subtrees in $T$. Let $v_1$, $v_2$, $w_1$, and $w_2$ be nodes of $T$ such that $w_1$ and $w_2$ are descendants of $v_1$ and $v_2$ respectively. \begin{itemize} \item $\LCEPP(v_1, w_1, v_2, w_2)$: (path-path $\LCE$) return the longest common prefix of the paths $v_1 \leadsto w_1$ and $v_2 \leadsto w_2$. \item $\LCEPT(v_1, w_1, v_2)$: (path-tree $\LCE$) return the maximal path-path LCE of the path $v_1 \leadsto w_1$ and any path from $v_2$ to a descendant leaf. \item $\LCETT(v_1, v_2)$: (tree-tree $\LCE$) return a maximal path-path LCE of any pair of paths from $v_1$ and $v_2$ to descendant leaves. \end{itemize} We present the first non-trivial bounds for supporting these queries. For $\LCEPP$ queries, we present a linear-space solution with $O(\log^{*} n)$ query time. For $\LCEPT$ queries, we present a linear-space solution with $O((\log\log n)^{2})$ query time, and complement this with a lower bound showing that any path-tree LCE structure of size $O(n \polylog(n))$ must necessarily use $\Omega(\log\log n)$ time to answer queries. For $\LCETT$ queries, we present a time-space trade-off, that given any parameter $\tau$, $1 \leq \tau \leq n$, leads to an $O(n\tau)$ space and $O(n/\tau)$ query-time solution. This is complemented with a reduction to the set intersection problem implying that a fast linear space solution is not likely to exist.
2005.11623
Zhihao Duan
Zhihao Duan, M. Ozan Tezcan, Hayato Nakamura, Prakash Ishwar, Janusz Konrad
RAPiD: Rotation-Aware People Detection in Overhead Fisheye Images
CVPR 2020 OmniCV Workshop paper extended version
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent methods for people detection in overhead, fisheye images either use radially-aligned bounding boxes to represent people, assuming people always appear along image radius or require significant pre-/post-processing which radically increases computational complexity. In this work, we develop an end-to-end rotation-aware people detection method, named RAPiD, that detects people using arbitrarily-oriented bounding boxes. Our fully-convolutional neural network directly regresses the angle of each bounding box using a periodic loss function, which accounts for angle periodicities. We have also created a new dataset with spatio-temporal annotations of rotated bounding boxes, for people detection as well as other vision tasks in overhead fisheye videos. We show that our simple, yet effective method outperforms state-of-the-art results on three fisheye-image datasets. Code and dataset are available at http://vip.bu.edu/rapid .
[ { "created": "Sat, 23 May 2020 23:47:18 GMT", "version": "v1" } ]
2020-05-26
[ [ "Duan", "Zhihao", "" ], [ "Tezcan", "M. Ozan", "" ], [ "Nakamura", "Hayato", "" ], [ "Ishwar", "Prakash", "" ], [ "Konrad", "Janusz", "" ] ]
Recent methods for people detection in overhead, fisheye images either use radially-aligned bounding boxes to represent people, assuming people always appear along image radius or require significant pre-/post-processing which radically increases computational complexity. In this work, we develop an end-to-end rotation-aware people detection method, named RAPiD, that detects people using arbitrarily-oriented bounding boxes. Our fully-convolutional neural network directly regresses the angle of each bounding box using a periodic loss function, which accounts for angle periodicities. We have also created a new dataset with spatio-temporal annotations of rotated bounding boxes, for people detection as well as other vision tasks in overhead fisheye videos. We show that our simple, yet effective method outperforms state-of-the-art results on three fisheye-image datasets. Code and dataset are available at http://vip.bu.edu/rapid .
2111.03171
Victor Reis
Daniel Dadush, Haotian Jiang, Victor Reis
A New Framework for Matrix Discrepancy: Partial Coloring Bounds via Mirror Descent
24 pages
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
Motivated by the Matrix Spencer conjecture, we study the problem of finding signed sums of matrices with a small matrix norm. A well-known strategy to obtain these signs is to prove, given matrices $A_1, \dots, A_n \in \mathbb{R}^{m \times m}$, a Gaussian measure lower bound of $2^{-O(n)}$ for a scaling of the discrepancy body $\{x \in \mathbb{R}^n: \| \sum_{i=1}^n x_i A_i\| \leq 1\}$. We show this is equivalent to covering its polar with $2^{O(n)}$ translates of the cube $\frac{1}{n} B^n_\infty$, and construct such a cover via mirror descent. As applications of our framework, we show: $\bullet$ Matrix Spencer for Low-Rank Matrices. If the matrices satisfy $\|A_i\|_{\mathrm{op}} \leq 1$ and $\mathrm{rank}(A_i) \leq r$, we can efficiently find a coloring $x \in \{\pm 1\}^n$ with discrepancy $\|\sum_{i=1}^n x_i A_i \|_{\mathrm{op}} \lesssim \sqrt{n \log (\min(rm/n, r))}$. This improves upon the naive $O(\sqrt{n \log r})$ bound for random coloring and proves the matrix Spencer conjecture when $r m \leq n$. $\bullet$ Matrix Spencer for Block Diagonal Matrices. For block diagonal matrices with $\|A_i\|_{\mathrm{op}} \leq 1$ and block size $h$, we can efficiently find a coloring $x \in \{\pm 1\}^n$ with $\|\sum_{i=1}^n x_i A_i \|_{\mathrm{op}} \lesssim \sqrt{n \log (hm/n)}$. Using our proof, we reduce the matrix Spencer conjecture to the existence of a $O(\log(m/n))$ quantum relative entropy net on the spectraplex. $\bullet$ Matrix Discrepancy for Schatten Norms. We generalize our discrepancy bound for matrix Spencer to Schatten norms $2 \le p \leq q$. Given $\|A_i\|_{S_p} \leq 1$ and $\mathrm{rank}(A_i) \leq r$, we can efficiently find a partial coloring $x \in [-1,1]^n$ with $|\{i : |x_i| = 1\}| \ge n/2$ and $\|\sum_{i=1}^n x_i A_i\|_{S_q} \lesssim \sqrt{n \min(p, \log(rk))} \cdot k^{1/p-1/q}$, where $k := \min(1,m/n)$.
[ { "created": "Thu, 4 Nov 2021 21:44:53 GMT", "version": "v1" } ]
2021-11-08
[ [ "Dadush", "Daniel", "" ], [ "Jiang", "Haotian", "" ], [ "Reis", "Victor", "" ] ]
Motivated by the Matrix Spencer conjecture, we study the problem of finding signed sums of matrices with a small matrix norm. A well-known strategy to obtain these signs is to prove, given matrices $A_1, \dots, A_n \in \mathbb{R}^{m \times m}$, a Gaussian measure lower bound of $2^{-O(n)}$ for a scaling of the discrepancy body $\{x \in \mathbb{R}^n: \| \sum_{i=1}^n x_i A_i\| \leq 1\}$. We show this is equivalent to covering its polar with $2^{O(n)}$ translates of the cube $\frac{1}{n} B^n_\infty$, and construct such a cover via mirror descent. As applications of our framework, we show: $\bullet$ Matrix Spencer for Low-Rank Matrices. If the matrices satisfy $\|A_i\|_{\mathrm{op}} \leq 1$ and $\mathrm{rank}(A_i) \leq r$, we can efficiently find a coloring $x \in \{\pm 1\}^n$ with discrepancy $\|\sum_{i=1}^n x_i A_i \|_{\mathrm{op}} \lesssim \sqrt{n \log (\min(rm/n, r))}$. This improves upon the naive $O(\sqrt{n \log r})$ bound for random coloring and proves the matrix Spencer conjecture when $r m \leq n$. $\bullet$ Matrix Spencer for Block Diagonal Matrices. For block diagonal matrices with $\|A_i\|_{\mathrm{op}} \leq 1$ and block size $h$, we can efficiently find a coloring $x \in \{\pm 1\}^n$ with $\|\sum_{i=1}^n x_i A_i \|_{\mathrm{op}} \lesssim \sqrt{n \log (hm/n)}$. Using our proof, we reduce the matrix Spencer conjecture to the existence of a $O(\log(m/n))$ quantum relative entropy net on the spectraplex. $\bullet$ Matrix Discrepancy for Schatten Norms. We generalize our discrepancy bound for matrix Spencer to Schatten norms $2 \le p \leq q$. Given $\|A_i\|_{S_p} \leq 1$ and $\mathrm{rank}(A_i) \leq r$, we can efficiently find a partial coloring $x \in [-1,1]^n$ with $|\{i : |x_i| = 1\}| \ge n/2$ and $\|\sum_{i=1}^n x_i A_i\|_{S_q} \lesssim \sqrt{n \min(p, \log(rk))} \cdot k^{1/p-1/q}$, where $k := \min(1,m/n)$.
2012.15503
Qilong Zhang
Lianli Gao, Qilong Zhang, Jingkuan Song and Heng Tao Shen
Patch-wise++ Perturbation for Adversarial Targeted Attacks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although great progress has been made on adversarial attacks for deep neural networks (DNNs), their transferability is still unsatisfactory, especially for targeted attacks. Two problems behind this have long been overlooked: 1) the conventional setting of $T$ iterations with the step size of $\epsilon/T$ to comply with the $\epsilon$-constraint; in this case, most of the pixels are allowed to add very small noise, much less than $\epsilon$; and 2) usually manipulating pixel-wise noise. However, features of a pixel extracted by DNNs are influenced by its surrounding regions, and different DNNs generally focus on different discriminative regions in recognition. To tackle these issues, our previous work proposes a patch-wise iterative method (PIM) aimed at crafting adversarial examples with high transferability. Specifically, we introduce an amplification factor to the step size in each iteration, and one pixel's overall gradient overflowing the $\epsilon$-constraint is properly assigned to its surrounding regions by a project kernel. But targeted attacks aim to push the adversarial examples into the territory of a specific class, and the amplification factor may lead to underfitting. Thus, we introduce the temperature and propose a patch-wise++ iterative method (PIM++) to further improve transferability without significantly sacrificing the performance of the white-box attack. Our method can be readily integrated into any gradient-based attack method. Compared with the current state-of-the-art attack methods, we significantly improve the success rate by 33.1\% for defense models and 31.4\% for normally trained models on average.
[ { "created": "Thu, 31 Dec 2020 08:40:42 GMT", "version": "v1" }, { "created": "Thu, 7 Jan 2021 07:34:21 GMT", "version": "v2" }, { "created": "Tue, 8 Jun 2021 12:52:44 GMT", "version": "v3" } ]
2021-06-09
[ [ "Gao", "Lianli", "" ], [ "Zhang", "Qilong", "" ], [ "Song", "Jingkuan", "" ], [ "Shen", "Heng Tao", "" ] ]
Although great progress has been made on adversarial attacks for deep neural networks (DNNs), their transferability is still unsatisfactory, especially for targeted attacks. Two problems behind this have long been overlooked: 1) the conventional setting of $T$ iterations with the step size of $\epsilon/T$ to comply with the $\epsilon$-constraint; in this case, most of the pixels are allowed to add very small noise, much less than $\epsilon$; and 2) usually manipulating pixel-wise noise. However, features of a pixel extracted by DNNs are influenced by its surrounding regions, and different DNNs generally focus on different discriminative regions in recognition. To tackle these issues, our previous work proposes a patch-wise iterative method (PIM) aimed at crafting adversarial examples with high transferability. Specifically, we introduce an amplification factor to the step size in each iteration, and one pixel's overall gradient overflowing the $\epsilon$-constraint is properly assigned to its surrounding regions by a project kernel. But targeted attacks aim to push the adversarial examples into the territory of a specific class, and the amplification factor may lead to underfitting. Thus, we introduce the temperature and propose a patch-wise++ iterative method (PIM++) to further improve transferability without significantly sacrificing the performance of the white-box attack. Our method can be readily integrated into any gradient-based attack method. Compared with the current state-of-the-art attack methods, we significantly improve the success rate by 33.1\% for defense models and 31.4\% for normally trained models on average.
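To illustrate the step-size discussion above, here is a hedged, simplified sketch of one iteration of an amplified sign-gradient update. The real PIM/PIM++ additionally redistribute the overflowing perturbation through a project kernel and, for PIM++, apply a temperature, neither of which is reproduced here; all names are hypothetical.

```python
import numpy as np

def amplified_sign_step(x, grad, eps, T, beta):
    """One simplified, hypothetical iteration of an amplified sign-gradient
    update. The conventional setting uses step size eps/T; PIM-style methods
    multiply it by an amplification factor beta. Here the overflow beyond the
    eps-ball is simply clipped, whereas the paper redistributes it to
    surrounding pixels with a project kernel (and PIM++ adds a temperature).
    """
    step = beta * (eps / T)                   # amplified step size
    x_adv = x + step * np.sign(grad)          # sign-gradient update
    return np.clip(x_adv, x - eps, x + eps)   # stay within the eps-constraint
```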
2203.04258
Joachim Bruneau-Queyreix
Matthieu Pigaglio, Joachim Bruneau-Queyreix, David Bromberg, Davide Frey, Etienne Rivi\`ere, Laurent R\'eveill\`ere
RAPTEE: Leveraging trusted execution environments for Byzantine-tolerant peer sampling services
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Peer sampling is a first-class abstraction used in distributed systems for overlay management and information dissemination. The goal of peer sampling is to continuously build and refresh a partial and local view of the full membership of a dynamic, large-scale distributed system. Malicious nodes under the control of an adversary may aim at being over-represented in the views of correct nodes, increasing their impact on the proper operation of protocols built over peer sampling. State-of-the-art Byzantine resilient peer sampling protocols reduce this bias as long as Byzantines are not overly present. This paper studies the benefits brought to the resilience of peer sampling services when considering that a small portion of trusted nodes can run code whose authenticity and integrity can be assessed within a trusted execution environment, and specifically Intel's software guard extensions technology (SGX). We present RAPTEE, a protocol that builds and leverages trusted gossip-based communications to hamper an adversary's ability to increase its system-wide representation in the views of all nodes. We apply RAPTEE to BRAHMS, the most resilient peer sampling protocol to date. Experiments with 10,000 nodes show that with only 1% of SGX-capable devices, RAPTEE can reduce the proportion of Byzantine IDs in the view of honest nodes by up to 17% when the system contains 10% of Byzantine nodes. In addition, the security guarantees of RAPTEE hold even in the presence of a powerful attacker attempting to identify trusted nodes and injecting view-poisoned trusted nodes.
[ { "created": "Tue, 8 Mar 2022 18:20:30 GMT", "version": "v1" } ]
2022-03-09
[ [ "Pigaglio", "Matthieu", "" ], [ "Bruneau-Queyreix", "Joachim", "" ], [ "Bromberg", "David", "" ], [ "Frey", "Davide", "" ], [ "Rivière", "Etienne", "" ], [ "Réveillère", "Laurent", "" ] ]
Peer sampling is a first-class abstraction used in distributed systems for overlay management and information dissemination. The goal of peer sampling is to continuously build and refresh a partial and local view of the full membership of a dynamic, large-scale distributed system. Malicious nodes under the control of an adversary may aim at being over-represented in the views of correct nodes, increasing their impact on the proper operation of protocols built over peer sampling. State-of-the-art Byzantine resilient peer sampling protocols reduce this bias as long as Byzantines are not overly present. This paper studies the benefits brought to the resilience of peer sampling services when considering that a small portion of trusted nodes can run code whose authenticity and integrity can be assessed within a trusted execution environment, and specifically Intel's software guard extensions technology (SGX). We present RAPTEE, a protocol that builds and leverages trusted gossip-based communications to hamper an adversary's ability to increase its system-wide representation in the views of all nodes. We apply RAPTEE to BRAHMS, the most resilient peer sampling protocol to date. Experiments with 10,000 nodes show that with only 1% of SGX-capable devices, RAPTEE can reduce the proportion of Byzantine IDs in the view of honest nodes by up to 17% when the system contains 10% of Byzantine nodes. In addition, the security guarantees of RAPTEE hold even in the presence of a powerful attacker attempting to identify trusted nodes and injecting view-poisoned trusted nodes.
1612.07309
Li Li
Li Li, Zhu Li, Bin Li, Dong Liu, and Houqiang Li
Pseudo Sequence based 2-D hierarchical reference structure for Light-Field Image Compression
10 pages, 4 figures, 2 tables
null
10.1109/JSTSP.2017.2725198
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a novel pseudo-sequence-based 2-D hierarchical reference structure for light-field image compression. In the proposed scheme, we first decompose the light-field image into multiple views and organize them into a 2-D coding structure according to the spatial coordinates of the corresponding microlens. Then we develop three main technologies to optimize the 2-D coding structure. First, we divide all the views into four quadrants, and all the views are encoded one quadrant after another to reduce the reference buffer size as much as possible. Inside each quadrant, all the views are encoded hierarchically to fully exploit the correlations between different views. Second, we propose to use the distance between the current view and its reference views as the criterion for selecting better reference frames for each inter view. Third, we propose to use the spatial relative positions between different views to achieve more accurate motion vector scaling. The whole scheme is implemented in the reference software of High Efficiency Video Coding. The experimental results demonstrate that the proposed pseudo-sequence-based 2-D hierarchical structure can achieve up to 14.2% bit-rate savings compared with the state-of-the-art light-field image compression method.
[ { "created": "Wed, 21 Dec 2016 20:31:57 GMT", "version": "v1" } ]
2017-11-22
[ [ "Li", "Li", "" ], [ "Li", "Zhu", "" ], [ "Li", "Bin", "" ], [ "Liu", "Dong", "" ], [ "Li", "Houqiang", "" ] ]
In this paper, we present a novel pseudo-sequence-based 2-D hierarchical reference structure for light-field image compression. In the proposed scheme, we first decompose the light-field image into multiple views and organize them into a 2-D coding structure according to the spatial coordinates of the corresponding microlens. Then we develop three main technologies to optimize the 2-D coding structure. First, we divide all the views into four quadrants, and all the views are encoded one quadrant after another to reduce the reference buffer size as much as possible. Inside each quadrant, all the views are encoded hierarchically to fully exploit the correlations between different views. Second, we propose to use the distance between the current view and its reference views as the criterion for selecting better reference frames for each inter view. Third, we propose to use the spatial relative positions between different views to achieve more accurate motion vector scaling. The whole scheme is implemented in the reference software of High Efficiency Video Coding. The experimental results demonstrate that the proposed pseudo-sequence-based 2-D hierarchical structure can achieve up to 14.2% bit-rate savings compared with the state-of-the-art light-field image compression method.
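As a small illustration of the quadrant-based view organization described above, the helper below assigns a view to one of four quadrants from its spatial coordinates. The coordinate convention, the handling of views on the centre row/column, and the function name are assumptions, not details from the paper.

```python
def assign_quadrant(u, v, cu, cv):
    """Assign a view at spatial coordinates (u, v) to one of the four
    quadrants around the centre view (cu, cv).

    Illustrative helper only; the coordinate convention and the handling of
    views on the centre row/column are assumptions, not details of the paper.
    """
    horizontal = "left" if u < cu else "right"
    vertical = "top" if v < cv else "bottom"
    return vertical + "-" + horizontal

# Example: a 9x9 grid of views with the centre view at (4, 4)
print(assign_quadrant(1, 2, 4, 4))  # -> 'top-left'
print(assign_quadrant(7, 5, 4, 4))  # -> 'bottom-right'
```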
2105.10569
Aileen Nielsen
Aileen Nielsen
Measuring Lay Reactions to Personal Data Markets
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
The recording, aggregation, and exchange of personal data is necessary for the development of socially-relevant machine learning applications. However, anecdotal and survey evidence shows that ordinary people feel discontent and even anger regarding data collection practices that are currently typical and legal. This suggests that personal data markets in their current form do not adhere to the norms applied by ordinary people. The present study experimentally probes whether market transactions in a typical online scenario are accepted when evaluated by lay people. The results show that a high percentage of study participants refused to participate in a data pricing exercise, even in a commercial context where market rules would typically be expected to apply. For those participants who did price the data, the median price was an order of magnitude higher than the market price. These results call into question the notice and consent market paradigm that is used by technology firms and government regulators when evaluating data flows. The results also point to a conceptual mismatch between cultural and legal expectations regarding the use of personal data.
[ { "created": "Fri, 21 May 2021 20:56:19 GMT", "version": "v1" } ]
2021-05-25
[ [ "Nielsen", "Aileen", "" ] ]
The recording, aggregation, and exchange of personal data is necessary for the development of socially-relevant machine learning applications. However, anecdotal and survey evidence shows that ordinary people feel discontent and even anger regarding data collection practices that are currently typical and legal. This suggests that personal data markets in their current form do not adhere to the norms applied by ordinary people. The present study experimentally probes whether market transactions in a typical online scenario are accepted when evaluated by lay people. The results show that a high percentage of study participants refused to participate in a data pricing exercise, even in a commercial context where market rules would typically be expected to apply. For those participants who did price the data, the median price was an order of magnitude higher than the market price. These results call into question the notice and consent market paradigm that is used by technology firms and government regulators when evaluating data flows. The results also point to a conceptual mismatch between cultural and legal expectations regarding the use of personal data.
1210.1639
Vaneet Aggarwal
Melissa Duarte, Ashutosh Sabharwal, Vaneet Aggarwal, Rittwik Jana, K. K. Ramakrishnan, Christopher Rice and N. K. Shankaranarayanan
Design and Characterization of a Full-duplex Multi-antenna System for WiFi networks
44 pages, 11 figures, 8 tables. Submitted to IEEE Transactions on Vehicular Technology, Oct 2012
IEEE Transactions on Vehicular Technology, vol.63, no.3, pp.1160--1177, March 2014
10.1109/TVT.2013.2284712
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present an experimental and simulation based study to evaluate the use of full-duplex as a mode in practical IEEE 802.11 networks. To enable the study, we designed a 20 MHz multi-antenna OFDM full-duplex physical layer and a full-duplex capable MAC protocol which is backward compatible with current 802.11. Our extensive over-the-air experiments, simulations and analysis demonstrate the following two results. First, the use of multiple antennas at the physical layer leads to a higher ergodic throughput than its hardware-equivalent multi-antenna half-duplex counterparts, for SNRs above the median SNR encountered in practical WiFi deployments. Second, the proposed MAC translates the physical layer rate gain into near doubling of throughput for multi-node single-AP networks. The two combined results allow us to conclude that there are potentially significant benefits gained from including a full-duplex mode in future WiFi standards.
[ { "created": "Fri, 5 Oct 2012 03:40:45 GMT", "version": "v1" }, { "created": "Mon, 8 Oct 2012 02:32:05 GMT", "version": "v2" } ]
2014-07-15
[ [ "Duarte", "Melissa", "" ], [ "Sabharwal", "Ashutosh", "" ], [ "Aggarwal", "Vaneet", "" ], [ "Jana", "Rittwik", "" ], [ "Ramakrishnan", "K. K.", "" ], [ "Rice", "Christopher", "" ], [ "Shankaranarayanan", "N. K.", "" ] ]
In this paper, we present an experimental and simulation based study to evaluate the use of full-duplex as a mode in practical IEEE 802.11 networks. To enable the study, we designed a 20 MHz multi-antenna OFDM full-duplex physical layer and a full-duplex capable MAC protocol which is backward compatible with current 802.11. Our extensive over-the-air experiments, simulations and analysis demonstrate the following two results. First, the use of multiple antennas at the physical layer leads to a higher ergodic throughput than its hardware-equivalent multi-antenna half-duplex counterparts, for SNRs above the median SNR encountered in practical WiFi deployments. Second, the proposed MAC translates the physical layer rate gain into near doubling of throughput for multi-node single-AP networks. The two combined results allow us to conclude that there are potentially significant benefits gained from including a full-duplex mode in future WiFi standards.
2207.10777
Khalid Oublal
Khalid Oublal and Xinyi Dai
An advanced combination of semi-supervised Normalizing Flow & Yolo (YoloNF) to detect and recognize vehicle license plates
arXiv admin note: text overlap with arXiv:1802.09567 by other authors; text overlap with arXiv:2012.06737 by other authors without attribution
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Fully Automatic License Plate Recognition (ALPR) has been a frequent research topic due to several practical applications. However, many of the current solutions are still not robust enough in real situations, commonly depending on many constraints. This paper presents a robust and efficient ALPR system based on the state-of-the-art YOLO object detector and normalizing flows. The model uses two new strategies. First, a two-stage network uses YOLO and a normalizing-flow-based model for normalization to detect License Plates (LPs) and to recognize the LP numbers and Arabic characters. Second, multi-scale image transformations are implemented to provide a solution to the problem of YOLO-cropped LP detections that include significant background noise. Furthermore, extensive experiments are conducted on a new dataset with realistic scenarios, and we introduce a larger public annotated dataset collected from Moroccan plates. We demonstrate that our proposed model can learn from a small number of samples free of single or multiple characters. The dataset will also be made publicly available to encourage further studies and research on plate detection and recognition.
[ { "created": "Thu, 21 Jul 2022 22:22:57 GMT", "version": "v1" } ]
2022-07-25
[ [ "Oublal", "Khalid", "" ], [ "Dai", "Xinyi", "" ] ]
Fully Automatic License Plate Recognition (ALPR) has been a frequent research topic due to several practical applications. However, many of the current solutions are still not robust enough in real situations, commonly depending on many constraints. This paper presents a robust and efficient ALPR system based on the state-of-the-art YOLO object detector and normalizing flows. The model uses two new strategies. First, a two-stage network uses YOLO and a normalizing-flow-based model for normalization to detect License Plates (LPs) and to recognize the LP numbers and Arabic characters. Second, multi-scale image transformations are implemented to provide a solution to the problem of YOLO-cropped LP detections that include significant background noise. Furthermore, extensive experiments are conducted on a new dataset with realistic scenarios, and we introduce a larger public annotated dataset collected from Moroccan plates. We demonstrate that our proposed model can learn from a small number of samples free of single or multiple characters. The dataset will also be made publicly available to encourage further studies and research on plate detection and recognition.
2006.13341
Liliane Almeida
Liliane Rodrigues de Almeida, Gilson A. Giraldi, Marcelo Bernardes Vieira
Applying Lie Groups Approaches for Rigid Registration of Point Clouds
29 pages, 4 figures, 1 table
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In recent decades, a body of literature has appeared that uses Lie group theory to solve problems in computer vision. On the other hand, Lie algebraic representations of the transformations therein were introduced to overcome the difficulties posed by the group structure by mapping the transformation groups to linear spaces. In this paper we focus on the application of Lie groups and Lie algebras to find the rigid transformation that best registers two surfaces represented by point clouds. The so-called pairwise rigid registration can be formulated by comparing intrinsic second-order orientation tensors that encode local geometry. These tensors can be (locally) represented by symmetric non-negative definite matrices. In this paper we interpret the obtained tensor field as a multivariate normal model. So, we start with the fact that the space of Gaussians can be equipped with a Lie group structure that is isomorphic to a subgroup of the upper triangular matrices. Consequently, the associated Lie algebra structure enables us to handle Gaussians, and hence to compare orientation tensors, with Euclidean operations. We apply this methodology to variants of the Iterative Closest Point (ICP), a known technique for pairwise registration. We compare the obtained results with the original implementations that apply the comparative tensor shape factor (CTSF), which is a similarity notion based on the eigenvalues of the orientation tensors. We notice that the similarity measure in tensor spaces directly derived from Lie's approach is not invariant under rotations, which is a problem in terms of rigid registration. Despite this, the computational experiments performed show promising results when embedding orientation tensor fields in Lie algebras.
[ { "created": "Tue, 23 Jun 2020 21:26:57 GMT", "version": "v1" } ]
2020-06-25
[ [ "de Almeida", "Liliane Rodrigues", "" ], [ "Giraldi", "Gilson A.", "" ], [ "Vieira", "Marcelo Bernardes", "" ] ]
In recent decades, a body of literature has appeared that uses Lie group theory to solve problems in computer vision. On the other hand, Lie algebraic representations of the transformations therein were introduced to overcome the difficulties posed by the group structure by mapping the transformation groups to linear spaces. In this paper we focus on the application of Lie groups and Lie algebras to find the rigid transformation that best registers two surfaces represented by point clouds. The so-called pairwise rigid registration can be formulated by comparing intrinsic second-order orientation tensors that encode local geometry. These tensors can be (locally) represented by symmetric non-negative definite matrices. In this paper we interpret the obtained tensor field as a multivariate normal model. So, we start with the fact that the space of Gaussians can be equipped with a Lie group structure that is isomorphic to a subgroup of the upper triangular matrices. Consequently, the associated Lie algebra structure enables us to handle Gaussians, and hence to compare orientation tensors, with Euclidean operations. We apply this methodology to variants of the Iterative Closest Point (ICP), a known technique for pairwise registration. We compare the obtained results with the original implementations that apply the comparative tensor shape factor (CTSF), which is a similarity notion based on the eigenvalues of the orientation tensors. We notice that the similarity measure in tensor spaces directly derived from Lie's approach is not invariant under rotations, which is a problem in terms of rigid registration. Despite this, the computational experiments performed show promising results when embedding orientation tensor fields in Lie algebras.
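For readers unfamiliar with comparing tensors "with Euclidean operations", the sketch below shows one common concrete instance of the idea, the log-Euclidean distance between symmetric positive-definite matrices. Note that the paper's actual construction works through the Lie group of Gaussians (upper-triangular matrices), so this is only an analogous illustration, not the paper's measure.

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(S1, S2):
    """Frobenius distance between two SPD matrices after mapping them to a
    linear space via the matrix logarithm.

    Illustration only: one common way of comparing orientation tensors with
    Euclidean operations. The paper's construction goes through the Lie group
    of Gaussians (upper-triangular matrices), not through this exact map.
    """
    return np.linalg.norm(logm(S1) - logm(S2), ord="fro")

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.5, 0.1], [0.1, 1.2]])
print(log_euclidean_distance(A, B))
```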
0708.2353
Vladimir Vovk
Vladimir Vovk
Continuous and randomized defensive forecasting: unified view
10 pages. The new version: (1) relaxes the assumption that the outcome space is finite, and now it is only assumed to be compact; (2) shows that in the case where the outcome space is finite of cardinality C, the randomized forecasts can be chosen concentrated on a finite set of cardinality at most C
null
null
null
cs.LG
null
Defensive forecasting is a method of transforming laws of probability (stated in game-theoretic terms as strategies for Sceptic) into forecasting algorithms. There are two known varieties of defensive forecasting: "continuous", in which Sceptic's moves are assumed to depend on the forecasts in a (semi)continuous manner and which produces deterministic forecasts, and "randomized", in which the dependence of Sceptic's moves on the forecasts is arbitrary and Forecaster's moves are allowed to be randomized. This note shows that the randomized variety can be obtained from the continuous variety by smearing Sceptic's moves to make them continuous.
[ { "created": "Fri, 17 Aug 2007 12:18:24 GMT", "version": "v1" }, { "created": "Thu, 23 Aug 2007 12:44:34 GMT", "version": "v2" } ]
2007-08-23
[ [ "Vovk", "Vladimir", "" ] ]
Defensive forecasting is a method of transforming laws of probability (stated in game-theoretic terms as strategies for Sceptic) into forecasting algorithms. There are two known varieties of defensive forecasting: "continuous", in which Sceptic's moves are assumed to depend on the forecasts in a (semi)continuous manner and which produces deterministic forecasts, and "randomized", in which the dependence of Sceptic's moves on the forecasts is arbitrary and Forecaster's moves are allowed to be randomized. This note shows that the randomized variety can be obtained from the continuous variety by smearing Sceptic's moves to make them continuous.
2209.12028
Lichen Zhao
Lichen Zhao, Daigang Cai, Jing Zhang, Lu Sheng, Dong Xu, Rui Zheng, Yinjie Zhao, Lipeng Wang and Xibo Fan
Towards Explainable 3D Grounded Visual Question Answering: A New Benchmark and Strong Baseline
13 pages, 10 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, 3D vision-and-language tasks have attracted increasing research interest. Compared to other vision-and-language tasks, the 3D visual question answering (VQA) task is less exploited and is more susceptible to language priors and co-reference ambiguity. Meanwhile, a couple of recently proposed 3D VQA datasets do not support the 3D VQA task well due to their limited scale and annotation methods. In this work, we formally define and address a 3D grounded VQA task by collecting a new 3D VQA dataset, referred to as FE-3DGQA, with diverse and relatively free-form question-answer pairs, as well as dense and completely grounded bounding box annotations. To achieve more explainable answers, we labelled the objects that appear in the complex QA pairs with different semantic types, including answer-grounded objects (whether or not they appear in the questions) and contextual objects for answer-grounded objects. We also propose a new 3D VQA framework to effectively predict the completely visually grounded and explainable answer. Extensive experiments verify that our newly collected benchmark dataset can be effectively used to evaluate various 3D VQA methods from different aspects, and our newly proposed framework also achieves state-of-the-art performance on the new benchmark dataset. Both the newly collected dataset and our codes will be publicly available at http://github.com/zlccccc/3DGQA.
[ { "created": "Sat, 24 Sep 2022 15:09:02 GMT", "version": "v1" } ]
2022-09-27
[ [ "Zhao", "Lichen", "" ], [ "Cai", "Daigang", "" ], [ "Zhang", "Jing", "" ], [ "Sheng", "Lu", "" ], [ "Xu", "Dong", "" ], [ "Zheng", "Rui", "" ], [ "Zhao", "Yinjie", "" ], [ "Wang", "Lipeng", "" ], [ "Fan", "Xibo", "" ] ]
Recently, 3D vision-and-language tasks have attracted increasing research interest. Compared to other vision-and-language tasks, the 3D visual question answering (VQA) task is less exploited and is more susceptible to language priors and co-reference ambiguity. Meanwhile, a couple of recently proposed 3D VQA datasets do not support the 3D VQA task well due to their limited scale and annotation methods. In this work, we formally define and address a 3D grounded VQA task by collecting a new 3D VQA dataset, referred to as FE-3DGQA, with diverse and relatively free-form question-answer pairs, as well as dense and completely grounded bounding box annotations. To achieve more explainable answers, we labelled the objects that appear in the complex QA pairs with different semantic types, including answer-grounded objects (whether or not they appear in the questions) and contextual objects for answer-grounded objects. We also propose a new 3D VQA framework to effectively predict the completely visually grounded and explainable answer. Extensive experiments verify that our newly collected benchmark dataset can be effectively used to evaluate various 3D VQA methods from different aspects, and our newly proposed framework also achieves state-of-the-art performance on the new benchmark dataset. Both the newly collected dataset and our codes will be publicly available at http://github.com/zlccccc/3DGQA.
1810.06917
Fragkiskos Malliaros
Abdulkadir \c{C}elikkanat, Fragkiskos D. Malliaros
TNE: A Latent Model for Representation Learning on Networks
9 pages
null
null
null
cs.LG cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network representation learning (NRL) methods aim to map each vertex into a low dimensional space by preserving the local and global structure of a given network, and in recent years they have received significant attention thanks to their success in several challenging problems. Although various approaches have been proposed to compute node embeddings, many successful methods benefit from random walks in order to transform a given network into a collection of sequences of nodes and then they aim to learn the representation of nodes by predicting the context of each vertex within the sequence. In this paper, we introduce a general framework to enhance the embeddings of nodes acquired by means of random walk-based approaches. Similar to the notion of topical word embeddings in NLP, the proposed method assigns each vertex to a topic with the aid of various statistical models and community detection methods, and then generates the enhanced community representations. We evaluate our method on two downstream tasks: node classification and link prediction. The experimental results demonstrate that the incorporation of vertex and topic embeddings outperforms widely-known baseline NRL methods.
[ { "created": "Tue, 16 Oct 2018 10:26:47 GMT", "version": "v1" } ]
2018-10-17
[ [ "Çelikkanat", "Abdulkadir", "" ], [ "Malliaros", "Fragkiskos D.", "" ] ]
Network representation learning (NRL) methods aim to map each vertex into a low dimensional space by preserving the local and global structure of a given network, and in recent years they have received significant attention thanks to their success in several challenging problems. Although various approaches have been proposed to compute node embeddings, many successful methods benefit from random walks in order to transform a given network into a collection of sequences of nodes and then they aim to learn the representation of nodes by predicting the context of each vertex within the sequence. In this paper, we introduce a general framework to enhance the embeddings of nodes acquired by means of random walk-based approaches. Similar to the notion of topical word embeddings in NLP, the proposed method assigns each vertex to a topic with the aid of various statistical models and community detection methods, and then generates the enhanced community representations. We evaluate our method on two downstream tasks: node classification and link prediction. The experimental results demonstrate that the incorporation of vertex and topic embeddings outperforms widely-known baseline NRL methods.
2408.05065
Wenwen Min
Lin Huang, Xiaofei Liu, Shunfang Wang and Wenwen Min
Masked adversarial neural network for cell type deconvolution in spatial transcriptomics
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurately determining cell type composition in disease-relevant tissues is crucial for identifying disease targets. Most existing spatial transcriptomics (ST) technologies cannot achieve single-cell resolution, making it challenging to accurately determine cell types. To address this issue, various deconvolution methods have been developed. Most of these methods use single-cell RNA sequencing (scRNA-seq) data from the same tissue as a reference to infer cell types in ST data spots. However, they often overlook the differences between scRNA-seq and ST data. To overcome this limitation, we propose a Masked Adversarial Neural Network (MACD). MACD employs adversarial learning to align real ST data with simulated ST data generated from scRNA-seq data. By mapping them into a unified latent space, it can minimize the differences between the two types of data. Additionally, MACD uses masking techniques to effectively learn the features of real ST data and mitigate noise. We evaluated MACD on 32 simulated datasets and 2 real datasets, demonstrating its accuracy in performing cell type deconvolution. All code and public datasets used in this paper are available at https://github.com/wenwenmin/MACD and https://zenodo.org/records/12804822.
[ { "created": "Fri, 9 Aug 2024 13:46:28 GMT", "version": "v1" } ]
2024-08-12
[ [ "Huang", "Lin", "" ], [ "Liu", "Xiaofei", "" ], [ "Wang", "Shunfang", "" ], [ "Min", "Wenwen", "" ] ]
Accurately determining cell type composition in disease-relevant tissues is crucial for identifying disease targets. Most existing spatial transcriptomics (ST) technologies cannot achieve single-cell resolution, making it challenging to accurately determine cell types. To address this issue, various deconvolution methods have been developed. Most of these methods use single-cell RNA sequencing (scRNA-seq) data from the same tissue as a reference to infer cell types in ST data spots. However, they often overlook the differences between scRNA-seq and ST data. To overcome this limitation, we propose a Masked Adversarial Neural Network (MACD). MACD employs adversarial learning to align real ST data with simulated ST data generated from scRNA-seq data. By mapping them into a unified latent space, it can minimize the differences between the two types of data. Additionally, MACD uses masking techniques to effectively learn the features of real ST data and mitigate noise. We evaluated MACD on 32 simulated datasets and 2 real datasets, demonstrating its accuracy in performing cell type deconvolution. All code and public datasets used in this paper are available at https://github.com/wenwenmin/MACD and https://zenodo.org/records/12804822.
2407.19705
Jingwei Zhu
Jingwei Zhu, Minghuan Tan, Min Yang, Ruixue Li, Hamid Alinejad-Rokny
CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare
Technical Report
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The rapid progress in Large Language Models (LLMs) has prompted the creation of numerous benchmarks to evaluate their capabilities. This study focuses on the Comprehensive Medical Benchmark in Chinese (CMB), showcasing how dataset diversity and distribution in supervised fine-tuning (SFT) may enhance LLM performance. Remarkably, we successfully trained a smaller base model to achieve scores comparable to larger models, indicating that a diverse and well-distributed dataset can optimize performance regardless of model size. This study suggests that even smaller models may reach high performance levels with carefully curated and varied datasets. By integrating a wide range of instructional content, our approach addresses potential issues such as data quality inconsistencies. Our results imply that a broader spectrum of training data may enhance a model's ability to generalize and perform effectively across different medical scenarios, highlighting the importance of dataset quality and diversity in fine-tuning processes. We open-source the model for future research at https://github.com/CAS-SIAT-XinHai/CollectiveSFT
[ { "created": "Mon, 29 Jul 2024 05:00:48 GMT", "version": "v1" }, { "created": "Tue, 30 Jul 2024 08:23:05 GMT", "version": "v2" } ]
2024-07-31
[ [ "Zhu", "Jingwei", "" ], [ "Tan", "Minghuan", "" ], [ "Yang", "Min", "" ], [ "Li", "Ruixue", "" ], [ "Alinejad-Rokny", "Hamid", "" ] ]
The rapid progress in Large Language Models (LLMs) has prompted the creation of numerous benchmarks to evaluate their capabilities. This study focuses on the Comprehensive Medical Benchmark in Chinese (CMB), showcasing how dataset diversity and distribution in supervised fine-tuning (SFT) may enhance LLM performance. Remarkably, we successfully trained a smaller base model to achieve scores comparable to larger models, indicating that a diverse and well-distributed dataset can optimize performance regardless of model size. This study suggests that even smaller models may reach high performance levels with carefully curated and varied datasets. By integrating a wide range of instructional content, our approach addresses potential issues such as data quality inconsistencies. Our results imply that a broader spectrum of training data may enhance a model's ability to generalize and perform effectively across different medical scenarios, highlighting the importance of dataset quality and diversity in fine-tuning processes. We open-source the model for future research at https://github.com/CAS-SIAT-XinHai/CollectiveSFT
2106.13901
Ramya Srinivasan
Ramya Srinivasan and Devi Parikh
Building Bridges: Generative Artworks to Explore AI Ethics
null
CVPR Workshop on Ethical Considerations in Creative Applications of Computer Vision, 2022
null
null
cs.CY cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society. Across academia, industry, and government bodies, a variety of endeavours are being pursued towards enhancing AI ethics. A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests. These different perspectives are often not understood, due in part to communication gaps. For example, AI researchers who design and develop AI models are not necessarily aware of the instability induced in consumers' lives by the compounded effects of AI decisions. Educating different stakeholders about their roles and responsibilities in the broader context becomes necessary. In this position paper, we outline some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools for surfacing different perspectives. We hope to spark interdisciplinary discussions about computational creativity broadly as a tool for enhancing AI ethics.
[ { "created": "Fri, 25 Jun 2021 22:31:55 GMT", "version": "v1" } ]
2022-05-18
[ [ "Srinivasan", "Ramya", "" ], [ "Parikh", "Devi", "" ] ]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society. Across academia, industry, and government bodies, a variety of endeavours are being pursued towards enhancing AI ethics. A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests. These different perspectives are often not understood, due in part to communication gaps. For example, AI researchers who design and develop AI models are not necessarily aware of the instability induced in consumers' lives by the compounded effects of AI decisions. Educating different stakeholders about their roles and responsibilities in the broader context becomes necessary. In this position paper, we outline some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools for surfacing different perspectives. We hope to spark interdisciplinary discussions about computational creativity broadly as a tool for enhancing AI ethics.
2211.05986
Swati Sharma
Swati Sharma, Aditi Partap, Maria Angels de Luis Balaguer, Sara Malvar, Ranveer Chandra
DeepG2P: Fusing Multi-Modal Data to Improve Crop Production
Under review in AISTATS2023
null
null
null
cs.LG cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Agriculture is at the heart of the solution to achieve sustainability in feeding the world population, but advancing our understanding of how agricultural output responds to climatic variability is still needed. Precision Agriculture (PA), which is a management strategy that uses technology such as remote sensing, Geographical Information System (GIS), and machine learning for decision making in the field, has emerged as a promising approach to enhance crop production, increase yield, and reduce water and nutrient losses and environmental impacts. In this context, multiple models to predict agricultural phenotypes, such as crop yield, from genomics (G), environment (E), weather and soil, and field management practices (M) have been developed. These models have traditionally been based on mechanistic or statistical approaches. However, AI approaches are intrinsically well-suited to model complex interactions and have more recently been developed, outperforming classical methods. Here, we present a Natural Language Processing (NLP)-based neural network architecture to process the G, E and M inputs and their interactions. We show that by modeling DNA as natural language, our approach performs better than previous approaches when tested for new environments and similarly to other approaches for unseen seed varieties.
[ { "created": "Fri, 11 Nov 2022 03:32:44 GMT", "version": "v1" } ]
2022-11-14
[ [ "Sharma", "Swati", "" ], [ "Partap", "Aditi", "" ], [ "Balaguer", "Maria Angels de Luis", "" ], [ "Malvar", "Sara", "" ], [ "Chandra", "Ranveer", "" ] ]
Agriculture is at the heart of the solution to achieve sustainability in feeding the world population, but advancing our understanding of how agricultural output responds to climatic variability is still needed. Precision Agriculture (PA), which is a management strategy that uses technology such as remote sensing, Geographical Information System (GIS), and machine learning for decision making in the field, has emerged as a promising approach to enhance crop production, increase yield, and reduce water and nutrient losses and environmental impacts. In this context, multiple models to predict agricultural phenotypes, such as crop yield, from genomics (G), environment (E), weather and soil, and field management practices (M) have been developed. These models have traditionally been based on mechanistic or statistical approaches. However, AI approaches are intrinsically well-suited to model complex interactions and have more recently been developed, outperforming classical methods. Here, we present a Natural Language Processing (NLP)-based neural network architecture to process the G, E and M inputs and their interactions. We show that by modeling DNA as natural language, our approach performs better than previous approaches when tested for new environments and similarly to other approaches for unseen seed varieties.
1401.4532
Yanfei Yan
Yanfei Yan, Ling Liu, Cong Ling
Polar Lattices for Strong Secrecy Over the Mod-$\Lambda$ Gaussian Wiretap Channel
7 pages, 2 figures, extended version of a paper submitted to ISIT'14
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Polar lattices, which are constructed from polar codes, are provably good for the additive white Gaussian noise (AWGN) channel. In this work, we propose a new polar lattice construction that achieves the secrecy capacity under the strong secrecy criterion over the mod-$\Lambda$ Gaussian wiretap channel. This construction leads to an AWGN-good lattice and a secrecy-good lattice simultaneously. The design methodology is mainly based on the equivalence in terms of polarization between the $\Lambda/\Lambda'$ channel in lattice coding and the equivalent channel derived from the chain rule of mutual information in multilevel coding.
[ { "created": "Sat, 18 Jan 2014 11:41:44 GMT", "version": "v1" }, { "created": "Fri, 24 Jan 2014 10:49:01 GMT", "version": "v2" } ]
2014-01-27
[ [ "Yan", "Yanfei", "" ], [ "Liu", "Ling", "" ], [ "Ling", "Cong", "" ] ]
Polar lattices, which are constructed from polar codes, are provably good for the additive white Gaussian noise (AWGN) channel. In this work, we propose a new polar lattice construction that achieves the secrecy capacity under the strong secrecy criterion over the mod-$\Lambda$ Gaussian wiretap channel. This construction leads to an AWGN-good lattice and a secrecy-good lattice simultaneously. The design methodology is mainly based on the equivalence in terms of polarization between the $\Lambda/\Lambda'$ channel in lattice coding and the equivalent channel derived from the chain rule of mutual information in multilevel coding.
2202.14019
Paritosh Parmar
Paritosh Parmar, Amol Gharat, Helge Rhodin
Domain Knowledge-Informed Self-Supervised Representations for Workout Form Assessment
null
null
null
null
cs.CV cs.AI cs.HC cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Maintaining proper form while exercising is important for preventing injuries and maximizing muscle mass gains. Detecting errors in workout form naturally requires estimating the human body pose. However, off-the-shelf pose estimators struggle to perform well on the videos recorded in gym scenarios due to factors such as camera angles, occlusion from gym equipment, illumination, and clothing. To make matters worse, the errors to be detected in the workouts are very subtle. To that end, we propose to learn exercise-oriented image and video representations from unlabeled samples such that a small dataset annotated by experts suffices for supervised error detection. In particular, our domain knowledge-informed self-supervised approaches (pose contrastive learning and motion disentangling) exploit the harmonic motion of the exercise actions, and capitalize on the large variances in camera angles, clothes, and illumination to learn powerful representations. To facilitate our self-supervised pretraining and supervised finetuning, we curated a new exercise dataset, \emph{Fitness-AQA} (\url{https://github.com/ParitoshParmar/Fitness-AQA}), comprising three exercises: BackSquat, BarbellRow, and OverheadPress. It has been annotated by expert trainers for multiple crucial and typically occurring exercise errors. Experimental results show that our self-supervised representations outperform off-the-shelf 2D- and 3D-pose estimators and several other baselines. We also show that our approaches can be applied to other domains/tasks such as pose estimation and dive quality assessment.
[ { "created": "Mon, 28 Feb 2022 18:40:02 GMT", "version": "v1" }, { "created": "Fri, 21 Oct 2022 17:10:15 GMT", "version": "v2" } ]
2022-10-24
[ [ "Parmar", "Paritosh", "" ], [ "Gharat", "Amol", "" ], [ "Rhodin", "Helge", "" ] ]
Maintaining proper form while exercising is important for preventing injuries and maximizing muscle mass gains. Detecting errors in workout form naturally requires estimating the human body pose. However, off-the-shelf pose estimators struggle to perform well on the videos recorded in gym scenarios due to factors such as camera angles, occlusion from gym equipment, illumination, and clothing. To make matters worse, the errors to be detected in the workouts are very subtle. To that end, we propose to learn exercise-oriented image and video representations from unlabeled samples such that a small dataset annotated by experts suffices for supervised error detection. In particular, our domain knowledge-informed self-supervised approaches (pose contrastive learning and motion disentangling) exploit the harmonic motion of the exercise actions, and capitalize on the large variances in camera angles, clothes, and illumination to learn powerful representations. To facilitate our self-supervised pretraining and supervised finetuning, we curated a new exercise dataset, \emph{Fitness-AQA} (\url{https://github.com/ParitoshParmar/Fitness-AQA}), comprising three exercises: BackSquat, BarbellRow, and OverheadPress. It has been annotated by expert trainers for multiple crucial and typically occurring exercise errors. Experimental results show that our self-supervised representations outperform off-the-shelf 2D- and 3D-pose estimators and several other baselines. We also show that our approaches can be applied to other domains/tasks such as pose estimation and dive quality assessment.
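The pose contrastive learning mentioned above builds on a standard contrastive scaffold. The sketch below is a generic InfoNCE-style objective given purely as background; the paper's actual objectives (pose contrastive learning and motion disentangling) are domain-specific variants not reproduced here, and all names are hypothetical.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE-style contrastive objective (background sketch only).

    Pulls the positive embedding towards the anchor and pushes the negatives
    away; the paper's pose-contrastive and motion-disentangling objectives
    are domain-specific variants not reproduced here.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    logits = np.array([cosine(anchor, positive)] +
                      [cosine(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # low when the positive wins
```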