Dataset schema (column: dtype, min to max):

id: stringlengths, 9 to 10
submitter: stringlengths, 1 to 64
authors: stringlengths, 4 to 20.7k
title: stringlengths, 4 to 246
comments: stringlengths, 1 to 523
journal-ref: stringlengths, 4 to 404
doi: stringlengths, 11 to 153
report-no: stringlengths, 2 to 254
categories: stringlengths, 5 to 98
license: stringclasses, 9 values
orig_abstract: stringlengths, 14 to 3.35k
versions: listlengths, 1 to 60
update_date: stringlengths, 10 to 10
authors_parsed: listlengths, 1 to 1.35k
abstract: stringlengths, 11 to 3.34k
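The schema above maps naturally onto a typed record. A minimal Python sketch, assuming the hyphenated column names (journal-ref, report-no) become underscored attribute names and that null-able metadata fields are `Optional`:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ArxivRecord:
    """One row of the dataset; fields mirror the schema listing above."""
    id: str
    submitter: Optional[str]
    authors: str
    title: str
    comments: Optional[str]
    journal_ref: Optional[str]   # "journal-ref" in the raw column name
    doi: Optional[str]
    report_no: Optional[str]     # "report-no" in the raw column name
    categories: str              # space-separated arXiv categories, e.g. "cs.CR cs.LG"
    license: Optional[str]       # one of 9 license URLs (stringclasses)
    orig_abstract: str
    versions: list = field(default_factory=list)        # [{"created": ..., "version": "v1"}, ...]
    update_date: str = ""                               # YYYY-MM-DD (hence length 10 to 10)
    authors_parsed: list = field(default_factory=list)  # [[last, first, suffix], ...]
    abstract: str = ""

# Populated from the first record in the listing (long fields elided):
rec = ArxivRecord(
    id="1104.0824",
    submitter="Deepesh Ranka",
    authors="Deepesh Ranka, Ashwani K. Rana, Rakesh Kumar Yadav, Kamalesh Yadav, Devendra Giri",
    title="Performance evaluation of FD-SOI MOSFETs for different metal gate work function",
    comments="14 pages, 12 figures",
    journal_ref="International Journal of VLSI design & Communication Systems (VLSICS) Vol.2, No.1, March 2011",
    doi=None,
    report_no=None,
    categories="cs.OH",
    license="http://creativecommons.org/licenses/by/3.0/",
    orig_abstract="...",
    versions=[{"created": "Mon, 4 Apr 2011 07:09:22 GMT", "version": "v1"}],
    update_date="2011-04-06",
    authors_parsed=[["Ranka", "Deepesh", ""]],
    abstract="...",
)
```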
1104.0824
Deepesh Ranka
Deepesh Ranka, Ashwani K. Rana, Rakesh Kumar Yadav, Kamalesh Yadav, Devendra Giri
Performance evaluation of FD-SOI MOSFETs for different metal gate work function
14 pages, 12 figures, International Journal of VLSI design & Communication Systems (VLSICS) Vol.2, No.1, March 2011
International Journal of VLSI design & Communication Systems (VLSICS) Vol.2, No.1, March 2011
null
null
cs.OH
http://creativecommons.org/licenses/by/3.0/
The fully depleted (FD) Silicon on Insulator (SOI) metal oxide semiconductor field effect transistor (MOSFET) is the leading contender for the sub-65nm regime. This paper presents a study of the effects of the metal gate work function on the performance of FD-SOI MOSFETs. The Sentaurus TCAD simulation tool is used to investigate the effect of the gate work function on the performance of the FD-SOI MOSFET. The study concentrates on a specific channel length of 25nm. From simulation we observed that changing the work function of the metal gate of an FD-SOI MOSFET changes the threshold voltage. Hence, this technique can be used to set the appropriate threshold voltage of an FD-SOI MOSFET at the same supply voltage, decreasing the leakage current, gate tunneling current and short-channel effects while increasing the drive current.
[ { "created": "Mon, 4 Apr 2011 07:09:22 GMT", "version": "v1" } ]
2011-04-06
[ [ "Ranka", "Deepesh", "" ], [ "Rana", "Ashwani K.", "" ], [ "Yadav", "Rakesh Kumar", "" ], [ "Yadav", "Kamalesh", "" ], [ "Giri", "Devendra", "" ] ]
The fully depleted (FD) Silicon on Insulator (SOI) metal oxide semiconductor field effect transistor (MOSFET) is the leading contender for the sub-65nm regime. This paper presents a study of the effects of the metal gate work function on the performance of FD-SOI MOSFETs. The Sentaurus TCAD simulation tool is used to investigate the effect of the gate work function on the performance of the FD-SOI MOSFET. The study concentrates on a specific channel length of 25nm. From simulation we observed that changing the work function of the metal gate of an FD-SOI MOSFET changes the threshold voltage. Hence, this technique can be used to set the appropriate threshold voltage of an FD-SOI MOSFET at the same supply voltage, decreasing the leakage current, gate tunneling current and short-channel effects while increasing the drive current.
2112.11689
Yuhang Wu
Yuhang Wu, Tengteng Huang, Haotian Yao, Chi Zhang, Yuanjie Shao, Chuchu Han, Changxin Gao, Nong Sang
Multi-Centroid Representation Network for Domain Adaptive Person Re-ID
Accepted by AAAI2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, many approaches tackle the Unsupervised Domain Adaptive person re-identification (UDA re-ID) problem through pseudo-label-based contrastive learning. During training, a uni-centroid representation is obtained by simply averaging all the instance features from a cluster with the same pseudo label. However, a cluster may contain images with different identities (label noise) due to imperfect clustering results, which makes the uni-centroid representation inappropriate. In this paper, we present a novel Multi-Centroid Memory (MCM) to adaptively capture different identity information within the cluster. MCM can effectively alleviate the issue of label noise by selecting proper positive/negative centroids for the query image. Moreover, we further propose two strategies to improve the contrastive learning process. First, we present a Domain-Specific Contrastive Learning (DSCL) mechanism to fully explore intra-domain information by comparing samples only from the same domain. Second, we propose Second-Order Nearest Interpolation (SONI) to obtain abundant and informative negative samples. We integrate MCM, DSCL, and SONI into a unified framework named the Multi-Centroid Representation Network (MCRN). Extensive experiments demonstrate the superiority of MCRN over state-of-the-art approaches on multiple UDA re-ID tasks and fully unsupervised re-ID tasks.
[ { "created": "Wed, 22 Dec 2021 06:40:21 GMT", "version": "v1" } ]
2021-12-23
[ [ "Wu", "Yuhang", "" ], [ "Huang", "Tengteng", "" ], [ "Yao", "Haotian", "" ], [ "Zhang", "Chi", "" ], [ "Shao", "Yuanjie", "" ], [ "Han", "Chuchu", "" ], [ "Gao", "Changxin", "" ], [ "Sang", "Nong", "" ] ]
Recently, many approaches tackle the Unsupervised Domain Adaptive person re-identification (UDA re-ID) problem through pseudo-label-based contrastive learning. During training, a uni-centroid representation is obtained by simply averaging all the instance features from a cluster with the same pseudo label. However, a cluster may contain images with different identities (label noise) due to imperfect clustering results, which makes the uni-centroid representation inappropriate. In this paper, we present a novel Multi-Centroid Memory (MCM) to adaptively capture different identity information within the cluster. MCM can effectively alleviate the issue of label noise by selecting proper positive/negative centroids for the query image. Moreover, we further propose two strategies to improve the contrastive learning process. First, we present a Domain-Specific Contrastive Learning (DSCL) mechanism to fully explore intra-domain information by comparing samples only from the same domain. Second, we propose Second-Order Nearest Interpolation (SONI) to obtain abundant and informative negative samples. We integrate MCM, DSCL, and SONI into a unified framework named the Multi-Centroid Representation Network (MCRN). Extensive experiments demonstrate the superiority of MCRN over state-of-the-art approaches on multiple UDA re-ID tasks and fully unsupervised re-ID tasks.
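The uni-centroid baseline the abstract criticizes, and the multi-centroid idea behind MCM, can be sketched in a few lines of NumPy. This is an illustrative simplification, not the paper's implementation: MCM's memory updates and positive/negative selection are not reproduced here, and the sub-centroids come from a plain k-means.

```python
import numpy as np

def uni_centroids(features, pseudo_labels):
    """Average all instance features sharing a pseudo label
    (the uni-centroid baseline)."""
    return {lab: features[pseudo_labels == lab].mean(axis=0)
            for lab in np.unique(pseudo_labels)}

def multi_centroids(features, pseudo_labels, k=2, iters=10, seed=0):
    """Keep k sub-centroids per cluster via a tiny k-means, so mixed
    identities inside one pseudo-label cluster get separate centroids
    (the idea behind MCM, in simplified form)."""
    rng = np.random.default_rng(seed)
    out = {}
    for lab in np.unique(pseudo_labels):
        pts = features[pseudo_labels == lab]
        k_eff = min(k, len(pts))
        centers = pts[rng.choice(len(pts), k_eff, replace=False)]
        for _ in range(iters):
            d = ((pts[:, None] - centers[None]) ** 2).sum(-1)  # (n, k) squared distances
            assign = d.argmin(1)
            centers = np.stack([
                pts[assign == j].mean(0) if (assign == j).any() else centers[j]
                for j in range(k_eff)
            ])
        out[lab] = centers
    return out
```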
2210.07991
Shimian Zhang
Shimian Zhang, Skanda Bharadwaj, Keaton Kraiger, Yashasvi Asthana, Hong Zhang, Robert Collins, Yanxi Liu
Novel 3D Scene Understanding Applications From Recurrence in a Single Image
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We demonstrate the utility of recurring pattern (RP) discovery from a single image for spatial understanding of a 3D scene in terms of (1) vanishing point detection, (2) hypothesizing 3D translation symmetry and (3) counting the number of RP instances in the image. Furthermore, we illustrate the feasibility of leveraging RP discovery output to form a more precise, quantitative text description of the scene. Our quantitative evaluations on a new 1K+ Recurring Pattern benchmark with diverse variations show that visual perception of recurrence from a single view leads to scene understanding outcomes that are as good as or better than existing supervised and/or unsupervised methods that use millions of images.
[ { "created": "Fri, 14 Oct 2022 17:45:05 GMT", "version": "v1" } ]
2022-10-17
[ [ "Zhang", "Shimian", "" ], [ "Bharadwaj", "Skanda", "" ], [ "Kraiger", "Keaton", "" ], [ "Asthana", "Yashasvi", "" ], [ "Zhang", "Hong", "" ], [ "Collins", "Robert", "" ], [ "Liu", "Yanxi", "" ] ]
We demonstrate the utility of recurring pattern (RP) discovery from a single image for spatial understanding of a 3D scene in terms of (1) vanishing point detection, (2) hypothesizing 3D translation symmetry and (3) counting the number of RP instances in the image. Furthermore, we illustrate the feasibility of leveraging RP discovery output to form a more precise, quantitative text description of the scene. Our quantitative evaluations on a new 1K+ Recurring Pattern benchmark with diverse variations show that visual perception of recurrence from a single view leads to scene understanding outcomes that are as good as or better than existing supervised and/or unsupervised methods that use millions of images.
1609.07370
Yi Ren
Yi Ren, Yaniv Romano, Michael Elad
Example-Based Image Synthesis via Randomized Patch-Matching
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image and texture synthesis is a challenging task that has long been drawing attention in the fields of image processing, graphics, and machine learning. This problem consists of modeling the desired type of images, either through training examples or via parametric modeling, and then generating images that belong to the same statistical origin. This work addresses the image synthesis task, focusing on two specific families of images: handwritten digits and face images. This paper offers two main contributions. First, we suggest a simple and intuitive algorithm capable of generating such images in a unified way. The proposed approach is pyramidal, consisting of upscaling and refining the estimated image several times. For each upscaling stage, the algorithm randomly draws small patches from a patch database, and merges these to form a coherent and novel image with high visual quality. The second contribution is a general framework for the evaluation of the generation performance, which combines three aspects: the likelihood, the originality and the spread of the synthesized images. We assess the proposed synthesis scheme and show that the results are similar in nature to, and yet different from, the ones found in the training set, suggesting that a true synthesis effect has been obtained.
[ { "created": "Fri, 23 Sep 2016 14:08:30 GMT", "version": "v1" } ]
2016-09-26
[ [ "Ren", "Yi", "" ], [ "Romano", "Yaniv", "" ], [ "Elad", "Michael", "" ] ]
Image and texture synthesis is a challenging task that has long been drawing attention in the fields of image processing, graphics, and machine learning. This problem consists of modeling the desired type of images, either through training examples or via parametric modeling, and then generating images that belong to the same statistical origin. This work addresses the image synthesis task, focusing on two specific families of images: handwritten digits and face images. This paper offers two main contributions. First, we suggest a simple and intuitive algorithm capable of generating such images in a unified way. The proposed approach is pyramidal, consisting of upscaling and refining the estimated image several times. For each upscaling stage, the algorithm randomly draws small patches from a patch database, and merges these to form a coherent and novel image with high visual quality. The second contribution is a general framework for the evaluation of the generation performance, which combines three aspects: the likelihood, the originality and the spread of the synthesized images. We assess the proposed synthesis scheme and show that the results are similar in nature to, and yet different from, the ones found in the training set, suggesting that a true synthesis effect has been obtained.
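The per-scale draw-and-merge step the abstract describes can be sketched as follows. This is a deliberately simplified toy: the paper's coherence criterion for choosing patches is not reproduced; patches are drawn uniformly at random and overlapping regions are averaged.

```python
import numpy as np

def synthesize(patch_db, out_shape, patch=8, stride=4, seed=0):
    """Randomly draw patches from a database and merge them into one
    image, averaging where patches overlap. Sketches only the
    draw-and-merge mechanics of one pyramid level."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(out_shape)   # sum of pasted patch values
    cnt = np.zeros(out_shape)   # how many patches covered each pixel
    for y in range(0, out_shape[0] - patch + 1, stride):
        for x in range(0, out_shape[1] - patch + 1, stride):
            p = patch_db[rng.integers(len(patch_db))]
            acc[y:y + patch, x:x + patch] += p
            cnt[y:y + patch, x:x + patch] += 1
    return acc / np.maximum(cnt, 1)  # average overlaps; avoid div-by-zero
```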
1602.02899
F. Ozgur Catak
Ferhat \"Ozg\"ur \c{C}atak
Secure Multi-Party Computation Based Privacy Preserving Extreme Learning Machine Algorithm Over Vertically Distributed Data
22nd International Conference, ICONIP 2015
null
10.1007/978-3-319-26535-3_39
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Especially in the Big Data era, the use of different classification methods is increasing day by day. The success of these classification methods depends on the effectiveness of the learning methods. The extreme learning machine (ELM) classification algorithm is a relatively new learning method built on feed-forward neural networks. It is a simple and fast method that can create a model from high-dimensional data sets. The traditional ELM learning algorithm implicitly assumes complete access to the whole data set, which is a major privacy concern in most cases: sharing of private data (e.g., medical records) is prevented by security concerns. In this research, we propose an efficient and secure privacy-preserving learning algorithm for ELM classification over data that is vertically partitioned among several parties. The new learning method preserves privacy on numerical attributes and builds a classification model without disclosing the data of each party to others.
[ { "created": "Tue, 9 Feb 2016 08:37:26 GMT", "version": "v1" } ]
2016-02-10
[ [ "Çatak", "Ferhat Özgür", "" ] ]
Especially in the Big Data era, the use of different classification methods is increasing day by day. The success of these classification methods depends on the effectiveness of the learning methods. The extreme learning machine (ELM) classification algorithm is a relatively new learning method built on feed-forward neural networks. It is a simple and fast method that can create a model from high-dimensional data sets. The traditional ELM learning algorithm implicitly assumes complete access to the whole data set, which is a major privacy concern in most cases: sharing of private data (e.g., medical records) is prevented by security concerns. In this research, we propose an efficient and secure privacy-preserving learning algorithm for ELM classification over data that is vertically partitioned among several parties. The new learning method preserves privacy on numerical attributes and builds a classification model without disclosing the data of each party to others.
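For vertically partitioned data, one standard way to keep each party's features private is to split the ELM hidden-layer projection column-wise and combine the partial products with an additive-masking secure sum. The sketch below illustrates that generic idea only; the abstract does not specify the paper's actual multi-party protocol, so the masking scheme here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy vertically partitioned data: party A holds the first 3 features,
# party B the remaining 2, for the same 4 samples.
X_a = rng.normal(size=(4, 3))
X_b = rng.normal(size=(4, 2))

# Random ELM input weights, row-partitioned to match the feature split,
# so X @ W == X_a @ W_a + X_b @ W_b.
W_a = rng.normal(size=(3, 5))
W_b = rng.normal(size=(2, 5))

# Each party computes its partial projection locally.
P_a = X_a @ W_a
P_b = X_b @ W_b

# Additive-masking secure sum: A adds a random mask before sharing,
# B adds its share, and A removes the mask at the end, so neither
# party sees the other's raw partial product.
mask = rng.normal(size=P_a.shape)
msg_to_b = P_a + mask       # B observes only the masked value
msg_back = msg_to_b + P_b
H_pre = msg_back - mask     # equals P_a + P_b

H = np.tanh(H_pre)          # hidden-layer activations of the ELM
```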
1505.05502
Ricardo Gon\c{c}alves
Ricardo Gon\c{c}alves and Matthias Knorr and Jo\~ao Leite
Towards Efficient Evolving Multi-Context Systems (Preliminary Report)
International Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014), co-located with the 21st European Conference on Artificial Intelligence (ECAI 2014). Proceedings of the International Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014), pages 39-45, technical report, ISSN 1430-3701, Leipzig University, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-150562 . arXiv admin note: substantial text overlap with arXiv:1505.05368
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Managed Multi-Context Systems (mMCSs) provide a general framework for integrating knowledge represented in heterogeneous KR formalisms. Recently, evolving Multi-Context Systems (eMCSs) have been introduced as an extension of mMCSs that adds the ability both to react to, and reason in the presence of, commonly temporary dynamic observations, and to evolve by incorporating new knowledge. However, the general complexity of such an expressive formalism may simply be too high in cases where huge amounts of information have to be processed within a limited amount of time, or even instantaneously. In this paper, we investigate under which conditions eMCSs may scale in such situations, and we show that such polynomial eMCSs can be applied in a practical use case.
[ { "created": "Wed, 20 May 2015 13:33:52 GMT", "version": "v1" } ]
2015-05-22
[ [ "Gonçalves", "Ricardo", "" ], [ "Knorr", "Matthias", "" ], [ "Leite", "João", "" ] ]
Managed Multi-Context Systems (mMCSs) provide a general framework for integrating knowledge represented in heterogeneous KR formalisms. Recently, evolving Multi-Context Systems (eMCSs) have been introduced as an extension of mMCSs that adds the ability both to react to, and reason in the presence of, commonly temporary dynamic observations, and to evolve by incorporating new knowledge. However, the general complexity of such an expressive formalism may simply be too high in cases where huge amounts of information have to be processed within a limited amount of time, or even instantaneously. In this paper, we investigate under which conditions eMCSs may scale in such situations, and we show that such polynomial eMCSs can be applied in a practical use case.
1901.01062
Xueliang Li
Xueliang Li, Yuming Yang, Yepang Liu, John P. Gallagher, Kaishun Wu
Detecting and Diagnosing Energy Issues for Mobile Applications
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Energy efficiency is an important criterion for judging the quality of mobile apps, but one third of our randomly sampled apps suffer from energy issues that can quickly drain battery power. To understand these issues, we conducted an empirical study on 27 well-maintained apps such as Chrome and Firefox, whose issue tracking systems are publicly accessible. Our study revealed that the main root causes of energy issues include unnecessary workload and excessively frequent operations. Surprisingly, these issues are beyond the reach of present energy-issue detection techniques. We also found that 20.6% of energy issues only manifest themselves under specific contexts such as poor network performance, but such contexts are again neglected by present techniques. Therefore, we proposed a novel testing framework for detecting energy issues in real-world apps. Our framework examines apps with well-designed input sequences and runtime contexts. To identify the root causes mentioned above, we employed a machine learning algorithm to cluster the workloads and further evaluate their necessity. For the issues concealed by specific contexts, we carefully set up several execution contexts to pinpoint them. More importantly, we developed leading-edge techniques, e.g., pre-designing input sequences with potential energy overuse and tuning tests on the fly, to achieve high efficacy in detecting energy issues. A large-scale evaluation shows that 91.6% of the issues detected in our tests were previously unknown to developers. On average, these issues double the energy costs of the apps. Furthermore, our test achieves a low number of false positives. Finally, we show how our test reports can help developers fix the issues.
[ { "created": "Fri, 4 Jan 2019 11:43:55 GMT", "version": "v1" }, { "created": "Fri, 22 Mar 2019 11:47:45 GMT", "version": "v2" } ]
2019-04-02
[ [ "Li", "Xueliang", "" ], [ "Yang", "Yuming", "" ], [ "liu", "Yepang", "" ], [ "Gallagher", "John P.", "" ], [ "Wu", "Kaishun", "" ] ]
Energy efficiency is an important criterion for judging the quality of mobile apps, but one third of our randomly sampled apps suffer from energy issues that can quickly drain battery power. To understand these issues, we conducted an empirical study on 27 well-maintained apps such as Chrome and Firefox, whose issue tracking systems are publicly accessible. Our study revealed that the main root causes of energy issues include unnecessary workload and excessively frequent operations. Surprisingly, these issues are beyond the reach of present energy-issue detection techniques. We also found that 20.6% of energy issues only manifest themselves under specific contexts such as poor network performance, but such contexts are again neglected by present techniques. Therefore, we proposed a novel testing framework for detecting energy issues in real-world apps. Our framework examines apps with well-designed input sequences and runtime contexts. To identify the root causes mentioned above, we employed a machine learning algorithm to cluster the workloads and further evaluate their necessity. For the issues concealed by specific contexts, we carefully set up several execution contexts to pinpoint them. More importantly, we developed leading-edge techniques, e.g., pre-designing input sequences with potential energy overuse and tuning tests on the fly, to achieve high efficacy in detecting energy issues. A large-scale evaluation shows that 91.6% of the issues detected in our tests were previously unknown to developers. On average, these issues double the energy costs of the apps. Furthermore, our test achieves a low number of false positives. Finally, we show how our test reports can help developers fix the issues.
2401.06204
Qilei Zhang
Qilei Zhang and John H. Mott
An Exploratory Assessment of LLM's Potential Toward Flight Trajectory Reconstruction Analysis
6 pages
null
null
null
cs.LG cs.AI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) hold transformative potential in aviation, particularly in reconstructing flight trajectories. This paper investigates this potential, grounded in the notion that LLMs excel at processing sequential data and deciphering complex data structures. Utilizing the LLaMA 2 model, a pre-trained open-source LLM, the study focuses on reconstructing flight trajectories using Automatic Dependent Surveillance-Broadcast (ADS-B) data with irregularities inherent in real-world scenarios. The findings demonstrate the model's proficiency in filtering noise and estimating both linear and curved flight trajectories. However, the analysis also reveals challenges in managing longer data sequences, which may be attributed to the token length limitations of LLM models. The study's insights underscore the promise of LLMs in flight trajectory reconstruction and open new avenues for their broader application across the aviation and transportation sectors.
[ { "created": "Thu, 11 Jan 2024 17:59:18 GMT", "version": "v1" } ]
2024-01-15
[ [ "Zhang", "Qilei", "" ], [ "Mott", "John H.", "" ] ]
Large Language Models (LLMs) hold transformative potential in aviation, particularly in reconstructing flight trajectories. This paper investigates this potential, grounded in the notion that LLMs excel at processing sequential data and deciphering complex data structures. Utilizing the LLaMA 2 model, a pre-trained open-source LLM, the study focuses on reconstructing flight trajectories using Automatic Dependent Surveillance-Broadcast (ADS-B) data with irregularities inherent in real-world scenarios. The findings demonstrate the model's proficiency in filtering noise and estimating both linear and curved flight trajectories. However, the analysis also reveals challenges in managing longer data sequences, which may be attributed to the token length limitations of LLM models. The study's insights underscore the promise of LLMs in flight trajectory reconstruction and open new avenues for their broader application across the aviation and transportation sectors.
2106.12700
Cheng Jie
Cheng Jie, Da Xu, Zigeng Wang, Lu Wang, Wei Shen
An Efficient Group-based Search Engine Marketing System for E-Commerce
null
null
null
null
cs.CL
http://creativecommons.org/publicdomain/zero/1.0/
With the increasing scale of search engine marketing, designing an efficient bidding system is becoming paramount for the success of e-commerce companies. The critical challenges faced by a modern industrial-level bidding system include: 1. the catalog is enormous, and the relevant bidding features are highly sparse; 2. the large volume of bidding requests induces a significant computation burden on both offline and online serving. Leveraging extraneous user-item information proves essential to mitigate the sparsity issue, for which we exploit the natural language signals from users' queries and the contextual knowledge from the products. In particular, we extract vector representations of ads via the Transformer model and leverage their geometric relations to build collaborative bidding predictions via clustering. The two-step procedure also significantly reduces the computation stress of bid evaluation and optimization. In this paper, we introduce the end-to-end structure of the bidding system for search engine marketing at Walmart e-commerce, which successfully handles tens of millions of bids each day. We analyze the online and offline performance of our approach and discuss why we find it a production-efficient solution.
[ { "created": "Thu, 24 Jun 2021 00:12:07 GMT", "version": "v1" }, { "created": "Fri, 25 Jun 2021 01:27:47 GMT", "version": "v2" }, { "created": "Sat, 17 Jul 2021 22:39:56 GMT", "version": "v3" }, { "created": "Fri, 6 Aug 2021 05:04:34 GMT", "version": "v4" } ]
2021-08-09
[ [ "Jie", "Cheng", "" ], [ "Xu", "Da", "" ], [ "Wang", "Zigeng", "" ], [ "Wang", "Lu", "" ], [ "Shen", "Wei", "" ] ]
With the increasing scale of search engine marketing, designing an efficient bidding system is becoming paramount for the success of e-commerce companies. The critical challenges faced by a modern industrial-level bidding system include: 1. the catalog is enormous, and the relevant bidding features are highly sparse; 2. the large volume of bidding requests induces a significant computation burden on both offline and online serving. Leveraging extraneous user-item information proves essential to mitigate the sparsity issue, for which we exploit the natural language signals from users' queries and the contextual knowledge from the products. In particular, we extract vector representations of ads via the Transformer model and leverage their geometric relations to build collaborative bidding predictions via clustering. The two-step procedure also significantly reduces the computation stress of bid evaluation and optimization. In this paper, we introduce the end-to-end structure of the bidding system for search engine marketing at Walmart e-commerce, which successfully handles tens of millions of bids each day. We analyze the online and offline performance of our approach and discuss why we find it a production-efficient solution.
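The group-based step, clustering ad embeddings and evaluating one bid per group rather than one per ad, can be sketched with a plain k-means. The `group_bids` helper and its mean-value bid rule are illustrative assumptions; the production system's clustering and bid optimization are not described at this level of detail.

```python
import numpy as np

def group_bids(ad_embeddings, ad_values, n_groups=3, iters=20, seed=0):
    """Cluster ad embeddings with a tiny k-means, then assign one
    shared bid per group (here: the mean predicted value of its ads),
    so bid evaluation scales with groups instead of individual ads."""
    rng = np.random.default_rng(seed)
    centers = ad_embeddings[rng.choice(len(ad_embeddings), n_groups, replace=False)]
    for _ in range(iters):
        d = ((ad_embeddings[:, None] - centers[None]) ** 2).sum(-1)  # (n, k)
        assign = d.argmin(1)
        centers = np.stack([
            ad_embeddings[assign == j].mean(0) if (assign == j).any() else centers[j]
            for j in range(n_groups)
        ])
    group_bid = {j: float(ad_values[assign == j].mean())
                 for j in range(n_groups) if (assign == j).any()}
    return assign, group_bid
```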
1003.5787
Niharjyoti Sarangi
Rakesh Mohanty, Niharjyoti Sarangi, Sukant kumar Bishi
A Secured Cryptographic Hashing Algorithm
4 pages, 2 figures, 1 tabular data set
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cryptographic hash functions for calculating the message digest of a message have been in practical use as an effective measure to maintain message integrity for a few decades. The message digest is unique, irreversible and avoids all types of collisions for any given input string. The message digest calculated from this algorithm is propagated in the communication medium along with the original message from the sender side, and on the receiver side the integrity of the message can be verified by recalculating the message digest of the received message and comparing the two digest values. In this paper we have designed and developed a new algorithm for calculating the message digest of any message and implemented it using a high-level programming language. An experimental analysis and comparison with the existing MD5 hashing algorithm, which is predominantly used as a cryptographic hashing tool, shows this algorithm to provide more randomness and greater strength against intrusion attacks. In this algorithm the plaintext message string is converted into a binary string and fragmented into blocks of 128 bits after being padded with user-defined padding bits. Then, using a pseudo-random number generator, a key is generated for each block and operated with the respective block by a bitwise operator. This process is iterated over the whole message and finally a fixed-length message digest is obtained.
[ { "created": "Tue, 30 Mar 2010 10:43:48 GMT", "version": "v1" } ]
2010-03-31
[ [ "Mohanty", "Rakesh", "" ], [ "Sarangi", "Niharjyoti", "" ], [ "Bishi", "Sukant kumar", "" ] ]
Cryptographic hash functions for calculating the message digest of a message have been in practical use as an effective measure to maintain message integrity for a few decades. The message digest is unique, irreversible and avoids all types of collisions for any given input string. The message digest calculated from this algorithm is propagated in the communication medium along with the original message from the sender side, and on the receiver side the integrity of the message can be verified by recalculating the message digest of the received message and comparing the two digest values. In this paper we have designed and developed a new algorithm for calculating the message digest of any message and implemented it using a high-level programming language. An experimental analysis and comparison with the existing MD5 hashing algorithm, which is predominantly used as a cryptographic hashing tool, shows this algorithm to provide more randomness and greater strength against intrusion attacks. In this algorithm the plaintext message string is converted into a binary string and fragmented into blocks of 128 bits after being padded with user-defined padding bits. Then, using a pseudo-random number generator, a key is generated for each block and operated with the respective block by a bitwise operator. This process is iterated over the whole message and finally a fixed-length message digest is obtained.
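The pipeline the abstract describes (binary conversion, padding to 128-bit blocks, a PRNG-derived key per block, a bitwise combine, a fixed-length digest) can be sketched as follows. The seed choice and the XOR combine are assumptions for illustration, and a PRNG-keyed XOR chain is of course not a secure hash; this shows only the mechanics.

```python
import random

def toy_digest(message: str, pad_bit: str = "0", digest_bits: int = 128) -> str:
    """Toy illustration of the described pipeline; NOT a secure hash."""
    # 1. Convert the plaintext to a binary string.
    bits = "".join(f"{b:08b}" for b in message.encode("utf-8"))
    # 2. Pad with user-defined bits up to a multiple of 128.
    if len(bits) % 128:
        bits += pad_bit * (128 - len(bits) % 128)
    # 3. Deterministic PRNG for the per-block keys (seed choice is an assumption).
    rng = random.Random(len(bits))
    digest = 0
    for i in range(0, len(bits), 128):
        block = int(bits[i:i + 128], 2)
        key = rng.getrandbits(128)   # one PRNG key per 128-bit block
        digest ^= block ^ key        # bitwise combine, iterated over all blocks
    # 4. Fixed-length hex digest.
    return f"{digest:0{digest_bits // 4}x}"
```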
1307.1024
Abdelhakim Herrouz
Abdelhakim Herrouz, Chabane Khentout and Mahieddine Djoudi
Overview of Web Content Mining Tools
6 pages
The International Journal of Engineering And Science (IJES), Vol.2, Issue 6, June 2013, pp. 106-110, 2013
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, the Web has become one of the most widespread platforms for information exchange and retrieval. As it becomes easier to publish documents, as the number of users, and thus publishers, increases, and as the number of documents grows, searching for information is turning into a cumbersome and time-consuming operation. Due to the heterogeneity and unstructured nature of the data available on the WWW, Web mining uses various data mining techniques to discover useful knowledge from Web hyperlinks, page content and usage logs. The main uses of web content mining are to gather, categorize, organize and provide the best possible information available on the Web to the user requesting it. Mining tools are imperative for scanning the many HTML documents, images and text; the results are then used by search engines. In this paper, we first introduce the concepts related to web mining; we then present an overview of different Web Content Mining tools. We conclude by presenting a comparative table of these tools based on some pertinent criteria.
[ { "created": "Tue, 2 Jul 2013 19:57:29 GMT", "version": "v1" } ]
2013-07-04
[ [ "Herrouz", "Abdelhakim", "" ], [ "Khentout", "Chabane", "" ], [ "Djoudi", "Mahieddine", "" ] ]
Nowadays, the Web has become one of the most widespread platforms for information exchange and retrieval. As it becomes easier to publish documents, as the number of users, and thus publishers, increases, and as the number of documents grows, searching for information is turning into a cumbersome and time-consuming operation. Due to the heterogeneity and unstructured nature of the data available on the WWW, Web mining uses various data mining techniques to discover useful knowledge from Web hyperlinks, page content and usage logs. The main uses of web content mining are to gather, categorize, organize and provide the best possible information available on the Web to the user requesting it. Mining tools are imperative for scanning the many HTML documents, images and text; the results are then used by search engines. In this paper, we first introduce the concepts related to web mining; we then present an overview of different Web Content Mining tools. We conclude by presenting a comparative table of these tools based on some pertinent criteria.
2211.14552
Jilan Xu
Junlin Hou, Jilan Xu, Fan Xiao, Rui-Wei Zhao, Yuejie Zhang, Haidong Zou, Lina Lu, Wenwen Xue, Rui Feng
Cross-Field Transformer for Diabetic Retinopathy Grading on Two-field Fundus Images
BIBM 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Automatic diabetic retinopathy (DR) grading based on fundus photography has been widely explored to benefit routine screening and early treatment. Existing research generally focuses on single-field fundus images, which have a limited field of view for precise eye examinations. In clinical applications, ophthalmologists adopt two-field fundus photography as the dominant tool, where the information from each field (i.e., macula-centric and optic-disc-centric) is highly correlated and complementary, and benefits comprehensive decisions. However, automatic DR grading based on two-field fundus photography remains a challenging task due to the lack of publicly available datasets and effective fusion strategies. In this work, we first construct a new benchmark dataset (DRTiD) for DR grading, consisting of 3,100 two-field fundus images. To the best of our knowledge, it is the largest public DR dataset with diverse and high-quality two-field images. Then, we propose a novel DR grading approach, namely the Cross-Field Transformer (CrossFiT), to capture the correspondence between two fields as well as the long-range spatial correlations within each field. Considering the inherent two-field geometric constraints, we particularly define aligned position embeddings to preserve relatively consistent positions in the fundus. Besides, we perform masked cross-field attention during interaction to filter the noisy relations between fields. Extensive experiments on our DRTiD dataset and the public DeepDRiD dataset demonstrate the effectiveness of our CrossFiT network. The new dataset and the source code of CrossFiT will be publicly available at https://github.com/FDU-VTS/DRTiD.
[ { "created": "Sat, 26 Nov 2022 12:39:57 GMT", "version": "v1" }, { "created": "Thu, 1 Dec 2022 08:10:27 GMT", "version": "v2" } ]
2022-12-02
[ [ "Hou", "Junlin", "" ], [ "Xu", "Jilan", "" ], [ "Xiao", "Fan", "" ], [ "Zhao", "Rui-Wei", "" ], [ "Zhang", "Yuejie", "" ], [ "Zou", "Haidong", "" ], [ "Lu", "Lina", "" ], [ "Xue", "Wenwen", "" ], [ "Feng", "Rui", "" ] ]
Automatic diabetic retinopathy (DR) grading based on fundus photography has been widely explored to benefit the routine screening and early treatment. Existing research generally focuses on single-field fundus images, which have a limited field of view for precise eye examinations. In clinical applications, ophthalmologists adopt two-field fundus photography as the dominating tool, where the information from each field (i.e., macula-centric and optic disc-centric) is highly correlated and complementary, and benefits comprehensive decisions. However, automatic DR grading based on two-field fundus photography remains a challenging task due to the lack of publicly available datasets and effective fusion strategies. In this work, we first construct a new benchmark dataset (DRTiD) for DR grading, consisting of 3,100 two-field fundus images. To the best of our knowledge, it is the largest public DR dataset with diverse and high-quality two-field images. Then, we propose a novel DR grading approach, namely Cross-Field Transformer (CrossFiT), to capture the correspondence between two fields as well as the long-range spatial correlations within each field. Considering the inherent two-field geometric constraints, we particularly define aligned position embeddings to preserve relatively consistent positions in the fundus. Besides, we perform masked cross-field attention during interaction to filter the noisy relations between fields. Extensive experiments on our DRTiD dataset and a public DeepDRiD dataset demonstrate the effectiveness of our CrossFiT network. The new dataset and the source code of CrossFiT will be publicly available at https://github.com/FDU-VTS/DRTiD.
1501.02573
Robert Koenighofer
Roderick Bloem and Bettina Koenighofer and Robert Koenighofer and Chao Wang
Shield Synthesis: Runtime Enforcement for Reactive Systems
This is an extended version of [5], featuring an additional appendix
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scalability issues may prevent users from verifying critical properties of a complex hardware design. In this situation, we propose to synthesize a "safety shield" that is attached to the design to enforce the properties at run time. Shield synthesis can succeed where model checking and reactive synthesis fail, because it only considers a small set of critical properties, as opposed to the complex design, or the complete specification in the case of reactive synthesis. The shield continuously monitors the input/output of the design and corrects its erroneous output only if necessary, and as little as possible, so other non-critical properties are likely to be retained. Although runtime enforcement has been studied in other domains such as action systems, reactive systems pose unique challenges where the shield must act without delay. We thus present the first shield synthesis solution for reactive hardware systems and report our experimental results. This is an extended version of [5], featuring an additional appendix.
[ { "created": "Mon, 12 Jan 2015 09:04:57 GMT", "version": "v1" }, { "created": "Fri, 16 Jan 2015 15:58:47 GMT", "version": "v2" } ]
2015-01-19
[ [ "Bloem", "Roderick", "" ], [ "Koenighofer", "Bettina", "" ], [ "Koenighofer", "Robert", "" ], [ "Wang", "Chao", "" ] ]
Scalability issues may prevent users from verifying critical properties of a complex hardware design. In this situation, we propose to synthesize a "safety shield" that is attached to the design to enforce the properties at run time. Shield synthesis can succeed where model checking and reactive synthesis fail, because it only considers a small set of critical properties, as opposed to the complex design, or the complete specification in the case of reactive synthesis. The shield continuously monitors the input/output of the design and corrects its erroneous output only if necessary, and as little as possible, so other non-critical properties are likely to be retained. Although runtime enforcement has been studied in other domains such as action systems, reactive systems pose unique challenges where the shield must act without delay. We thus present the first shield synthesis solution for reactive hardware systems and report our experimental results. This is an extended version of [5], featuring an additional appendix.
1708.00129
Andy Kitchen
Andy Kitchen, Jarrel Seah
Deep Generative Adversarial Neural Networks for Realistic Prostate Lesion MRI Synthesis
8 pages, 5 figures, 2 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative Adversarial Neural Networks (GANs) are applied to the synthetic generation of prostate lesion MRI images. GANs have been applied to a variety of natural images, is shown show that the same techniques can be used in the medical domain to create realistic looking synthetic lesion images. 16mm x 16mm patches are extracted from 330 MRI scans from the SPIE ProstateX Challenge 2016 and used to train a Deep Convolutional Generative Adversarial Neural Network (DCGAN) utilizing cutting edge techniques. Synthetic outputs are compared to real images and the implicit latent representations induced by the GAN are explored. Training techniques and successful neural network architectures are explained in detail.
[ { "created": "Tue, 1 Aug 2017 02:09:12 GMT", "version": "v1" } ]
2017-08-02
[ [ "Kitchen", "Andy", "" ], [ "Seah", "Jarrel", "" ] ]
Generative Adversarial Neural Networks (GANs) are applied to the synthetic generation of prostate lesion MRI images. GANs have been applied to a variety of natural images; it is shown that the same techniques can be used in the medical domain to create realistic-looking synthetic lesion images. 16mm x 16mm patches are extracted from 330 MRI scans from the SPIE ProstateX Challenge 2016 and used to train a Deep Convolutional Generative Adversarial Neural Network (DCGAN) utilizing cutting-edge techniques. Synthetic outputs are compared to real images and the implicit latent representations induced by the GAN are explored. Training techniques and successful neural network architectures are explained in detail.
cs/0509063
Krzysztof R. Apt
Krzysztof R. Apt
Order Independence and Rationalizability
Appeared in: Proc. of the 10th conference on Theoretical Aspects of Rationality and Knowledge (TARK X), pp. 22-38 (2005)
null
null
null
cs.GT
null
Two natural strategy elimination procedures have been studied for strategic games. The first one involves the notion of (strict, weak, etc) dominance and the second the notion of rationalizability. In the case of dominance the criterion of order independence allowed us to clarify which notions and under what circumstances are robust. In the case of rationalizability this criterion has not been considered. In this paper we investigate the problem of order independence for rationalizability by focusing on three naturally entailed reduction relations on games. These reduction relations are distinguished by the adopted reference point for the notion of a better response. Additionally, they are parametrized by the adopted system of beliefs. We show that for one reduction relation the outcome of its (possibly transfinite) iterations does not depend on the order of elimination of the strategies. This result does not hold for the other two reduction relations. However, under a natural assumption the iterations of all three reduction relations yield the same outcome. The obtained order independence results apply to the frameworks considered in Bernheim 84 and Pearce 84. For finite games the iterations of all three reduction relations coincide and the order independence holds for three natural systems of beliefs considered in the literature.
[ { "created": "Tue, 20 Sep 2005 08:27:26 GMT", "version": "v1" } ]
2007-05-23
[ [ "Apt", "Krzysztof R.", "" ] ]
Two natural strategy elimination procedures have been studied for strategic games. The first one involves the notion of (strict, weak, etc) dominance and the second the notion of rationalizability. In the case of dominance the criterion of order independence allowed us to clarify which notions and under what circumstances are robust. In the case of rationalizability this criterion has not been considered. In this paper we investigate the problem of order independence for rationalizability by focusing on three naturally entailed reduction relations on games. These reduction relations are distinguished by the adopted reference point for the notion of a better response. Additionally, they are parametrized by the adopted system of beliefs. We show that for one reduction relation the outcome of its (possibly transfinite) iterations does not depend on the order of elimination of the strategies. This result does not hold for the other two reduction relations. However, under a natural assumption the iterations of all three reduction relations yield the same outcome. The obtained order independence results apply to the frameworks considered in Bernheim 84 and Pearce 84. For finite games the iterations of all three reduction relations coincide and the order independence holds for three natural systems of beliefs considered in the literature.
2210.14210
Sudharshan Suresh
Sudharshan Suresh, Zilin Si, Stuart Anderson, Michael Kaess, Mustafa Mukadam
MidasTouch: Monte-Carlo inference over distributions across sliding touch
Accepted at CoRL 2022 (Oral). Project website: https://suddhu.github.io/midastouch-tactile/
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present MidasTouch, a tactile perception system for online global localization of a vision-based touch sensor sliding on an object surface. This framework takes in posed tactile images over time, and outputs an evolving distribution of sensor pose on the object's surface, without the need for visual priors. Our key insight is to estimate local surface geometry with tactile sensing, learn a compact representation for it, and disambiguate these signals over a long time horizon. The backbone of MidasTouch is a Monte-Carlo particle filter, with a measurement model based on a tactile code network learned from tactile simulation. This network, inspired by LIDAR place recognition, compactly summarizes local surface geometries. These generated codes are efficiently compared against a precomputed tactile codebook per-object, to update the pose distribution. We further release the YCB-Slide dataset of real-world and simulated forceful sliding interactions between a vision-based tactile sensor and standard YCB objects. While single-touch localization can be inherently ambiguous, we can quickly localize our sensor by traversing salient surface geometries. Project page: https://suddhu.github.io/midastouch-tactile/
[ { "created": "Tue, 25 Oct 2022 17:55:09 GMT", "version": "v1" } ]
2022-10-26
[ [ "Suresh", "Sudharshan", "" ], [ "Si", "Zilin", "" ], [ "Anderson", "Stuart", "" ], [ "Kaess", "Michael", "" ], [ "Mukadam", "Mustafa", "" ] ]
We present MidasTouch, a tactile perception system for online global localization of a vision-based touch sensor sliding on an object surface. This framework takes in posed tactile images over time, and outputs an evolving distribution of sensor pose on the object's surface, without the need for visual priors. Our key insight is to estimate local surface geometry with tactile sensing, learn a compact representation for it, and disambiguate these signals over a long time horizon. The backbone of MidasTouch is a Monte-Carlo particle filter, with a measurement model based on a tactile code network learned from tactile simulation. This network, inspired by LIDAR place recognition, compactly summarizes local surface geometries. These generated codes are efficiently compared against a precomputed tactile codebook per-object, to update the pose distribution. We further release the YCB-Slide dataset of real-world and simulated forceful sliding interactions between a vision-based tactile sensor and standard YCB objects. While single-touch localization can be inherently ambiguous, we can quickly localize our sensor by traversing salient surface geometries. Project page: https://suddhu.github.io/midastouch-tactile/
2112.05598
Sverker Rasmuson
Sverker Rasmuson, Erik Sintorn, Ulf Assarsson
PERF: Performant, Explicit Radiance Fields
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel way of approaching image-based 3D reconstruction based on radiance fields. The problem of volumetric reconstruction is formulated as a non-linear least-squares problem and solved explicitly without the use of neural networks. This enables the use of solvers with a higher rate of convergence than what is typically used for neural networks, and fewer iterations are required until convergence. The volume is represented using a grid of voxels, with the scene surrounded by a hierarchy of environment maps. This makes it possible to get clean reconstructions of 360{\deg} scenes where the foreground and background is separated. A number of synthetic and real scenes from well known benchmark-suites are successfully reconstructed with quality on par with state-of-the-art methods, but at significantly reduced reconstruction times.
[ { "created": "Fri, 10 Dec 2021 15:29:00 GMT", "version": "v1" } ]
2021-12-13
[ [ "Rasmuson", "Sverker", "" ], [ "Sintorn", "Erik", "" ], [ "Assarsson", "Ulf", "" ] ]
We present a novel way of approaching image-based 3D reconstruction based on radiance fields. The problem of volumetric reconstruction is formulated as a non-linear least-squares problem and solved explicitly without the use of neural networks. This enables the use of solvers with a higher rate of convergence than what is typically used for neural networks, and fewer iterations are required until convergence. The volume is represented using a grid of voxels, with the scene surrounded by a hierarchy of environment maps. This makes it possible to get clean reconstructions of 360{\deg} scenes where the foreground and background are separated. A number of synthetic and real scenes from well-known benchmark suites are successfully reconstructed with quality on par with state-of-the-art methods, but at significantly reduced reconstruction times.
1509.07040
Yuheng Bu
Yuheng Bu, Shaofeng Zou, Yingbin Liang and Venugopal V. Veeravalli
Universal Outlying sequence detection For Continuous Observations
null
null
null
null
cs.IT math.IT math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The following detection problem is studied, in which there are $M$ sequences of samples out of which one outlier sequence needs to be detected. Each typical sequence contains $n$ independent and identically distributed (i.i.d.) continuous observations from a known distribution $\pi$, and the outlier sequence contains $n$ i.i.d. observations from an outlier distribution $\mu$, which is distinct from $\pi$, but otherwise unknown. A universal test based on KL divergence is built to approximate the maximum likelihood test, with known $\pi$ and unknown $\mu$. A data-dependent partitions based KL divergence estimator is employed. Such a KL divergence estimator is further shown to converge to its true value exponentially fast when the density ratio satisfies $0<K_1\leq \frac{d\mu}{d\pi}\leq K_2$, where $K_1$ and $K_2$ are positive constants, and this further implies that the test is exponentially consistent. The performance of the test is compared with that of a recently introduced test for this problem based on the machine learning approach of maximum mean discrepancy (MMD). We identify regimes in which the KL divergence based test is better than the MMD based test.
[ { "created": "Wed, 23 Sep 2015 15:56:59 GMT", "version": "v1" }, { "created": "Wed, 7 Oct 2015 19:02:48 GMT", "version": "v2" } ]
2015-10-08
[ [ "Bu", "Yuheng", "" ], [ "Zou", "Shaofeng", "" ], [ "Liang", "Yingbin", "" ], [ "Veeravalli", "Venugopal V.", "" ] ]
The following detection problem is studied, in which there are $M$ sequences of samples out of which one outlier sequence needs to be detected. Each typical sequence contains $n$ independent and identically distributed (i.i.d.) continuous observations from a known distribution $\pi$, and the outlier sequence contains $n$ i.i.d. observations from an outlier distribution $\mu$, which is distinct from $\pi$, but otherwise unknown. A universal test based on KL divergence is built to approximate the maximum likelihood test, with known $\pi$ and unknown $\mu$. A data-dependent partitions based KL divergence estimator is employed. Such a KL divergence estimator is further shown to converge to its true value exponentially fast when the density ratio satisfies $0<K_1\leq \frac{d\mu}{d\pi}\leq K_2$, where $K_1$ and $K_2$ are positive constants, and this further implies that the test is exponentially consistent. The performance of the test is compared with that of a recently introduced test for this problem based on the machine learning approach of maximum mean discrepancy (MMD). We identify regimes in which the KL divergence based test is better than the MMD based test.
2102.13519
Stefan Bl\"ucher
Stefan Bl\"ucher, Johanna Vielhaben and Nils Strodthoff
PredDiff: Explanations and Interactions from Conditional Expectations
35 pages, 20 Figures, accepted journal version, code available at https://github.com/AI4HealthUOL/preddiff-interactions
Artificial Intelligence 312 (2022) 103774
10.1016/j.artint.2022.103774
null
cs.LG cs.AI stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
PredDiff is a model-agnostic, local attribution method that is firmly rooted in probability theory. Its simple intuition is to measure prediction changes while marginalizing features. In this work, we clarify properties of PredDiff and its close connection to Shapley values. We stress important differences between classification and regression, which require a specific treatment within both formalisms. We extend PredDiff by introducing a new, well-founded measure for interaction effects between arbitrary feature subsets. The study of interaction effects represents an inevitable step towards a comprehensive understanding of black-box models and is particularly important for science applications. Equipped with our novel interaction measure, PredDiff is a promising model-agnostic approach for obtaining reliable, numerically inexpensive and theoretically sound attributions.
[ { "created": "Fri, 26 Feb 2021 14:46:47 GMT", "version": "v1" }, { "created": "Mon, 26 Apr 2021 14:27:07 GMT", "version": "v2" }, { "created": "Wed, 20 Oct 2021 08:54:14 GMT", "version": "v3" }, { "created": "Thu, 8 Sep 2022 14:18:50 GMT", "version": "v4" } ]
2023-07-12
[ [ "Blücher", "Stefan", "" ], [ "Vielhaben", "Johanna", "" ], [ "Strodthoff", "Nils", "" ] ]
PredDiff is a model-agnostic, local attribution method that is firmly rooted in probability theory. Its simple intuition is to measure prediction changes while marginalizing features. In this work, we clarify properties of PredDiff and its close connection to Shapley values. We stress important differences between classification and regression, which require a specific treatment within both formalisms. We extend PredDiff by introducing a new, well-founded measure for interaction effects between arbitrary feature subsets. The study of interaction effects represents an inevitable step towards a comprehensive understanding of black-box models and is particularly important for science applications. Equipped with our novel interaction measure, PredDiff is a promising model-agnostic approach for obtaining reliable, numerically inexpensive and theoretically sound attributions.
2310.06397
Kaiming Huang
Kaiming Huang, Mathias Payer, Zhiyun Qian, Jack Sampson, Gang Tan, Trent Jaeger
Top of the Heap: Efficient Memory Error Protection for Many Heap Objects
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exploits against heap memory errors continue to be a major concern. Although many defenses have been proposed, heap data are not protected from attacks that exploit memory errors systematically. Research defenses focus on complete coverage of heap objects, often giving up on comprehensive memory safety protection and/or incurring high costs in performance overhead and memory usage. In this paper, we propose a solution for heap memory safety enforcement that aims to provide comprehensive protection from memory errors efficiently by protecting those heap objects whose accesses are provably safe from memory errors. Specifically, we present the Uriah system that statically validates spatial and type memory safety for heap objects, isolating compliant objects on a safe heap that enforces temporal type safety to prevent attacks on memory reuse. Using Uriah, 71.9% of heap allocation sites can be shown to produce objects (73% of allocations are found safe) that satisfy spatial and type safety, which are then isolated using Uriah's heap allocator from memory accesses via unsafe heap objects. Uriah only incurs 2.9% overhead and only uses 9.3% more memory on SPEC CPU2006 (C/C++) benchmarks, showing that many heap objects can be protected from all classes of memory errors efficiently.
[ { "created": "Tue, 10 Oct 2023 08:04:08 GMT", "version": "v1" } ]
2023-10-11
[ [ "Huang", "Kaiming", "" ], [ "Payer", "Mathias", "" ], [ "Qian", "Zhiyun", "" ], [ "Sampson", "Jack", "" ], [ "Tan", "Gang", "" ], [ "Jaeger", "Trent", "" ] ]
Exploits against heap memory errors continue to be a major concern. Although many defenses have been proposed, heap data are not protected from attacks that exploit memory errors systematically. Research defenses focus on complete coverage of heap objects, often giving up on comprehensive memory safety protection and/or incurring high costs in performance overhead and memory usage. In this paper, we propose a solution for heap memory safety enforcement that aims to provide comprehensive protection from memory errors efficiently by protecting those heap objects whose accesses are provably safe from memory errors. Specifically, we present the Uriah system that statically validates spatial and type memory safety for heap objects, isolating compliant objects on a safe heap that enforces temporal type safety to prevent attacks on memory reuse. Using Uriah, 71.9% of heap allocation sites can be shown to produce objects (73% of allocations are found safe) that satisfy spatial and type safety, which are then isolated using Uriah's heap allocator from memory accesses via unsafe heap objects. Uriah only incurs 2.9% overhead and only uses 9.3% more memory on SPEC CPU2006 (C/C++) benchmarks, showing that many heap objects can be protected from all classes of memory errors efficiently.
2407.11033
Yuyan Chen
Yuyan Chen, Qiang Fu, Ge Fan, Lun Du, Jian-Guang Lou, Shi Han, Dongmei Zhang, Zhixu Li, Yanghua Xiao
Hadamard Adapter: An Extreme Parameter-Efficient Adapter Tuning Method for Pre-trained Language Models
Accepted to CIKM 2023 (Long Paper)
null
null
null
cs.LG cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent years, Pre-trained Language models (PLMs) have swept into various fields of artificial intelligence and achieved great success. However, most PLMs, such as T5 and GPT3, have a huge amount of parameters, fine-tuning them is often expensive and time consuming, and storing them takes up a lot of space. Therefore, it is necessary to adopt a parameter-efficient approach to reduce parameters of PLMs in fine-tuning without compromising their performance in downstream tasks. In this paper, we design a novel adapter which only acts on self-attention outputs in PLMs. This adapter adopts element-wise linear transformation using Hadamard product, hence named as Hadamard adapter, requires the fewest parameters compared to previous parameter-efficient adapters. In addition, we also summarize some tuning patterns for Hadamard adapter shared by various downstream tasks, expecting to provide some guidance for further parameter reduction with shared adapters in future studies. The experiments conducted on the widely-used GLUE benchmark with several SOTA PLMs prove that the Hadamard adapter achieves competitive performance with only 0.033\% parameters compared with full fine-tuning, and it has the fewest parameters compared with other adapters. Moreover, we further find that there is also some redundant layers in the Hadamard adapter which can be removed to achieve more parameter efficiency with only 0.022\% parameters.
[ { "created": "Thu, 4 Jul 2024 18:21:28 GMT", "version": "v1" } ]
2024-07-17
[ [ "Chen", "Yuyan", "" ], [ "Fu", "Qiang", "" ], [ "Fan", "Ge", "" ], [ "Du", "Lun", "" ], [ "Lou", "Jian-Guang", "" ], [ "Han", "Shi", "" ], [ "Zhang", "Dongmei", "" ], [ "Li", "Zhixu", "" ], [ "Xiao", "Yanghua", "" ] ]
In recent years, Pre-trained Language Models (PLMs) have swept into various fields of artificial intelligence and achieved great success. However, most PLMs, such as T5 and GPT3, have a huge number of parameters; fine-tuning them is often expensive and time-consuming, and storing them takes up a lot of space. Therefore, it is necessary to adopt a parameter-efficient approach to reduce parameters of PLMs in fine-tuning without compromising their performance in downstream tasks. In this paper, we design a novel adapter which only acts on self-attention outputs in PLMs. This adapter adopts element-wise linear transformation using the Hadamard product, hence named the Hadamard adapter, and requires the fewest parameters compared to previous parameter-efficient adapters. In addition, we also summarize some tuning patterns for the Hadamard adapter shared by various downstream tasks, expecting to provide some guidance for further parameter reduction with shared adapters in future studies. The experiments conducted on the widely-used GLUE benchmark with several SOTA PLMs prove that the Hadamard adapter achieves competitive performance with only 0.033\% parameters compared with full fine-tuning, and it has the fewest parameters compared with other adapters. Moreover, we further find that there are also some redundant layers in the Hadamard adapter which can be removed to achieve more parameter efficiency with only 0.022\% parameters.
1803.08069
Jaime Fentanes
Jaime Pulido Fentanes, Iain Gould, Tom Duckett, Simon Pearson and Grzegorz Cielniak
3D Soil Compaction Mapping through Kriging-based Exploration with a Mobile Robot
Submitted paper, to IEEE Robotics and Automation Letters (RA-L) special issue on Precision Agricultural Robotics and Autonomous Farming Technologies. Not reviewed
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an automated method for creating spatial maps of soil condition with an outdoor mobile robot. Effective soil mapping on farms can enhance yields, reduce inputs and help protect the environment. Traditionally, data are collected manually at an arbitrary set of locations, then soil maps are constructed offline using Kriging, a form of Gaussian process regression. This process is laborious and costly, limiting the quality and resolution of the resulting information. Instead, we propose to use an outdoor mobile robot for automatic collection of soil condition data, building soil maps online and also adapting the robot's exploration strategy on-the-fly based on the current quality of the map. We show how using Kriging variance as a reward function for robotic exploration allows for both more efficient data collection and better soil models. This work presents the theoretical foundations for our proposal and an experimental comparison of exploration strategies using soil compaction data from a field generated with a mobile robot.
[ { "created": "Wed, 21 Mar 2018 18:05:13 GMT", "version": "v1" } ]
2018-03-23
[ [ "Fentanes", "Jaime Pulido", "" ], [ "Gould", "Iain", "" ], [ "Duckett", "Tom", "" ], [ "Pearson", "Simon", "" ], [ "Cielniak", "Grzegorz", "" ] ]
This paper presents an automated method for creating spatial maps of soil condition with an outdoor mobile robot. Effective soil mapping on farms can enhance yields, reduce inputs and help protect the environment. Traditionally, data are collected manually at an arbitrary set of locations, then soil maps are constructed offline using Kriging, a form of Gaussian process regression. This process is laborious and costly, limiting the quality and resolution of the resulting information. Instead, we propose to use an outdoor mobile robot for automatic collection of soil condition data, building soil maps online and also adapting the robot's exploration strategy on-the-fly based on the current quality of the map. We show how using Kriging variance as a reward function for robotic exploration allows for both more efficient data collection and better soil models. This work presents the theoretical foundations for our proposal and an experimental comparison of exploration strategies using soil compaction data from a field generated with a mobile robot.
1905.01591
Hoang Nguyen Thai
Hoang NT, Choong Jun Jin, Tsuyoshi Murata
Learning Graph Neural Networks with Noisy Labels
5 pages, 4 figures, 3 tables; Appeared as a poster presentation at Limited Labeled Data (LLD) Workshop, ICLR 2019
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the robustness to symmetric label noise of GNNs training procedures. By combining the nonlinear neural message-passing models (e.g. Graph Isomorphism Networks, GraphSAGE, etc.) with loss correction methods, we present a noise-tolerant approach for the graph classification task. Our experiments show that test accuracy can be improved under the artificial symmetric noisy setting.
[ { "created": "Sun, 5 May 2019 03:27:50 GMT", "version": "v1" } ]
2019-05-07
[ [ "NT", "Hoang", "" ], [ "Jin", "Choong Jun", "" ], [ "Murata", "Tsuyoshi", "" ] ]
We study the robustness to symmetric label noise of GNN training procedures. By combining nonlinear neural message-passing models (e.g. Graph Isomorphism Networks, GraphSAGE, etc.) with loss correction methods, we present a noise-tolerant approach for the graph classification task. Our experiments show that test accuracy can be improved under the artificial symmetric noisy setting.
1904.04428
Hao Peng
Hao Peng, Ankur P. Parikh, Manaal Faruqui, Bhuwan Dhingra, Dipanjan Das
Text Generation with Exemplar-based Adaptive Decoding
NAACL 2019
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel conditioned text generation model. It draws inspiration from traditional template-based text generation techniques, where the source provides the content (i.e., what to say), and the template influences how to say it. Building on the successful encoder-decoder paradigm, it first encodes the content representation from the given input text; to produce the output, it retrieves exemplar text from the training data as "soft templates," which are then used to construct an exemplar-specific decoder. We evaluate the proposed model on abstractive text summarization and data-to-text generation. Empirical results show that this model achieves strong performance and outperforms comparable baselines.
[ { "created": "Tue, 9 Apr 2019 02:34:30 GMT", "version": "v1" }, { "created": "Wed, 10 Apr 2019 22:03:53 GMT", "version": "v2" } ]
2019-04-12
[ [ "Peng", "Hao", "" ], [ "Parikh", "Ankur P.", "" ], [ "Faruqui", "Manaal", "" ], [ "Dhingra", "Bhuwan", "" ], [ "Das", "Dipanjan", "" ] ]
We propose a novel conditioned text generation model. It draws inspiration from traditional template-based text generation techniques, where the source provides the content (i.e., what to say), and the template influences how to say it. Building on the successful encoder-decoder paradigm, it first encodes the content representation from the given input text; to produce the output, it retrieves exemplar text from the training data as "soft templates," which are then used to construct an exemplar-specific decoder. We evaluate the proposed model on abstractive text summarization and data-to-text generation. Empirical results show that this model achieves strong performance and outperforms comparable baselines.
1411.7883
Luca Del Pero
Luca Del Pero, Susanna Ricco, Rahul Sukthankar, Vittorio Ferrari
Articulated motion discovery using pairs of trajectories
10 pages, 5 figures, 2 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an unsupervised approach for discovering characteristic motion patterns in videos of highly articulated objects performing natural, unscripted behaviors, such as tigers in the wild. We discover consistent patterns in a bottom-up manner by analyzing the relative displacements of large numbers of ordered trajectory pairs through time, such that each trajectory is attached to a different moving part on the object. The pairs of trajectories descriptor relies entirely on motion and is more discriminative than state-of-the-art features that employ single trajectories. Our method generates temporal video intervals, each automatically trimmed to one instance of the discovered behavior, and clusters them by type (e.g., running, turning head, drinking water). We present experiments on two datasets: dogs from YouTube-Objects and a new dataset of National Geographic tiger videos. Results confirm that our proposed descriptor outperforms existing appearance- and trajectory-based descriptors (e.g., HOG and DTFs) on both datasets and enables us to segment unconstrained animal video into intervals containing single behaviors.
[ { "created": "Fri, 28 Nov 2014 14:43:03 GMT", "version": "v1" }, { "created": "Tue, 16 Dec 2014 13:56:07 GMT", "version": "v2" }, { "created": "Fri, 24 Apr 2015 15:29:06 GMT", "version": "v3" } ]
2015-04-27
[ [ "Del Pero", "Luca", "" ], [ "Ricco", "Susanna", "" ], [ "Sukthankar", "Rahul", "" ], [ "Ferrari", "Vittorio", "" ] ]
We propose an unsupervised approach for discovering characteristic motion patterns in videos of highly articulated objects performing natural, unscripted behaviors, such as tigers in the wild. We discover consistent patterns in a bottom-up manner by analyzing the relative displacements of large numbers of ordered trajectory pairs through time, such that each trajectory is attached to a different moving part on the object. The pairs of trajectories descriptor relies entirely on motion and is more discriminative than state-of-the-art features that employ single trajectories. Our method generates temporal video intervals, each automatically trimmed to one instance of the discovered behavior, and clusters them by type (e.g., running, turning head, drinking water). We present experiments on two datasets: dogs from YouTube-Objects and a new dataset of National Geographic tiger videos. Results confirm that our proposed descriptor outperforms existing appearance- and trajectory-based descriptors (e.g., HOG and DTFs) on both datasets and enables us to segment unconstrained animal video into intervals containing single behaviors.
2303.14219
EPTCS
Clemens Grabmayer (GSSI)
Proceedings Twelfth International Workshop on Computing with Terms and Graphs
null
EPTCS 377, 2023
10.4204/EPTCS.377
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
The workshop TERMGRAPH 2022 took place at Technion in Haifa, Israel, on August 1, 2022, in the Pre-FLoC workshop block (July 31-August 1) of FLoC 2022 (Federated Logic Conference 2022, July 31-August 12). As such, TERMGRAPH 2022 was a one-day satellite event of the conference FSCD 2022 (Formal Structures for Computation and Deduction 2022, August 2-5).
[ { "created": "Fri, 24 Mar 2023 18:26:26 GMT", "version": "v1" }, { "created": "Sun, 2 Apr 2023 20:10:28 GMT", "version": "v2" } ]
2023-04-04
[ [ "Grabmayer", "Clemens", "", "GSSI" ] ]
The workshop TERMGRAPH 2022 took place at Technion in Haifa, Israel, on August 1, 2022, in the Pre-FLoC workshop block (July 31-August 1) of FLoC 2022 (Federated Logic Conference 2022, July 31-August 12). As such, TERMGRAPH 2022 was a one-day satellite event of the conference FSCD 2022 (Formal Structures for Computation and Deduction 2022, August 2-5).
1403.3034
Phillip James
Phillip James, Markus Roggenbach
Encapsulating Formal Methods within Domain Specific Languages: A Solution for Verifying Railway Scheme Plans
null
null
null
null
cs.SE cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The development and application of formal methods is a long standing research topic within the field of computer science. One particular challenge that remains is the uptake of formal methods into industrial practices. This paper introduces a methodology for developing domain specific languages for modelling and verification to aid in the uptake of formal methods within industry. It illustrates the successful application of this methodology within the railway domain. The presented methodology addresses issues surrounding faithful modelling, scalability of verification and accessibility to modelling and verification processes for practitioners within the domain.
[ { "created": "Sun, 9 Mar 2014 21:42:56 GMT", "version": "v1" }, { "created": "Sun, 23 Mar 2014 15:20:51 GMT", "version": "v2" } ]
2014-03-25
[ [ "James", "Phillip", "" ], [ "Roggenbach", "Markus", "" ] ]
The development and application of formal methods is a long standing research topic within the field of computer science. One particular challenge that remains is the uptake of formal methods into industrial practices. This paper introduces a methodology for developing domain specific languages for modelling and verification to aid in the uptake of formal methods within industry. It illustrates the successful application of this methodology within the railway domain. The presented methodology addresses issues surrounding faithful modelling, scalability of verification and accessibility to modelling and verification processes for practitioners within the domain.
2010.12607
Ra\'ul Nozal
Ra\'ul Nozal, Jose Luis Bosque and Ramon Beivide
Towards Co-execution on Commodity Heterogeneous Systems: Optimizations for Time-Constrained Scenarios
8 pages, 6 figures, conference
2019 International Conference on High Performance Computing & Simulation (HPCS), pp. 628-635
10.1109/HPCS48598.2019.9188188
null
cs.DC cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heterogeneous systems are present everywhere, from powerful supercomputers to mobile devices, including desktop computers, thanks to their excellent performance and energy efficiency. The ubiquity of these architectures in both desktop systems and medium-sized service servers allows enough variability to exploit a wide range of problems, such as multimedia workloads, video encoding, image filtering and inference in machine learning. Due to this heterogeneity, some efforts have been made to reduce the programming effort and preserve performance portability, but these systems pose a set of challenges. The context in which applications offload the workload, along with the management overheads introduced when doing co-execution, penalizes the performance gains under time-constrained scenarios. Therefore, this paper proposes optimizations for the EngineCL runtime to reduce the penalization when co-executing in commodity systems, as well as algorithmic improvements when load balancing. An exhaustive experimental evaluation is performed, showing optimization improvements of 7.5\% and 17.4\% for binary and ROI-based offloading modes, respectively. Thanks to all the optimizations, the new load balancing algorithm is always the most efficient scheduling configuration, achieving an average efficiency of 0.84 under a pessimistic scenario.
[ { "created": "Fri, 23 Oct 2020 18:32:27 GMT", "version": "v1" } ]
2020-10-27
[ [ "Nozal", "Raúl", "" ], [ "Bosque", "Jose Luis", "" ], [ "Beivide", "Ramon", "" ] ]
Heterogeneous systems are present everywhere, from powerful supercomputers to mobile devices, including desktop computers, thanks to their excellent performance and energy efficiency. The ubiquity of these architectures in both desktop systems and medium-sized service servers allows enough variability to exploit a wide range of problems, such as multimedia workloads, video encoding, image filtering and inference in machine learning. Due to this heterogeneity, some efforts have been made to reduce the programming effort and preserve performance portability, but these systems pose a set of challenges. The context in which applications offload the workload, along with the management overheads introduced when doing co-execution, penalizes the performance gains under time-constrained scenarios. Therefore, this paper proposes optimizations for the EngineCL runtime to reduce the penalization when co-executing in commodity systems, as well as algorithmic improvements when load balancing. An exhaustive experimental evaluation is performed, showing optimization improvements of 7.5\% and 17.4\% for binary and ROI-based offloading modes, respectively. Thanks to all the optimizations, the new load balancing algorithm is always the most efficient scheduling configuration, achieving an average efficiency of 0.84 under a pessimistic scenario.
2005.07954
Izumi Haruta
Izumi Haruta, Koji Mineshima, Daisuke Bekki
Logical Inferences with Comparatives and Generalized Quantifiers
To appear in the Proceedings of the Association for Computational Linguistics: Student Research Workshop (ACL-SRW 2020)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Comparative constructions pose a challenge in Natural Language Inference (NLI), which is the task of determining whether a text entails a hypothesis. Comparatives are structurally complex in that they interact with other linguistic phenomena such as quantifiers, numerals, and lexical antonyms. In formal semantics, there is a rich body of work on comparatives and gradable expressions using the notion of degree. However, a logical inference system for comparatives has not been sufficiently developed for use in the NLI task. In this paper, we present a compositional semantics that maps various comparative constructions in English to semantic representations via Combinatory Categorial Grammar (CCG) parsers and combine it with an inference system based on automated theorem proving. We evaluate our system on three NLI datasets that contain complex logical inferences with comparatives, generalized quantifiers, and numerals. We show that the system outperforms previous logic-based systems as well as recent deep learning-based models.
[ { "created": "Sat, 16 May 2020 11:11:48 GMT", "version": "v1" } ]
2020-05-19
[ [ "Haruta", "Izumi", "" ], [ "Mineshima", "Koji", "" ], [ "Bekki", "Daisuke", "" ] ]
Comparative constructions pose a challenge in Natural Language Inference (NLI), which is the task of determining whether a text entails a hypothesis. Comparatives are structurally complex in that they interact with other linguistic phenomena such as quantifiers, numerals, and lexical antonyms. In formal semantics, there is a rich body of work on comparatives and gradable expressions using the notion of degree. However, a logical inference system for comparatives has not been sufficiently developed for use in the NLI task. In this paper, we present a compositional semantics that maps various comparative constructions in English to semantic representations via Combinatory Categorial Grammar (CCG) parsers and combine it with an inference system based on automated theorem proving. We evaluate our system on three NLI datasets that contain complex logical inferences with comparatives, generalized quantifiers, and numerals. We show that the system outperforms previous logic-based systems as well as recent deep learning-based models.
2212.05786
Chao Hu
Chao Hu, Shengxin Lai
Multi-scale Feature Imitation for Unsupervised Anomaly Localization
International Joint Conference on Neural Networks 2023
International Joint Conference on Neural Networks 2023
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The unsupervised anomaly localization task faces the challenges of training without anomaly samples, detecting multiple types of anomalies, and handling anomalies whose areas cover widely varying proportions of the image. A separate teacher-student feature imitation network structure and a multi-scale processing strategy combining an image and feature pyramid are proposed to solve these problems. A network module importance search method based on gradient descent optimization is proposed to simplify the network structure. The experimental results show that the proposed algorithm performs better than contemporaneous feature-modeling anomaly localization methods on a real industrial product detection dataset. The multi-scale strategy effectively improves performance compared with the benchmark method.
[ { "created": "Mon, 12 Dec 2022 09:21:24 GMT", "version": "v1" }, { "created": "Tue, 13 Dec 2022 02:40:13 GMT", "version": "v2" } ]
2022-12-14
[ [ "Hu", "Chao", "" ], [ "Lai", "Shengxin", "" ] ]
The unsupervised anomaly localization task faces the challenges of training without anomaly samples, detecting multiple types of anomalies, and handling anomalies whose areas cover widely varying proportions of the image. A separate teacher-student feature imitation network structure and a multi-scale processing strategy combining an image and feature pyramid are proposed to solve these problems. A network module importance search method based on gradient descent optimization is proposed to simplify the network structure. The experimental results show that the proposed algorithm performs better than contemporaneous feature-modeling anomaly localization methods on a real industrial product detection dataset. The multi-scale strategy effectively improves performance compared with the benchmark method.
2101.11282
Wei Chen
Wei Chen, Yu Liu, Weiping Wang, Erwin Bakker, Theodoros Georgiou, Paul Fieguth, Li Liu, and Michael S. Lew
Deep Learning for Instance Retrieval: A Survey
IEEE Transactions on Pattern Analysis and Machine Intelligence
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
In recent years a vast amount of visual content has been generated and shared from many fields, such as social media platforms, medical imaging, and robotics. This abundance of content creation and sharing has introduced new challenges, particularly that of searching databases for similar content, known as Content-Based Image Retrieval (CBIR), a long-established research area in which improved efficiency and accuracy are needed for real-time retrieval. Artificial intelligence has made progress in CBIR and has significantly facilitated the process of instance search. In this survey we review recent instance retrieval works that are developed based on deep learning algorithms and techniques, with the survey organized by deep network architecture types, deep features, feature embedding and aggregation methods, and network fine-tuning strategies. Our survey considers a wide variety of recent methods, whereby we identify milestone work, reveal connections among various methods, present the commonly used benchmarks, evaluation results, and common challenges, and propose promising future directions.
[ { "created": "Wed, 27 Jan 2021 09:32:58 GMT", "version": "v1" }, { "created": "Wed, 3 Feb 2021 00:33:32 GMT", "version": "v2" }, { "created": "Sat, 8 Jan 2022 11:35:01 GMT", "version": "v3" }, { "created": "Sun, 30 Oct 2022 05:39:12 GMT", "version": "v4" } ]
2022-11-01
[ [ "Chen", "Wei", "" ], [ "Liu", "Yu", "" ], [ "Wang", "Weiping", "" ], [ "Bakker", "Erwin", "" ], [ "Georgiou", "Theodoros", "" ], [ "Fieguth", "Paul", "" ], [ "Liu", "Li", "" ], [ "Lew", "Michael S.", "" ] ]
In recent years a vast amount of visual content has been generated and shared from many fields, such as social media platforms, medical imaging, and robotics. This abundance of content creation and sharing has introduced new challenges, particularly that of searching databases for similar content, known as Content-Based Image Retrieval (CBIR), a long-established research area in which improved efficiency and accuracy are needed for real-time retrieval. Artificial intelligence has made progress in CBIR and has significantly facilitated the process of instance search. In this survey we review recent instance retrieval works that are developed based on deep learning algorithms and techniques, with the survey organized by deep network architecture types, deep features, feature embedding and aggregation methods, and network fine-tuning strategies. Our survey considers a wide variety of recent methods, whereby we identify milestone work, reveal connections among various methods, present the commonly used benchmarks, evaluation results, and common challenges, and propose promising future directions.
2109.07788
Prasanth Sengadu Suresh
Prasanth Sengadu Suresh, Prashant Doshi
Marginal MAP Estimation for Inverse RL under Occlusion with Observer Noise
null
null
null
null
cs.RO cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
We consider the problem of learning the behavioral preferences of an expert engaged in a task from noisy and partially-observable demonstrations. This is motivated by real-world applications such as a line robot learning from observing a human worker, where some observations are occluded by environmental objects that cannot be removed. Furthermore, robotic perception tends to be imperfect and noisy. Previous techniques for inverse reinforcement learning (IRL) take the approach of either omitting the missing portions or inferring it as part of expectation-maximization, which tends to be slow and prone to local optima. We present a new method that generalizes the well-known Bayesian maximum-a-posteriori (MAP) IRL method by marginalizing the occluded portions of the trajectory. This is additionally extended with an observation model to account for perception noise. We show that the marginal MAP (MMAP) approach significantly improves on the previous IRL technique under occlusion in both formative evaluations on a toy problem and in a summative evaluation on an onion sorting line task by a robot.
[ { "created": "Thu, 16 Sep 2021 08:20:52 GMT", "version": "v1" } ]
2021-09-17
[ [ "Suresh", "Prasanth Sengadu", "" ], [ "Doshi", "Prashant", "" ] ]
We consider the problem of learning the behavioral preferences of an expert engaged in a task from noisy and partially-observable demonstrations. This is motivated by real-world applications such as a line robot learning from observing a human worker, where some observations are occluded by environmental objects that cannot be removed. Furthermore, robotic perception tends to be imperfect and noisy. Previous techniques for inverse reinforcement learning (IRL) take the approach of either omitting the missing portions or inferring it as part of expectation-maximization, which tends to be slow and prone to local optima. We present a new method that generalizes the well-known Bayesian maximum-a-posteriori (MAP) IRL method by marginalizing the occluded portions of the trajectory. This is additionally extended with an observation model to account for perception noise. We show that the marginal MAP (MMAP) approach significantly improves on the previous IRL technique under occlusion in both formative evaluations on a toy problem and in a summative evaluation on an onion sorting line task by a robot.
1607.03257
Benjamin Elizalde
Benjamin Elizalde, Guan-Lin Chao, Ming Zeng, Ian Lane
City-Identification of Flickr Videos Using Semantic Acoustic Features
null
null
null
null
cs.MM cs.CV cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
City-identification of videos aims to determine the likelihood of a video belonging to a set of cities. In this paper, we present an approach using only audio; thus we do not use any additional modality such as images, user-tags or geo-tags. In this manner, we show to what extent the city-location of videos correlates with their acoustic information. Success in this task suggests improvements can be made to complement the other modalities. In particular, we present a method to compute and use semantic acoustic features to perform city-identification, and the features show semantic evidence of the identification. The semantic evidence is given by a taxonomy of urban sounds and expresses the potential presence of these sounds in the city-soundtracks. We used the MediaEval Placing Task set, which contains Flickr videos labeled by city. In addition, we used the UrbanSound8K set containing audio clips labeled by sound-type. Our method improved the state-of-the-art performance and provides a novel semantic approach to this task.
[ { "created": "Tue, 12 Jul 2016 08:30:45 GMT", "version": "v1" } ]
2016-07-13
[ [ "Elizalde", "Benjamin", "" ], [ "Chao", "Guan-Lin", "" ], [ "Zeng", "Ming", "" ], [ "Lane", "Ian", "" ] ]
City-identification of videos aims to determine the likelihood of a video belonging to a set of cities. In this paper, we present an approach using only audio; thus we do not use any additional modality such as images, user-tags or geo-tags. In this manner, we show to what extent the city-location of videos correlates with their acoustic information. Success in this task suggests improvements can be made to complement the other modalities. In particular, we present a method to compute and use semantic acoustic features to perform city-identification, and the features show semantic evidence of the identification. The semantic evidence is given by a taxonomy of urban sounds and expresses the potential presence of these sounds in the city-soundtracks. We used the MediaEval Placing Task set, which contains Flickr videos labeled by city. In addition, we used the UrbanSound8K set containing audio clips labeled by sound-type. Our method improved the state-of-the-art performance and provides a novel semantic approach to this task.
2406.10580
Xiaochen Ma
Xiaochen Ma, Xuekang Zhu, Lei Su, Bo Du, Zhuohang Jiang, Bingkui Tong, Zeyu Lei, Xinyu Yang, Chi-Man Pun, Jiancheng Lv, Jizhe Zhou
IMDL-BenCo: A Comprehensive Benchmark and Codebase for Image Manipulation Detection & Localization
Technical report
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
A comprehensive benchmark is yet to be established in the Image Manipulation Detection \& Localization (IMDL) field. The absence of such a benchmark leads to insufficient and misleading model evaluations, severely undermining the development of this field. However, the scarcity of open-sourced baseline models and inconsistent training and evaluation protocols make conducting rigorous experiments and faithful comparisons among IMDL models challenging. To address these challenges, we introduce IMDL-BenCo, the first comprehensive IMDL benchmark and modular codebase. IMDL-BenCo:~\textbf{i)} decomposes the IMDL framework into standardized, reusable components and revises the model construction pipeline, improving coding efficiency and customization flexibility;~\textbf{ii)} fully implements or incorporates training code for state-of-the-art models to establish a comprehensive IMDL benchmark; and~\textbf{iii)} conducts deep analysis based on the established benchmark and codebase, offering new insights into IMDL model architecture, dataset characteristics, and evaluation standards. Specifically, IMDL-BenCo includes common processing algorithms, 8 state-of-the-art IMDL models (1 of which is reproduced from scratch), 2 sets of standard training and evaluation protocols, 15 GPU-accelerated evaluation metrics, and 3 kinds of robustness evaluation. This benchmark and codebase represent a significant leap forward in calibrating the current progress in the IMDL field and inspiring future breakthroughs. Code is available at: https://github.com/scu-zjz/IMDLBenCo
[ { "created": "Sat, 15 Jun 2024 09:44:54 GMT", "version": "v1" } ]
2024-06-18
[ [ "Ma", "Xiaochen", "" ], [ "Zhu", "Xuekang", "" ], [ "Su", "Lei", "" ], [ "Du", "Bo", "" ], [ "Jiang", "Zhuohang", "" ], [ "Tong", "Bingkui", "" ], [ "Lei", "Zeyu", "" ], [ "Yang", "Xinyu", "" ], [ "Pun", "Chi-Man", "" ], [ "Lv", "Jiancheng", "" ], [ "Zhou", "Jizhe", "" ] ]
A comprehensive benchmark is yet to be established in the Image Manipulation Detection \& Localization (IMDL) field. The absence of such a benchmark leads to insufficient and misleading model evaluations, severely undermining the development of this field. However, the scarcity of open-sourced baseline models and inconsistent training and evaluation protocols make conducting rigorous experiments and faithful comparisons among IMDL models challenging. To address these challenges, we introduce IMDL-BenCo, the first comprehensive IMDL benchmark and modular codebase. IMDL-BenCo:~\textbf{i)} decomposes the IMDL framework into standardized, reusable components and revises the model construction pipeline, improving coding efficiency and customization flexibility;~\textbf{ii)} fully implements or incorporates training code for state-of-the-art models to establish a comprehensive IMDL benchmark; and~\textbf{iii)} conducts deep analysis based on the established benchmark and codebase, offering new insights into IMDL model architecture, dataset characteristics, and evaluation standards. Specifically, IMDL-BenCo includes common processing algorithms, 8 state-of-the-art IMDL models (1 of which is reproduced from scratch), 2 sets of standard training and evaluation protocols, 15 GPU-accelerated evaluation metrics, and 3 kinds of robustness evaluation. This benchmark and codebase represent a significant leap forward in calibrating the current progress in the IMDL field and inspiring future breakthroughs. Code is available at: https://github.com/scu-zjz/IMDLBenCo
2312.05401
Ergun Akleman
Sitong Deng and Ergun Akleman
A Digital Compositing Approach to obtain Animated Chinese Still-life Paintings with Global Effects
14 pages
null
null
null
cs.GR
http://creativecommons.org/licenses/by/4.0/
In this work, we present a method for turning Chinese still-life paintings with global illumination effects into dynamic paintings with moving lights. Our goal is to preserve the original look and feel of still-life paintings with moving lights and objects. We have developed a deceptively simple method that can be computed as a composite of two animated texture images using an animated rendering. The compositing process can be implemented directly in an animation system such as After Effects, which allows for the basic compositing operation over animations. It is also possible to control the colors by changing the material colors in the animated rendering. We have provided a proof-of-concept based on an original digital still-life painting in realist Chinese style. This approach can be used to turn almost any still-life painting into a dynamic painting.
[ { "created": "Fri, 8 Dec 2023 22:53:59 GMT", "version": "v1" } ]
2023-12-12
[ [ "Deng", "Sitong", "" ], [ "Akleman", "Ergun", "" ] ]
In this work, we present a method for turning Chinese still-life paintings with global illumination effects into dynamic paintings with moving lights. Our goal is to preserve the original look and feel of still-life paintings with moving lights and objects. We have developed a deceptively simple method that can be computed as a composite of two animated texture images using an animated rendering. The compositing process can be implemented directly in an animation system such as After Effects, which allows for the basic compositing operation over animations. It is also possible to control the colors by changing the material colors in the animated rendering. We have provided a proof-of-concept based on an original digital still-life painting in realist Chinese style. This approach can be used to turn almost any still-life painting into a dynamic painting.
2003.00916
Bert Abrath
Bert Abrath, Bart Coppens, Jens Van den Broeck, Brecht Wyseur, Alessandro Cabutto, Paolo Falcarin, Bjorn De Sutter
Code Renewability for Native Software Protection
30 pages
null
10.1145/3404891
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software protection aims at safeguarding assets embedded in software by preventing and delaying reverse engineering and tampering attacks. This paper presents an architecture and supporting tool flow to renew parts of native applications dynamically. Renewed and diversified code and data belonging to either the original application or to linked-in protections are delivered from a secure server to a client on demand. This results in frequent changes to the software components when they are under attack, thus making attacks harder. By supporting various forms of diversification and renewability, novel protection combinations become available, and existing combinations become stronger. The prototype implementation is evaluated on a number of industrial use cases.
[ { "created": "Mon, 2 Mar 2020 13:45:04 GMT", "version": "v1" } ]
2020-06-25
[ [ "Abrath", "Bert", "" ], [ "Coppens", "Bart", "" ], [ "Broeck", "Jens Van den", "" ], [ "Wyseur", "Brecht", "" ], [ "Cabutto", "Alessandro", "" ], [ "Falcarin", "Paolo", "" ], [ "De Sutter", "Bjorn", "" ] ]
Software protection aims at safeguarding assets embedded in software by preventing and delaying reverse engineering and tampering attacks. This paper presents an architecture and supporting tool flow to renew parts of native applications dynamically. Renewed and diversified code and data belonging to either the original application or to linked-in protections are delivered from a secure server to a client on demand. This results in frequent changes to the software components when they are under attack, thus making attacks harder. By supporting various forms of diversification and renewability, novel protection combinations become available, and existing combinations become stronger. The prototype implementation is evaluated on a number of industrial use cases.
1705.09776
Xinfeng Zhang
Lingyu Duan, Wei Sun, Xinfeng Zhang, Shiqi Wang, Jie Chen, Jianxiong Yin, Simon See, Tiejun Huang, Alex C. Kot, Wen Gao
Fast MPEG-CDVS Encoder with GPU-CPU Hybrid Computing
null
null
10.1109/TIP.2018.2794203
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The compact descriptors for visual search (CDVS) standard from the ISO/IEC moving pictures experts group (MPEG) has succeeded in enabling interoperability for efficient and effective image retrieval by standardizing the bitstream syntax of compact feature descriptors. However, the intensive computation of the CDVS encoder unfortunately hinders its wide deployment in industry for large-scale visual search. In this paper, we revisit the merits of the low-complexity design of CDVS core techniques and present a very fast CDVS encoder by leveraging the massive parallel execution resources of the GPU. We elegantly shift the computation-intensive and parallel-friendly modules to state-of-the-art GPU platforms, in which the thread block allocation and the memory access are jointly optimized to eliminate performance loss. In addition, those operations with heavy data dependence are allocated to the CPU to relieve the GPU of this extra, unnecessary computation burden. Furthermore, we have demonstrated that the proposed fast CDVS encoder works well with convolutional neural network approaches, which have harmoniously leveraged the advantages of GPU platforms and yielded significant performance improvements. Comprehensive experimental results over benchmarks show that the fast CDVS encoder using GPU-CPU hybrid computing is promising for scalable visual search.
[ { "created": "Sat, 27 May 2017 06:59:37 GMT", "version": "v1" }, { "created": "Fri, 9 Jun 2017 11:26:11 GMT", "version": "v2" } ]
2018-03-14
[ [ "Duan", "Lingyu", "" ], [ "Sun", "Wei", "" ], [ "Zhang", "Xinfeng", "" ], [ "Wang", "Shiqi", "" ], [ "Chen", "Jie", "" ], [ "Yin", "Jianxiong", "" ], [ "See", "Simon", "" ], [ "Huang", "Tiejun", "" ], [ "Kot", "Alex C.", "" ], [ "Gao", "Wen", "" ] ]
The compact descriptors for visual search (CDVS) standard from the ISO/IEC moving pictures experts group (MPEG) has succeeded in enabling interoperability for efficient and effective image retrieval by standardizing the bitstream syntax of compact feature descriptors. However, the intensive computation of the CDVS encoder unfortunately hinders its wide deployment in industry for large-scale visual search. In this paper, we revisit the merits of the low-complexity design of CDVS core techniques and present a very fast CDVS encoder by leveraging the massive parallel execution resources of the GPU. We elegantly shift the computation-intensive and parallel-friendly modules to state-of-the-art GPU platforms, in which the thread block allocation and the memory access are jointly optimized to eliminate performance loss. In addition, those operations with heavy data dependence are allocated to the CPU to relieve the GPU of this extra, unnecessary computation burden. Furthermore, we have demonstrated that the proposed fast CDVS encoder works well with convolutional neural network approaches, which have harmoniously leveraged the advantages of GPU platforms and yielded significant performance improvements. Comprehensive experimental results over benchmarks show that the fast CDVS encoder using GPU-CPU hybrid computing is promising for scalable visual search.
2208.00316
Guilherme Paulino-Passos
Guilherme Paulino-Passos and Francesca Toni
On Interactive Explanations as Non-Monotonic Reasoning
Corrected version for the XAI-IJCAI 2022 workshop, expands on the XLoKR-KR 2022 workshop
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work shows issues of consistency with explanations, with methods generating local explanations that seem reasonable instance-wise, but that are inconsistent across instances. This suggests not only that instance-wise explanations can be unreliable, but mainly that, when interacting with a system via multiple inputs, a user may actually lose confidence in the system. To better analyse this issue, in this work we treat explanations as objects that can be subject to reasoning and present a formal model of the interactive scenario between user and system, via sequences of inputs, outputs, and explanations. We argue that explanations can be thought of as committing to some model behaviour (even if only prima facie), suggesting a form of entailment, which, we argue, should be thought of as non-monotonic. This allows: 1) to solve some considered inconsistencies in explanation, such as via a specificity relation; 2) to consider properties from the non-monotonic reasoning literature and discuss their desirability, gaining more insight on the interactive explanation scenario.
[ { "created": "Sat, 30 Jul 2022 22:08:35 GMT", "version": "v1" } ]
2022-08-02
[ [ "Paulino-Passos", "Guilherme", "" ], [ "Toni", "Francesca", "" ] ]
Recent work shows issues of consistency with explanations, with methods generating local explanations that seem reasonable instance-wise, but that are inconsistent across instances. This suggests not only that instance-wise explanations can be unreliable, but mainly that, when interacting with a system via multiple inputs, a user may actually lose confidence in the system. To better analyse this issue, in this work we treat explanations as objects that can be subject to reasoning and present a formal model of the interactive scenario between user and system, via sequences of inputs, outputs, and explanations. We argue that explanations can be thought of as committing to some model behaviour (even if only prima facie), suggesting a form of entailment, which, we argue, should be thought of as non-monotonic. This allows: 1) to solve some considered inconsistencies in explanation, such as via a specificity relation; 2) to consider properties from the non-monotonic reasoning literature and discuss their desirability, gaining more insight on the interactive explanation scenario.
2208.00536
J\k{e}drzej Ko{\l}odziejski
J\k{e}drzej Ko{\l}odziejski and Bartek Klin
Countdown $\mu$-calculus
30 pages, 1 figure
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
We introduce the countdown $\mu$-calculus, an extension of the modal $\mu$-calculus with ordinal approximations of fixpoint operators. In addition to properties definable in the classical calculus, it can express (un)boundedness properties such as the existence of arbitrarily long sequences of specific actions. The standard correspondence with parity games and automata extends to suitably defined countdown games and automata. However, unlike in the classical setting, the scalar fragment is provably weaker than the full vectorial calculus and corresponds to automata satisfying a simple syntactic condition. We establish some facts, in particular decidability of the model checking problem and strictness of the hierarchy induced by the maximal allowed nesting of our new operators.
[ { "created": "Sun, 31 Jul 2022 22:47:44 GMT", "version": "v1" } ]
2022-08-02
[ [ "Kołodziejski", "Jędrzej", "" ], [ "Klin", "Bartek", "" ] ]
We introduce the countdown $\mu$-calculus, an extension of the modal $\mu$-calculus with ordinal approximations of fixpoint operators. In addition to properties definable in the classical calculus, it can express (un)boundedness properties such as the existence of arbitrarily long sequences of specific actions. The standard correspondence with parity games and automata extends to suitably defined countdown games and automata. However, unlike in the classical setting, the scalar fragment is provably weaker than the full vectorial calculus and corresponds to automata satisfying a simple syntactic condition. We establish some facts, in particular decidability of the model checking problem and strictness of the hierarchy induced by the maximal allowed nesting of our new operators.
1502.03951
Pascal Weil
Howard Straubing and Pascal Weil
Varieties
This is a chapter in an upcoming Handbook of Automata Theory
Chapter 16 in Handbook of Automata Theory (Jean-Eric Pin ed.), EMS Publishing, 2021
10.4171/Automata
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This text is devoted to the theory of varieties, which provides an important tool, based in universal algebra, for the classification of regular languages. In the introductory section, we present a number of examples that illustrate and motivate the fundamental concepts. We do this for the most part without proofs, and often without precise definitions, leaving these to the formal development of the theory that begins in Section 2. Our presentation of the theory draws heavily on the work of Gehrke, Grigorieff and Pin (2008) on the equational theory of lattices of regular languages. In the subsequent sections we consider in more detail aspects of varieties that were only briefly evoked in the introduction: Decidability, operations on languages, and characterizations in formal logic.
[ { "created": "Fri, 13 Feb 2015 11:40:06 GMT", "version": "v1" }, { "created": "Thu, 2 Jul 2015 07:45:02 GMT", "version": "v2" }, { "created": "Mon, 14 May 2018 16:18:44 GMT", "version": "v3" } ]
2021-11-19
[ [ "Straubing", "Howard", "" ], [ "Weil", "Pascal", "" ] ]
This text is devoted to the theory of varieties, which provides an important tool, based in universal algebra, for the classification of regular languages. In the introductory section, we present a number of examples that illustrate and motivate the fundamental concepts. We do this for the most part without proofs, and often without precise definitions, leaving these to the formal development of the theory that begins in Section 2. Our presentation of the theory draws heavily on the work of Gehrke, Grigorieff and Pin (2008) on the equational theory of lattices of regular languages. In the subsequent sections we consider in more detail aspects of varieties that were only briefly evoked in the introduction: Decidability, operations on languages, and characterizations in formal logic.
1308.5338
EPTCS
Andrea Ocone (School of Informatics, University of Edinburgh), Guido Sanguinetti (School of Informatics, University of Edinburgh)
A stochastic hybrid model of a biological filter
In Proceedings HAS 2013, arXiv:1308.4904
EPTCS 124, 2013, pp. 100-108
10.4204/EPTCS.124.10
null
cs.LG cs.CE q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a hybrid model of a biological filter, a genetic circuit which removes fast fluctuations in the cell's internal representation of the extracellular environment. The model takes the classic feed-forward loop (FFL) motif and represents it as a network of continuous protein concentrations and binary, unobserved gene promoter states. We address the problem of statistical inference and parameter learning for this class of models from partial, discrete time observations. We show that the hybrid representation leads to an efficient algorithm for approximate statistical inference in this circuit, and show its effectiveness on a simulated data set.
[ { "created": "Sat, 24 Aug 2013 14:34:38 GMT", "version": "v1" } ]
2013-08-27
[ [ "Ocone", "Andrea", "", "School of Informatics, University of Edinburgh" ], [ "Sanguinetti", "Guido", "", "School of Informatics, University of Edinburgh" ] ]
We present a hybrid model of a biological filter, a genetic circuit which removes fast fluctuations in the cell's internal representation of the extracellular environment. The model takes the classic feed-forward loop (FFL) motif and represents it as a network of continuous protein concentrations and binary, unobserved gene promoter states. We address the problem of statistical inference and parameter learning for this class of models from partial, discrete time observations. We show that the hybrid representation leads to an efficient algorithm for approximate statistical inference in this circuit, and show its effectiveness on a simulated data set.
1712.08352
Edgard Marx
Edgard Marx (1 and 2), Tommaso Soru (1), Andr\'e Valdestilhas (1) ((1) University of Leipzig, (2) Leipzig University of Applied Sciences)
Triple Scoring Using a Hybrid Fact Validation Approach - The Catsear Triple Scorer at WSDM Cup 2017
Triple Scorer at WSDM Cup 2017, see arXiv:1712.08081
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the continuous increase of data daily published in knowledge bases across the Web, one of the main issues concerns information relevance. In most knowledge bases, a triple (i.e., a statement composed of subject, predicate, and object) can only be true or false. However, triples can be assigned a score to have information sorted by relevance. In this work, we describe the participation of the Catsear team in the Triple Scoring Challenge at the WSDM Cup 2017. The Catsear approach scores triples by combining the answers coming from three different sources using a linear regression classifier. We show how our approach achieved an Accuracy2 value of 79.58% and the overall 4th place.
[ { "created": "Fri, 22 Dec 2017 09:04:55 GMT", "version": "v1" } ]
2017-12-25
[ [ "Marx", "Edgard", "", "1 and 2" ], [ "Soru", "Tommaso", "" ], [ "Valdestilhas", "André", "" ] ]
With the continuous increase of data daily published in knowledge bases across the Web, one of the main issues concerns information relevance. In most knowledge bases, a triple (i.e., a statement composed of subject, predicate, and object) can only be true or false. However, triples can be assigned a score to have information sorted by relevance. In this work, we describe the participation of the Catsear team in the Triple Scoring Challenge at the WSDM Cup 2017. The Catsear approach scores triples by combining the answers coming from three different sources using a linear regression classifier. We show how our approach achieved an Accuracy2 value of 79.58% and the overall 4th place.
1906.06350
Ekram Hossain
Ahmed Refaey, Karim Hammad, Sebastian Magierowski, and Ekram Hossain
A Blockchain Policy and Charging Control Framework for Roaming in Cellular Networks
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a technology foundation of cryptocurrencies, blockchain enables decentralized peer-to-peer trading through consensus mechanisms without the involvement of a third party. Blockchain has been regarded as an auspicious technology for future cellular networks. It is able to provide solutions to problems related to mobile operators and user trust, embedded smart contracts, security concerns, pricing (e.g. for roaming), etc. When applying blockchain to cellular networks, there are significant challenges in terms of deployment and application, due to resource-constrained transactions. This article begins by introducing the basic concept of blockchain and then moves on to illustrate its benefits and limitations in the roaming system. Two models of roaming-based blockchain technologies are offered to show their suitability for cellular networks as opposed to traditional technology. Finally, potential issues and challenges of roaming-based blockchains are addressed and evaluated using the roaming use case in the EU.
[ { "created": "Fri, 14 Jun 2019 18:07:00 GMT", "version": "v1" } ]
2019-06-18
[ [ "Refaey", "Ahmed", "" ], [ "Hammad", "Karim", "" ], [ "Magierowski", "Sebastian", "" ], [ "Hossain", "Ekram", "" ] ]
As a technology foundation of cryptocurrencies, blockchain enables decentralized peer-to-peer trading through consensus mechanisms without the involvement of a third party. Blockchain has been regarded as an auspicious technology for future cellular networks. It is able to provide solutions to problems related to mobile operators and user trust, embedded smart contracts, security concerns, pricing (e.g. for roaming), etc. When applying blockchain to cellular networks, there are significant challenges in terms of deployment and application, due to resource-constrained transactions. This article begins by introducing the basic concept of blockchain and then moves on to illustrate its benefits and limitations in the roaming system. Two models of roaming-based blockchain technologies are offered to show their suitability for cellular networks as opposed to traditional technology. Finally, potential issues and challenges of roaming-based blockchains are addressed and evaluated using the roaming use case in the EU.
2204.02964
Yuxin Fang
Yuxin Fang, Shusheng Yang, Shijie Wang, Yixiao Ge, Ying Shan, Xinggang Wang
Unleashing Vanilla Vision Transformer with Masked Image Modeling for Object Detection
v2: more analysis & stronger results. Preprint. Work in progress. Code and pre-trained models are available at https://github.com/hustvl/MIMDet
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an approach to efficiently and effectively adapt a masked image modeling (MIM) pre-trained vanilla Vision Transformer (ViT) for object detection, which is based on our two novel observations: (i) A MIM pre-trained vanilla ViT encoder can work surprisingly well in the challenging object-level recognition scenario even with randomly sampled partial observations, e.g., only 25% $\sim$ 50% of the input embeddings. (ii) In order to construct multi-scale representations for object detection from a single-scale ViT, a randomly initialized compact convolutional stem supplants the pre-trained large-kernel patchify stem, and its intermediate features can naturally serve as the higher-resolution inputs of a feature pyramid network without further upsampling or other manipulations. Meanwhile, the pre-trained ViT is regarded only as the 3$^{rd}$ stage of our detector's backbone rather than the whole feature extractor. This results in a ConvNet-ViT hybrid feature extractor. The proposed detector, named MIMDet, enables a MIM pre-trained vanilla ViT to outperform the hierarchical Swin Transformer by 2.5 box AP and 2.6 mask AP on COCO, and achieves better results compared with the previous best adapted vanilla ViT detector using a more modest fine-tuning recipe while converging 2.8$\times$ faster. Code and pre-trained models are available at https://github.com/hustvl/MIMDet.
[ { "created": "Wed, 6 Apr 2022 17:59:04 GMT", "version": "v1" }, { "created": "Thu, 19 May 2022 03:41:11 GMT", "version": "v2" } ]
2022-05-20
[ [ "Fang", "Yuxin", "" ], [ "Yang", "Shusheng", "" ], [ "Wang", "Shijie", "" ], [ "Ge", "Yixiao", "" ], [ "Shan", "Ying", "" ], [ "Wang", "Xinggang", "" ] ]
We present an approach to efficiently and effectively adapt a masked image modeling (MIM) pre-trained vanilla Vision Transformer (ViT) for object detection, which is based on our two novel observations: (i) A MIM pre-trained vanilla ViT encoder can work surprisingly well in the challenging object-level recognition scenario even with randomly sampled partial observations, e.g., only 25% $\sim$ 50% of the input embeddings. (ii) In order to construct multi-scale representations for object detection from a single-scale ViT, a randomly initialized compact convolutional stem supplants the pre-trained large-kernel patchify stem, and its intermediate features can naturally serve as the higher-resolution inputs of a feature pyramid network without further upsampling or other manipulations. Meanwhile, the pre-trained ViT is regarded only as the 3$^{rd}$ stage of our detector's backbone rather than the whole feature extractor. This results in a ConvNet-ViT hybrid feature extractor. The proposed detector, named MIMDet, enables a MIM pre-trained vanilla ViT to outperform the hierarchical Swin Transformer by 2.5 box AP and 2.6 mask AP on COCO, and achieves better results compared with the previous best adapted vanilla ViT detector using a more modest fine-tuning recipe while converging 2.8$\times$ faster. Code and pre-trained models are available at https://github.com/hustvl/MIMDet.
1912.09893
Federico Errica
Federico Errica, Marco Podda, Davide Bacciu, Alessio Micheli
A Fair Comparison of Graph Neural Networks for Graph Classification
Extended version of the paper published at the International Conference on Learning Representations (ICLR), 2020. Additional results are shown in the appendix
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experimental reproducibility and replicability are critical topics in machine learning. Authors have often raised concerns about their lack in scientific publications to improve the quality of the field. Recently, the graph representation learning field has attracted the attention of a wide research community, which resulted in a large stream of works. As such, several Graph Neural Network models have been developed to effectively tackle graph classification. However, experimental procedures often lack rigorousness and are hardly reproducible. Motivated by this, we provide an overview of common practices that should be avoided to fairly compare with the state of the art. To counter this troubling trend, we ran more than 47000 experiments in a controlled and uniform framework to re-evaluate five popular models across nine common benchmarks. Moreover, by comparing GNNs with structure-agnostic baselines we provide convincing evidence that, on some datasets, structural information has not been exploited yet. We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models.
[ { "created": "Fri, 20 Dec 2019 15:40:50 GMT", "version": "v1" }, { "created": "Tue, 7 Jan 2020 13:49:46 GMT", "version": "v2" }, { "created": "Thu, 17 Feb 2022 20:19:28 GMT", "version": "v3" } ]
2022-02-21
[ [ "Errica", "Federico", "" ], [ "Podda", "Marco", "" ], [ "Bacciu", "Davide", "" ], [ "Micheli", "Alessio", "" ] ]
Experimental reproducibility and replicability are critical topics in machine learning. Authors have often raised concerns about their lack in scientific publications to improve the quality of the field. Recently, the graph representation learning field has attracted the attention of a wide research community, which resulted in a large stream of works. As such, several Graph Neural Network models have been developed to effectively tackle graph classification. However, experimental procedures often lack rigorousness and are hardly reproducible. Motivated by this, we provide an overview of common practices that should be avoided to fairly compare with the state of the art. To counter this troubling trend, we ran more than 47000 experiments in a controlled and uniform framework to re-evaluate five popular models across nine common benchmarks. Moreover, by comparing GNNs with structure-agnostic baselines we provide convincing evidence that, on some datasets, structural information has not been exploited yet. We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models.
1305.4682
Jiayi Chen
Jiayi Chen and Q. T. Zhang
Joint Space Decomposition-and-Synthesis Theory for K-User MIMO Channels: Interference Alignment and DoF Region
Withdraw because lack of converse theorem for the result
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies DoF of interference alignment in K-user MIMO interference channels.
[ { "created": "Tue, 21 May 2013 00:34:25 GMT", "version": "v1" }, { "created": "Thu, 8 Aug 2013 00:49:56 GMT", "version": "v2" } ]
2013-08-09
[ [ "Chen", "Jiayi", "" ], [ "Zhang", "Q. T.", "" ] ]
This paper studies DoF of interference alignment in K-user MIMO interference channels.
2310.02970
Erik J Bekkers
Erik J Bekkers, Sharvaree Vadgama, Rob D Hesselink, Putri A van der Linden, David W Romero
Fast, Expressive SE$(n)$ Equivariant Networks through Weight-Sharing in Position-Orientation Space
Our code is publicly available at https://github.com/ebekkers/ponita . Published at ICLR 2024
null
null
null
cs.LG math.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Based on the theory of homogeneous spaces we derive geometrically optimal edge attributes to be used within the flexible message-passing framework. We formalize the notion of weight sharing in convolutional networks as the sharing of message functions over point-pairs that should be treated equally. We define equivalence classes of point-pairs that are identical up to a transformation in the group and derive attributes that uniquely identify these classes. Weight sharing is then obtained by conditioning message functions on these attributes. As an application of the theory, we develop an efficient equivariant group convolutional network for processing 3D point clouds. The theory of homogeneous spaces tells us how to do group convolutions with feature maps over the homogeneous space of positions $\mathbb{R}^3$, position and orientations $\mathbb{R}^3 {\times} S^2$, and the group $SE(3)$ itself. Among these, $\mathbb{R}^3 {\times} S^2$ is an optimal choice due to the ability to represent directional information, which $\mathbb{R}^3$ methods cannot, and it significantly enhances computational efficiency compared to indexing features on the full $SE(3)$ group. We support this claim with state-of-the-art results -- in accuracy and speed -- on five different benchmarks in 2D and 3D, including interatomic potential energy prediction, trajectory forecasting in N-body systems, and generating molecules via equivariant diffusion models.
[ { "created": "Wed, 4 Oct 2023 17:06:32 GMT", "version": "v1" }, { "created": "Fri, 24 Nov 2023 08:48:32 GMT", "version": "v2" }, { "created": "Fri, 15 Mar 2024 09:21:33 GMT", "version": "v3" } ]
2024-03-18
[ [ "Bekkers", "Erik J", "" ], [ "Vadgama", "Sharvaree", "" ], [ "Hesselink", "Rob D", "" ], [ "van der Linden", "Putri A", "" ], [ "Romero", "David W", "" ] ]
Based on the theory of homogeneous spaces we derive geometrically optimal edge attributes to be used within the flexible message-passing framework. We formalize the notion of weight sharing in convolutional networks as the sharing of message functions over point-pairs that should be treated equally. We define equivalence classes of point-pairs that are identical up to a transformation in the group and derive attributes that uniquely identify these classes. Weight sharing is then obtained by conditioning message functions on these attributes. As an application of the theory, we develop an efficient equivariant group convolutional network for processing 3D point clouds. The theory of homogeneous spaces tells us how to do group convolutions with feature maps over the homogeneous space of positions $\mathbb{R}^3$, position and orientations $\mathbb{R}^3 {\times} S^2$, and the group $SE(3)$ itself. Among these, $\mathbb{R}^3 {\times} S^2$ is an optimal choice due to the ability to represent directional information, which $\mathbb{R}^3$ methods cannot, and it significantly enhances computational efficiency compared to indexing features on the full $SE(3)$ group. We support this claim with state-of-the-art results -- in accuracy and speed -- on five different benchmarks in 2D and 3D, including interatomic potential energy prediction, trajectory forecasting in N-body systems, and generating molecules via equivariant diffusion models.
2101.08533
Yunpeng Gong
Yunpeng Gong, Liqing Huang, Lifei Chen
Eliminate Deviation with Deviation for Data Augmentation and a General Multi-modal Data Learning Method
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
One of the challenges of computer vision is that it needs to adapt to color deviations in changeable environments. Therefore, minimizing the adverse effects of color deviation on prediction is one of the main goals of vision tasks. Current solutions focus on using generative models to augment training data to enhance the invariance to input variation. However, such methods often introduce new noise, which limits the gain from generated data. To this end, this paper proposes a strategy that eliminates deviation with deviation, named Random Color Dropout (RCD). Our hypothesis is that if there is color deviation between the query image and the gallery image, the retrieval results of some examples will be better after ignoring the color information. Specifically, this strategy balances the weights between color features and color-independent features in the neural network by dropping out partial color information in the training data, so as to overcome the effect of color deviation. The proposed RCD can be combined with various existing ReID models without changing the learning strategy, and can be applied to other computer vision fields, such as object detection. Experiments on several ReID baselines and three common large-scale datasets, Market1501, DukeMTMC, and MSMT17, have verified the effectiveness of this method. Experiments on cross-domain tests have shown that this strategy significantly eliminates the domain gap. Furthermore, in order to understand the working mechanism of RCD, we analyzed the effectiveness of this strategy from the perspective of classification, which reveals that it may be better to utilize some instead of all of the color information in visual tasks with strong domain variations.
[ { "created": "Thu, 21 Jan 2021 10:33:02 GMT", "version": "v1" }, { "created": "Wed, 7 Apr 2021 08:26:49 GMT", "version": "v2" }, { "created": "Mon, 31 May 2021 15:15:14 GMT", "version": "v3" }, { "created": "Tue, 1 Jun 2021 01:30:13 GMT", "version": "v4" }, { "created": "Mon, 13 Jun 2022 14:16:14 GMT", "version": "v5" } ]
2022-06-14
[ [ "Gong", "Yunpeng", "" ], [ "Huang", "Liqing", "" ], [ "Chen", "Lifei", "" ] ]
One of the challenges of computer vision is that it needs to adapt to color deviations in changeable environments. Therefore, minimizing the adverse effects of color deviation on prediction is one of the main goals of vision tasks. Current solutions focus on using generative models to augment training data to enhance the invariance to input variation. However, such methods often introduce new noise, which limits the gain from generated data. To this end, this paper proposes a strategy that eliminates deviation with deviation, named Random Color Dropout (RCD). Our hypothesis is that if there is color deviation between the query image and the gallery image, the retrieval results of some examples will be better after ignoring the color information. Specifically, this strategy balances the weights between color features and color-independent features in the neural network by dropping out partial color information in the training data, so as to overcome the effect of color deviation. The proposed RCD can be combined with various existing ReID models without changing the learning strategy, and can be applied to other computer vision fields, such as object detection. Experiments on several ReID baselines and three common large-scale datasets, Market1501, DukeMTMC, and MSMT17, have verified the effectiveness of this method. Experiments on cross-domain tests have shown that this strategy significantly eliminates the domain gap. Furthermore, in order to understand the working mechanism of RCD, we analyzed the effectiveness of this strategy from the perspective of classification, which reveals that it may be better to utilize some instead of all of the color information in visual tasks with strong domain variations.
2209.14842
Zafi Sherhan Syed
Muhammad Shehram Shah Syed, Zafi Sherhan Syed and Abbas Syed
Classification of Vocal Bursts for ACII 2022 A-VB-Type Competition using Convolutional Neural Networks and Deep Acoustic Embeddings
Report for our submission to the ACII 2022 Affective Vocal Bursts (A-VB) Competition
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
This report provides a brief description of our proposed solution for the Vocal Burst Type classification task of the ACII 2022 Affective Vocal Bursts (A-VB) Competition. We experimented with two approaches as part of our solution for the task at hand. The first is based on convolutional neural networks trained on Mel spectrograms, and the second is based on average pooling of deep acoustic embeddings from a pretrained wav2vec2 model. Our best performing model achieves an unweighted average recall (UAR) of 0.5190 for the test partition, compared to the chance-level UAR of 0.1250 and a baseline of 0.4172, thus an improvement of around 20% over the challenge baseline. The results reported in this document demonstrate the efficacy of our proposed approaches to solve the A-VB Type classification task.
[ { "created": "Thu, 29 Sep 2022 14:58:23 GMT", "version": "v1" }, { "created": "Thu, 13 Oct 2022 04:55:38 GMT", "version": "v2" } ]
2022-10-14
[ [ "Syed", "Muhammad Shehram Shah", "" ], [ "Syed", "Zafi Sherhan", "" ], [ "Syed", "Abbas", "" ] ]
This report provides a brief description of our proposed solution for the Vocal Burst Type classification task of the ACII 2022 Affective Vocal Bursts (A-VB) Competition. We experimented with two approaches as part of our solution for the task at hand. The first is based on convolutional neural networks trained on Mel spectrograms, and the second is based on average pooling of deep acoustic embeddings from a pretrained wav2vec2 model. Our best performing model achieves an unweighted average recall (UAR) of 0.5190 for the test partition, compared to the chance-level UAR of 0.1250 and a baseline of 0.4172, thus an improvement of around 20% over the challenge baseline. The results reported in this document demonstrate the efficacy of our proposed approaches to solve the A-VB Type classification task.
1604.08137
Valmir C. Barbosa
Fabiano de S. Oliveira, Valmir C. Barbosa
On the mediation of program allocation in high-demand environments
This version addresses a few minor issues and fixes a derivative
null
null
null
cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we challenge the widely accepted premise that, in order to carry out a distributed computation, say on the cloud, users have to inform, along with all the inputs that the algorithm in use requires, the number of processors to be used. We discuss the complicated nature of deciding the value of such a parameter, should it be chosen optimally, and propose the alternative scenario in which this choice is passed on to the server side for automatic determination. We show that the allocation problem arising from this alternative is only weakly NP-hard, being therefore solvable in pseudo-polynomial time. In our proposal, one key component on which the automatic determination of the number of processors is based is the cost model. The one we use, which is being increasingly adopted in the wake of the cloud-computing movement, posits that each single execution of a program is to be subject to current circumstances on both user and server side, and as such be priced independently of all others. Running through our proposal is thus a critique of the established common sense that sizing a set of processors to handle a submission to some provider is entirely up to the user.
[ { "created": "Wed, 27 Apr 2016 16:48:45 GMT", "version": "v1" }, { "created": "Thu, 28 Apr 2016 17:43:43 GMT", "version": "v2" }, { "created": "Mon, 6 May 2019 17:48:56 GMT", "version": "v3" }, { "created": "Fri, 20 Sep 2019 17:22:17 GMT", "version": "v4" } ]
2019-09-23
[ [ "Oliveira", "Fabiano de S.", "" ], [ "Barbosa", "Valmir C.", "" ] ]
In this paper we challenge the widely accepted premise that, in order to carry out a distributed computation, say on the cloud, users have to inform, along with all the inputs that the algorithm in use requires, the number of processors to be used. We discuss the complicated nature of deciding the value of such a parameter, should it be chosen optimally, and propose the alternative scenario in which this choice is passed on to the server side for automatic determination. We show that the allocation problem arising from this alternative is only weakly NP-hard, being therefore solvable in pseudo-polynomial time. In our proposal, one key component on which the automatic determination of the number of processors is based is the cost model. The one we use, which is being increasingly adopted in the wake of the cloud-computing movement, posits that each single execution of a program is to be subject to current circumstances on both the user and server side, and as such be priced independently of all others. Running through our proposal is thus a critique of the established common sense that sizing a set of processors to handle a submission to some provider is entirely up to the user.
2202.03316
Fabio Saracco
Mattia Mattei, Manuel Pratelli, Guido Caldarelli, Marinella Petrocchi, and Fabio Saracco
Bow-Tie Structures of Twitter Discursive Communities
47 pages, 25 figures, 7 tables
Sci Rep 12, 12944 (2022)
10.1038/s41598-022-16603-7
null
cs.SI physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the analysis of Twitter debate, the recent literature has focused on discursive communities, i.e. clusters of accounts interacting among themselves via retweets. In the present work, we studied discursive communities in 8 different thematic Twitter datasets in various languages. Surprisingly, we observed that almost all discursive communities therein display a bow-tie structure during political or societal debates. Instead, they are absent when the subject of discussion is different, such as sport events, as in the case of the Euro2020 Turkish and Italian datasets. We furthermore analysed the quality of the content created in the various sectors of the different discursive communities, using the domain annotation from the fact-checking website Newsguard: we observe that, when the discursive community is affected by m/disinformation, the content with the lowest quality is that produced and shared in the SCC and, in particular, a strong incidence of low- or non-reputable messages is present in the flow of retweets between the SCC and the OUT sectors. In this sense, in discursive communities affected by m/disinformation, most accounts have access to a great variety of content whose quality is, in general, quite low; such a situation perfectly describes the phenomenon of infodemic, i.e. access to "an excessive amount of information about a problem, which makes it difficult to identify a solution", according to the WHO.
[ { "created": "Mon, 7 Feb 2022 16:01:03 GMT", "version": "v1" }, { "created": "Tue, 28 Jun 2022 16:06:23 GMT", "version": "v2" } ]
2022-08-01
[ [ "Mattei", "Mattia", "" ], [ "Pratelli", "Manuel", "" ], [ "Caldarelli", "Guido", "" ], [ "Petrocchi", "Marinella", "" ], [ "Saracco", "Fabio", "" ] ]
In the analysis of Twitter debate, the recent literature has focused on discursive communities, i.e. clusters of accounts interacting among themselves via retweets. In the present work, we studied discursive communities in 8 different thematic Twitter datasets in various languages. Surprisingly, we observed that almost all discursive communities therein display a bow-tie structure during political or societal debates. Instead, they are absent when the subject of discussion is different, such as sport events, as in the case of the Euro2020 Turkish and Italian datasets. We furthermore analysed the quality of the content created in the various sectors of the different discursive communities, using the domain annotation from the fact-checking website Newsguard: we observe that, when the discursive community is affected by m/disinformation, the content with the lowest quality is that produced and shared in the SCC and, in particular, a strong incidence of low- or non-reputable messages is present in the flow of retweets between the SCC and the OUT sectors. In this sense, in discursive communities affected by m/disinformation, most accounts have access to a great variety of content whose quality is, in general, quite low; such a situation perfectly describes the phenomenon of infodemic, i.e. access to "an excessive amount of information about a problem, which makes it difficult to identify a solution", according to the WHO.
1710.00852
Rui A. da Costa
N. A. M. Ara\'ujo, R. A. da Costa, S. N. Dorogovtsev, and J. F. F. Mendes
Finding the optimal nets for self-folding Kirigami
6 pages, 5 figures, Supplemental Material, Source Code
Phys. Rev. Lett. 120, 188001 (2018)
10.1103/PhysRevLett.120.188001
null
cs.DS cond-mat.soft cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Three-dimensional shells can be synthesized from the spontaneous self-folding of two-dimensional templates of interconnected panels, called nets. However, some nets are more likely to self-fold into the desired shell under random movements. The optimal nets are the ones that maximize the number of vertex connections, i.e., vertices that have only two of their faces cut away from each other in the net. Previous methods for finding such nets are based on random search and thus do not guarantee the optimal solution. Here, we propose a deterministic procedure. We map the connectivity of the shell into a shell graph, where the nodes and links of the graph represent the vertices and edges of the shell, respectively. Identifying the nets that maximize the number of vertex connections corresponds to finding the set of maximum leaf spanning trees of the shell graph. This method allows not only to design the self-assembly of much larger shell structures but also to apply additional design criteria, as a complete catalog of the maximum leaf spanning trees is obtained.
[ { "created": "Mon, 2 Oct 2017 18:11:45 GMT", "version": "v1" }, { "created": "Sat, 7 Jul 2018 21:13:32 GMT", "version": "v2" } ]
2018-07-10
[ [ "Araújo", "N. A. M.", "" ], [ "da Costa", "R. A.", "" ], [ "Dorogovtsev", "S. N.", "" ], [ "Mendes", "J. F. F.", "" ] ]
Three-dimensional shells can be synthesized from the spontaneous self-folding of two-dimensional templates of interconnected panels, called nets. However, some nets are more likely to self-fold into the desired shell under random movements. The optimal nets are the ones that maximize the number of vertex connections, i.e., vertices that have only two of their faces cut away from each other in the net. Previous methods for finding such nets are based on random search and thus do not guarantee the optimal solution. Here, we propose a deterministic procedure. We map the connectivity of the shell into a shell graph, where the nodes and links of the graph represent the vertices and edges of the shell, respectively. Identifying the nets that maximize the number of vertex connections corresponds to finding the set of maximum leaf spanning trees of the shell graph. This method allows not only to design the self-assembly of much larger shell structures but also to apply additional design criteria, as a complete catalog of the maximum leaf spanning trees is obtained.
2110.03800
Yassaman Ommi Ms.
Yassaman Ommi, Matin Yousefabadi, Faezeh Faez, Amirmojtaba Sabour, Mahdieh Soleymani Baghshah, Hamid R. Rabiee
CCGG: A Deep Autoregressive Model for Class-Conditional Graph Generation
null
null
10.1145/3487553.3524721
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph data structures are fundamental for studying connected entities. With an increase in the number of applications where data is represented as graphs, the problem of graph generation has recently become a hot topic. However, despite its significance, conditional graph generation that creates graphs with desired features is relatively less explored in previous studies. This paper addresses the problem of class-conditional graph generation that uses class labels as generation constraints by introducing the Class Conditioned Graph Generator (CCGG). We built CCGG by injecting the class information as an additional input into a graph generator model and including a classification loss in its total loss along with a gradient passing trick. Our experiments show that CCGG outperforms existing conditional graph generation methods on various datasets. It also manages to maintain the quality of the generated graphs in terms of distribution-based evaluation metrics.
[ { "created": "Thu, 7 Oct 2021 21:24:07 GMT", "version": "v1" }, { "created": "Mon, 25 Apr 2022 09:18:14 GMT", "version": "v2" } ]
2022-04-26
[ [ "Ommi", "Yassaman", "" ], [ "Yousefabadi", "Matin", "" ], [ "Faez", "Faezeh", "" ], [ "Sabour", "Amirmojtaba", "" ], [ "Baghshah", "Mahdieh Soleymani", "" ], [ "Rabiee", "Hamid R.", "" ] ]
Graph data structures are fundamental for studying connected entities. With an increase in the number of applications where data is represented as graphs, the problem of graph generation has recently become a hot topic. However, despite its significance, conditional graph generation that creates graphs with desired features is relatively less explored in previous studies. This paper addresses the problem of class-conditional graph generation that uses class labels as generation constraints by introducing the Class Conditioned Graph Generator (CCGG). We built CCGG by injecting the class information as an additional input into a graph generator model and including a classification loss in its total loss along with a gradient passing trick. Our experiments show that CCGG outperforms existing conditional graph generation methods on various datasets. It also manages to maintain the quality of the generated graphs in terms of distribution-based evaluation metrics.
2303.14792
Mehdi Delrobaei
Fateme Zare, Paniz Sedighi, Mehdi Delrobaei
A Wearable RFID-Based Navigation System for the Visually Impaired
6 pages, 6 figures, 3 tables
null
10.1109/ICRoM57054.2022.10025351
null
cs.HC cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent studies have focused on developing advanced assistive devices to help blind or visually impaired people. Navigation is challenging for this community; however, developing a simple yet reliable navigation system is still an unmet need. This study targets the navigation problem and proposes a wearable assistive system. We developed a smart glove and shoe set based on radio-frequency identification technology to assist visually impaired people with navigation and orientation in indoor environments. The system enables the user to find the directions through audio feedback. To evaluate the device's performance, we designed a simple experimental setup. The proposed system has a simple structure and can be personalized according to the user's requirements. The results identified that the platform is reliable, power efficient, and accurate enough for indoor navigation.
[ { "created": "Sun, 26 Mar 2023 18:30:57 GMT", "version": "v1" } ]
2023-03-28
[ [ "Zare", "Fateme", "" ], [ "Sedighi", "Paniz", "" ], [ "Delrobaei", "Mehdi", "" ] ]
Recent studies have focused on developing advanced assistive devices to help blind or visually impaired people. Navigation is challenging for this community; however, developing a simple yet reliable navigation system is still an unmet need. This study targets the navigation problem and proposes a wearable assistive system. We developed a smart glove and shoe set based on radio-frequency identification technology to assist visually impaired people with navigation and orientation in indoor environments. The system enables the user to find the directions through audio feedback. To evaluate the device's performance, we designed a simple experimental setup. The proposed system has a simple structure and can be personalized according to the user's requirements. The results identified that the platform is reliable, power efficient, and accurate enough for indoor navigation.
2405.11819
Chris C. Emezue
Chris Emezue
Beyond MLE: Investigating SEARNN for Low-Resourced Neural Machine Translation
In fulfillment of the 2024 practical coursework of IFT6132 course: https://www-labs.iro.umontreal.ca/~slacoste/teaching/ift6132/W24/
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Structured prediction tasks, like machine translation, involve learning functions that map structured inputs to structured outputs. Recurrent Neural Networks (RNNs) have historically been a popular choice for such tasks, including in natural language processing (NLP) applications. However, training RNNs using Maximum Likelihood Estimation (MLE) has its limitations, including exposure bias and a mismatch between training and testing metrics. SEARNN, based on the learning to search (L2S) framework, has been proposed as an alternative to MLE for RNN training. This project explored the potential of SEARNN to improve machine translation for low-resourced African languages -- a challenging task characterized by limited training data availability and the morphological complexity of the languages. Through experiments conducted on translation for English to Igbo, French to \ewe, and French to \ghomala directions, this project evaluated the efficacy of SEARNN over MLE in addressing the unique challenges posed by these languages. With an average BLEU score improvement of $5.4$\% over the MLE objective, we proved that SEARNN is indeed a viable algorithm to effectively train RNNs on machine translation for low-resourced languages.
[ { "created": "Mon, 20 May 2024 06:28:43 GMT", "version": "v1" } ]
2024-05-21
[ [ "Emezue", "Chris", "" ] ]
Structured prediction tasks, like machine translation, involve learning functions that map structured inputs to structured outputs. Recurrent Neural Networks (RNNs) have historically been a popular choice for such tasks, including in natural language processing (NLP) applications. However, training RNNs using Maximum Likelihood Estimation (MLE) has its limitations, including exposure bias and a mismatch between training and testing metrics. SEARNN, based on the learning to search (L2S) framework, has been proposed as an alternative to MLE for RNN training. This project explored the potential of SEARNN to improve machine translation for low-resourced African languages -- a challenging task characterized by limited training data availability and the morphological complexity of the languages. Through experiments conducted on translation for English to Igbo, French to \ewe, and French to \ghomala directions, this project evaluated the efficacy of SEARNN over MLE in addressing the unique challenges posed by these languages. With an average BLEU score improvement of $5.4$\% over the MLE objective, we proved that SEARNN is indeed a viable algorithm to effectively train RNNs on machine translation for low-resourced languages.
1803.06911
Sheng Jin
Sheng Jin
Unsupervised Semantic Deep Hashing
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, deep hashing methods have proved to be efficient since they employ convolutional neural networks to learn features and hashing codes simultaneously. However, these methods are mostly supervised. In real-world applications, annotating a large number of images is a time-consuming and laborious task. In this paper, we propose a novel unsupervised deep hashing method for large-scale image retrieval. Our method, namely unsupervised semantic deep hashing (\textbf{USDH}), uses semantic information preserved in the CNN feature layer to guide the training of the network. We enforce four criteria on hashing code learning based on the VGG-19 model: 1) preserving relevant information of the feature space in the hashing space; 2) minimizing the quantization loss between binary-like codes and hashing codes; 3) improving the usage of each bit in hashing codes by using maximum information entropy; and 4) invariance to image rotation. Extensive experiments on CIFAR-10 and NUSWIDE have demonstrated that \textbf{USDH} outperforms several state-of-the-art unsupervised hashing methods for image retrieval. We also conduct experiments on the Oxford 17 dataset for fine-grained classification to verify its efficiency for other computer vision tasks.
[ { "created": "Mon, 19 Mar 2018 13:42:23 GMT", "version": "v1" } ]
2018-03-20
[ [ "Jin", "Sheng", "" ] ]
In recent years, deep hashing methods have proved to be efficient since they employ convolutional neural networks to learn features and hashing codes simultaneously. However, these methods are mostly supervised. In real-world applications, annotating a large number of images is a time-consuming and laborious task. In this paper, we propose a novel unsupervised deep hashing method for large-scale image retrieval. Our method, namely unsupervised semantic deep hashing (\textbf{USDH}), uses semantic information preserved in the CNN feature layer to guide the training of the network. We enforce four criteria on hashing code learning based on the VGG-19 model: 1) preserving relevant information of the feature space in the hashing space; 2) minimizing the quantization loss between binary-like codes and hashing codes; 3) improving the usage of each bit in hashing codes by using maximum information entropy; and 4) invariance to image rotation. Extensive experiments on CIFAR-10 and NUSWIDE have demonstrated that \textbf{USDH} outperforms several state-of-the-art unsupervised hashing methods for image retrieval. We also conduct experiments on the Oxford 17 dataset for fine-grained classification to verify its efficiency for other computer vision tasks.
2205.12673
Prakhar Gupta
Prakhar Gupta, Cathy Jiao, Yi-Ting Yeh, Shikib Mehri, Maxine Eskenazi and Jeffrey P. Bigham
InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning
EMNLP 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leveraged with language models to induce zero-shot performance on unseen tasks. Instructions have been shown to enable good performance on unseen tasks and datasets in both large and small language models. Dialogue is an especially interesting area to explore instruction tuning because dialogue systems perform multiple kinds of tasks related to language (e.g., natural language understanding and generation, domain-specific interaction), yet instruction tuning has not been systematically explored for dialogue-related tasks. We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets. Next, we explore cross-task generalization ability on models tuned on InstructDial across diverse dialogue tasks. Our analysis reveals that InstructDial enables good zero-shot performance on unseen datasets and tasks such as dialogue evaluation and intent detection, and even better performance in a few-shot setting. To ensure that models adhere to instructions, we introduce novel meta-tasks. We establish benchmark zero-shot and few-shot performance of models trained using the proposed framework on multiple dialogue tasks.
[ { "created": "Wed, 25 May 2022 11:37:06 GMT", "version": "v1" }, { "created": "Wed, 26 Oct 2022 17:10:03 GMT", "version": "v2" } ]
2022-10-27
[ [ "Gupta", "Prakhar", "" ], [ "Jiao", "Cathy", "" ], [ "Yeh", "Yi-Ting", "" ], [ "Mehri", "Shikib", "" ], [ "Eskenazi", "Maxine", "" ], [ "Bigham", "Jeffrey P.", "" ] ]
Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leveraged with language models to induce zero-shot performance on unseen tasks. Instructions have been shown to enable good performance on unseen tasks and datasets in both large and small language models. Dialogue is an especially interesting area to explore instruction tuning because dialogue systems perform multiple kinds of tasks related to language (e.g., natural language understanding and generation, domain-specific interaction), yet instruction tuning has not been systematically explored for dialogue-related tasks. We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets. Next, we explore cross-task generalization ability on models tuned on InstructDial across diverse dialogue tasks. Our analysis reveals that InstructDial enables good zero-shot performance on unseen datasets and tasks such as dialogue evaluation and intent detection, and even better performance in a few-shot setting. To ensure that models adhere to instructions, we introduce novel meta-tasks. We establish benchmark zero-shot and few-shot performance of models trained using the proposed framework on multiple dialogue tasks.
2405.11554
David Braun
David Braun
DAC-JAX: A JAX Implementation of the Descript Audio Codec
5 pages, 3 figures, 2 tables
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
We present an open-source implementation of the Descript Audio Codec (DAC) using Google's JAX ecosystem of Flax, Optax, Orbax, AUX, and CLU. Our codebase enables the reuse of model weights from the original PyTorch DAC, and we confirm that the two implementations produce equivalent token sequences and decoded audio if given the same input. We provide a training and fine-tuning script which supports device parallelism, although we have only verified it using brief training runs with a small dataset. Even with limited GPU memory, the original DAC can compress or decompress a long audio file by processing it as a sequence of overlapping "chunks." We implement this feature in JAX and benchmark the performance on two types of GPUs. On a consumer-grade GPU, DAC-JAX outperforms the original DAC for compression and decompression at all chunk sizes. However, on a high-performance, cluster-based GPU, DAC-JAX outperforms the original DAC for small chunk sizes but performs worse for large chunks.
[ { "created": "Sun, 19 May 2024 14:07:31 GMT", "version": "v1" } ]
2024-05-21
[ [ "Braun", "David", "" ] ]
We present an open-source implementation of the Descript Audio Codec (DAC) using Google's JAX ecosystem of Flax, Optax, Orbax, AUX, and CLU. Our codebase enables the reuse of model weights from the original PyTorch DAC, and we confirm that the two implementations produce equivalent token sequences and decoded audio if given the same input. We provide a training and fine-tuning script which supports device parallelism, although we have only verified it using brief training runs with a small dataset. Even with limited GPU memory, the original DAC can compress or decompress a long audio file by processing it as a sequence of overlapping "chunks." We implement this feature in JAX and benchmark the performance on two types of GPUs. On a consumer-grade GPU, DAC-JAX outperforms the original DAC for compression and decompression at all chunk sizes. However, on a high-performance, cluster-based GPU, DAC-JAX outperforms the original DAC for small chunk sizes but performs worse for large chunks.
1803.01126
Fabio Calefato
Fabio Calefato, Giuseppe Iaffaldano, Filippo Lanubile, Bogdan Vasilescu
On Developers' Personality in Large-scale Distributed Projects: The Case of the Apache Ecosystem
In Proc. Int'l Conf. on Global Software Engineering (ICGSE'18), Gothenburg, Sweden, May 28-29, 2018
In Proc. Int'l Conf. on Global Software Engineering (ICGSE'18), Gothenburg, Sweden, May 28-29, 2018
10.1145/3196369.3196372
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale distributed projects are typically the result of collective efforts performed by multiple developers, each one having a different personality. The study of developers' personalities has the potential to explain their behavior in various contexts. For example, the propensity to trust others, a critical factor in the success of global software engineering, has been found to positively influence the outcome of code reviews in distributed projects. In this paper, we perform a quantitative analysis of developers' personality in open source software projects, intended as an extreme form of distributed projects in which no single organization controls the project. We mine ecosystem-level data from the code commits and email messages contributed by the developers working on the Apache Software Foundation (ASF) projects, as representative of large-scale distributed projects. We find that developers become over time more conscientious, agreeable, and neurotic. Moreover, personality traits do not vary with their role, membership, and extent of contribution to the projects. We also find evidence that more open and more agreeable developers are more likely to become project contributors.
[ { "created": "Sat, 3 Mar 2018 08:42:08 GMT", "version": "v1" }, { "created": "Wed, 14 Mar 2018 10:29:03 GMT", "version": "v2" }, { "created": "Fri, 25 May 2018 14:14:28 GMT", "version": "v3" }, { "created": "Mon, 24 Sep 2018 13:48:38 GMT", "version": "v4" } ]
2021-07-30
[ [ "Calefato", "Fabio", "" ], [ "Iaffaldano", "Giuseppe", "" ], [ "Lanubile", "Filippo", "" ], [ "Vasilescu", "Bogdan", "" ] ]
Large-scale distributed projects are typically the result of collective efforts performed by multiple developers, each one having a different personality. The study of developers' personalities has the potential to explain their behavior in various contexts. For example, the propensity to trust others, a critical factor in the success of global software engineering, has been found to positively influence the outcome of code reviews in distributed projects. In this paper, we perform a quantitative analysis of developers' personality in open source software projects, intended as an extreme form of distributed projects in which no single organization controls the project. We mine ecosystem-level data from the code commits and email messages contributed by the developers working on the Apache Software Foundation (ASF) projects, as representative of large-scale distributed projects. We find that developers become over time more conscientious, agreeable, and neurotic. Moreover, personality traits do not vary with their role, membership, and extent of contribution to the projects. We also find evidence that more open and more agreeable developers are more likely to become project contributors.
2103.00519
Andreas Holzinger
Andreas Holzinger, Anna Saranti, Heimo Mueller
KANDINSKYPatterns -- An experimental exploration environment for Pattern Analysis and Machine Intelligence
12 pages, submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), currently under review
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Machine intelligence is very successful at standard recognition tasks when high-quality training data are available. There is still a significant gap between machine-level pattern recognition and human-level concept learning. Humans can learn under uncertainty from only a few examples and generalize these concepts to solve new problems. The growing interest in explainable machine intelligence requires experimental environments and diagnostic tests to analyze weaknesses in existing approaches to drive progress in the field. In this paper, we discuss existing diagnostic tests and test data sets such as CLEVR, CLEVERER, CLOSURE, CURI, Bongard-LOGO, V-PROM, and present our own experimental environment: the KANDINSKYPatterns, named after the Russian artist Wassily Kandinsky, who made theoretical contributions to compositivity, i.e. that all perceptions consist of geometrically elementary individual components. This was experimentally proven by Hubel & Wiesel in the 1960s and became the basis for machine learning approaches such as the Neocognitron and the even later Deep Learning. While KANDINSKYPatterns have computationally controllable properties on the one hand, bringing ground truth, they are also easily distinguishable by human observers, i.e., controlled patterns can be described by both humans and algorithms, making them another important contribution to international research in machine intelligence.
[ { "created": "Sun, 28 Feb 2021 14:09:59 GMT", "version": "v1" } ]
2021-03-02
[ [ "Holzinger", "Andreas", "" ], [ "Saranti", "Anna", "" ], [ "Mueller", "Heimo", "" ] ]
Machine intelligence is very successful at standard recognition tasks when high-quality training data are available. There is still a significant gap between machine-level pattern recognition and human-level concept learning. Humans can learn under uncertainty from only a few examples and generalize these concepts to solve new problems. The growing interest in explainable machine intelligence requires experimental environments and diagnostic tests to analyze weaknesses in existing approaches to drive progress in the field. In this paper, we discuss existing diagnostic tests and test data sets such as CLEVR, CLEVERER, CLOSURE, CURI, Bongard-LOGO, V-PROM, and present our own experimental environment: the KANDINSKYPatterns, named after the Russian artist Wassily Kandinsky, who made theoretical contributions to compositivity, i.e. that all perceptions consist of geometrically elementary individual components. This was experimentally proven by Hubel & Wiesel in the 1960s and became the basis for machine learning approaches such as the Neocognitron and the even later Deep Learning. While KANDINSKYPatterns have computationally controllable properties on the one hand, bringing ground truth, they are also easily distinguishable by human observers, i.e., controlled patterns can be described by both humans and algorithms, making them another important contribution to international research in machine intelligence.
2310.15931
Weiye Zhang
Weiye Zhang, Wenshuai Yu, Licong Zhuang, Xiaoyi Zhang, Zhi Zeng and Jiasong Zhu
GO-FEAP: Global Optimal UAV Planner Using Frontier-Omission-Aware Exploration and Altitude-Stratified Planning
7 pages, 29 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous exploration is a fundamental problem for various applications of unmanned aerial vehicles (UAVs). Existing methods, however, are prone to local optima and limited to two-dimensional exploration. To address these challenges, this paper introduces GO-FEAP (Global Optimal UAV Planner Using Frontier-Omission-Aware Exploration and Altitude-Stratified Planning), aiming to achieve efficient and complete three-dimensional exploration. The Frontier-Omission-Aware Exploration module presented in this work takes into account multiple pivotal factors, encompassing frontier distance, nearby frontier count, frontier duration, and frontier categorization, for a comprehensive assessment of frontier importance. Furthermore, to tackle scenarios with substantial vertical variations, we introduce the Altitude-Stratified Planning strategy, which stratifies the three-dimensional space based on altitude, conducting global-local planning for each stratum. The objective of global planning is to identify the optimal frontier for exploration, followed by viewpoint selection and local path optimization based on frontier type, ultimately generating dynamically feasible three-dimensional exploration trajectories. We present extensive benchmark and real-world tests, in which our method completes the exploration tasks with unprecedented completeness compared to state-of-the-art approaches.
[ { "created": "Tue, 24 Oct 2023 15:28:06 GMT", "version": "v1" } ]
2023-10-25
[ [ "Zhang", "Weiye", "" ], [ "Yu", "Wenshuai", "" ], [ "Zhuang", "Licong", "" ], [ "Zhang", "Xiaoyi", "" ], [ "Zeng", "Zhi", "" ], [ "Zhu", "Jiasong", "" ] ]
Autonomous exploration is a fundamental problem for various applications of unmanned aerial vehicles (UAVs). Existing methods, however, are prone to static local optima and limited to two-dimensional exploration. To address these challenges, this paper introduces GO-FEAP (Global Optimal UAV Planner Using Frontier-Omission-Aware Exploration and Altitude-Stratified Planning), aiming to achieve efficient and complete three-dimensional exploration. The Frontier-Omission-Aware Exploration module presented in this work takes into account multiple pivotal factors, encompassing frontier distance, nearby frontier count, frontier duration, and frontier categorization, for a comprehensive assessment of frontier importance. Furthermore, to tackle scenarios with substantial vertical variations, we introduce the Altitude-Stratified Planning strategy, which stratifies the three-dimensional space based on altitude, conducting global-local planning for each stratum. The objective of global planning is to identify the optimal frontier for exploration, followed by viewpoint selection and local path optimization based on frontier type, ultimately generating dynamically feasible three-dimensional exploration trajectories. We present extensive benchmark and real-world tests, in which our method completes the exploration tasks with unprecedented completeness compared to state-of-the-art approaches.
1804.04272
Lars Ruthotto
Lars Ruthotto and Eldad Haber
Deep Neural Networks Motivated by Partial Differential Equations
9 pages, 4 figures, 1 table
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Partial differential equations (PDEs) are indispensable for modeling many physical phenomena and also commonly used for solving image processing tasks. In the latter area, PDE-based approaches interpret image data as discretizations of multivariate functions and the output of image processing algorithms as solutions to certain PDEs. Posing image processing problems in the infinite-dimensional setting provides powerful tools for their analysis and solution. Over the last few decades, the reinterpretation of classical image processing problems through the PDE lens has created multiple celebrated approaches that benefit a vast area of tasks including image segmentation, denoising, registration, and reconstruction. In this paper, we establish a new PDE-interpretation of a class of deep convolutional neural networks (CNN) that are commonly used to learn from speech, image, and video data. Our interpretation includes convolutional residual neural networks (ResNets), which are among the most promising approaches for tasks such as image classification, having improved the state-of-the-art performance in prestigious benchmark challenges. Despite their recent successes, deep ResNets still face some critical challenges associated with their design, immense computational costs and memory requirements, and lack of understanding of their reasoning. Guided by well-established PDE theory, we derive three new ResNet architectures that fall into two new classes: parabolic and hyperbolic CNNs. We show how PDE theory can provide new insights and algorithms for deep learning and demonstrate the competitiveness of three new CNN architectures using numerical experiments.
[ { "created": "Thu, 12 Apr 2018 01:40:55 GMT", "version": "v1" }, { "created": "Mon, 10 Dec 2018 21:51:10 GMT", "version": "v2" } ]
2018-12-12
[ [ "Ruthotto", "Lars", "" ], [ "Haber", "Eldad", "" ] ]
Partial differential equations (PDEs) are indispensable for modeling many physical phenomena and also commonly used for solving image processing tasks. In the latter area, PDE-based approaches interpret image data as discretizations of multivariate functions and the output of image processing algorithms as solutions to certain PDEs. Posing image processing problems in the infinite-dimensional setting provides powerful tools for their analysis and solution. Over the last few decades, the reinterpretation of classical image processing problems through the PDE lens has created multiple celebrated approaches that benefit a vast area of tasks including image segmentation, denoising, registration, and reconstruction. In this paper, we establish a new PDE-interpretation of a class of deep convolutional neural networks (CNN) that are commonly used to learn from speech, image, and video data. Our interpretation includes convolutional residual neural networks (ResNets), which are among the most promising approaches for tasks such as image classification, having improved the state-of-the-art performance in prestigious benchmark challenges. Despite their recent successes, deep ResNets still face some critical challenges associated with their design, immense computational costs and memory requirements, and lack of understanding of their reasoning. Guided by well-established PDE theory, we derive three new ResNet architectures that fall into two new classes: parabolic and hyperbolic CNNs. We show how PDE theory can provide new insights and algorithms for deep learning and demonstrate the competitiveness of three new CNN architectures using numerical experiments.
1608.05347
Feras Saad
Feras Saad, Vikash Mansinghka
Probabilistic Data Analysis with Probabilistic Programming
null
null
null
null
cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic techniques are central to data analysis, but different approaches can be difficult to apply, combine, and compare. This paper introduces composable generative population models (CGPMs), a computational abstraction that extends directed graphical models and can be used to describe and compose a broad class of probabilistic data analysis techniques. Examples include hierarchical Bayesian models, multivariate kernel methods, discriminative machine learning, clustering algorithms, dimensionality reduction, and arbitrary probabilistic programs. We also demonstrate the integration of CGPMs into BayesDB, a probabilistic programming platform that can express data analysis tasks using a modeling language and a structured query language. The practical value is illustrated in two ways. First, CGPMs are used in an analysis that identifies satellite data records which probably violate Kepler's Third Law, by composing causal probabilistic programs with non-parametric Bayes in under 50 lines of probabilistic code. Second, for several representative data analysis tasks, we report on lines of code and accuracy measurements of various CGPMs, plus comparisons with standard baseline solutions from Python and MATLAB libraries.
[ { "created": "Thu, 18 Aug 2016 17:47:53 GMT", "version": "v1" } ]
2016-08-19
[ [ "Saad", "Feras", "" ], [ "Mansinghka", "Vikash", "" ] ]
Probabilistic techniques are central to data analysis, but different approaches can be difficult to apply, combine, and compare. This paper introduces composable generative population models (CGPMs), a computational abstraction that extends directed graphical models and can be used to describe and compose a broad class of probabilistic data analysis techniques. Examples include hierarchical Bayesian models, multivariate kernel methods, discriminative machine learning, clustering algorithms, dimensionality reduction, and arbitrary probabilistic programs. We also demonstrate the integration of CGPMs into BayesDB, a probabilistic programming platform that can express data analysis tasks using a modeling language and a structured query language. The practical value is illustrated in two ways. First, CGPMs are used in an analysis that identifies satellite data records which probably violate Kepler's Third Law, by composing causal probabilistic programs with non-parametric Bayes in under 50 lines of probabilistic code. Second, for several representative data analysis tasks, we report on lines of code and accuracy measurements of various CGPMs, plus comparisons with standard baseline solutions from Python and MATLAB libraries.
1405.1655
Abuzer Yakaryilmaz
Abuzer Yakaryilmaz, A. C. Cem Say, and H. G\"okalp Demirci
Debates with small transparent quantum verifiers
18 pages. A revised and extended version. A preliminary version appeared in the proceedings of DLT2014
null
null
null
cs.CC cs.FL quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a model where two opposing provers debate over the membership status of a given string in a language, trying to convince a weak verifier whose coins are visible to all. We show that the incorporation of just two qubits to an otherwise classical constant-space verifier raises the class of debatable languages from at most $\mathsf{NP}$ to the collection of all Turing-decidable languages (recursive languages). When the verifier is further constrained to make the correct decision with probability 1, the corresponding class goes up from the regular languages up to at least $\mathsf{E}$. We also show that the quantum model outperforms its classical counterpart when restricted to run in polynomial time, and demonstrate some non-context-free languages which have such short debates with quantum verifiers.
[ { "created": "Wed, 7 May 2014 16:08:56 GMT", "version": "v1" }, { "created": "Thu, 9 Jul 2015 22:55:20 GMT", "version": "v2" } ]
2015-07-13
[ [ "Yakaryilmaz", "Abuzer", "" ], [ "Say", "A. C. Cem", "" ], [ "Demirci", "H. Gökalp", "" ] ]
We study a model where two opposing provers debate over the membership status of a given string in a language, trying to convince a weak verifier whose coins are visible to all. We show that the incorporation of just two qubits to an otherwise classical constant-space verifier raises the class of debatable languages from at most $\mathsf{NP}$ to the collection of all Turing-decidable languages (recursive languages). When the verifier is further constrained to make the correct decision with probability 1, the corresponding class goes up from the regular languages up to at least $\mathsf{E}$. We also show that the quantum model outperforms its classical counterpart when restricted to run in polynomial time, and demonstrate some non-context-free languages which have such short debates with quantum verifiers.
1504.00774
Hiro Ito
Hiro Ito (UEC, Japan and CREST, JST, Japan) and Takahiro Ueda (Komatsu Ltd., Japan)
How to solve the cake-cutting problem in sublinear time
15 pages, no figure
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we show algorithms for solving the cake-cutting problem in sublinear time. More specifically, we preassign (simple) fair portions to o(n) players in o(n) time, and minimize the damage to the rest of the players. All currently known algorithms require Omega(n) time, even when assigning a portion to just one player, and it is nontrivial to revise these algorithms to run in o(n) time since many of the remaining players, who have not been asked any queries, may not be satisfied with the remaining cake. To address this problem, we begin by providing a framework for solving the cake-cutting problem in sublinear time. Generally speaking, solving a problem in sublinear time requires the use of approximations. However, in our framework, we introduce the concept of "eps n-victims," which means that eps n players (victims) may not get fair portions, where 0 < eps <= 1 is an arbitrary constant. In our framework, an algorithm consists of the following two parts: In the first (Preassigning) part, it distributes fair portions to r < n players in o(n) time. In the second (Completion) part, it distributes fair portions to the remaining n-r players except for the eps n victims in poly(n) time. There are two variations on the r players in the first part: specifically, whether they can or cannot be designated. We then present algorithms in this framework. In particular, an O(r/eps)-time algorithm for r <= eps n/127 undesignated players with eps n victims, and an O~(r^2/eps)-time algorithm for r <= eps e^{sqrt(ln n)/7} designated players and eps <= 1/e with eps n victims are presented.
[ { "created": "Fri, 3 Apr 2015 08:24:31 GMT", "version": "v1" }, { "created": "Thu, 23 Jul 2015 10:48:52 GMT", "version": "v2" } ]
2015-07-24
[ [ "Ito", "Hiro", "", "UEC, Japan and CREST, JST, Japan" ], [ "Ueda", "Takahiro", "", "Komatsu\n Ltd., Japan" ] ]
In this paper, we show algorithms for solving the cake-cutting problem in sublinear time. More specifically, we preassign (simple) fair portions to o(n) players in o(n) time, and minimize the damage to the rest of the players. All currently known algorithms require Omega(n) time, even when assigning a portion to just one player, and it is nontrivial to revise these algorithms to run in o(n) time since many of the remaining players, who have not been asked any queries, may not be satisfied with the remaining cake. To address this problem, we begin by providing a framework for solving the cake-cutting problem in sublinear time. Generally speaking, solving a problem in sublinear time requires the use of approximations. However, in our framework, we introduce the concept of "eps n-victims," which means that eps n players (victims) may not get fair portions, where 0 < eps <= 1 is an arbitrary constant. In our framework, an algorithm consists of the following two parts: In the first (Preassigning) part, it distributes fair portions to r < n players in o(n) time. In the second (Completion) part, it distributes fair portions to the remaining n-r players except for the eps n victims in poly(n) time. There are two variations on the r players in the first part: specifically, whether they can or cannot be designated. We then present algorithms in this framework. In particular, an O(r/eps)-time algorithm for r <= eps n/127 undesignated players with eps n victims, and an O~(r^2/eps)-time algorithm for r <= eps e^{sqrt(ln n)/7} designated players and eps <= 1/e with eps n victims are presented.
1706.02828
Juan David Arcila Moreno
Juan David Arcila Moreno, Santiago Passos and Mauricio Toro
On-line Assembling Mitochondrial DNA from de novo transcriptome
3 pages
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper focuses on designing an efficient on-line algorithm to reconstruct a DNA sequence and search for the genes in it. We assume that the segments have no mutations or reading errors. The algorithm is based on a de Bruijn graph for reconstructing the DNA from the segments, taking k-mers large enough not to generate cycles. Once the sequence is ready, an implementation of the Boyer-Moore algorithm is used to search for the genes inside the sequence using start and stop codons. This solution gives high performance when all genes can be found, and there is no need to read all the segments to reach the maximum number of genes, but due to the on-line nature one cannot be sure about the final genes found.
[ { "created": "Fri, 9 Jun 2017 04:04:18 GMT", "version": "v1" } ]
2017-06-12
[ [ "Moreno", "Juan David Arcila", "" ], [ "Passos", "Santiago", "" ], [ "Toro", "Mauricio", "" ] ]
This paper focuses on designing an efficient on-line algorithm to reconstruct a DNA sequence and search for the genes in it. We assume that the segments have no mutations or reading errors. The algorithm is based on a de Bruijn graph for reconstructing the DNA from the segments, taking k-mers large enough not to generate cycles. Once the sequence is ready, an implementation of the Boyer-Moore algorithm is used to search for the genes inside the sequence using start and stop codons. This solution gives high performance when all genes can be found, and there is no need to read all the segments to reach the maximum number of genes, but due to the on-line nature one cannot be sure about the final genes found.
1806.04234
Ulle Endriss
Ulle Endriss
Lecture Notes on Fair Division
null
null
null
null
cs.AI cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fair division is the problem of dividing one or several goods amongst two or more agents in a way that satisfies a suitable fairness criterion. These Notes provide a succinct introduction to the field. We cover three main topics. First, we need to define what is to be understood by a "fair" allocation of goods to individuals. We present an overview of the most important fairness criteria (as well as the closely related criteria for economic efficiency) developed in the literature, together with a short discussion of their axiomatic foundations. Second, we give an introduction to cake-cutting procedures as an example of methods for fairly dividing a single divisible resource amongst a group of individuals. Third, we discuss the combinatorial optimisation problem of fairly allocating a set of indivisible goods to a group of agents, covering both centralised algorithms (similar to auctions) and a distributed approach based on negotiation. While the classical literature on fair division has largely developed within Economics, these Notes are specifically written for readers with a background in Computer Science or similar, and who may be (or may wish to be) engaged in research in Artificial Intelligence, Multiagent Systems, or Computational Social Choice. References for further reading, as well as a small number of exercises, are included. Notes prepared for a tutorial at the 11th European Agent Systems Summer School (EASSS-2009), Torino, Italy, 31 August and 1 September 2009. Updated for a tutorial at the COST-ADT Doctoral School on Computational Social Choice, Estoril, Portugal, 9--14 April 2010.
[ { "created": "Mon, 11 Jun 2018 20:41:23 GMT", "version": "v1" } ]
2018-06-13
[ [ "Endriss", "Ulle", "" ] ]
Fair division is the problem of dividing one or several goods amongst two or more agents in a way that satisfies a suitable fairness criterion. These Notes provide a succinct introduction to the field. We cover three main topics. First, we need to define what is to be understood by a "fair" allocation of goods to individuals. We present an overview of the most important fairness criteria (as well as the closely related criteria for economic efficiency) developed in the literature, together with a short discussion of their axiomatic foundations. Second, we give an introduction to cake-cutting procedures as an example of methods for fairly dividing a single divisible resource amongst a group of individuals. Third, we discuss the combinatorial optimisation problem of fairly allocating a set of indivisible goods to a group of agents, covering both centralised algorithms (similar to auctions) and a distributed approach based on negotiation. While the classical literature on fair division has largely developed within Economics, these Notes are specifically written for readers with a background in Computer Science or similar, and who may be (or may wish to be) engaged in research in Artificial Intelligence, Multiagent Systems, or Computational Social Choice. References for further reading, as well as a small number of exercises, are included. Notes prepared for a tutorial at the 11th European Agent Systems Summer School (EASSS-2009), Torino, Italy, 31 August and 1 September 2009. Updated for a tutorial at the COST-ADT Doctoral School on Computational Social Choice, Estoril, Portugal, 9--14 April 2010.
2404.15436
Simon Baeuerle
Alina Pleli, Simon Baeuerle, Michel Janus, Jonas Barth, Ralf Mikut, Hendrik P. A. Lensch
Iterative Cluster Harvesting for Wafer Map Defect Patterns
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised clustering of wafer map defect patterns is challenging because the appearance of certain defect patterns varies significantly. This includes changing shape, location, density, and rotation of the defect area on the wafer. We present a harvesting approach, which can cluster even challenging defect patterns of wafer maps well. Our approach makes use of a well-known, three-step procedure: feature extraction, dimension reduction, and clustering. The novelty in our approach lies in repeating dimensionality reduction and clustering iteratively while filtering out one cluster per iteration according to its silhouette score. This method leads to an improvement of clustering performance in general and is especially useful for difficult defect patterns. The low computational effort allows for a quick assessment of large datasets and can be used to support manual labeling efforts. We benchmark against related approaches from the literature and show improved results on a real-world industrial dataset.
[ { "created": "Tue, 23 Apr 2024 18:26:11 GMT", "version": "v1" } ]
2024-04-25
[ [ "Pleli", "Alina", "" ], [ "Baeuerle", "Simon", "" ], [ "Janus", "Michel", "" ], [ "Barth", "Jonas", "" ], [ "Mikut", "Ralf", "" ], [ "Lensch", "Hendrik P. A.", "" ] ]
Unsupervised clustering of wafer map defect patterns is challenging because the appearance of certain defect patterns varies significantly. This includes changing shape, location, density, and rotation of the defect area on the wafer. We present a harvesting approach, which can cluster even challenging defect patterns of wafer maps well. Our approach makes use of a well-known, three-step procedure: feature extraction, dimension reduction, and clustering. The novelty in our approach lies in repeating dimensionality reduction and clustering iteratively while filtering out one cluster per iteration according to its silhouette score. This method leads to an improvement of clustering performance in general and is especially useful for difficult defect patterns. The low computational effort allows for a quick assessment of large datasets and can be used to support manual labeling efforts. We benchmark against related approaches from the literature and show improved results on a real-world industrial dataset.
2207.00928
Jing Li
Qidan Zhu, Jing Li, Fei Yuan, Quan Gan
Continuous Sign Language Recognition via Temporal Super-Resolution Network
13 pages, 11 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep-learning-based spatial-temporal hierarchical continuous sign language recognition models require a large amount of computation, which limits their real-time application. To address this problem, this paper proposes a temporal super-resolution network (TSRNet). The data is reconstructed into a dense feature sequence to reduce the overall model computation while keeping the final recognition accuracy loss to a minimum. The continuous sign language recognition (CSLR) model via TSRNet mainly consists of three parts: frame-level feature extraction, time-series feature extraction, and TSRNet, where TSRNet is located between frame-level feature extraction and time-series feature extraction and mainly includes two branches: a detail descriptor and a rough descriptor. The sparse frame-level features are fused through the features obtained by the two designed branches into the reconstructed dense frame-level feature sequence, and the connectionist temporal classification (CTC) loss is used for training and optimization after the time-series feature extraction part. To better recover semantic-level information, the overall model is trained with the self-generating adversarial training method proposed in this paper to reduce the model error rate. The training method regards the TSRNet as the generator, and the frame-level processing part and the temporal processing part as the discriminator. In addition, in order to unify the evaluation criteria of model accuracy loss under different benchmarks, this paper proposes the word error rate deviation (WERD), defined as the error rate between the estimated word error rate (WER) obtained from the reconstructed frame-level feature sequence and the reference WER obtained from the complete original frame-level feature sequence. Experiments on two large-scale sign language datasets demonstrate the effectiveness of the proposed model.
[ { "created": "Sun, 3 Jul 2022 00:55:45 GMT", "version": "v1" } ]
2022-07-05
[ [ "Zhu", "Qidan", "" ], [ "Li", "Jing", "" ], [ "Yuan", "Fei", "" ], [ "Gan", "Quan", "" ] ]
Deep-learning-based spatial-temporal hierarchical continuous sign language recognition models require a large amount of computation, which limits their real-time application. To address this problem, this paper proposes a temporal super-resolution network (TSRNet). The data is reconstructed into a dense feature sequence to reduce the overall model computation while keeping the final recognition accuracy loss to a minimum. The continuous sign language recognition (CSLR) model via TSRNet mainly consists of three parts: frame-level feature extraction, time-series feature extraction, and TSRNet, where TSRNet is located between frame-level feature extraction and time-series feature extraction and mainly includes two branches: a detail descriptor and a rough descriptor. The sparse frame-level features are fused through the features obtained by the two designed branches into the reconstructed dense frame-level feature sequence, and the connectionist temporal classification (CTC) loss is used for training and optimization after the time-series feature extraction part. To better recover semantic-level information, the overall model is trained with the self-generating adversarial training method proposed in this paper to reduce the model error rate. The training method regards the TSRNet as the generator, and the frame-level processing part and the temporal processing part as the discriminator. In addition, in order to unify the evaluation criteria of model accuracy loss under different benchmarks, this paper proposes the word error rate deviation (WERD), defined as the error rate between the estimated word error rate (WER) obtained from the reconstructed frame-level feature sequence and the reference WER obtained from the complete original frame-level feature sequence. Experiments on two large-scale sign language datasets demonstrate the effectiveness of the proposed model.
2302.06174
Daniel Hienert
Ricardo Schiffers, Dagmar Kern, Daniel Hienert
Evaluation of Word Embeddings for the Social Sciences
null
In Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature 2022, edited by Stefania Degaetano, Anna Kazantseva, Nils Reiter, and Stan Szpakowicz, 1-6
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Word embeddings are an essential instrument in many NLP tasks. Most available resources are trained on general language from Web corpora or Wikipedia dumps. However, word embeddings for domain-specific language are rare, in particular for the social science domain. Therefore, in this work, we describe the creation and evaluation of word embedding models based on 37,604 open-access social science research papers. In the evaluation, we compare domain-specific and general language models for (i) language coverage, (ii) diversity, and (iii) semantic relationships. We found that the created domain-specific model, even with a relatively small vocabulary size, covers a large part of social science concepts, and its neighborhoods are diverse in comparison to more general models. Across all relation types, we found a more extensive coverage of semantic relationships.
[ { "created": "Mon, 13 Feb 2023 08:23:03 GMT", "version": "v1" } ]
2023-02-14
[ [ "Schiffers", "Ricardo", "" ], [ "Kern", "Dagmar", "" ], [ "Hienert", "Daniel", "" ] ]
Word embeddings are an essential instrument in many NLP tasks. Most available resources are trained on general language from Web corpora or Wikipedia dumps. However, word embeddings for domain-specific language are rare, in particular for the social science domain. Therefore, in this work, we describe the creation and evaluation of word embedding models based on 37,604 open-access social science research papers. In the evaluation, we compare domain-specific and general language models for (i) language coverage, (ii) diversity, and (iii) semantic relationships. We found that the created domain-specific model, even with a relatively small vocabulary size, covers a large part of social science concepts, and its neighborhoods are diverse in comparison to more general models. Across all relation types, we found a more extensive coverage of semantic relationships.
1706.05254
Margaretha Gansterer
Margaretha Gansterer and Richard F. Hartl
Collaborative vehicle routing: a survey
null
null
null
null
cs.MA cs.AI cs.CY math.OC physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In horizontal collaborations, carriers form coalitions in order to perform parts of their logistics operations jointly. By exchanging transportation requests among each other, they can operate more efficiently and in a more sustainable way. Collaborative vehicle routing has been extensively discussed in the literature. We identify three major streams of research: (i) centralized collaborative planning, (ii) decentralized planning without auctions, and (iii) auction-based decentralized planning. For each of them we give a structured overview on the state of knowledge and discuss future research directions.
[ { "created": "Tue, 13 Jun 2017 20:21:00 GMT", "version": "v1" } ]
2017-06-19
[ [ "Gansterer", "Margaretha", "" ], [ "Hartl", "Richard F.", "" ] ]
In horizontal collaborations, carriers form coalitions in order to perform parts of their logistics operations jointly. By exchanging transportation requests among each other, they can operate more efficiently and in a more sustainable way. Collaborative vehicle routing has been extensively discussed in the literature. We identify three major streams of research: (i) centralized collaborative planning, (ii) decentralized planning without auctions, and (iii) auction-based decentralized planning. For each of them we give a structured overview on the state of knowledge and discuss future research directions.
2108.11994
Melika Golestani
Melika Golestani, Seyedeh Zahra Razavi, and Heshaam Faili
A New Sentence Ordering Method Using BERT Pretrained Model
7 pages, 4 figures, 2020 11th International Conference on Information and Knowledge Technology (IKT)
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Building systems with the capability of natural language understanding (NLU) has been one of the oldest areas of AI. An essential component of NLU is detecting the logical succession of events contained in a text. The task of sentence ordering is proposed to learn the succession of events, with applications in AI tasks. The performance of previous works employing statistical methods is poor, while neural network-based approaches are in serious need of large corpora for model learning. In this paper, we propose a method for sentence ordering which does not need a training phase and, consequently, a large corpus for learning. To this end, we generate sentence embeddings using the BERT pre-trained model and measure sentence similarity using the cosine similarity score. We suggest this score as an indicator of the level of coherence of sequential events. We finally sort the sentences through brute-force search to maximize the overall similarities of the sequenced sentences. Our proposed method outperformed other baselines on ROCStories, a corpus of 5-sentence human-made stories. The method is especially more efficient than neural network-based methods when no huge corpus is available. Among the other advantages of this method are its interpretability and its lack of need for linguistic knowledge.
[ { "created": "Thu, 26 Aug 2021 18:47:15 GMT", "version": "v1" } ]
2021-08-30
[ [ "Golestani", "Melika", "" ], [ "Razavi", "Seyedeh Zahra", "" ], [ "Faili", "Heshaam", "" ] ]
Building systems with the capability of natural language understanding (NLU) has been one of the oldest areas of AI. An essential component of NLU is detecting the logical succession of events contained in a text. The task of sentence ordering is proposed to learn the succession of events, with applications in AI tasks. The performance of previous works employing statistical methods is poor, while neural network-based approaches are in serious need of large corpora for model learning. In this paper, we propose a method for sentence ordering which does not need a training phase and, consequently, a large corpus for learning. To this end, we generate sentence embeddings using the BERT pre-trained model and measure sentence similarity using the cosine similarity score. We suggest this score as an indicator of the level of coherence of sequential events. We finally sort the sentences through brute-force search to maximize the overall similarities of the sequenced sentences. Our proposed method outperformed other baselines on ROCStories, a corpus of 5-sentence human-made stories. The method is especially more efficient than neural network-based methods when no huge corpus is available. Among the other advantages of this method are its interpretability and its lack of need for linguistic knowledge.
2308.12490
Yu-Wen Chen
Yu-Wen Chen, Zhou Yu, Julia Hirschberg
MultiPA: A Multi-task Speech Pronunciation Assessment Model for Open Response Scenarios
INTERSPEECH 2024
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pronunciation assessment models designed for open response scenarios enable users to practice language skills in a manner similar to real-life communication. However, previous open-response pronunciation assessment models have predominantly focused on a single pronunciation task, such as sentence-level accuracy, rather than offering a comprehensive assessment in various aspects. We propose MultiPA, a Multi-task Pronunciation Assessment model that provides sentence-level accuracy, fluency, prosody, and word-level accuracy assessment for open responses. We examined the correlation between different pronunciation tasks and showed the benefits of multi-task learning. Our model reached state-of-the-art performance on existing in-domain data sets and effectively generalized to an out-of-domain dataset that we newly collected. The experimental results demonstrate the practical utility of our model in real-world applications.
[ { "created": "Thu, 24 Aug 2023 01:24:09 GMT", "version": "v1" }, { "created": "Wed, 5 Jun 2024 02:16:42 GMT", "version": "v2" } ]
2024-06-06
[ [ "Chen", "Yu-Wen", "" ], [ "Yu", "Zhou", "" ], [ "Hirschberg", "Julia", "" ] ]
Pronunciation assessment models designed for open response scenarios enable users to practice language skills in a manner similar to real-life communication. However, previous open-response pronunciation assessment models have predominantly focused on a single pronunciation task, such as sentence-level accuracy, rather than offering a comprehensive assessment in various aspects. We propose MultiPA, a Multi-task Pronunciation Assessment model that provides sentence-level accuracy, fluency, prosody, and word-level accuracy assessment for open responses. We examined the correlation between different pronunciation tasks and showed the benefits of multi-task learning. Our model reached state-of-the-art performance on existing in-domain data sets and effectively generalized to an out-of-domain dataset that we newly collected. The experimental results demonstrate the practical utility of our model in real-world applications.
2406.19934
Chuanqi Cheng
Chuanqi Cheng, Jian Guan, Wei Wu, Rui Yan
From the Least to the Most: Building a Plug-and-Play Visual Reasoner via Data Synthesis
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
We explore multi-step reasoning in vision-language models (VLMs). The problem is challenging, as reasoning data consisting of multiple steps of visual and language processing are barely available. To overcome the challenge, we first introduce a least-to-most visual reasoning paradigm, which interleaves steps of decomposing a question into sub-questions and invoking external tools for resolving sub-questions. Based on the paradigm, we further propose a novel data synthesis approach that can automatically create questions and multi-step reasoning paths for an image in a bottom-up manner. Our approach divides the complex synthesis task into a few simple sub-tasks, and (almost entirely) relies on open-sourced models to accomplish the sub-tasks. Therefore, the entire synthesis process is reproducible and cost-efficient, and the quality of the synthesized data is guaranteed. With the approach, we construct $50$k visual reasoning examples. Then, we develop a visual reasoner through supervised fine-tuning, which is capable of generally enhancing the reasoning abilities of a wide range of existing VLMs in a plug-and-play fashion. Extensive experiments indicate that the visual reasoner can consistently and significantly improve four VLMs on four VQA benchmarks. Our code and dataset are available at https://github.com/steven-ccq/VisualReasoner.
[ { "created": "Fri, 28 Jun 2024 14:04:10 GMT", "version": "v1" } ]
2024-07-01
[ [ "Cheng", "Chuanqi", "" ], [ "Guan", "Jian", "" ], [ "Wu", "Wei", "" ], [ "Yan", "Rui", "" ] ]
We explore multi-step reasoning in vision-language models (VLMs). The problem is challenging, as reasoning data consisting of multiple steps of visual and language processing are barely available. To overcome the challenge, we first introduce a least-to-most visual reasoning paradigm, which interleaves steps of decomposing a question into sub-questions and invoking external tools for resolving sub-questions. Based on the paradigm, we further propose a novel data synthesis approach that can automatically create questions and multi-step reasoning paths for an image in a bottom-up manner. Our approach divides the complex synthesis task into a few simple sub-tasks, and (almost entirely) relies on open-sourced models to accomplish the sub-tasks. Therefore, the entire synthesis process is reproducible and cost-efficient, and the quality of the synthesized data is guaranteed. With the approach, we construct $50$k visual reasoning examples. Then, we develop a visual reasoner through supervised fine-tuning, which is capable of generally enhancing the reasoning abilities of a wide range of existing VLMs in a plug-and-play fashion. Extensive experiments indicate that the visual reasoner can consistently and significantly improve four VLMs on four VQA benchmarks. Our code and dataset are available at https://github.com/steven-ccq/VisualReasoner.
2311.13008
Tobin South
Alex Berke, Tobin South, Robert Mahari, Kent Larson, Alex Pentland
zkTax: A pragmatic way to support zero-knowledge tax disclosures
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-sa/4.0/
Tax returns contain key financial information of interest to third parties: public officials are asked to share financial data for transparency, companies seek to assess the financial status of business partners, and individuals need to prove their income to landlords or to receive benefits. Tax returns also contain sensitive data such that sharing them in their entirety undermines privacy. We introduce a zero-knowledge tax disclosure system (zkTax) that allows individuals and organizations to make provable claims about select information in their tax returns without revealing additional information, which can be independently verified by third parties. The system consists of three distinct services that can be distributed: a tax authority provides tax documents signed with a public key; a Redact & Prove Service enables users to produce a redacted version of the tax documents with a zero-knowledge proof attesting the provenance of the redacted data; a Verify Service enables anyone to verify the proof. We implement a prototype with a user interface, compatible with U.S. tax forms, and demonstrate how this design could be implemented with minimal changes to existing tax infrastructure. Our system is designed to be extensible to other contexts and jurisdictions. This work provides a practical example of how distributed tools leveraging cryptography can enhance existing government or financial infrastructures, providing immediate transparency alongside privacy without system overhauls.
[ { "created": "Tue, 21 Nov 2023 21:34:10 GMT", "version": "v1" }, { "created": "Sun, 24 Mar 2024 13:54:08 GMT", "version": "v2" } ]
2024-03-26
[ [ "Berke", "Alex", "" ], [ "South", "Tobin", "" ], [ "Mahari", "Robert", "" ], [ "Larson", "Kent", "" ], [ "Pentland", "Alex", "" ] ]
Tax returns contain key financial information of interest to third parties: public officials are asked to share financial data for transparency, companies seek to assess the financial status of business partners, and individuals need to prove their income to landlords or to receive benefits. Tax returns also contain sensitive data such that sharing them in their entirety undermines privacy. We introduce a zero-knowledge tax disclosure system (zkTax) that allows individuals and organizations to make provable claims about select information in their tax returns without revealing additional information, which can be independently verified by third parties. The system consists of three distinct services that can be distributed: a tax authority provides tax documents signed with a public key; a Redact & Prove Service enables users to produce a redacted version of the tax documents with a zero-knowledge proof attesting the provenance of the redacted data; a Verify Service enables anyone to verify the proof. We implement a prototype with a user interface, compatible with U.S. tax forms, and demonstrate how this design could be implemented with minimal changes to existing tax infrastructure. Our system is designed to be extensible to other contexts and jurisdictions. This work provides a practical example of how distributed tools leveraging cryptography can enhance existing government or financial infrastructures, providing immediate transparency alongside privacy without system overhauls.
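The three-service flow this abstract outlines (a tax authority signs the document, a Redact & Prove step discloses only selected fields, a Verify step checks the disclosure) can be sketched with salted hash commitments. This is an assumption-laden simplification: commitments plus an HMAC stand in for the real zero-knowledge proofs and public-key signatures, and every name below (`issue`, `redact`, `verify`, `AUTHORITY_KEY`) is illustrative, not from the zkTax system:

```python
# Simplified zkTax-style flow. The authority commits to every field with a
# salted hash and "signs" a digest over the commitments (HMAC stands in for
# a public-key signature, so this verifier must share the authority key --
# a real system would use signatures and ZK proofs instead).
import hashlib
import hmac
import os

AUTHORITY_KEY = b"demo-authority-key"  # hypothetical key, for illustration only

def commit(field, value, salt):
    return hashlib.sha256(salt + field.encode() + str(value).encode()).hexdigest()

def issue(document):
    """Tax authority: commit to every field and sign the overall digest."""
    salts = {f: os.urandom(16) for f in document}
    commitments = {f: commit(f, v, salts[f]) for f, v in document.items()}
    digest = hashlib.sha256("".join(sorted(commitments.values())).encode()).digest()
    signature = hmac.new(AUTHORITY_KEY, digest, hashlib.sha256).hexdigest()
    return commitments, salts, signature

def redact(document, salts, reveal):
    """User: disclose only the chosen fields together with their salts."""
    return {f: (document[f], salts[f]) for f in reveal}

def verify(revealed, commitments, signature):
    """Verifier: check each revealed field and the authority signature."""
    for f, (value, salt) in revealed.items():
        if commit(f, value, salt) != commitments[f]:
            return False
    digest = hashlib.sha256("".join(sorted(commitments.values())).encode()).digest()
    expected = hmac.new(AUTHORITY_KEY, digest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The hidden fields never leave the user's machine: only their commitments do, and the salts keep low-entropy values (like a yes/no flag) from being brute-forced.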
cs/0502003
Sandor P. Fekete
Alexander Kroeller, Dennis Pfisterer, Carsten Buschmann, Sandor P. Fekete, and Stefan Fischer
Shawn: A new approach to simulating wireless sensor networks
10 pages, 2 figures, 2 tables, Latex, to appear in Design, Analysis, and Simulation of Distributed Systems 2005
null
null
null
cs.DC cs.PF
null
We consider the simulation of wireless sensor networks (WSN) using a new approach. We present Shawn, an open-source discrete-event simulator that has considerable differences from all other existing simulators. Shawn is very powerful in simulating large-scale networks from an abstract point of view. It is, to the best of our knowledge, the first simulator to support generic high-level algorithms as well as distributed protocols on exactly the same underlying networks.
[ { "created": "Tue, 1 Feb 2005 12:23:26 GMT", "version": "v1" } ]
2007-05-23
[ [ "Kroeller", "Alexander", "" ], [ "Pfisterer", "Dennis", "" ], [ "Buschmann", "Carsten", "" ], [ "Fekete", "Sandor P.", "" ], [ "Fischer", "Stefan", "" ] ]
We consider the simulation of wireless sensor networks (WSN) using a new approach. We present Shawn, an open-source discrete-event simulator that has considerable differences from all other existing simulators. Shawn is very powerful in simulating large-scale networks from an abstract point of view. It is, to the best of our knowledge, the first simulator to support generic high-level algorithms as well as distributed protocols on exactly the same underlying networks.
1402.7216
Jana Paz\'urikov\'a
Jana Paz\'urikov\'a
Large-Scale Molecular Dynamics Simulations for Highly Parallel Infrastructures
thesis proposal
null
null
null
cs.DC cs.CE physics.comp-ph
http://creativecommons.org/licenses/by-nc-sa/3.0/
Computational chemistry allows researchers to experiment in silico: by running computer simulations of biological or chemical processes of interest. Molecular dynamics with a molecular mechanics model of interactions simulates the N-body problem of atoms: it computes the movements of atoms according to Newtonian physics and empirical descriptions of atomic electrostatic interactions. These simulations require high performance computing resources, as evaluations within each step are computationally demanding and billions of steps are needed to reach interesting timescales. Current methods decompose the spatial domain of the problem and calculate on parallel/distributed infrastructures. Even the methods with the highest strong scaling hit the limit at half a million cores: they are not able to cut the time to result if provided with more processors. At the dawn of exascale computing with massively parallel computational resources, we want to increase the level of parallelism by incorporating parallel-in-time computation into molecular dynamics simulations. Calculation of results at several successive time points simultaneously without a priori knowledge has been examined with no major success. We will study and implement novel combinations of methods that, according to our theoretical analyses, should achieve promising speed-up compared to sequential-in-time calculation.
[ { "created": "Fri, 28 Feb 2014 11:59:08 GMT", "version": "v1" } ]
2014-03-03
[ [ "Pazúriková", "Jana", "" ] ]
Computational chemistry allows researchers to experiment in silico: by running computer simulations of biological or chemical processes of interest. Molecular dynamics with a molecular mechanics model of interactions simulates the N-body problem of atoms: it computes the movements of atoms according to Newtonian physics and empirical descriptions of atomic electrostatic interactions. These simulations require high performance computing resources, as evaluations within each step are computationally demanding and billions of steps are needed to reach interesting timescales. Current methods decompose the spatial domain of the problem and calculate on parallel/distributed infrastructures. Even the methods with the highest strong scaling hit the limit at half a million cores: they are not able to cut the time to result if provided with more processors. At the dawn of exascale computing with massively parallel computational resources, we want to increase the level of parallelism by incorporating parallel-in-time computation into molecular dynamics simulations. Calculation of results at several successive time points simultaneously without a priori knowledge has been examined with no major success. We will study and implement novel combinations of methods that, according to our theoretical analyses, should achieve promising speed-up compared to sequential-in-time calculation.
0704.3646
Joseph Y. Halpern
Ittai Abraham, Danny Dolev, and Joseph Y. Halpern
Lower Bounds on Implementing Robust and Resilient Mediators
null
null
null
null
cs.GT cs.CR cs.DC
null
We consider games that have (k,t)-robust equilibria when played with a mediator, where an equilibrium is (k,t)-robust if it tolerates deviations by coalitions of size up to $k$ and deviations by up to $t$ players with unknown utilities. We prove lower bounds that match upper bounds on the ability to implement such mediators using cheap talk (that is, just allowing communication among the players). The bounds depend on (a) the relationship between $k$, $t$, and $n$, the total number of players in the system; (b) whether players know the exact utilities of other players; (c) whether there are broadcast channels or just point-to-point channels; (d) whether cryptography is available; and (e) whether the game has a $(k+t)$-punishment strategy; that is, a strategy that, if used by all but at most $k+t$ players, guarantees that every player gets a worse outcome than they do with the equilibrium strategy.
[ { "created": "Fri, 27 Apr 2007 01:32:15 GMT", "version": "v1" }, { "created": "Thu, 6 Dec 2007 22:17:40 GMT", "version": "v2" } ]
2007-12-07
[ [ "Abraham", "Ittai", "" ], [ "Dolev", "Danny", "" ], [ "Halpern", "Joseph Y.", "" ] ]
We consider games that have (k,t)-robust equilibria when played with a mediator, where an equilibrium is (k,t)-robust if it tolerates deviations by coalitions of size up to $k$ and deviations by up to $t$ players with unknown utilities. We prove lower bounds that match upper bounds on the ability to implement such mediators using cheap talk (that is, just allowing communication among the players). The bounds depend on (a) the relationship between $k$, $t$, and $n$, the total number of players in the system; (b) whether players know the exact utilities of other players; (c) whether there are broadcast channels or just point-to-point channels; (d) whether cryptography is available; and (e) whether the game has a $(k+t)$-punishment strategy; that is, a strategy that, if used by all but at most $k+t$ players, guarantees that every player gets a worse outcome than they do with the equilibrium strategy.
2109.02613
Deepak Sridhar
Deepak Sridhar, Niamul Quader, Srikanth Muralidharan, Yaoxin Li, Peng Dai, Juwei Lu
Class Semantics-based Attention for Action Detection
Accepted to ICCV 2021
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Action localization networks are often structured as a feature encoder sub-network and a localization sub-network, where the feature encoder learns to transform an input video to features that are useful for the localization sub-network to generate reliable action proposals. While some of the encoded features may be more useful for generating action proposals, prior action localization approaches do not include any attention mechanism that enables the localization sub-network to attend more to the more important features. In this paper, we propose a novel attention mechanism, the Class Semantics-based Attention (CSA), that learns from the temporal distribution of semantics of action classes present in an input video to find the importance scores of the encoded features, which are used to provide attention to the more useful encoded features. We demonstrate on two popular action detection datasets that incorporating our novel attention mechanism provides considerable performance gains on competitive action detection models (e.g., around 6.2% improvement over BMN action detection baseline to obtain 47.5% mAP on the THUMOS-14 dataset), and a new state-of-the-art of 36.25% mAP on the ActivityNet v1.3 dataset. Further, the CSA localization model family which includes BMN-CSA, was part of the second-placed submission at the 2021 ActivityNet action localization challenge. Our attention mechanism outperforms prior self-attention modules such as the squeeze-and-excitation in action detection task. We also observe that our attention mechanism is complementary to such self-attention modules in that performance improvements are seen when both are used together.
[ { "created": "Mon, 6 Sep 2021 17:22:46 GMT", "version": "v1" } ]
2021-09-07
[ [ "Sridhar", "Deepak", "" ], [ "Quader", "Niamul", "" ], [ "Muralidharan", "Srikanth", "" ], [ "Li", "Yaoxin", "" ], [ "Dai", "Peng", "" ], [ "Lu", "Juwei", "" ] ]
Action localization networks are often structured as a feature encoder sub-network and a localization sub-network, where the feature encoder learns to transform an input video to features that are useful for the localization sub-network to generate reliable action proposals. While some of the encoded features may be more useful for generating action proposals, prior action localization approaches do not include any attention mechanism that enables the localization sub-network to attend more to the more important features. In this paper, we propose a novel attention mechanism, the Class Semantics-based Attention (CSA), that learns from the temporal distribution of semantics of action classes present in an input video to find the importance scores of the encoded features, which are used to provide attention to the more useful encoded features. We demonstrate on two popular action detection datasets that incorporating our novel attention mechanism provides considerable performance gains on competitive action detection models (e.g., around 6.2% improvement over BMN action detection baseline to obtain 47.5% mAP on the THUMOS-14 dataset), and a new state-of-the-art of 36.25% mAP on the ActivityNet v1.3 dataset. Further, the CSA localization model family which includes BMN-CSA, was part of the second-placed submission at the 2021 ActivityNet action localization challenge. Our attention mechanism outperforms prior self-attention modules such as the squeeze-and-excitation in action detection task. We also observe that our attention mechanism is complementary to such self-attention modules in that performance improvements are seen when both are used together.
1904.10215
Mordechai Shalom
Yuval Emek, Shay Kutten, Mordechai Shalom, Shmuel Zaks
Multicast Communications in Tree Networks with Heterogeneous Capacity Constraints
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A widely studied problem in communication networks is that of finding the maximum number of communication requests that can be scheduled concurrently, subject to node and/or link capacity constraints. In this paper, we consider the problem of finding the largest number of multicast communication requests that can be serviced simultaneously by a network of tree topology, subject to heterogeneous capacity constraints. This problem generalizes the following two problems studied in the literature: a) the problem of finding a largest induced $k$-colorable subgraph of a chordal graph, b) the maximum multi-commodity flow problem in tree networks. The problem is already known to be NP-hard and to admit a $c$-approximation ($c \approx 1.58$) in the case of homogeneous capacity constraints. We first show that the problem is much harder to approximate in the heterogeneous case. We then use a generalization of a classical algorithm to obtain an $M$-approximation where $M$ is the maximum number of leaves of the subtrees representing the multicast communications. Surprisingly, the same algorithm, though in various disguises, is used in the literature at least four times to solve related problems (though the analysis is different). The special case of the problem where instances are restricted to unicast communications in a star topology network is known to be polynomial-time solvable. We extend this result and show that the problem can be solved in polynomial time for a set of paths in a tree that share a common vertex.
[ { "created": "Tue, 23 Apr 2019 09:13:36 GMT", "version": "v1" }, { "created": "Fri, 22 May 2020 14:27:42 GMT", "version": "v2" } ]
2020-05-25
[ [ "Emek", "Yuval", "" ], [ "Kutten", "Shay", "" ], [ "Shalom", "Mordechai", "" ], [ "Zaks", "Shmuel", "" ] ]
A widely studied problem in communication networks is that of finding the maximum number of communication requests that can be scheduled concurrently, subject to node and/or link capacity constraints. In this paper, we consider the problem of finding the largest number of multicast communication requests that can be serviced simultaneously by a network of tree topology, subject to heterogeneous capacity constraints. This problem generalizes the following two problems studied in the literature: a) the problem of finding a largest induced $k$-colorable subgraph of a chordal graph, b) the maximum multi-commodity flow problem in tree networks. The problem is already known to be NP-hard and to admit a $c$-approximation ($c \approx 1.58$) in the case of homogeneous capacity constraints. We first show that the problem is much harder to approximate in the heterogeneous case. We then use a generalization of a classical algorithm to obtain an $M$-approximation where $M$ is the maximum number of leaves of the subtrees representing the multicast communications. Surprisingly, the same algorithm, though in various disguises, is used in the literature at least four times to solve related problems (though the analysis is different). The special case of the problem where instances are restricted to unicast communications in a star topology network is known to be polynomial-time solvable. We extend this result and show that the problem can be solved in polynomial time for a set of paths in a tree that share a common vertex.
1608.04738
Shenjian Zhao
Shenjian Zhao, Zhihua Zhang
An Efficient Character-Level Neural Machine Translation
null
null
null
null
cs.CL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural machine translation aims at building a single large neural network that can be trained to maximize translation performance. The encoder-decoder architecture with an attention mechanism achieves a translation performance comparable to the existing state-of-the-art phrase-based systems on the task of English-to-French translation. However, the use of a large vocabulary becomes the bottleneck in both training and improving the performance. In this paper, we propose an efficient architecture to train a deep character-level neural machine translation model by introducing a decimator and an interpolator. The decimator is used to sample the source sequence before encoding while the interpolator is used to resample after decoding. Such a deep model has two major advantages. It avoids the large vocabulary issue radically; at the same time, it is much faster and more memory-efficient in training than conventional character-based models. More interestingly, our model is able to translate misspelled words as human beings do.
[ { "created": "Tue, 16 Aug 2016 07:44:02 GMT", "version": "v1" }, { "created": "Fri, 19 Aug 2016 06:49:32 GMT", "version": "v2" } ]
2016-08-22
[ [ "Zhao", "Shenjian", "" ], [ "Zhang", "Zhihua", "" ] ]
Neural machine translation aims at building a single large neural network that can be trained to maximize translation performance. The encoder-decoder architecture with an attention mechanism achieves a translation performance comparable to the existing state-of-the-art phrase-based systems on the task of English-to-French translation. However, the use of a large vocabulary becomes the bottleneck in both training and improving the performance. In this paper, we propose an efficient architecture to train a deep character-level neural machine translation model by introducing a decimator and an interpolator. The decimator is used to sample the source sequence before encoding while the interpolator is used to resample after decoding. Such a deep model has two major advantages. It avoids the large vocabulary issue radically; at the same time, it is much faster and more memory-efficient in training than conventional character-based models. More interestingly, our model is able to translate misspelled words as human beings do.
1912.01798
Charlie Hou
Charlie Hou and Mingxun Zhou and Yan Ji and Phil Daian and Florian Tramer and Giulia Fanti and Ari Juels
SquirRL: Automating Attack Analysis on Blockchain Incentive Mechanisms with Deep Reinforcement Learning
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Incentive mechanisms are central to the functionality of permissionless blockchains: they incentivize participants to run and secure the underlying consensus protocol. Designing incentive-compatible incentive mechanisms is notoriously challenging, however. As a result, most public blockchains today use incentive mechanisms whose security properties are poorly understood and largely untested. In this work, we propose SquirRL, a framework for using deep reinforcement learning to analyze attacks on blockchain incentive mechanisms. We demonstrate SquirRL's power by first recovering known attacks: (1) the optimal selfish mining attack in Bitcoin [52], and (2) the Nash equilibrium in block withholding attacks [16]. We also use SquirRL to obtain several novel empirical results. First, we discover a counterintuitive flaw in the widely used rushing adversary model when applied to multi-agent Markov games with incomplete information. Second, we demonstrate that the optimal selfish mining strategy identified in [52] is actually not a Nash equilibrium in the multi-agent selfish mining setting. In fact, our results suggest (but do not prove) that when more than two competing agents engage in selfish mining, there is no profitable Nash equilibrium. This is consistent with the lack of observed selfish mining in the wild. Third, we find a novel attack on a simplified version of Ethereum's finalization mechanism, Casper the Friendly Finality Gadget (FFG) that allows a strategic agent to amplify her rewards by up to 30%. Notably, [10] show that honest voting is a Nash equilibrium in Casper FFG: our attack shows that when Casper FFG is composed with selfish mining, this is no longer the case. Altogether, our experiments demonstrate SquirRL's flexibility and promise as a framework for studying attack settings that have thus far eluded theoretical and empirical understanding.
[ { "created": "Wed, 4 Dec 2019 04:48:21 GMT", "version": "v1" }, { "created": "Tue, 4 Aug 2020 19:02:57 GMT", "version": "v2" } ]
2020-08-06
[ [ "Hou", "Charlie", "" ], [ "Zhou", "Mingxun", "" ], [ "Ji", "Yan", "" ], [ "Daian", "Phil", "" ], [ "Tramer", "Florian", "" ], [ "Fanti", "Giulia", "" ], [ "Juels", "Ari", "" ] ]
Incentive mechanisms are central to the functionality of permissionless blockchains: they incentivize participants to run and secure the underlying consensus protocol. Designing incentive-compatible incentive mechanisms is notoriously challenging, however. As a result, most public blockchains today use incentive mechanisms whose security properties are poorly understood and largely untested. In this work, we propose SquirRL, a framework for using deep reinforcement learning to analyze attacks on blockchain incentive mechanisms. We demonstrate SquirRL's power by first recovering known attacks: (1) the optimal selfish mining attack in Bitcoin [52], and (2) the Nash equilibrium in block withholding attacks [16]. We also use SquirRL to obtain several novel empirical results. First, we discover a counterintuitive flaw in the widely used rushing adversary model when applied to multi-agent Markov games with incomplete information. Second, we demonstrate that the optimal selfish mining strategy identified in [52] is actually not a Nash equilibrium in the multi-agent selfish mining setting. In fact, our results suggest (but do not prove) that when more than two competing agents engage in selfish mining, there is no profitable Nash equilibrium. This is consistent with the lack of observed selfish mining in the wild. Third, we find a novel attack on a simplified version of Ethereum's finalization mechanism, Casper the Friendly Finality Gadget (FFG) that allows a strategic agent to amplify her rewards by up to 30%. Notably, [10] show that honest voting is a Nash equilibrium in Casper FFG: our attack shows that when Casper FFG is composed with selfish mining, this is no longer the case. Altogether, our experiments demonstrate SquirRL's flexibility and promise as a framework for studying attack settings that have thus far eluded theoretical and empirical understanding.
1502.00724
Amal Saha
Amal Saha, Sugata Sanyal
Review of Considerations for Mobile Device based Secure Access to Financial Services and Risk Handling Strategy for CIOs, CISOs and CTOs
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The information technology and security stakeholders like CIOs, CISOs and CTOs in financial services organizations are often asked to identify the risks of the mobile computing channel for the financial services that they support. They are also asked to come up with approaches for handling risks, define risk acceptance levels and mitigate them. This requires them to articulate a strategy for supporting a huge variety of mobile devices from various vendors with different operating systems and hardware platforms and at the same time stay within the accepted risk level. These articulations should be captured in the information security policy document or other suitable documents of financial services organizations like banks, payment service providers, etc. While risks and mitigation approaches are available from multiple sources, the senior stakeholders may find it challenging to articulate the issues in a comprehensive manner for sharing with business owners and other technology stakeholders. This paper reviews the current research that addresses the issues mentioned above and articulates a strategy that the senior stakeholders may use in their organization. It is assumed that this type of comprehensive strategy guide for senior stakeholders is not readily available and that CIOs, CISOs and CTOs would find this paper very useful.
[ { "created": "Tue, 3 Feb 2015 03:44:32 GMT", "version": "v1" } ]
2015-02-04
[ [ "Saha", "Amal", "" ], [ "Sanyal", "Sugata", "" ] ]
The information technology and security stakeholders like CIOs, CISOs and CTOs in financial services organizations are often asked to identify the risks of the mobile computing channel for the financial services that they support. They are also asked to come up with approaches for handling risks, define risk acceptance levels and mitigate them. This requires them to articulate a strategy for supporting a huge variety of mobile devices from various vendors with different operating systems and hardware platforms and at the same time stay within the accepted risk level. These articulations should be captured in the information security policy document or other suitable documents of financial services organizations like banks, payment service providers, etc. While risks and mitigation approaches are available from multiple sources, the senior stakeholders may find it challenging to articulate the issues in a comprehensive manner for sharing with business owners and other technology stakeholders. This paper reviews the current research that addresses the issues mentioned above and articulates a strategy that the senior stakeholders may use in their organization. It is assumed that this type of comprehensive strategy guide for senior stakeholders is not readily available and that CIOs, CISOs and CTOs would find this paper very useful.
cs/0204009
Thomas Eiter
Thomas Eiter, Georg Gottlob, and Kazuhisa Makino
New Results on Monotone Dualization and Generating Hypergraph Transversals
Removed some minor errors. A shorter version of this paper appears in: Proceedings of the 34th ACM Symposium on Theory of Computing (STOC-02), May 19-21, 2002, Montreal, Quebec, Canada
null
null
INFSYS RR-1843-02-05, Institut f. Informationssysteme, TU Wien, April 2002
cs.DS cs.CC
null
We consider the problem of dualizing a monotone CNF (equivalently, computing all minimal transversals of a hypergraph), whose associated decision problem is a prominent open problem in NP-completeness. We present a number of new polynomial time resp. output-polynomial time results for significant cases, which largely advance the tractability frontier and improve on previous results. Furthermore, we show that duality of two monotone CNFs can be disproved with limited nondeterminism. More precisely, this is feasible in polynomial time with O(chi(n) * log n) suitably guessed bits, where chi(n) is given by \chi(n)^chi(n) = n; note that chi(n) = o(log n). This result sheds new light on the complexity of this important problem.
[ { "created": "Thu, 4 Apr 2002 19:23:49 GMT", "version": "v1" }, { "created": "Sun, 14 Apr 2002 00:25:11 GMT", "version": "v2" }, { "created": "Fri, 26 Apr 2002 10:47:18 GMT", "version": "v3" } ]
2007-05-23
[ [ "Eiter", "Thomas", "" ], [ "Gottlob", "Georg", "" ], [ "Makino", "Kazuhisa", "" ] ]
We consider the problem of dualizing a monotone CNF (equivalently, computing all minimal transversals of a hypergraph), whose associated decision problem is a prominent open problem in NP-completeness. We present a number of new polynomial time resp. output-polynomial time results for significant cases, which largely advance the tractability frontier and improve on previous results. Furthermore, we show that duality of two monotone CNFs can be disproved with limited nondeterminism. More precisely, this is feasible in polynomial time with O(chi(n) * log n) suitably guessed bits, where chi(n) is given by \chi(n)^chi(n) = n; note that chi(n) = o(log n). This result sheds new light on the complexity of this important problem.
2201.12347
Varun Ojha
Chandresh Pravin, Ivan Martino, Giuseppe Nicosia, Varun Ojha
Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons
null
Artificial Neural Networks and Machine Learning ICANN 2021
10.1007/978-3-030-86362-3_2
null
cs.LG cs.CR
http://creativecommons.org/licenses/by/4.0/
We identify fragile and robust neurons of deep learning architectures using nodal dropouts of the first convolutional layer. Using an adversarial targeting algorithm, we correlate these neurons with the distribution of adversarial attacks on the network. Adversarial robustness of neural networks has gained significant attention in recent times and highlights intrinsic weaknesses of deep learning networks against carefully constructed distortion applied to input images. In this paper, we evaluate the robustness of state-of-the-art image classification models trained on the MNIST and CIFAR10 datasets against the fast gradient sign method attack, a simple yet effective method of deceiving neural networks. Our method identifies the specific neurons of a network that are most affected by the adversarial attack being applied. We, therefore, propose to make fragile neurons more robust against these attacks by compressing features within robust neurons and amplifying the fragile neurons proportionally.
[ { "created": "Mon, 31 Jan 2022 14:34:07 GMT", "version": "v1" } ]
2022-02-01
[ [ "Pravin", "Chandresh", "" ], [ "Martino", "Ivan", "" ], [ "Nicosia", "Giuseppe", "" ], [ "Ojha", "Varun", "" ] ]
We identify fragile and robust neurons of deep learning architectures using nodal dropouts of the first convolutional layer. Using an adversarial targeting algorithm, we correlate these neurons with the distribution of adversarial attacks on the network. Adversarial robustness of neural networks has gained significant attention in recent times and highlights intrinsic weaknesses of deep learning networks against carefully constructed distortion applied to input images. In this paper, we evaluate the robustness of state-of-the-art image classification models trained on the MNIST and CIFAR10 datasets against the fast gradient sign method attack, a simple yet effective method of deceiving neural networks. Our method identifies the specific neurons of a network that are most affected by the adversarial attack being applied. We, therefore, propose to make fragile neurons more robust against these attacks by compressing features within robust neurons and amplifying the fragile neurons proportionally.
2109.05752
Anhad Bhati
Anhad Bhati, Sibi Raj B. Pillai, Rahul Vaze
On the Age of Information of a Queuing System with Heterogeneous Servers
6 pages, 4 figures. Appeared in NCC 2021, IIT Kanpur
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
An optimal control problem with heterogeneous servers to minimize the average age of information (AoI) is considered. Each server maintains a separate queue, and each packet arriving to the system is randomly routed to one of the servers. Assuming Poisson arrivals and exponentially distributed service times, we first derive an exact expression of the average AoI for two heterogeneous servers. Next, to solve for the optimal average AoI, a close approximation is derived, called the approximate AoI, this is shown to be useful for multi-server systems as well. We show that for the optimal approximate AoI, server utilization (ratio of arrival rate and service rate) for each server should be same as the optimal server utilization with a single server queue. For two identical servers, it is shown that the average AoI is approximately 5/8 times the average AoI of a single server. Furthermore, the average AoI is shown to decrease considerably with the addition of more servers to the system.
[ { "created": "Mon, 13 Sep 2021 07:23:41 GMT", "version": "v1" }, { "created": "Tue, 14 Sep 2021 11:34:00 GMT", "version": "v2" } ]
2021-09-15
[ [ "Bhati", "Anhad", "" ], [ "Pillai", "Sibi Raj B.", "" ], [ "Vaze", "Rahul", "" ] ]
An optimal control problem with heterogeneous servers to minimize the average age of information (AoI) is considered. Each server maintains a separate queue, and each packet arriving to the system is randomly routed to one of the servers. Assuming Poisson arrivals and exponentially distributed service times, we first derive an exact expression of the average AoI for two heterogeneous servers. Next, to solve for the optimal average AoI, a close approximation, called the approximate AoI, is derived; this is shown to be useful for multi-server systems as well. We show that for the optimal approximate AoI, the server utilization (ratio of arrival rate and service rate) for each server should be the same as the optimal server utilization of a single-server queue. For two identical servers, it is shown that the average AoI is approximately 5/8 times the average AoI of a single server. Furthermore, the average AoI is shown to decrease considerably with the addition of more servers to the system.
1512.02179
Kevin P. Costello
Marek Chrobak, Kevin P. Costello
Faster Information Gathering in Ad-Hoc Radio Tree Networks
Full version; extended abstract to appear in LATIN '16
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study information gathering in ad-hoc radio networks. Initially, each node of the network has a piece of information called a rumor, and the overall objective is to gather all these rumors in the designated target node. The ad-hoc property refers to the fact that the topology of the network is unknown when the computation starts. Aggregation of rumors is not allowed, which means that each node may transmit at most one rumor in one step. We focus on networks with tree topologies, that is we assume that the network is a tree with all edges directed towards the root, but, being ad-hoc, its actual topology is not known. We provide two deterministic algorithms for this problem. For the model that does not assume any collision detection nor acknowledgement mechanisms, we give an $O(n\log\log n)$-time algorithm, improving the previous upper bound of $O(n\log n)$. We also show that this running time can be further reduced to $O(n)$ if the model allows for acknowledgements of successful transmissions.
[ { "created": "Mon, 7 Dec 2015 19:21:59 GMT", "version": "v1" } ]
2015-12-08
[ [ "Chrobak", "Marek", "" ], [ "Costello", "Kevin P.", "" ] ]
We study information gathering in ad-hoc radio networks. Initially, each node of the network has a piece of information called a rumor, and the overall objective is to gather all these rumors in the designated target node. The ad-hoc property refers to the fact that the topology of the network is unknown when the computation starts. Aggregation of rumors is not allowed, which means that each node may transmit at most one rumor in one step. We focus on networks with tree topologies, that is, we assume that the network is a tree with all edges directed towards the root, but, being ad-hoc, its actual topology is not known. We provide two deterministic algorithms for this problem. For the model that does not assume any collision detection nor acknowledgement mechanisms, we give an $O(n\log\log n)$-time algorithm, improving the previous upper bound of $O(n\log n)$. We also show that this running time can be further reduced to $O(n)$ if the model allows for acknowledgements of successful transmissions.
2107.13361
Yu Huang
Yu Huang, Gary G. Yen and Vincent S. Tseng
Snippet Policy Network for Multi-class Varied-length ECG Early Classification
null
null
null
null
cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Arrhythmia detection from ECG is an important research subject in the prevention and diagnosis of cardiovascular diseases. The prevailing studies formulate arrhythmia detection from ECG as a time series classification problem. Meanwhile, early detection of arrhythmia presents a real-world demand for early prevention and diagnosis. In this paper, we address a problem of cardiovascular disease early classification, which is a varied-length and long-length time series early classification problem as well. For solving this problem, we propose a deep reinforcement learning-based framework, namely Snippet Policy Network (SPN), consisting of four modules, snippet generator, backbone network, controlling agent, and discriminator. Comparing to the existing approaches, the proposed framework features flexible input length, solves the dual-optimization solution of the earliness and accuracy goals. Experimental results demonstrate that SPN achieves an excellent performance of over 80\% in terms of accuracy. Compared to the state-of-the-art methods, at least 7% improvement on different metrics, including the precision, recall, F1-score, and harmonic mean, is delivered by the proposed SPN. To the best of our knowledge, this is the first work focusing on solving the cardiovascular early classification problem based on varied-length ECG data. Based on these excellent features from SPN, it offers a good exemplification for addressing all kinds of varied-length time series early classification problems.
[ { "created": "Wed, 28 Jul 2021 13:47:31 GMT", "version": "v1" } ]
2021-07-29
[ [ "Huang", "Yu", "" ], [ "Yen", "Gary G.", "" ], [ "Tseng", "Vincent S.", "" ] ]
Arrhythmia detection from ECG is an important research subject in the prevention and diagnosis of cardiovascular diseases. The prevailing studies formulate arrhythmia detection from ECG as a time series classification problem. Meanwhile, early detection of arrhythmia presents a real-world demand for early prevention and diagnosis. In this paper, we address a problem of cardiovascular disease early classification, which is a varied-length and long-length time series early classification problem as well. For solving this problem, we propose a deep reinforcement learning-based framework, namely Snippet Policy Network (SPN), consisting of four modules: snippet generator, backbone network, controlling agent, and discriminator. Compared to the existing approaches, the proposed framework features a flexible input length and solves the dual optimization of the earliness and accuracy goals. Experimental results demonstrate that SPN achieves an excellent performance of over 80\% in terms of accuracy. Compared to the state-of-the-art methods, at least a 7% improvement on different metrics, including the precision, recall, F1-score, and harmonic mean, is delivered by the proposed SPN. To the best of our knowledge, this is the first work focusing on solving the cardiovascular early classification problem based on varied-length ECG data. Based on these excellent features of SPN, it offers a good exemplification for addressing all kinds of varied-length time series early classification problems.
2301.03319
Vincent Vandeghinste
Vincent Vandeghinste, Oliver Guhr
FullStop:Punctuation and Segmentation Prediction for Dutch with Transformers
18 pages
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
When applying automated speech recognition (ASR) for Belgian Dutch (Van Dyck et al. 2021), the output consists of an unsegmented stream of words, without any punctuation. A next step is to perform segmentation and insert punctuation, making the ASR output more readable and easy to manually correct. As far as we know there is no publicly available punctuation insertion system for Dutch that functions at a usable level. The model we present here is an extension of the models of Guhr et al. (2021) for Dutch and is made publicly available. We trained a sequence classification model, based on the Dutch language model RobBERT (Delobelle et al. 2020). For every word in the input sequence, the models predicts a punctuation marker that follows the word. We have also extended a multilingual model, for cases where the language is unknown or where code switching applies. When performing the task of segmentation, the application of the best models onto out of domain test data, a sliding window of 200 words of the ASR output stream is sent to the classifier, and segmentation is applied when the system predicts a segmenting punctuation sign with a ratio above threshold. Results show to be much better than a machine translation baseline approach.
[ { "created": "Mon, 9 Jan 2023 13:12:05 GMT", "version": "v1" } ]
2023-01-10
[ [ "Vandeghinste", "Vincent", "" ], [ "Guhr", "Oliver", "" ] ]
When applying automated speech recognition (ASR) for Belgian Dutch (Van Dyck et al. 2021), the output consists of an unsegmented stream of words, without any punctuation. A next step is to perform segmentation and insert punctuation, making the ASR output more readable and easy to manually correct. As far as we know, there is no publicly available punctuation insertion system for Dutch that functions at a usable level. The model we present here is an extension of the models of Guhr et al. (2021) for Dutch and is made publicly available. We trained a sequence classification model, based on the Dutch language model RobBERT (Delobelle et al. 2020). For every word in the input sequence, the model predicts a punctuation marker that follows the word. We have also extended a multilingual model, for cases where the language is unknown or where code switching applies. When performing segmentation with the best models on out-of-domain test data, a sliding window of 200 words of the ASR output stream is sent to the classifier, and segmentation is applied when the system predicts a segmenting punctuation sign with a ratio above a threshold. Results are much better than a machine translation baseline approach.
1806.03621
Lei Xie
Yougen Yuan, Cheung-Chi Leung, Lei Xie, Hongjie Chen, Bin Ma, Haizhou Li
Learning Acoustic Word Embeddings with Temporal Context for Query-by-Example Speech Search
5 pages, 4 figures, INTERSPEECH 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose to learn acoustic word embeddings with temporal context for query-by-example (QbE) speech search. The temporal context includes the leading and trailing word sequences of a word. We assume that there exist spoken word pairs in the training database. We pad the word pairs with their original temporal context to form fixed-length speech segment pairs. We obtain the acoustic word embeddings through a deep convolutional neural network (CNN) which is trained on the speech segment pairs with a triplet loss. Shifting a fixed-length analysis window through the search content, we obtain a running sequence of embeddings. In this way, searching for the spoken query is equivalent to the matching of acoustic word embeddings. The experiments show that our proposed acoustic word embeddings learned with temporal context are effective in QbE speech search. They outperform the state-of-the-art frame-level feature representations and reduce run-time computation since no dynamic time warping is required in QbE speech search. We also find that it is important to have sufficient speech segment pairs to train the deep CNN for effective acoustic word embeddings.
[ { "created": "Sun, 10 Jun 2018 09:40:08 GMT", "version": "v1" }, { "created": "Sun, 17 Jun 2018 07:38:18 GMT", "version": "v2" } ]
2018-06-19
[ [ "Yuan", "Yougen", "" ], [ "Leung", "Cheung-Chi", "" ], [ "Xie", "Lei", "" ], [ "Chen", "Hongjie", "" ], [ "Ma", "Bin", "" ], [ "Li", "Haizhou", "" ] ]
We propose to learn acoustic word embeddings with temporal context for query-by-example (QbE) speech search. The temporal context includes the leading and trailing word sequences of a word. We assume that there exist spoken word pairs in the training database. We pad the word pairs with their original temporal context to form fixed-length speech segment pairs. We obtain the acoustic word embeddings through a deep convolutional neural network (CNN) which is trained on the speech segment pairs with a triplet loss. Shifting a fixed-length analysis window through the search content, we obtain a running sequence of embeddings. In this way, searching for the spoken query is equivalent to the matching of acoustic word embeddings. The experiments show that our proposed acoustic word embeddings learned with temporal context are effective in QbE speech search. They outperform the state-of-the-art frame-level feature representations and reduce run-time computation since no dynamic time warping is required in QbE speech search. We also find that it is important to have sufficient speech segment pairs to train the deep CNN for effective acoustic word embeddings.
2407.03791
Florian Schneider
Florian Schneider and Sunayana Sitaram
M$\mathbf5$ -- A Diverse Benchmark to Assess the Performance of Large Multimodal Models Across Multilingual and Multicultural Vision-Language Tasks
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Since the release of ChatGPT, the field of Natural Language Processing has experienced rapid advancements, particularly in Large Language Models (LLMs) and their multimodal counterparts, Large Multimodal Models (LMMs). Despite their impressive capabilities, LLMs often exhibit significant performance disparities across different languages and cultural contexts, as demonstrated by various text-only benchmarks. However, current research lacks such benchmarks for multimodal visio-linguistic settings. This work fills this gap by introducing M5, the first comprehensive benchmark designed to evaluate LMMs on diverse vision-language tasks within a multilingual and multicultural context. M5 includes eight datasets covering five tasks and $41$ languages, with a focus on underrepresented languages and culturally diverse images. Furthermore, we introduce two novel datasets, M5-VGR and M5-VLOD, including a new Visio-Linguistic Outlier Detection task, in which all evaluated open-source models fail to significantly surpass the random baseline. Through extensive evaluation and analyses, we highlight substantial task-agnostic performance disparities between high- and low-resource languages. Moreover, we show that larger models do not necessarily outperform smaller ones in a multilingual setting.
[ { "created": "Thu, 4 Jul 2024 09:55:04 GMT", "version": "v1" } ]
2024-07-08
[ [ "Schneider", "Florian", "" ], [ "Sitaram", "Sunayana", "" ] ]
Since the release of ChatGPT, the field of Natural Language Processing has experienced rapid advancements, particularly in Large Language Models (LLMs) and their multimodal counterparts, Large Multimodal Models (LMMs). Despite their impressive capabilities, LLMs often exhibit significant performance disparities across different languages and cultural contexts, as demonstrated by various text-only benchmarks. However, current research lacks such benchmarks for multimodal visio-linguistic settings. This work fills this gap by introducing M5, the first comprehensive benchmark designed to evaluate LMMs on diverse vision-language tasks within a multilingual and multicultural context. M5 includes eight datasets covering five tasks and $41$ languages, with a focus on underrepresented languages and culturally diverse images. Furthermore, we introduce two novel datasets, M5-VGR and M5-VLOD, including a new Visio-Linguistic Outlier Detection task, in which all evaluated open-source models fail to significantly surpass the random baseline. Through extensive evaluation and analyses, we highlight substantial task-agnostic performance disparities between high- and low-resource languages. Moreover, we show that larger models do not necessarily outperform smaller ones in a multilingual setting.
2407.08733
Zihao Zhou
Zihao Zhou, Shudong Liu, Maizhen Ning, Wei Liu, Jindong Wang, Derek F. Wong, Xiaowei Huang, Qiufeng Wang, Kaizhu Huang
Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist
35 pages, 10 figures, preprint
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exceptional mathematical reasoning ability is one of the key features that demonstrate the power of large language models (LLMs). How to comprehensively define and evaluate the mathematical abilities of LLMs, and even reflect the user experience in real-world scenarios, has emerged as a critical issue. Current benchmarks predominantly concentrate on problem-solving capabilities, which presents a substantial risk of model overfitting and fails to accurately represent genuine mathematical reasoning abilities. In this paper, we argue that if a model really understands a problem, it should be robustly and readily applied across a diverse array of tasks. Motivated by this, we introduce MATHCHECK, a well-designed checklist for testing task generalization and reasoning robustness, as well as an automatic tool to generate checklists efficiently. MATHCHECK includes multiple mathematical reasoning tasks and robustness test types to facilitate a comprehensive evaluation of both mathematical reasoning ability and behavior testing. Utilizing MATHCHECK, we develop MATHCHECK-GSM and MATHCHECK-GEO to assess mathematical textual reasoning and multi-modal reasoning capabilities, respectively, serving as upgraded versions of benchmarks including GSM8k, GeoQA, UniGeo, and Geometry3K. We adopt MATHCHECK-GSM and MATHCHECK-GEO to evaluate over 20 LLMs and 11 MLLMs, assessing their comprehensive mathematical reasoning abilities. Our results demonstrate that while frontier LLMs like GPT-4o continue to excel in various abilities on the checklist, many other model families exhibit a significant decline. Further experiments indicate that, compared to traditional math benchmarks, MATHCHECK better reflects true mathematical abilities and represents mathematical intelligence more linearly, thereby supporting our design. On our MATHCHECK, we can easily conduct detailed behavior analysis to deeply investigate models.
[ { "created": "Thu, 11 Jul 2024 17:58:58 GMT", "version": "v1" } ]
2024-07-12
[ [ "Zhou", "Zihao", "" ], [ "Liu", "Shudong", "" ], [ "Ning", "Maizhen", "" ], [ "Liu", "Wei", "" ], [ "Wang", "Jindong", "" ], [ "Wong", "Derek F.", "" ], [ "Huang", "Xiaowei", "" ], [ "Wang", "Qiufeng", "" ], [ "Huang", "Kaizhu", "" ] ]
Exceptional mathematical reasoning ability is one of the key features that demonstrate the power of large language models (LLMs). How to comprehensively define and evaluate the mathematical abilities of LLMs, and even reflect the user experience in real-world scenarios, has emerged as a critical issue. Current benchmarks predominantly concentrate on problem-solving capabilities, which presents a substantial risk of model overfitting and fails to accurately represent genuine mathematical reasoning abilities. In this paper, we argue that if a model really understands a problem, it should be robustly and readily applied across a diverse array of tasks. Motivated by this, we introduce MATHCHECK, a well-designed checklist for testing task generalization and reasoning robustness, as well as an automatic tool to generate checklists efficiently. MATHCHECK includes multiple mathematical reasoning tasks and robustness test types to facilitate a comprehensive evaluation of both mathematical reasoning ability and behavior testing. Utilizing MATHCHECK, we develop MATHCHECK-GSM and MATHCHECK-GEO to assess mathematical textual reasoning and multi-modal reasoning capabilities, respectively, serving as upgraded versions of benchmarks including GSM8k, GeoQA, UniGeo, and Geometry3K. We adopt MATHCHECK-GSM and MATHCHECK-GEO to evaluate over 20 LLMs and 11 MLLMs, assessing their comprehensive mathematical reasoning abilities. Our results demonstrate that while frontier LLMs like GPT-4o continue to excel in various abilities on the checklist, many other model families exhibit a significant decline. Further experiments indicate that, compared to traditional math benchmarks, MATHCHECK better reflects true mathematical abilities and represents mathematical intelligence more linearly, thereby supporting our design. On our MATHCHECK, we can easily conduct detailed behavior analysis to deeply investigate models.
2210.12581
Pei Liu
Pei Liu, Xiaoyu Sun, Yanjie Zhao, Yonghui Liu, John Grundy, and Li Li
A First Look at CI/CD Adoptions in Open-Source Android Apps
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Continuous Integration (CI) and Continuous Delivery (CD) have been demonstrated to be effective in facilitating software building, testing, and deployment. Many research studies have investigated and subsequently improved their working processes. Unfortunately, such research efforts have largely not touched on the usage of CI/CD in the development of Android apps. We fill this gap by conducting an exploratory study of CI/CD adoption in open-source Android apps. We start by collecting a set of 84,475 open-source Android apps from the most popular three online code hosting sites, namely Github, GitLab, and Bitbucket. We then look into those apps and find that (1) only around 10\% of apps have leveraged CI/CD services, i.e., the majority of open-source Android apps are developed without accessing CI/CD services, (2) a small number of apps (291) has even adopted multiple CI/CD services, (3) nearly half of the apps adopted CI/CD services have not really used them, and (4) CI/CD services are useful to improve the popularity of projects.
[ { "created": "Sun, 23 Oct 2022 00:34:10 GMT", "version": "v1" } ]
2022-10-25
[ [ "Liu", "Pei", "" ], [ "Sun", "Xiaoyu", "" ], [ "Zhao", "Yanjie", "" ], [ "Liu", "Yonghui", "" ], [ "Grundy", "John", "" ], [ "Li", "Li", "" ] ]
Continuous Integration (CI) and Continuous Delivery (CD) have been demonstrated to be effective in facilitating software building, testing, and deployment. Many research studies have investigated and subsequently improved their working processes. Unfortunately, such research efforts have largely not touched on the usage of CI/CD in the development of Android apps. We fill this gap by conducting an exploratory study of CI/CD adoption in open-source Android apps. We start by collecting a set of 84,475 open-source Android apps from the three most popular online code hosting sites, namely Github, GitLab, and Bitbucket. We then look into those apps and find that (1) only around 10\% of apps have leveraged CI/CD services, i.e., the majority of open-source Android apps are developed without accessing CI/CD services, (2) a small number of apps (291) have even adopted multiple CI/CD services, (3) nearly half of the apps that adopted CI/CD services have not really used them, and (4) CI/CD services are useful to improve the popularity of projects.
0911.5143
MohammadHossein Bateni
MohammadHossein Bateni and MohammadTaghi Hajiaghayi and D\'aniel Marx
Approximation Schemes for Steiner Forest on Planar Graphs and Graphs of Bounded Treewidth
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give the first polynomial-time approximation scheme (PTAS) for the Steiner forest problem on planar graphs and, more generally, on graphs of bounded genus. As a first step, we show how to build a Steiner forest spanner for such graphs. The crux of the process is a clustering procedure called prize-collecting clustering that breaks down the input instance into separate subinstances which are easier to handle; moreover, the terminals in different subinstances are far from each other. Each subinstance has a relatively inexpensive Steiner tree connecting all its terminals, and the subinstances can be solved (almost) separately. Another building block is a PTAS for Steiner forest on graphs of bounded treewidth. Surprisingly, Steiner forest is NP-hard even on graphs of treewidth 3. Therefore, our PTAS for bounded treewidth graph needs a nontrivial combination of approximation arguments and dynamic programming on the tree decomposition. We further show that Steiner forest can be solved in polynomial time for series-parallel graphs (graphs of treewidth at most two) by a novel combination of dynamic programming and minimum cut computations, completing our thorough complexity study of Steiner forest in the range of bounded treewidth graphs, planar graphs, and bounded genus graphs.
[ { "created": "Thu, 26 Nov 2009 19:19:53 GMT", "version": "v1" } ]
2009-11-30
[ [ "Bateni", "MohammadHossein", "" ], [ "Hajiaghayi", "MohammadTaghi", "" ], [ "Marx", "Dániel", "" ] ]
We give the first polynomial-time approximation scheme (PTAS) for the Steiner forest problem on planar graphs and, more generally, on graphs of bounded genus. As a first step, we show how to build a Steiner forest spanner for such graphs. The crux of the process is a clustering procedure called prize-collecting clustering that breaks down the input instance into separate subinstances which are easier to handle; moreover, the terminals in different subinstances are far from each other. Each subinstance has a relatively inexpensive Steiner tree connecting all its terminals, and the subinstances can be solved (almost) separately. Another building block is a PTAS for Steiner forest on graphs of bounded treewidth. Surprisingly, Steiner forest is NP-hard even on graphs of treewidth 3. Therefore, our PTAS for bounded-treewidth graphs needs a nontrivial combination of approximation arguments and dynamic programming on the tree decomposition. We further show that Steiner forest can be solved in polynomial time for series-parallel graphs (graphs of treewidth at most two) by a novel combination of dynamic programming and minimum cut computations, completing our thorough complexity study of Steiner forest in the range of bounded treewidth graphs, planar graphs, and bounded genus graphs.
1009.4566
S. M. Kamruzzaman
S. M. Kamruzzaman and Md. Monirul Islam
An Algorithm to Extract Rules from Artificial Neural Networks for Medical Diagnosis Problems
19 Pages, International Journal
International Journal of Information Technology (IJIT), Vol. 12, No. 8, pp. 41-59, 2006
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial neural networks (ANNs) have been successfully applied to solve a variety of classification and function approximation problems. Although ANNs can generally predict better than decision trees for pattern classification problems, ANNs are often regarded as black boxes since their predictions cannot be explained clearly like those of decision trees. This paper presents a new algorithm, called rule extraction from ANNs (REANN), to extract rules from trained ANNs for medical diagnosis problems. A standard three-layer feedforward ANN with four-phase training is the basis of the proposed algorithm. In the first phase, the number of hidden nodes in ANNs is determined automatically by a constructive algorithm. In the second phase, irrelevant connections and input nodes are removed from trained ANNs without sacrificing the predictive accuracy of ANNs. The continuous activation values of the hidden nodes are discretized by using an efficient heuristic clustering algorithm in the third phase. Finally, rules are extracted from compact ANNs by examining the discretized activation values of the hidden nodes. Extensive experimental studies on three benchmark classification problems, i.e. breast cancer, diabetes and lenses, demonstrate that REANN can generate high quality rules from ANNs, which are comparable with other methods in terms of number of rules, average number of conditions for a rule, and predictive accuracy.
[ { "created": "Thu, 23 Sep 2010 10:30:55 GMT", "version": "v1" } ]
2010-09-28
[ [ "Kamruzzaman", "S. M.", "" ], [ "Islam", "Md. Monirul", "" ] ]
Artificial neural networks (ANNs) have been successfully applied to solve a variety of classification and function approximation problems. Although ANNs can generally predict better than decision trees for pattern classification problems, ANNs are often regarded as black boxes since their predictions cannot be explained clearly like those of decision trees. This paper presents a new algorithm, called rule extraction from ANNs (REANN), to extract rules from trained ANNs for medical diagnosis problems. A standard three-layer feedforward ANN with four-phase training is the basis of the proposed algorithm. In the first phase, the number of hidden nodes in ANNs is determined automatically by a constructive algorithm. In the second phase, irrelevant connections and input nodes are removed from trained ANNs without sacrificing the predictive accuracy of ANNs. The continuous activation values of the hidden nodes are discretized by using an efficient heuristic clustering algorithm in the third phase. Finally, rules are extracted from compact ANNs by examining the discretized activation values of the hidden nodes. Extensive experimental studies on three benchmark classification problems, i.e. breast cancer, diabetes and lenses, demonstrate that REANN can generate high quality rules from ANNs, which are comparable with other methods in terms of number of rules, average number of conditions for a rule, and predictive accuracy.
1804.08064
Young-Bum Kim
Young-Bum Kim, Dongchan Kim, Joo-Kyung Kim, Ruhi Sarikaya
A Scalable Neural Shortlisting-Reranking Approach for Large-Scale Domain Classification in Natural Language Understanding
Accepted to NAACL 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intelligent personal digital assistants (IPDAs), a popular real-life application with spoken language understanding capabilities, can cover potentially thousands of overlapping domains for natural language understanding, and the task of finding the best domain to handle an utterance becomes a challenging problem on a large scale. In this paper, we propose a set of efficient and scalable neural shortlisting-reranking models for large-scale domain classification in IPDAs. The shortlisting stage focuses on efficiently trimming all domains down to a list of k-best candidate domains, and the reranking stage performs a list-wise reranking of the initial k-best domains with additional contextual information. We show the effectiveness of our approach with extensive experiments on 1,500 IPDA domains.
[ { "created": "Sun, 22 Apr 2018 03:56:39 GMT", "version": "v1" } ]
2018-04-24
[ [ "Kim", "Young-Bum", "" ], [ "Kim", "Dongchan", "" ], [ "Kim", "Joo-Kyung", "" ], [ "Sarikaya", "Ruhi", "" ] ]
Intelligent personal digital assistants (IPDAs), a popular real-life application with spoken language understanding capabilities, can cover potentially thousands of overlapping domains for natural language understanding, and the task of finding the best domain to handle an utterance becomes a challenging problem on a large scale. In this paper, we propose a set of efficient and scalable neural shortlisting-reranking models for large-scale domain classification in IPDAs. The shortlisting stage focuses on efficiently trimming all domains down to a list of k-best candidate domains, and the reranking stage performs a list-wise reranking of the initial k-best domains with additional contextual information. We show the effectiveness of our approach with extensive experiments on 1,500 IPDA domains.
1510.05681
Rodrigo de Souza Couto
Rodrigo de Souza Couto, Stefano Secci, Miguel Elias Mitre Campista, Lu\'is Henrique Maciel Kosmalski Costa
Server Placement with Shared Backups for Disaster-Resilient Clouds
Computer Networks 2015
null
10.1016/j.comnet.2015.09.039
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A key strategy to build disaster-resilient clouds is to employ backups of virtual machines in a geo-distributed infrastructure. Today, the continuous and acknowledged replication of virtual machines in different servers is a service provided by different hypervisors. This strategy guarantees that the virtual machines will have no loss of disk and memory content if a disaster occurs, at a cost of strict bandwidth and latency requirements. Considering this kind of service, in this work, we propose an optimization problem to place servers in a wide area network. The goal is to guarantee that backup machines do not fail at the same time as their primary counterparts. In addition, by using virtualization, we also aim to reduce the number of backup servers required. The optimal results, achieved in real topologies, reduce the number of backup servers by at least 40%. Moreover, this work highlights several characteristics of the backup service according to the employed network, such as the fulfillment of latency requirements.
[ { "created": "Mon, 19 Oct 2015 20:45:38 GMT", "version": "v1" } ]
2015-10-21
[ [ "Couto", "Rodrigo de Souza", "" ], [ "Secci", "Stefano", "" ], [ "Campista", "Miguel Elias Mitre", "" ], [ "Costa", "Luís Henrique Maciel Kosmalski", "" ] ]
A key strategy to build disaster-resilient clouds is to employ backups of virtual machines in a geo-distributed infrastructure. Today, the continuous and acknowledged replication of virtual machines in different servers is a service provided by different hypervisors. This strategy guarantees that the virtual machines will have no loss of disk and memory content if a disaster occurs, at a cost of strict bandwidth and latency requirements. Considering this kind of service, in this work, we propose an optimization problem to place servers in a wide area network. The goal is to guarantee that backup machines do not fail at the same time as their primary counterparts. In addition, by using virtualization, we also aim to reduce the number of backup servers required. The optimal results, achieved in real topologies, reduce the number of backup servers by at least 40%. Moreover, this work highlights several characteristics of the backup service according to the employed network, such as the fulfillment of latency requirements.
1804.00516
Anabel G\'omez-R\'ios
Anabel G\'omez-R\'ios, Siham Tabik, Juli\'an Luengo, ASM Shihavuddin, Bartosz Krawczyk and Francisco Herrera
Towards Highly Accurate Coral Texture Images Classification Using Deep Convolutional Neural Networks and Data Augmentation
22 pages, 10 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recognition of coral species based on underwater texture images poses a significant difficulty for machine learning algorithms, due to the three following challenges embedded in the nature of this data: 1) datasets do not include information about the global structure of the coral; 2) several species of coral have very similar characteristics; and 3) defining the spatial borders between classes is difficult as many corals tend to appear together in groups. For this reason, the classification of coral species has always required an aid from a domain expert. The objective of this paper is to develop an accurate classification model for coral texture images. Current datasets contain a large number of imbalanced classes, while the images are subject to inter-class variation. We have analyzed 1) several Convolutional Neural Network (CNN) architectures, 2) data augmentation techniques and 3) transfer learning. We have achieved the state-of-the-art accuracies using different variations of ResNet on the two current coral texture datasets, EILAT and RSMAS.
[ { "created": "Tue, 27 Mar 2018 12:05:12 GMT", "version": "v1" } ]
2018-04-03
[ [ "Gómez-Ríos", "Anabel", "" ], [ "Tabik", "Siham", "" ], [ "Luengo", "Julián", "" ], [ "Shihavuddin", "ASM", "" ], [ "Krawczyk", "Bartosz", "" ], [ "Herrera", "Francisco", "" ] ]
The recognition of coral species based on underwater texture images poses a significant difficulty for machine learning algorithms, due to the three following challenges embedded in the nature of this data: 1) datasets do not include information about the global structure of the coral; 2) several species of coral have very similar characteristics; and 3) defining the spatial borders between classes is difficult as many corals tend to appear together in groups. For this reason, the classification of coral species has always required an aid from a domain expert. The objective of this paper is to develop an accurate classification model for coral texture images. Current datasets contain a large number of imbalanced classes, while the images are subject to inter-class variation. We have analyzed 1) several Convolutional Neural Network (CNN) architectures, 2) data augmentation techniques and 3) transfer learning. We have achieved the state-of-the-art accuracies using different variations of ResNet on the two current coral texture datasets, EILAT and RSMAS.
1602.08777
Martin Strohmeier
Martin Strohmeier, Matthias Sch\"afer, Rui Pinheiro, Vincent Lenders, Ivan Martinovic
On Perception and Reality in Wireless Air Traffic Communications Security
20 pages, 5 figures, 7 tables
null
10.1109/TITS.2016.2612584
null
cs.CR cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
More than a dozen wireless technologies are used by air traffic communication systems during different flight phases. From a conceptual perspective, all of them are insecure as security was never part of their design. Recent contributions from academic and hacking communities have exploited this inherent vulnerability to demonstrate attacks on some of these technologies. However, not all of these contributions have resonated widely within aviation circles. At the same time, the security community lacks certain aviation domain knowledge, preventing aviation authorities from giving credence to their findings. In this paper, we aim to reconcile the view of the security community and the perspective of aviation professionals concerning the safety of air traffic communication technologies. To achieve this, we first provide a systematization of the applications of wireless technologies upon which civil aviation relies. Based on these applications, we comprehensively analyze vulnerabilities, attacks, and countermeasures. We categorize the existing research on countermeasures into approaches that are applicable in the short term and research of secure new technologies deployable in the long term. Since not all of the required aviation knowledge is codified in academic publications, we additionally examine existing aviation standards and survey 242 international aviation experts. Besides their domain knowledge, we also analyze the awareness of members of the aviation community concerning the security of wireless systems and collect their expert opinions on the potential impact of concrete attack scenarios using these technologies.
[ { "created": "Sun, 28 Feb 2016 22:25:20 GMT", "version": "v1" }, { "created": "Tue, 1 Mar 2016 15:26:03 GMT", "version": "v2" }, { "created": "Mon, 24 Oct 2016 17:00:40 GMT", "version": "v3" } ]
2016-11-21
[ [ "Strohmeier", "Martin", "" ], [ "Schäfer", "Matthias", "" ], [ "Pinheiro", "Rui", "" ], [ "Lenders", "Vincent", "" ], [ "Martinovic", "Ivan", "" ] ]
More than a dozen wireless technologies are used by air traffic communication systems during different flight phases. From a conceptual perspective, all of them are insecure as security was never part of their design. Recent contributions from academic and hacking communities have exploited this inherent vulnerability to demonstrate attacks on some of these technologies. However, not all of these contributions have resonated widely within aviation circles. At the same time, the security community lacks certain aviation domain knowledge, preventing aviation authorities from giving credence to their findings. In this paper, we aim to reconcile the view of the security community and the perspective of aviation professionals concerning the safety of air traffic communication technologies. To achieve this, we first provide a systematization of the applications of wireless technologies upon which civil aviation relies. Based on these applications, we comprehensively analyze vulnerabilities, attacks, and countermeasures. We categorize the existing research on countermeasures into approaches that are applicable in the short term and research of secure new technologies deployable in the long term. Since not all of the required aviation knowledge is codified in academic publications, we additionally examine existing aviation standards and survey 242 international aviation experts. Besides their domain knowledge, we also analyze the awareness of members of the aviation community concerning the security of wireless systems and collect their expert opinions on the potential impact of concrete attack scenarios using these technologies.
2309.11018
Nastaran Darabi
Domenico Parente, Nastaran Darabi, Alex C. Stutts, Theja Tulabandhula, and Amit Ranjan Trivedi
Conformalized Multimodal Uncertainty Regression and Reasoning
null
null
null
null
cs.LG cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
This paper introduces a lightweight uncertainty estimator capable of predicting multimodal (disjoint) uncertainty bounds by integrating conformal prediction with a deep-learning regressor. We specifically discuss its application for visual odometry (VO), where environmental features such as flying domain symmetries and sensor measurements under ambiguities and occlusion can result in multimodal uncertainties. Our simulation results show that uncertainty estimates in our framework adapt sample-wise against challenging operating conditions such as pronounced noise, limited training data, and limited parametric size of the prediction model. We also develop a reasoning framework that leverages these robust uncertainty estimates and incorporates optical flow-based reasoning to improve prediction accuracy. Thus, by appropriately accounting for predictive uncertainties of data-driven learning and closing their estimation loop via rule-based reasoning, our methodology consistently surpasses conventional deep learning approaches on all these challenging scenarios--pronounced noise, limited training data, and limited model size--reducing the prediction error by 2-3x.
[ { "created": "Wed, 20 Sep 2023 02:40:59 GMT", "version": "v1" } ]
2023-09-21
[ [ "Parente", "Domenico", "" ], [ "Darabi", "Nastaran", "" ], [ "Stutts", "Alex C.", "" ], [ "Tulabandhula", "Theja", "" ], [ "Trivedi", "Amit Ranjan", "" ] ]
This paper introduces a lightweight uncertainty estimator capable of predicting multimodal (disjoint) uncertainty bounds by integrating conformal prediction with a deep-learning regressor. We specifically discuss its application for visual odometry (VO), where environmental features such as flying domain symmetries and sensor measurements under ambiguities and occlusion can result in multimodal uncertainties. Our simulation results show that uncertainty estimates in our framework adapt sample-wise against challenging operating conditions such as pronounced noise, limited training data, and limited parametric size of the prediction model. We also develop a reasoning framework that leverages these robust uncertainty estimates and incorporates optical flow-based reasoning to improve prediction accuracy. Thus, by appropriately accounting for predictive uncertainties of data-driven learning and closing their estimation loop via rule-based reasoning, our methodology consistently surpasses conventional deep learning approaches on all these challenging scenarios--pronounced noise, limited training data, and limited model size--reducing the prediction error by 2-3x.