Dataset schema (field: type, observed length range):

id: string (length 9-10)
submitter: string (length 1-64)
authors: string (length 4-20.7k)
title: string (length 4-246)
comments: string (length 1-523)
journal-ref: string (length 4-404)
doi: string (length 11-153)
report-no: string (length 2-254)
categories: string (length 5-98)
license: string (9 distinct values)
orig_abstract: string (length 14-3.35k)
versions: list (1-60 items)
update_date: string (length 10)
authors_parsed: list (1-1.35k items)
abstract: string (length 11-3.34k)
2312.13115
Youjia Li
Youjia Li, Jianjun Shi, Zheng Zhang
A Novel Approach for Rapid Development Based on ChatGPT and Prompt Engineering
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Code generation stands as a powerful technique in modern software development, improving development efficiency, reducing errors, and fostering standardization and consistency. Recently, ChatGPT has exhibited immense potential in automatic code generation. However, existing research on code generation lacks guidance for the practical software development process. In this study, we utilized ChatGPT to develop a web-based code generation platform consisting of key components: User Interface, Prompt Builder and Backend Service. Specifically, the Prompt Builder dynamically generated comprehensive prompts to enhance model generation performance. We conducted experiments on 2 datasets, evaluating the generated code through 8 widely used metrics. The results demonstrate that (1) our Prompt Builder is effective, resulting in a 65.06% improvement in EM, a 38.45% improvement in BLEU, a 15.70% improvement in CodeBLEU, and a 50.64% improvement in Pass@1; (2) in real development scenarios, 98.5% of test cases can be validated through manual validation, highlighting the genuine assistance provided by the ChatGPT-based code generation approach.
[ { "created": "Wed, 20 Dec 2023 15:36:13 GMT", "version": "v1" }, { "created": "Thu, 21 Dec 2023 03:28:41 GMT", "version": "v2" } ]
2023-12-22
[ [ "Li", "Youjia", "" ], [ "Shi", "Jianjun", "" ], [ "Zhang", "Zheng", "" ] ]
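The abstract does not detail the Prompt Builder's internals. As a minimal sketch, such a builder might compose whatever context is available into one prompt; the function name, fields, and wording below are all hypothetical, not the paper's design:

```python
def build_prompt(task, signature=None, examples=None, constraints=None):
    """Assemble a code-generation prompt from whatever context is available.

    Every part is optional; richer context yields a more specific prompt.
    """
    parts = [f"Task: {task}"]
    if signature:
        parts.append(f"Target function signature: {signature}")
    for i, (inp, out) in enumerate(examples or [], 1):
        parts.append(f"Example {i}: input={inp!r} -> output={out!r}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    parts.append("Return only the code, no explanation.")
    return "\n".join(parts)

prompt = build_prompt(
    "Reverse a string",
    signature="def reverse(s: str) -> str",
    examples=[("abc", "cba")],
    constraints=["no built-in reversed()"],
)
```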
1705.01963
Ray Li
Venkatesan Guruswami and Ray Li
Polynomial time decodable codes for the binary deletion channel
arXiv admin note: substantial text overlap with arXiv:1612.06335. The published version of this paper incorrectly states the alphabet size in Theorem 3.4. This version states the result correctly
IEEE Trans. Information Theory 65(4): 2171 - 2178 (2019)
null
null
cs.IT cs.DS math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the random deletion channel, each bit is deleted independently with probability $p$. Codes of rate $(1-p)/9$, and thus of rate bounded away from $0$ for every $p < 1$, have long been known to exist for this channel. We give an explicit construction with polynomial time encoding and deletion correction algorithms with rate $c_0 (1-p)$ for an absolute constant $c_0 > 0$.
[ { "created": "Thu, 4 May 2017 18:18:07 GMT", "version": "v1" }, { "created": "Wed, 26 Jul 2017 17:10:26 GMT", "version": "v2" }, { "created": "Tue, 11 Jun 2019 23:12:58 GMT", "version": "v3" } ]
2019-06-13
[ [ "Guruswami", "Venkatesan", "" ], [ "Li", "Ray", "" ] ]
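The random deletion channel described in this record is straightforward to simulate; a short sketch (the paper's coding scheme itself is not reproduced here):

```python
import random

def deletion_channel(bits, p, rng=None):
    """Each bit is deleted independently with probability p; survivors keep their order."""
    rng = rng or random.Random()
    return [b for b in bits if rng.random() >= p]

rng = random.Random(0)
sent = [rng.randint(0, 1) for _ in range(1000)]
# With p = 0.3, roughly 70% of the bits survive on average.
received = deletion_channel(sent, p=0.3, rng=random.Random(1))
```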
2311.07763
Brian Barr PhD
Brian Barr, Noah Fatsi, Leif Hancox-Li, Peter Richter, Daniel Proano, and Caleb Mok
The Disagreement Problem in Faithfulness Metrics
6 pages (excluding refs and appendix)
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The field of explainable artificial intelligence (XAI) aims to explain how black-box machine learning models work. Much of the work centers around the holy grail of providing post-hoc feature attributions for any model architecture. While the pace of innovation around novel methods has slowed, the question remains of how to choose a method and how to make it fit for purpose. Recently, efforts around benchmarking XAI methods have suggested metrics for that purpose, but there are many choices. That bounty of choice still leaves an end user unclear on how to proceed. This paper compares metrics intended to measure the faithfulness of local explanations on tabular classification problems, and shows that the current metrics do not agree, leaving users unsure how to choose the most faithful explanations.
[ { "created": "Mon, 13 Nov 2023 21:26:24 GMT", "version": "v1" } ]
2023-11-15
[ [ "Barr", "Brian", "" ], [ "Fatsi", "Noah", "" ], [ "Hancox-Li", "Leif", "" ], [ "Richter", "Peter", "" ], [ "Proano", "Daniel", "" ], [ "Mok", "Caleb", "" ] ]
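The abstract does not name the metrics it compares. As an illustration only, here is one common deletion-style faithfulness score applied to a toy linear model; the disagreement problem arises when a second metric (e.g. an insertion-based one) ranks the same explanations differently:

```python
import numpy as np

def deletion_score(model, x, attribution, baseline=0.0):
    """Faithfulness via deletion: remove features in decreasing attributed
    importance and average the drop in model output. Larger = more faithful."""
    order = np.argsort(-np.abs(attribution))
    xm = x.astype(float).copy()
    f0 = model(xm)
    drops = []
    for j in order:
        xm[j] = baseline
        drops.append(f0 - model(xm))
    return float(np.mean(drops))

w = np.array([3.0, -2.0, 0.5, 0.0])
model = lambda x: float(x @ w)
x = np.ones(4)
expl_good = w * x                           # exact attributions for a linear model
expl_bad = np.array([0.0, 0.1, 2.0, 3.0])   # misranks the features
s_good = deletion_score(model, x, expl_good)
s_bad = deletion_score(model, x, expl_bad)
```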
2301.11518
Geng Zhao
Geng Zhao, Banghua Zhu, Jiantao Jiao, Michael I. Jordan
Online Learning in Stackelberg Games with an Omniscient Follower
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of online learning in a two-player decentralized cooperative Stackelberg game. In each round, the leader first takes an action, followed by the follower who takes their action after observing the leader's move. The goal of the leader is to learn to minimize the cumulative regret based on the history of interactions. Differing from the traditional formulation of repeated Stackelberg games, we assume the follower is omniscient, with full knowledge of the true reward, and that they always best-respond to the leader's actions. We analyze the sample complexity of regret minimization in this repeated Stackelberg game. We show that depending on the reward structure, the existence of the omniscient follower may change the sample complexity drastically, from constant to exponential, even for linear cooperative Stackelberg games. This poses unique challenges for the learning process of the leader and the subsequent regret analysis.
[ { "created": "Fri, 27 Jan 2023 03:35:10 GMT", "version": "v1" }, { "created": "Tue, 11 Apr 2023 20:27:37 GMT", "version": "v2" } ]
2023-04-13
[ [ "Zhao", "Geng", "" ], [ "Zhu", "Banghua", "" ], [ "Jiao", "Jiantao", "" ], [ "Jordan", "Michael I.", "" ] ]
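The interaction protocol in this record can be sketched for a toy cooperative matrix game. Because the follower is omniscient and the toy rewards here are noiseless, one pass over the leader's actions already finds the optimum; the paper's hardness results concern richer reward structures, which this sketch does not capture:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = 4, 3                        # leader / follower action counts
reward = rng.uniform(size=(A, B))  # shared reward in the cooperative game

def follower_best_response(a):
    """Omniscient follower: knows the true reward and best-responds exactly."""
    return int(np.argmax(reward[a]))

# Naive leader strategy: try each action once, observe the induced reward,
# then commit to the empirically best action.
observed = np.array([reward[a, follower_best_response(a)] for a in range(A)])
best_action = int(np.argmax(observed))
```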
2205.13206
Achille Felicetti
Franco Niccolucci, Achille Felicetti, Sorin Hermon
Populating the Digital Space for Cultural Heritage with Heritage Digital Twins
Submitted to Data - An Open Access Journal from MDPI. 29 pages, 9 figures
Data 2022, 7(8), 105
10.3390/data7080105
null
cs.DL cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
The present paper concerns the design of the semantic infrastructure of the digital space for cultural heritage as envisaged by the European Commission in its recent documents. Due to the complexity of cultural heritage data and of their intrinsic interrelationships, it is necessary to introduce a novel ontology that remains compliant with existing standards and interoperable with platforms previously used in this context, such as Europeana. The digital space organization must be tailored to the methods and the theory of cultural heritage, briefly summarized in the introduction. The new ontology is based on the Digital Twin concept, i.e. the digital counterpart of cultural heritage assets incorporating all the digital information pertaining to them. This creates a Knowledge Base for the cultural heritage digital space. The paper outlines the main features of the proposed Heritage Digital Twin ontology and provides some examples of application. Future work will include completing the ontology in all its details and testing it in other real cases and with the various sectors of the cultural heritage community.
[ { "created": "Thu, 26 May 2022 07:49:27 GMT", "version": "v1" } ]
2023-02-16
[ [ "Niccolucci", "Franco", "" ], [ "Felicetti", "Achille", "" ], [ "Hermon", "Sorin", "" ] ]
2405.18751
Jordi Armengol-Estap\'e
Jordi Armengol-Estap\'e, Vincent Michalski, Ramnath Kumar, Pierre-Luc St-Charles, Doina Precup and Samira Ebrahimi Kahou
On the Limits of Multi-modal Meta-Learning with Auxiliary Task Modulation Using Conditional Batch Normalization
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples. Recent studies show that cross-modal learning can improve representations for few-shot classification. More specifically, language is a rich modality that can be used to guide visual learning. In this work, we experiment with a multi-modal architecture for few-shot learning that consists of three components: a classifier, an auxiliary network, and a bridge network. While the classifier performs the main classification task, the auxiliary network learns to predict language representations from the same input, and the bridge network transforms high-level features of the auxiliary network into modulation parameters for layers of the few-shot classifier using conditional batch normalization. The bridge should encourage a form of lightweight semantic alignment between language and vision which could be useful for the classifier. However, after evaluating the proposed approach on two popular few-shot classification benchmarks we find that a) the improvements do not reproduce across benchmarks, and b) when they do, the improvements are due to the additional compute and parameters introduced by the bridge network. We contribute insights and recommendations for future work in multi-modal meta-learning, especially when using language representations.
[ { "created": "Wed, 29 May 2024 04:29:12 GMT", "version": "v1" }, { "created": "Thu, 30 May 2024 14:13:05 GMT", "version": "v2" } ]
2024-05-31
[ [ "Armengol-Estapé", "Jordi", "" ], [ "Michalski", "Vincent", "" ], [ "Kumar", "Ramnath", "" ], [ "St-Charles", "Pierre-Luc", "" ], [ "Precup", "Doina", "" ], [ "Kahou", "Samira Ebrahimi", "" ] ]
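Conditional batch normalization as described in this record, with a bridge network predicting the scale and shift from auxiliary features, can be sketched in NumPy. The linear bridge and all dimensions below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def conditional_batch_norm(x, aux, W_gamma, b_gamma, W_beta, b_beta, eps=1e-5):
    """Normalize features per channel, then scale/shift with parameters
    predicted from auxiliary (e.g. language) features by a bridge network."""
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    gamma = aux @ W_gamma + b_gamma   # bridge: aux features -> modulation
    beta = aux @ W_beta + b_beta
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
batch, channels, aux_dim = 8, 4, 6
x = rng.normal(size=(batch, channels))
aux = rng.normal(size=(aux_dim,))
W_gamma = rng.normal(size=(aux_dim, channels)) * 0.1
W_beta = rng.normal(size=(aux_dim, channels)) * 0.1
out = conditional_batch_norm(x, aux, W_gamma, 1.0, W_beta, 0.0)
```

With a zero bridge (W_gamma = W_beta = 0, b_gamma = 1, b_beta = 0) this reduces to plain batch normalization.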
1609.03545
Hayko Riemenschneider
Julien Weissenberg and Hayko Riemenschneider and Ralf Dragon and Luc Van Gool
Dilemma First Search for Effortless Optimization of NP-Hard Problems
To be published at ICPR 2016
null
null
null
cs.DS cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To tackle the exponentiality associated with NP-hard problems, two paradigms have been proposed. First, Branch & Bound and Dynamic Programming achieve efficient exact inference but require extensive information about, and analysis of, the problem at hand. Second, meta-heuristics are easier to implement but comparatively inefficient. As a result, a number of problems have been left unoptimized and plain greedy solutions are used. We introduce a theoretical framework and propose a powerful yet simple search method called Dilemma First Search (DFS). DFS exploits the decision heuristic needed for the greedy solution for further optimization. DFS is useful when it is hard to design efficient exact inference. We evaluate DFS on two problems: first, the Knapsack problem, for which efficient algorithms exist, serves as a toy example; second, Decision Tree inference, where state-of-the-art algorithms rely on greedy or randomness-based solutions. We further show that decision trees benefit from optimizations that are performed in a fraction of the iterations required by a random-based search.
[ { "created": "Mon, 12 Sep 2016 19:36:02 GMT", "version": "v1" } ]
2016-09-13
[ [ "Weissenberg", "Julien", "" ], [ "Riemenschneider", "Hayko", "" ], [ "Dragon", "Ralf", "" ], [ "Van Gool", "Luc", "" ] ]
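The greedy baseline that DFS builds on can be illustrated with the classic value/weight-ratio heuristic for 0/1 knapsack; DFS itself (revisiting the least decisive heuristic choices, the "dilemmas", first) is not reproduced here:

```python
def greedy_knapsack(items, capacity):
    """Greedy 0/1 knapsack heuristic: take items in decreasing value/weight
    ratio while they still fit. The ratio is exactly the kind of decision
    heuristic a dilemma-first search could exploit for further optimization."""
    chosen, total_value, total_weight = [], 0, 0
    for i, (v, w) in sorted(enumerate(items), key=lambda t: -t[1][0] / t[1][1]):
        if total_weight + w <= capacity:
            chosen.append(i)
            total_value += v
            total_weight += w
    return chosen, total_value

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight)
chosen, value = greedy_knapsack(items, capacity=50)
```

On this instance the greedy heuristic returns value 160, while the optimum is 220 (items 1 and 2); that gap is the room a subsequent search can exploit.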
2110.08797
Gen Luo
Gen Luo, Yiyi Zhou, Xiaoshuai Sun, Yongjian Wu, Yue Gao, Rongrong Ji
Towards Language-guided Visual Recognition via Dynamic Convolutions
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we are committed to establishing a unified and end-to-end multi-modal network by exploring language-guided visual recognition. To approach this target, we first propose a novel multi-modal convolution module called Language-dependent Convolution (LaConv). Its convolution kernels are dynamically generated based on natural language information, which helps extract differentiated visual features for different multi-modal examples. Based on the LaConv module, we further build the first fully language-driven convolution network, termed LaConvNet, which can unify visual recognition and multi-modal reasoning in one forward structure. To validate LaConv and LaConvNet, we conduct extensive experiments on four benchmark datasets for two vision-and-language tasks, i.e., visual question answering (VQA) and referring expression comprehension (REC). The experimental results not only show the performance gains of LaConv over existing multi-modal modules, but also demonstrate the merits of LaConvNet as a unified network, including compactness, high generalization ability and excellent performance, e.g., +4.7% on RefCOCO+.
[ { "created": "Sun, 17 Oct 2021 11:29:13 GMT", "version": "v1" }, { "created": "Thu, 14 Sep 2023 13:37:38 GMT", "version": "v2" } ]
2023-09-15
[ [ "Luo", "Gen", "" ], [ "Zhou", "Yiyi", "" ], [ "Sun", "Xiaoshuai", "" ], [ "Wu", "Yongjian", "" ], [ "Gao", "Yue", "" ], [ "Ji", "Rongrong", "" ] ]
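The core idea in this record, a convolution kernel generated from a language embedding, can be sketched in 1D NumPy. The linear kernel generator and all dimensions below are assumptions for illustration; the actual LaConv module is defined in the paper:

```python
import numpy as np

def laconv_1d(features, text_emb, W):
    """Language-dependent convolution (sketch): the kernel is generated from
    the language embedding, so different queries filter the same features
    differently."""
    kernel = W @ text_emb                       # dynamic kernel of size k
    k = kernel.size
    padded = np.pad(features, (k // 2, k // 2))
    return np.array([padded[i:i + k] @ kernel for i in range(features.size)])

rng = np.random.default_rng(0)
feat = rng.normal(size=16)        # 1D visual feature track
emb_a = rng.normal(size=8)        # embedding of query A
emb_b = rng.normal(size=8)        # embedding of query B
W = rng.normal(size=(3, 8))       # kernel generator (k = 3)
out_a = laconv_1d(feat, emb_a, W)
out_b = laconv_1d(feat, emb_b, W)
```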
1611.03971
Tayfun Tuna
Tayfun Tuna and Esra Akbas and Ahmet Aksoy and Muhammed Abdullah Canbaz and Umit Karabiyik and Bilal Gonen and Ramazan Aygun
User characterization for online social networks
null
Soc. Netw. Anal. Min. (2016) 6: 104. doi:10.1007/s13278-016-0412-3
10.1007/s13278-016-0412-3
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online social network analysis has attracted great attention, with a vast number of users sharing information and APIs available for crawling online social network data. In this paper, we survey research that is helpful for user characterization, since online users may not always reveal their true identity or attributes. We especially focus on user attribute determination such as gender and age; user behavior analysis such as motives for deception; mental models that are indicators of user behavior; user categorization such as bots vs. humans; and entity matching across different social networks. We believe our summary and analysis of user characterization will provide important insights to researchers and better services to online users.
[ { "created": "Sat, 12 Nov 2016 08:30:18 GMT", "version": "v1" }, { "created": "Tue, 27 Dec 2016 00:11:13 GMT", "version": "v2" } ]
2016-12-28
[ [ "Tuna", "Tayfun", "" ], [ "Akbas", "Esra", "" ], [ "Aksoy", "Ahmet", "" ], [ "Canbaz", "Muhammed Abdullah", "" ], [ "Karabiyik", "Umit", "" ], [ "Gonen", "Bilal", "" ], [ "Aygun", "Ramazan", "" ] ]
2307.14208
Tanapol Kosolwattana
Tanapol Kosolwattana, Huazheng Wang, Ying Lin
Online Modeling and Monitoring of Dependent Processes under Resource Constraints
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adaptive monitoring of a large population of dynamic processes is critical for the timely detection of abnormal events under limited resources in many healthcare and engineering systems. Examples include risk-based disease screening and condition-based process monitoring. However, existing adaptive monitoring models either ignore the dependency among processes or overlook the uncertainty in process modeling. To design an optimal monitoring strategy that accurately monitors the processes with poor health conditions and actively collects information for uncertainty reduction, a novel online collaborative learning method is proposed in this study. The proposed method designs a collaborative learning-based upper confidence bound (CL-UCB) algorithm to optimally balance the exploitation and exploration of dependent processes under limited resources. The efficiency of the proposed method is demonstrated through theoretical analysis, simulation studies and an empirical study of adaptive cognitive monitoring in Alzheimer's disease.
[ { "created": "Wed, 26 Jul 2023 14:14:38 GMT", "version": "v1" }, { "created": "Sat, 21 Oct 2023 23:14:34 GMT", "version": "v2" } ]
2023-10-24
[ [ "Kosolwattana", "Tanapol", "" ], [ "Wang", "Huazheng", "" ], [ "Lin", "Ying", "" ] ]
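The CL-UCB algorithm itself is not given in the abstract. A plain UCB monitor that picks a budget-limited subset of independent processes each round conveys the exploitation/exploration trade-off; the collaborative, dependency-aware part of the paper's method is deliberately omitted from this sketch:

```python
import math
import random

def ucb_monitor(true_means, budget, rounds, seed=0):
    """Each round, monitor the `budget` processes with the highest UCB index
    (empirical mean + exploration bonus) and observe a noisy severity signal."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n
    sums = [0.0] * n
    for t in range(1, rounds + 1):
        ucb = [
            float("inf") if counts[i] == 0
            else sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
            for i in range(n)
        ]
        picked = sorted(range(n), key=lambda i: -ucb[i])[:budget]
        for i in picked:
            counts[i] += 1
            sums[i] += true_means[i] + rng.gauss(0, 0.1)
    return counts

# Four processes with different severity; resources allow monitoring only two.
counts = ucb_monitor([0.9, 0.5, 0.4, 0.1], budget=2, rounds=500)
```

The worst process (mean 0.9) ends up monitored far more often than the healthiest one.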
2303.05807
Ziteng Cui
Ziteng Cui, Lin Gu, Xiao Sun, Xianzheng Ma, Yu Qiao, Tatsuya Harada
Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields
website page: https://cuiziteng.github.io/Aleth_NeRF_web/, refer to new version: arXiv:2312.09093
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Low-light scenes captured under common conditions are challenging for most computer vision techniques, including Neural Radiance Fields (NeRF). Vanilla NeRF is viewer-centred: it simplifies the rendering process to light emission from 3D locations in the viewing direction, and thus fails to model low-illumination-induced darkness. Inspired by the emission theory of the ancient Greeks, in which visual perception is accomplished by rays cast from the eyes, we make slight modifications to vanilla NeRF so that, trained on multiple views of a low-light scene, it can render the well-lit scene in an unsupervised manner. We introduce a surrogate concept, Concealing Fields, that reduces the transport of light during the volume rendering stage. Specifically, our proposed method, Aleth-NeRF, learns directly from dark images to understand the volumetric object representation and the concealing field under priors. By simply eliminating the Concealing Fields, we can render single- or multi-view well-lit images and gain superior performance over other 2D low-light enhancement methods. Additionally, we collect the first paired LOw-light and normal-light Multi-view (LOM) dataset for future research. This version is superseded; please refer to our new AAAI version: arXiv:2312.09093
[ { "created": "Fri, 10 Mar 2023 09:28:09 GMT", "version": "v1" }, { "created": "Sat, 30 Dec 2023 02:42:12 GMT", "version": "v2" } ]
2024-01-02
[ [ "Cui", "Ziteng", "" ], [ "Gu", "Lin", "" ], [ "Sun", "Xiao", "" ], [ "Ma", "Xianzheng", "" ], [ "Qiao", "Yu", "" ], [ "Harada", "Tatsuya", "" ] ]
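Aleth-NeRF's exact formulation is in the paper; as a toy model only, a concealing field can be treated as a purely absorbing term in alpha compositing along a ray, so that removing it brightens the render:

```python
import numpy as np

def render_ray(density, emission, concealing=None):
    """Alpha-composite emission along a ray. The optional concealing field
    absorbs light without emitting any, so it only darkens the result."""
    if concealing is None:
        concealing = np.zeros_like(density)
    alpha = 1.0 - np.exp(-density)
    absorb = np.exp(-concealing)              # extra attenuation per sample
    atten = (1.0 - alpha) * absorb
    trans = np.cumprod(np.concatenate([[1.0], atten[:-1]]))
    return float(np.sum(trans * alpha * absorb * emission))

density = np.array([0.2, 0.5, 0.8])
emission = np.array([1.0, 0.8, 0.6])
conceal = np.array([0.5, 0.5, 0.5])
dark = render_ray(density, emission, conceal)  # training-time, low-light view
lit = render_ray(density, emission)            # concealing field eliminated
```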
1910.08185
Wail Alkowaileet
Wail Y. Alkowaileet, Sattam Alsubaiee and Michael J. Carey
An LSM-based Tuple Compaction Framework for Apache AsterixDB (Extended Version)
18 pages, 28 figures, to appear in VLDB 2020
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Document database systems store self-describing semi-structured records, such as JSON, "as-is" without requiring the users to pre-define a schema. This provides users with the flexibility to change the structure of incoming records without worrying about taking the system offline or hindering the performance of currently running queries. However, the flexibility of such systems does not come for free. The large amount of redundancy in the records can introduce unnecessary storage overhead and impact query performance. Our focus in this paper is to address the storage overhead issue by introducing a tuple compactor framework that infers and extracts the schema from self-describing semi-structured records during data ingestion. As many prominent document stores, such as MongoDB and Couchbase, adopt Log-Structured Merge (LSM) trees in their storage engines, our framework exploits LSM lifecycle events to piggyback the schema inference and extraction operations. We have implemented and empirically evaluated our approach to measure its impact on storage, data ingestion, and query performance in the context of Apache AsterixDB.
[ { "created": "Thu, 17 Oct 2019 22:13:40 GMT", "version": "v1" }, { "created": "Mon, 11 May 2020 05:23:31 GMT", "version": "v2" } ]
2020-05-12
[ [ "Alkowaileet", "Wail Y.", "" ], [ "Alsubaiee", "Sattam", "" ], [ "Carey", "Michael J.", "" ] ]
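Schema inference of the kind this record's tuple compactor performs can be sketched as a union of observed types per field across semi-structured records; the LSM piggybacking and the extraction/compaction step are omitted from this sketch:

```python
def infer_schema(records, schema=None):
    """Infer a field -> set-of-type-names schema from self-describing records
    by unioning the types seen for each field across the batch."""
    schema = schema if schema is not None else {}
    for rec in records:
        for field, value in rec.items():
            schema.setdefault(field, set()).add(type(value).__name__)
    return schema

records = [
    {"id": 1, "name": "Ann", "tags": ["a", "b"]},
    {"id": 2, "name": "Bo", "age": 41},
    {"id": "3-x", "name": "Cy"},   # same field, different type
]
schema = infer_schema(records)
```

A heterogeneously-typed field (like `id` above) ends up as a union type, which is exactly the case a compactor must represent explicitly.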
1302.4767
Nicola Laurenti
Francesco Renna, Nicola Laurenti, Stefano Tomasin, Marco Baldi, Nicola Maturo, Marco Bianchi, Franco Chiaraluce and Matthieu Bloch
Low-power Secret-key Agreement over OFDM
9 pages, 4 figures; this is the authors prepared version of the paper with the same name accepted for HotWiSec 2013, the Second ACM Workshop on Hot Topics on Wireless Network Security and Privacy, Budapest, Hungary 17-19 April 2013
null
null
null
cs.IT cs.CR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information-theoretic secret-key agreement is perhaps the most practically feasible mechanism that provides unconditional security at the physical layer to date. In this paper, we consider the problem of secret-key agreement by sharing randomness at low power over an orthogonal frequency division multiplexing (OFDM) link, in the presence of an eavesdropper. The low power assumption greatly simplifies the design of the randomness sharing scheme, even in a fading channel scenario. We assess the performance of the proposed system in terms of secrecy key rate and show that a practical approach to key sharing is obtained by using low-density parity check (LDPC) codes for information reconciliation. Numerical results confirm the merits of the proposed approach as a feasible and practical solution. Moreover, the outage formulation makes it possible to implement secret-key agreement even when only statistical knowledge of the eavesdropper channel is available.
[ { "created": "Tue, 19 Feb 2013 22:19:12 GMT", "version": "v1" } ]
2013-02-21
[ [ "Renna", "Francesco", "" ], [ "Laurenti", "Nicola", "" ], [ "Tomasin", "Stefano", "" ], [ "Baldi", "Marco", "" ], [ "Maturo", "Nicola", "" ], [ "Bianchi", "Marco", "" ], [ "Chiaraluce", "Franco", "" ], [ "Bloch", "Matthieu", "" ] ]
Information-theoretic secret-key agreement is perhaps the most practically feasible mechanism that provides unconditional security at the physical layer to date. In this paper, we consider the problem of secret-key agreement by sharing randomness at low power over an orthogonal frequency division multiplexing (OFDM) link, in the presence of an eavesdropper. The low power assumption greatly simplifies the design of the randomness sharing scheme, even in a fading channel scenario. We assess the performance of the proposed system in terms of secrecy key rate and show that a practical approach to key sharing is obtained by using low-density parity check (LDPC) codes for information reconciliation. Numerical results confirm the merits of the proposed approach as a feasible and practical solution. Moreover, the outage formulation makes it possible to implement secret-key agreement even when only statistical knowledge of the eavesdropper channel is available.
2407.01976
Jinghui Lu
Jinghui Lu, Haiyang Yu, Yanjie Wang, Yongjie Ye, Jingqun Tang, Ziwei Yang, Binghong Wu, Qi Liu, Hao Feng, Han Wang, Hao Liu, Can Huang
A Bounding Box is Worth One Token: Interleaving Layout and Text in a Large Language Model for Document Understanding
null
null
null
null
cs.CL cs.AI cs.MM
http://creativecommons.org/publicdomain/zero/1.0/
Recently, many studies have demonstrated that exclusively incorporating OCR-derived text and spatial layouts with large language models (LLMs) can be highly effective for document understanding tasks. However, existing methods that integrate spatial layouts with text have limitations, such as producing overly long text sequences or failing to fully leverage the autoregressive traits of LLMs. In this work, we introduce Interleaving Layout and Text in a Large Language Model (LayTextLLM) for document understanding. In particular, LayTextLLM projects each bounding box to a single embedding and interleaves it with text, efficiently avoiding long sequence issues while leveraging autoregressive traits of LLMs. LayTextLLM not only streamlines the interaction of layout and textual data but also shows enhanced performance in Key Information Extraction (KIE) and Visual Question Answering (VQA). Comprehensive benchmark evaluations reveal significant improvements, with a 27.2% increase on KIE tasks and 12.0% on VQA tasks compared to previous state-of-the-art document understanding MLLMs, as well as a 15.1% improvement over other SOTA OCR-based LLMs on KIE tasks.
[ { "created": "Tue, 2 Jul 2024 06:29:05 GMT", "version": "v1" }, { "created": "Wed, 24 Jul 2024 11:45:48 GMT", "version": "v2" } ]
2024-07-25
[ [ "Lu", "Jinghui", "" ], [ "Yu", "Haiyang", "" ], [ "Wang", "Yanjie", "" ], [ "Ye", "Yongjie", "" ], [ "Tang", "Jingqun", "" ], [ "Yang", "Ziwei", "" ], [ "Wu", "Binghong", "" ], [ "Liu", "Qi", "" ], [ "Feng", "Hao", "" ], [ "Wang", "Han", "" ], [ "Liu", "Hao", "" ], [ "Huang", "Can", "" ] ]
Recently, many studies have demonstrated that exclusively incorporating OCR-derived text and spatial layouts with large language models (LLMs) can be highly effective for document understanding tasks. However, existing methods that integrate spatial layouts with text have limitations, such as producing overly long text sequences or failing to fully leverage the autoregressive traits of LLMs. In this work, we introduce Interleaving Layout and Text in a Large Language Model (LayTextLLM) for document understanding. In particular, LayTextLLM projects each bounding box to a single embedding and interleaves it with text, efficiently avoiding long sequence issues while leveraging autoregressive traits of LLMs. LayTextLLM not only streamlines the interaction of layout and textual data but also shows enhanced performance in Key Information Extraction (KIE) and Visual Question Answering (VQA). Comprehensive benchmark evaluations reveal significant improvements, with a 27.2% increase on KIE tasks and 12.0% on VQA tasks compared to previous state-of-the-art document understanding MLLMs, as well as a 15.1% improvement over other SOTA OCR-based LLMs on KIE tasks.
2007.05216
Sajan Kedia
Sajan Kedia, Samyak Jain, Abhishek Sharma
Price Optimization in Fashion E-commerce
8 pages, 6 figures, AI for fashion supply chain Conference
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid growth of the fashion e-commerce industry, it is becoming extremely challenging for e-tailers to set an optimal price point for all the products on the platform. By establishing an optimal price point, they can maximize overall revenue and profit for the platform. In this paper, we propose a novel machine learning and optimization technique to find the optimal price point at an individual product level. It comprises three major components. Firstly, we use a demand prediction model to predict the next day's demand for each product at a certain discount percentage. Next, we use the concept of price elasticity of demand to get multiple demand values by varying the discount percentage. Thus we obtain multiple price-demand pairs for each product, and we have to choose one of them for the live platform. Typically fashion e-commerce has millions of products, so there can be many permutations. Each permutation will assign a unique price point to all the products, which will sum up to a unique revenue number. To choose the best permutation, which gives maximum revenue, a linear programming optimization technique is used. We have deployed the above methods in the live production environment and conducted several AB tests. According to the AB test results, our model improves revenue by 1 percent and gross margin by 0.81 percent.
[ { "created": "Fri, 10 Jul 2020 07:40:28 GMT", "version": "v1" }, { "created": "Mon, 24 Aug 2020 10:18:53 GMT", "version": "v2" } ]
2020-08-25
[ [ "Kedia", "Sajan", "" ], [ "Jain", "Samyak", "" ], [ "Sharma", "Abhishek", "" ] ]
With the rapid growth of the fashion e-commerce industry, it is becoming extremely challenging for e-tailers to set an optimal price point for all the products on the platform. By establishing an optimal price point, they can maximize overall revenue and profit for the platform. In this paper, we propose a novel machine learning and optimization technique to find the optimal price point at an individual product level. It comprises three major components. Firstly, we use a demand prediction model to predict the next day's demand for each product at a certain discount percentage. Next, we use the concept of price elasticity of demand to get multiple demand values by varying the discount percentage. Thus we obtain multiple price-demand pairs for each product, and we have to choose one of them for the live platform. Typically fashion e-commerce has millions of products, so there can be many permutations. Each permutation will assign a unique price point to all the products, which will sum up to a unique revenue number. To choose the best permutation, which gives maximum revenue, a linear programming optimization technique is used. We have deployed the above methods in the live production environment and conducted several AB tests. According to the AB test results, our model improves revenue by 1 percent and gross margin by 0.81 percent.
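In the unconstrained case, the permutation search described above decomposes per product: the revenue-maximizing assignment simply picks, for each product, the price-demand pair with the highest revenue. A toy sketch with hypothetical numbers (a real deployment would add business constraints and solve the LP with a solver):

```python
def pick_best_prices(candidates):
    """candidates: {product: [(price, predicted_demand), ...]}.

    Choose, per product, the pair maximizing revenue = price * demand.
    Without cross-product constraints this is the LP's optimal solution.
    """
    choice = {}
    for product, pairs in candidates.items():
        choice[product] = max(pairs, key=lambda p: p[0] * p[1])
    return choice


# Hypothetical price-demand pairs from a demand model.
candidates = {
    "shirt": [(10.0, 100), (8.0, 130), (6.0, 150)],
    "jeans": [(40.0, 20), (35.0, 24)],
}
best = pick_best_prices(candidates)
total_revenue = sum(price * demand for price, demand in best.values())
```

With constraints such as a platform-wide average discount cap, the per-product choices become coupled and the full linear program has to be solved instead.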
1901.07634
Lam Nguyen
Lam M. Nguyen, Phuong Ha Nguyen, Dzung T. Phan, Jayant R. Kalagnanam, Marten van Dijk
DTN: A Learning Rate Scheme with Convergence Rate of $\mathcal{O}(1/t)$ for SGD
This paper has inconsistent results, i.e., we made some incorrect claims because of mistakes in applying the test criterion for a series
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper has some inconsistent results, i.e., we made some incorrect claims because of mistakes in applying the test criterion for a series. Precisely, our claims on the convergence rate of $\mathcal{O}(1/t)$ of SGD presented in Theorem 1, Corollary 1, Theorem 2 and Corollary 2 are wrongly derived because they are based on Lemma 5. In Lemma 5, we did not correctly apply the test criterion for a series. Hence, the result of Lemma 5 is not valid. We would like to thank the community for pointing out this mistake!
[ { "created": "Tue, 22 Jan 2019 22:40:31 GMT", "version": "v1" }, { "created": "Mon, 28 Jan 2019 21:55:19 GMT", "version": "v2" }, { "created": "Thu, 28 Feb 2019 02:01:20 GMT", "version": "v3" } ]
2019-03-01
[ [ "Nguyen", "Lam M.", "" ], [ "Nguyen", "Phuong Ha", "" ], [ "Phan", "Dzung T.", "" ], [ "Kalagnanam", "Jayant R.", "" ], [ "van Dijk", "Marten", "" ] ]
This paper has some inconsistent results, i.e., we made some incorrect claims because of mistakes in applying the test criterion for a series. Precisely, our claims on the convergence rate of $\mathcal{O}(1/t)$ of SGD presented in Theorem 1, Corollary 1, Theorem 2 and Corollary 2 are wrongly derived because they are based on Lemma 5. In Lemma 5, we did not correctly apply the test criterion for a series. Hence, the result of Lemma 5 is not valid. We would like to thank the community for pointing out this mistake!
2005.12712
Benedikt Kleinmeier
Benedikt Kleinmeier, Gerta K\"oster, John Drury
Agent-Based Simulation of Collective Cooperation: From Experiment to Model
16 pages, 19 figures, 5 tables, 4 listings, interdisciplinary work between computer science and psychology
Journal of the Royal Society Interface (October 2020, Volume 17, Issue 171)
10.1098/rsif.2020.0396
null
cs.MA cs.CY
http://creativecommons.org/licenses/by/4.0/
Simulation models of pedestrian dynamics have become an invaluable tool for evacuation planning. Typically crowds are assumed to stream unidirectionally towards a safe area. Simulated agents avoid collisions through mechanisms that belong to each individual, such as being repelled from each other by imaginary forces. But classic locomotion models fail when collective cooperation is called for, notably when an agent, say a first-aid attendant, needs to forge a path through a densely packed group. We present a controlled experiment to observe what happens when humans pass through a dense static crowd. We formulate and test hypotheses on salient phenomena. We discuss our observations in a psychological framework. We derive a model that incorporates: agents' perception and cognitive processing of a situation that needs cooperation; selection from a portfolio of behaviours, such as being cooperative; and a suitable action, such as swapping places. Agents' ability to successfully get through a dense crowd emerges as an effect of the psychological model.
[ { "created": "Tue, 26 May 2020 13:29:08 GMT", "version": "v1" }, { "created": "Wed, 7 Oct 2020 09:40:47 GMT", "version": "v2" } ]
2020-10-08
[ [ "Kleinmeier", "Benedikt", "" ], [ "Köster", "Gerta", "" ], [ "Drury", "John", "" ] ]
Simulation models of pedestrian dynamics have become an invaluable tool for evacuation planning. Typically crowds are assumed to stream unidirectionally towards a safe area. Simulated agents avoid collisions through mechanisms that belong to each individual, such as being repelled from each other by imaginary forces. But classic locomotion models fail when collective cooperation is called for, notably when an agent, say a first-aid attendant, needs to forge a path through a densely packed group. We present a controlled experiment to observe what happens when humans pass through a dense static crowd. We formulate and test hypotheses on salient phenomena. We discuss our observations in a psychological framework. We derive a model that incorporates: agents' perception and cognitive processing of a situation that needs cooperation; selection from a portfolio of behaviours, such as being cooperative; and a suitable action, such as swapping places. Agents' ability to successfully get through a dense crowd emerges as an effect of the psychological model.
2308.03019
Pratapa Redy Lankireddy
Naveenkumar Vodnala (VNR Vignana Jyothi Institute of Engineering and Technology), Pratap Reddy Lankireddy (Jawaharlal Nehru Technological University Hyderabad), Padmasai Yarlagadda (VNR Vignana Jyothi Institute of Engineering and Technology)
Characterization of cough sounds using statistical analysis
19 pages, 8 figures, paper submitted to journal Biomedical Signal Processing and Control which is under review
null
null
null
cs.SD eess.AS eess.SP
http://creativecommons.org/licenses/by-sa/4.0/
Cough is a primary symptom of most respiratory diseases, and changes in cough characteristics provide valuable information for diagnosing respiratory diseases. The characterization of cough sounds still lacks concrete evidence, which makes it difficult to accurately distinguish between different types of coughs and other sounds. The objective of this research work is to characterize cough sounds with voiced content and cough sounds without voiced content. Further, the cough sound characteristics are compared with the characteristics of speech. To achieve this goal, the proposed method utilizes spectral roll-off, spectral entropy, spectral flatness, spectral flux, zero crossing rate, spectral centroid, and spectral bandwidth attributes, which describe cough sounds in terms of the respiratory system, glottal information, and the voice model. These attributes are then subjected to statistical analysis using the measures of minimum, maximum, mean, median, and standard deviation. The experimental results show that the mean and frequency distribution of spectral roll-off, spectral centroid, and spectral bandwidth are higher for cough sounds than for speech signals. Spectral flatness levels in cough sounds rise to 0.22, whereas spectral flux varies between 0.3 and 0.6. The Zero Crossing Rate (ZCR) of most frames of cough sounds is between 0.05 and 0.4. These attributes contribute significant information when characterizing cough sounds.
[ { "created": "Sun, 6 Aug 2023 04:26:52 GMT", "version": "v1" } ]
2023-08-08
[ [ "Vodnala", "Naveenkumar", "", "VNR Vignana Jyothi Institute of Engineering and Technology" ], [ "Lankireddy", "Pratap Reddy", "", "Jawaharlal Nehru Technological University Hyderabad" ], [ "Yarlagadda", "Padmasai", "", "VNR Vignana Jyothi Institute of Engineering and Technology" ] ]
Cough is a primary symptom of most respiratory diseases, and changes in cough characteristics provide valuable information for diagnosing respiratory diseases. The characterization of cough sounds still lacks concrete evidence, which makes it difficult to accurately distinguish between different types of coughs and other sounds. The objective of this research work is to characterize cough sounds with voiced content and cough sounds without voiced content. Further, the cough sound characteristics are compared with the characteristics of speech. To achieve this goal, the proposed method utilizes spectral roll-off, spectral entropy, spectral flatness, spectral flux, zero crossing rate, spectral centroid, and spectral bandwidth attributes, which describe cough sounds in terms of the respiratory system, glottal information, and the voice model. These attributes are then subjected to statistical analysis using the measures of minimum, maximum, mean, median, and standard deviation. The experimental results show that the mean and frequency distribution of spectral roll-off, spectral centroid, and spectral bandwidth are higher for cough sounds than for speech signals. Spectral flatness levels in cough sounds rise to 0.22, whereas spectral flux varies between 0.3 and 0.6. The Zero Crossing Rate (ZCR) of most frames of cough sounds is between 0.05 and 0.4. These attributes contribute significant information when characterizing cough sounds.
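The Zero Crossing Rate quoted above (0.05-0.4 for most cough frames) has a standard frame-level definition: the fraction of consecutive sample pairs whose signs differ. A minimal sketch (the signal below is a synthetic tone, not cough data):

```python
import math


def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)


# A pure tone crosses zero roughly twice per cycle: 4 cycles over
# 100 samples gives a low ZCR, consistent with a strongly voiced frame.
tone = [math.sin(2 * math.pi * 4 * n / 100) for n in range(100)]
rate = zero_crossing_rate(tone)
```

Noisier, unvoiced frames flip sign far more often, which is why ZCR helps separate voiced from unvoiced cough content.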
1407.4668
Albrecht Zimmermann
Albrecht Zimmermann
A feature construction framework based on outlier detection and discriminative pattern mining
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
No matter the expressive power and sophistication of supervised learning algorithms, their effectiveness is restricted by the features describing the data. This is not a new insight in ML, and many methods for feature selection, transformation, and construction have been developed. But while work is ongoing on general techniques for feature selection and transformation, i.e. dimensionality reduction, work on feature construction, i.e. enriching the data, is by now mainly the domain of image (particularly character) recognition and NLP. In this work, we propose a new general framework for feature construction. The need for feature construction in a data set is indicated by class outliers, and discriminative pattern mining is used to derive features on their k-neighborhoods. We instantiate the framework with LOF and C4.5-Rules, and evaluate the usefulness of the derived features on a diverse collection of UCI data sets. The derived features are more often useful than ones derived by DC-Fringe, and our approach is much less likely to overfit. But while a weak learner, Naive Bayes, benefits strongly from the feature construction, the effect is less pronounced for C4.5, and almost vanishes for an SVM learner. Keywords: feature construction, classification, outlier detection
[ { "created": "Thu, 17 Jul 2014 13:51:55 GMT", "version": "v1" } ]
2014-07-18
[ [ "Zimmermann", "Albrecht", "" ] ]
No matter the expressive power and sophistication of supervised learning algorithms, their effectiveness is restricted by the features describing the data. This is not a new insight in ML, and many methods for feature selection, transformation, and construction have been developed. But while work is ongoing on general techniques for feature selection and transformation, i.e. dimensionality reduction, work on feature construction, i.e. enriching the data, is by now mainly the domain of image (particularly character) recognition and NLP. In this work, we propose a new general framework for feature construction. The need for feature construction in a data set is indicated by class outliers, and discriminative pattern mining is used to derive features on their k-neighborhoods. We instantiate the framework with LOF and C4.5-Rules, and evaluate the usefulness of the derived features on a diverse collection of UCI data sets. The derived features are more often useful than ones derived by DC-Fringe, and our approach is much less likely to overfit. But while a weak learner, Naive Bayes, benefits strongly from the feature construction, the effect is less pronounced for C4.5, and almost vanishes for an SVM learner. Keywords: feature construction, classification, outlier detection
1711.04022
Hamid Eghbal-zadeh
Hamid Eghbal-zadeh, Matthias Dorfer and Gerhard Widmer
Deep Within-Class Covariance Analysis for Robust Audio Representation Learning
11 pages, 3 tables, 4 figures
null
null
null
cs.LG cs.AI cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional Neural Networks (CNNs) can learn effective features, though have been shown to suffer from a performance drop when the distribution of the data changes from training to test data. In this paper we analyze the internal representations of CNNs and observe that the representations of unseen data in each class spread more (with higher variance) in the embedding space of the CNN compared to representations of the training data. More importantly, this difference is more extreme if the unseen data comes from a shifted distribution. Based on this observation, we objectively evaluate the degree of the representations' variance in each class via eigenvalue decomposition on the within-class covariance of the internal representations of CNNs and observe the same behaviour. This can be problematic, as larger variances might lead to misclassification if a sample crosses the decision boundary of its class. We apply nearest neighbor classification on the representations and empirically show that the embeddings with high variance actually have significantly worse KNN classification performance, although this could not be foreseen from their end-to-end classification results. To tackle this problem, we propose Deep Within-Class Covariance Analysis (DWCCA), a deep neural network layer that significantly reduces the within-class covariance of a DNN's representation, improving performance on unseen test data from a shifted distribution. We empirically evaluate DWCCA on two datasets for Acoustic Scene Classification (DCASE2016 and DCASE2017). We demonstrate that not only does DWCCA significantly improve the network's internal representation, it also increases the end-to-end classification accuracy, especially when the test set exhibits a distribution shift. By adding DWCCA to a VGG network, we achieve around 6 percentage points of improvement in the case of a distribution mismatch.
[ { "created": "Fri, 10 Nov 2017 21:39:12 GMT", "version": "v1" }, { "created": "Fri, 30 Nov 2018 09:48:48 GMT", "version": "v2" } ]
2018-12-03
[ [ "Eghbal-zadeh", "Hamid", "" ], [ "Dorfer", "Matthias", "" ], [ "Widmer", "Gerhard", "" ] ]
Convolutional Neural Networks (CNNs) can learn effective features, though have been shown to suffer from a performance drop when the distribution of the data changes from training to test data. In this paper we analyze the internal representations of CNNs and observe that the representations of unseen data in each class spread more (with higher variance) in the embedding space of the CNN compared to representations of the training data. More importantly, this difference is more extreme if the unseen data comes from a shifted distribution. Based on this observation, we objectively evaluate the degree of the representations' variance in each class via eigenvalue decomposition on the within-class covariance of the internal representations of CNNs and observe the same behaviour. This can be problematic, as larger variances might lead to misclassification if a sample crosses the decision boundary of its class. We apply nearest neighbor classification on the representations and empirically show that the embeddings with high variance actually have significantly worse KNN classification performance, although this could not be foreseen from their end-to-end classification results. To tackle this problem, we propose Deep Within-Class Covariance Analysis (DWCCA), a deep neural network layer that significantly reduces the within-class covariance of a DNN's representation, improving performance on unseen test data from a shifted distribution. We empirically evaluate DWCCA on two datasets for Acoustic Scene Classification (DCASE2016 and DCASE2017). We demonstrate that not only does DWCCA significantly improve the network's internal representation, it also increases the end-to-end classification accuracy, especially when the test set exhibits a distribution shift. By adding DWCCA to a VGG network, we achieve around 6 percentage points of improvement in the case of a distribution mismatch.
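The within-class covariance that DWCCA shrinks can be written down directly: average, over all samples, the outer product of each embedding's deviation from its own class mean. A hand-rolled 2-D sketch (illustrative data, not the paper's DWCCA layer):

```python
def within_class_covariance(X, y):
    """Pooled within-class covariance of 2-D points X with labels y.

    Averages the outer product of each point's deviation from its own
    class mean -- the matrix whose eigenvalues quantify per-class spread.
    """
    # First pass: per-class means.
    sums, counts = {}, {}
    for (a, b), label in zip(X, y):
        sa, sb = sums.get(label, (0.0, 0.0))
        sums[label] = (sa + a, sb + b)
        counts[label] = counts.get(label, 0) + 1
    means = {k: (sa / counts[k], sb / counts[k]) for k, (sa, sb) in sums.items()}

    # Second pass: pooled scatter of deviations from the class mean.
    cxx = cxy = cyy = 0.0
    for (a, b), label in zip(X, y):
        ma, mb = means[label]
        da, db = a - ma, b - mb
        cxx += da * da
        cxy += da * db
        cyy += db * db
    n = len(X)
    return [[cxx / n, cxy / n], [cxy / n, cyy / n]]


# Two well-separated classes whose members still scatter around their means.
X = [(0.0, 0.0), (2.0, 0.0), (10.0, 1.0), (10.0, 3.0)]
y = ["a", "a", "b", "b"]
cov = within_class_covariance(X, y)
```

The between-class separation does not enter this matrix at all; only the scatter inside each class does, which is exactly the quantity the paper's layer is designed to reduce.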
2303.10567
Jinyeong Jeong
Jinyeong Jeong, Min Jun Kim
Passivity-based Decentralized Control for Collaborative Grasping of Under-Actuated Aerial Manipulators
IEEE International Conference on Robotics and Automation (ICRA) 2023
null
10.1109/ICRA48891.2023.10160334
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a decentralized passive impedance control scheme for collaborative grasping using under-actuated aerial manipulators (AMs). The AM system is formulated, using a proper coordinate transformation, as an inertially decoupled dynamics with which a passivity-based control design is conducted. Since the interaction for grasping can be interpreted as a feedback interconnection of passive systems, an arbitrary number of AMs can be modularly combined, leading to a decentralized control scheme. Another interesting consequence of the passivity property is that the AMs automatically converge to a certain configuration to accomplish the grasping. Collaborative grasping using 10 AMs is presented in simulation.
[ { "created": "Sun, 19 Mar 2023 05:04:50 GMT", "version": "v1" } ]
2024-01-12
[ [ "Jeong", "Jinyeong", "" ], [ "Kim", "Min Jun", "" ] ]
This paper proposes a decentralized passive impedance control scheme for collaborative grasping using under-actuated aerial manipulators (AMs). The AM system is formulated, using a proper coordinate transformation, as an inertially decoupled dynamics with which a passivity-based control design is conducted. Since the interaction for grasping can be interpreted as a feedback interconnection of passive systems, an arbitrary number of AMs can be modularly combined, leading to a decentralized control scheme. Another interesting consequence of the passivity property is that the AMs automatically converge to a certain configuration to accomplish the grasping. Collaborative grasping using 10 AMs is presented in simulation.
1904.02074
Qinbing Fu
Qinbing Fu, Nicola Bellotto, Huatian Wang, F. Claire Rind, Hongxin Wang, Shigang Yue
A Visual Neural Network for Robust Collision Perception in Vehicle Driving Scenarios
12 pages, 7 figures, conference, springer format
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This research addresses the challenging problem of visual collision detection in very complex and dynamic real physical scenes, specifically, vehicle driving scenarios. This research takes inspiration from a large-field looming sensitive neuron, i.e., the lobula giant movement detector (LGMD) in the locust's visual pathways, which responds with a high spike frequency to rapidly approaching objects. Building upon our previous models, in this paper we propose a novel inhibition mechanism that is capable of adapting to different levels of background complexity. This adaptive mechanism works effectively to mediate the local inhibition strength and tune the temporal latency of local excitation reaching the LGMD neuron. As a result, the proposed model is effective to extract colliding cues from complex dynamic visual scenes. We tested the proposed method using a range of stimuli including simulated movements in grating backgrounds and shifting of a natural panoramic scene, as well as vehicle crash video sequences. The experimental results demonstrate the proposed method is feasible for fast collision perception in real-world situations with potential applications in future autonomous vehicles.
[ { "created": "Wed, 3 Apr 2019 16:05:56 GMT", "version": "v1" } ]
2019-04-04
[ [ "Fu", "Qinbing", "" ], [ "Bellotto", "Nicola", "" ], [ "Wang", "Huatian", "" ], [ "Rind", "F. Claire", "" ], [ "Wang", "Hongxin", "" ], [ "Yue", "Shigang", "" ] ]
This research addresses the challenging problem of visual collision detection in very complex and dynamic real physical scenes, specifically, vehicle driving scenarios. This research takes inspiration from a large-field looming sensitive neuron, i.e., the lobula giant movement detector (LGMD) in the locust's visual pathways, which responds with a high spike frequency to rapidly approaching objects. Building upon our previous models, in this paper we propose a novel inhibition mechanism that is capable of adapting to different levels of background complexity. This adaptive mechanism works effectively to mediate the local inhibition strength and tune the temporal latency of local excitation reaching the LGMD neuron. As a result, the proposed model is effective to extract colliding cues from complex dynamic visual scenes. We tested the proposed method using a range of stimuli including simulated movements in grating backgrounds and shifting of a natural panoramic scene, as well as vehicle crash video sequences. The experimental results demonstrate the proposed method is feasible for fast collision perception in real-world situations with potential applications in future autonomous vehicles.
2012.07347
Maxim Vashkevich
Maxim Vashkevich and Yulia Rushkevich
Classification of ALS patients based on acoustic analysis of sustained vowel phonations
null
Biomedical Signal Processing and Control, Volume 65, March 2021, 102350
10.1016/j.bspc.2020.102350
null
cs.SD cs.CL cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
Amyotrophic lateral sclerosis (ALS) is an incurable neurological disorder with a rapidly progressive course. Common early symptoms of ALS are difficulty in swallowing and speech. However, the early acoustic manifestation of speech and voice symptoms is very variable, making their detection very challenging, both by human specialists and automatic systems. This study presents an approach to voice assessment for an automatic system that separates healthy people from patients with ALS. In particular, this work focuses on analysing sustained phonation of the vowels /a/ and /i/ to perform automatic classification of ALS patients. A wide range of acoustic features such as MFCC, formants, jitter, shimmer, vibrato, PPE, GNE, HNR, etc. were analysed. We also propose a new set of acoustic features for characterizing the harmonic structure of the vowels. Calculation of these features is based on pitch-synchronized voice analysis. A linear discriminant analysis (LDA) was used to classify the phonation produced by patients with ALS and that by healthy individuals. Several feature selection algorithms were tested to find the optimal feature subset for the LDA model. The study's experiments show that the most successful LDA model, based on 32 features picked out by the LASSO feature selection algorithm, attains 99.7% accuracy with 99.3% sensitivity and 99.9% specificity. Among the classifiers with a small number of features, we can highlight the LDA model with 5 features, which has 89.0% accuracy (87.5% sensitivity and 90.4% specificity).
[ { "created": "Mon, 14 Dec 2020 08:56:53 GMT", "version": "v1" }, { "created": "Mon, 11 Jan 2021 08:41:07 GMT", "version": "v2" } ]
2021-01-12
[ [ "Vashkevich", "Maxim", "" ], [ "Rushkevich", "Yulia", "" ] ]
Amyotrophic lateral sclerosis (ALS) is an incurable neurological disorder with a rapidly progressive course. Common early symptoms of ALS are difficulty in swallowing and speech. However, the early acoustic manifestation of speech and voice symptoms is very variable, making their detection very challenging, both by human specialists and automatic systems. This study presents an approach to voice assessment for an automatic system that separates healthy people from patients with ALS. In particular, this work focuses on analysing sustained phonation of the vowels /a/ and /i/ to perform automatic classification of ALS patients. A wide range of acoustic features such as MFCC, formants, jitter, shimmer, vibrato, PPE, GNE, HNR, etc. were analysed. We also propose a new set of acoustic features for characterizing the harmonic structure of the vowels. Calculation of these features is based on pitch-synchronized voice analysis. A linear discriminant analysis (LDA) was used to classify the phonation produced by patients with ALS and that by healthy individuals. Several feature selection algorithms were tested to find the optimal feature subset for the LDA model. The study's experiments show that the most successful LDA model, based on 32 features picked out by the LASSO feature selection algorithm, attains 99.7% accuracy with 99.3% sensitivity and 99.9% specificity. Among the classifiers with a small number of features, we can highlight the LDA model with 5 features, which has 89.0% accuracy (87.5% sensitivity and 90.4% specificity).
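Among the perturbation features listed above, jitter is commonly defined as the mean absolute difference between consecutive pitch periods divided by the mean period. A minimal sketch with hypothetical period values (not the paper's exact pitch-synchronized pipeline):

```python
def jitter(periods):
    """Relative jitter: mean |T[i+1] - T[i]| over the mean period."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    mean_abs_diff = sum(diffs) / len(diffs)
    mean_period = sum(periods) / len(periods)
    return mean_abs_diff / mean_period


# Hypothetical pitch periods in milliseconds: a perfectly steady
# phonation versus one with cycle-to-cycle perturbation.
steady = [8.0, 8.0, 8.0, 8.0]
perturbed = [8.0, 8.4, 7.8, 8.2]
```

Elevated jitter in sustained /a/ or /i/ phonation is one of the cues such classifiers rely on, since ALS-related voice instability perturbs consecutive glottal cycles.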
2406.07547
Xi Chen
Xi Chen, Yutong Feng, Mengting Chen, Yiyang Wang, Shilong Zhang, Yu Liu, Yujun Shen, Hengshuang Zhao
Zero-shot Image Editing with Reference Imitation
https://xavierchen34.github.io/MimicBrush-Page
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Image editing serves as a practical yet challenging task considering the diverse demands from users, where one of the hardest parts is to precisely describe how the edited image should look. In this work, we present a new form of editing, termed imitative editing, to help users exercise their creativity more conveniently. Concretely, to edit an image region of interest, users are free to directly draw inspiration from some in-the-wild references (e.g., related pictures found online), without having to cope with the fit between the reference and the source. Such a design requires the system to automatically figure out what to expect from the reference to perform the editing. For this purpose, we propose a generative training framework, dubbed MimicBrush, which randomly selects two frames from a video clip, masks some regions of one frame, and learns to recover the masked regions using the information from the other frame. That way, our model, developed from a diffusion prior, is able to capture the semantic correspondence between separate images in a self-supervised manner. We experimentally show the effectiveness of our method under various test cases as well as its superiority over existing alternatives. We also construct a benchmark to facilitate further research.
[ { "created": "Tue, 11 Jun 2024 17:59:51 GMT", "version": "v1" } ]
2024-06-12
[ [ "Chen", "Xi", "" ], [ "Feng", "Yutong", "" ], [ "Chen", "Mengting", "" ], [ "Wang", "Yiyang", "" ], [ "Zhang", "Shilong", "" ], [ "Liu", "Yu", "" ], [ "Shen", "Yujun", "" ], [ "Zhao", "Hengshuang", "" ] ]
Image editing serves as a practical yet challenging task considering the diverse demands from users, where one of the hardest parts is to precisely describe how the edited image should look. In this work, we present a new form of editing, termed imitative editing, to help users exercise their creativity more conveniently. Concretely, to edit an image region of interest, users are free to directly draw inspiration from some in-the-wild references (e.g., related pictures found online), without having to cope with the fit between the reference and the source. Such a design requires the system to automatically figure out what to expect from the reference to perform the editing. For this purpose, we propose a generative training framework, dubbed MimicBrush, which randomly selects two frames from a video clip, masks some regions of one frame, and learns to recover the masked regions using the information from the other frame. That way, our model, developed from a diffusion prior, is able to capture the semantic correspondence between separate images in a self-supervised manner. We experimentally show the effectiveness of our method under various test cases as well as its superiority over existing alternatives. We also construct a benchmark to facilitate further research.
1904.03256
Mohammad Sadegh Rasooli
Maryam Aminian, Mohammad Sadegh Rasooli, Mona Diab
Cross-Lingual Transfer of Semantic Roles: From Raw Text to Semantic Roles
Accepted at the 13th International Conference on Computational Semantics (IWCS 2019)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a transfer method based on annotation projection to develop a dependency-based semantic role labeling system for languages for which no supervised linguistic information other than parallel data is available. Unlike previous work that presumes the availability of supervised features such as lemmas, part-of-speech tags, and dependency parse trees, we only make use of word and character features. Our deep model considers using character-based representations as well as unsupervised stem embeddings to alleviate the need for supervised features. Our experiments outperform a state-of-the-art method that uses supervised lexico-syntactic features on 6 out of 7 languages in the Universal Proposition Bank.
[ { "created": "Fri, 5 Apr 2019 20:04:04 GMT", "version": "v1" } ]
2019-04-09
[ [ "Aminian", "Maryam", "" ], [ "Rasooli", "Mohammad Sadegh", "" ], [ "Diab", "Mona", "" ] ]
We describe a transfer method based on annotation projection to develop a dependency-based semantic role labeling system for languages for which no supervised linguistic information other than parallel data is available. Unlike previous work that presumes the availability of supervised features such as lemmas, part-of-speech tags, and dependency parse trees, we only make use of word and character features. Our deep model considers using character-based representations as well as unsupervised stem embeddings to alleviate the need for supervised features. Our experiments outperform a state-of-the-art method that uses supervised lexico-syntactic features on 6 out of 7 languages in the Universal Proposition Bank.
2109.02914
Junghyo Jo
Sungyeop Lee and Junghyo Jo
Scale-invariant representation of machine learning
null
null
10.1103/PhysRevE.105.044306
null
cs.LG cs.IT math.IT physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The success of machine learning has resulted from its structured representation of data. Similar data have close internal representations as compressed codes for classification or emerged labels for clustering. We observe that the frequency of internal codes or labels follows power laws in both supervised and unsupervised learning models. This scale-invariant distribution implies that machine learning largely compresses frequent typical data, and simultaneously, differentiates many atypical data as outliers. In this study, we derive the process by which these power laws can naturally arise in machine learning. In terms of information theory, the scale-invariant representation corresponds to a maximally uncertain data grouping among possible representations that guarantee a given learning accuracy.
[ { "created": "Tue, 7 Sep 2021 07:56:15 GMT", "version": "v1" }, { "created": "Wed, 23 Mar 2022 08:11:08 GMT", "version": "v2" } ]
2022-04-13
[ [ "Lee", "Sungyeop", "" ], [ "Jo", "Junghyo", "" ] ]
The success of machine learning has resulted from its structured representation of data. Similar data have close internal representations as compressed codes for classification or emerged labels for clustering. We observe that the frequency of internal codes or labels follows power laws in both supervised and unsupervised learning models. This scale-invariant distribution implies that machine learning largely compresses frequent typical data, and simultaneously, differentiates many atypical data as outliers. In this study, we derive the process by which these power laws can naturally arise in machine learning. In terms of information theory, the scale-invariant representation corresponds to a maximally uncertain data grouping among possible representations that guarantee a given learning accuracy.
1404.0977
Oren Weimann
Shay Mozes, Yahav Nussbaum, Oren Weimann
Faster Shortest Paths in Dense Distance Graphs, with Applications
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show how to combine two techniques for efficiently computing shortest paths in directed planar graphs. The first is the linear-time shortest-path algorithm of Henzinger, Klein, Subramanian, and Rao [STOC'94]. The second is Fakcharoenphol and Rao's algorithm [FOCS'01] for emulating Dijkstra's algorithm on the dense distance graph (DDG). A DDG is defined for a decomposition of a planar graph $G$ into regions of at most $r$ vertices each, for some parameter $r < n$. The vertex set of the DDG is the set of $\Theta(n/\sqrt r)$ vertices of $G$ that belong to more than one region (boundary vertices). The DDG has $\Theta(n)$ arcs, such that distances in the DDG are equal to the distances in $G$. Fakcharoenphol and Rao's implementation of Dijkstra's algorithm on the DDG (nicknamed FR-Dijkstra) runs in $O(n\log(n) r^{-1/2} \log r)$ time, and is a key component in many state-of-the-art planar graph algorithms for shortest paths, minimum cuts, and maximum flows. By combining these two techniques we remove the $\log n$ dependency in the running time of the shortest-path algorithm, making it $O(n r^{-1/2} \log^2r)$. This work is part of a research agenda that aims to develop new techniques that would lead to faster, possibly linear-time, algorithms for problems such as minimum-cut, maximum-flow, and shortest paths with negative arc lengths. As immediate applications, we show how to compute maximum flow in directed weighted planar graphs in $O(n \log p)$ time, where $p$ is the minimum number of edges on any path from the source to the sink. We also show how to compute any part of the DDG that corresponds to a region with $r$ vertices and $k$ boundary vertices in $O(r \log k)$ time, which is faster than has been previously known for small values of $k$.
[ { "created": "Thu, 3 Apr 2014 15:44:54 GMT", "version": "v1" } ]
2014-04-04
[ [ "Mozes", "Shay", "" ], [ "Nussbaum", "Yahav", "" ], [ "Weimann", "Oren", "" ] ]
We show how to combine two techniques for efficiently computing shortest paths in directed planar graphs. The first is the linear-time shortest-path algorithm of Henzinger, Klein, Subramanian, and Rao [STOC'94]. The second is Fakcharoenphol and Rao's algorithm [FOCS'01] for emulating Dijkstra's algorithm on the dense distance graph (DDG). A DDG is defined for a decomposition of a planar graph $G$ into regions of at most $r$ vertices each, for some parameter $r < n$. The vertex set of the DDG is the set of $\Theta(n/\sqrt r)$ vertices of $G$ that belong to more than one region (boundary vertices). The DDG has $\Theta(n)$ arcs, such that distances in the DDG are equal to the distances in $G$. Fakcharoenphol and Rao's implementation of Dijkstra's algorithm on the DDG (nicknamed FR-Dijkstra) runs in $O(n\log(n) r^{-1/2} \log r)$ time, and is a key component in many state-of-the-art planar graph algorithms for shortest paths, minimum cuts, and maximum flows. By combining these two techniques we remove the $\log n$ dependency in the running time of the shortest-path algorithm, making it $O(n r^{-1/2} \log^2r)$. This work is part of a research agenda that aims to develop new techniques that would lead to faster, possibly linear-time, algorithms for problems such as minimum-cut, maximum-flow, and shortest paths with negative arc lengths. As immediate applications, we show how to compute maximum flow in directed weighted planar graphs in $O(n \log p)$ time, where $p$ is the minimum number of edges on any path from the source to the sink. We also show how to compute any part of the DDG that corresponds to a region with $r$ vertices and $k$ boundary vertices in $O(r \log k)$ time, which is faster than has been previously known for small values of $k$.
1909.02564
Jarom\'ir Janisch
Jarom\'ir Janisch, Tom\'a\v{s} Pevn\'y and Viliam Lis\'y
Classification with Costly Features as a Sequential Decision-Making Problem
null
Machine Learning (2020): 1-29
10.1007/s10994-020-05874-8
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work focuses on a specific classification problem, where the information about a sample is not readily available, but has to be acquired for a cost, and there is a per-sample budget. Inspired by real-world use cases, we analyze average and hard variations of a directly specified budget. We postulate the problem in its explicit formulation and then convert it into an equivalent MDP, which can be solved with deep reinforcement learning. Also, we evaluate a real-world inspired setting with a sparse training dataset with missing features. The presented method performs robustly well in all settings across several distinct datasets, outperforming other prior-art algorithms. The method is flexible, as showcased with all mentioned modifications, and can be improved with any domain-independent advancement in RL.
[ { "created": "Thu, 5 Sep 2019 14:46:40 GMT", "version": "v1" } ]
2020-03-05
[ [ "Janisch", "Jaromír", "" ], [ "Pevný", "Tomáš", "" ], [ "Lisý", "Viliam", "" ] ]
This work focuses on a specific classification problem, where the information about a sample is not readily available, but has to be acquired for a cost, and there is a per-sample budget. Inspired by real-world use cases, we analyze average and hard variations of a directly specified budget. We postulate the problem in its explicit formulation and then convert it into an equivalent MDP, which can be solved with deep reinforcement learning. Also, we evaluate a real-world inspired setting with a sparse training dataset with missing features. The presented method performs robustly well in all settings across several distinct datasets, outperforming other prior-art algorithms. The method is flexible, as showcased with all mentioned modifications, and can be improved with any domain-independent advancement in RL.
1204.1420
Abdullah Alshehab M.
Abdullah Alshehab, Chiu Tung Wu, Nao Kobayashi, Sikieng Sok, Shigeru Shimamoto
Intra-body hybrid communication scheme for healthcare systems
International Journal on Bioinformatics & Biosciences (IJBB) Vol.2, No.1, March 2012
null
10.5121/ijbb.2012.21011
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intra-body communication (IBC) is a type of Body Area Network (BAN) that utilizes the human body as the medium for data transmission. The low power requirements of intra-body communication (IBC) compared to near-field electromagnetic waves show that it can be a suitable solution for Medical Body Area Networks (MBANs) in a mobile health care system. In this paper, we investigate the transmission characteristics of the human body as a conductor of signals by considering different data transmission rates of a multi-point-to-point network in order to reduce the overall power consumption of the BAN. Furthermore, we utilize IBC and propose a new scheme that combines Slotted ALOHA, TDMA, and Reservation ALOHA to increase the throughput and decrease the delay. By using our new hybrid scheme with the movable boundary designed for health status monitoring, we are able to increase the efficiency of data transmission by prioritizing the more critical data from the sensors.
[ { "created": "Fri, 6 Apr 2012 07:06:43 GMT", "version": "v1" } ]
2012-04-09
[ [ "Alshehab", "Abdullah", "" ], [ "Wu", "Chiu Tung", "" ], [ "Kobayashi", "Nao", "" ], [ "Sok", "Sikieng", "" ], [ "Shimamoto", "Shigeru", "" ] ]
Intra-body communication (IBC) is a type of Body Area Network (BAN) that utilizes the human body as the medium for data transmission. The low power requirements of intra-body communication (IBC) compared to near-field electromagnetic waves show that it can be a suitable solution for Medical Body Area Networks (MBANs) in a mobile health care system. In this paper, we investigate the transmission characteristics of the human body as a conductor of signals by considering different data transmission rates of a multi-point-to-point network in order to reduce the overall power consumption of the BAN. Furthermore, we utilize IBC and propose a new scheme that combines Slotted ALOHA, TDMA, and Reservation ALOHA to increase the throughput and decrease the delay. By using our new hybrid scheme with the movable boundary designed for health status monitoring, we are able to increase the efficiency of data transmission by prioritizing the more critical data from the sensors.
2111.04264
Chenglong Li
Chenglong Li, Tianhao Zhu, Lei Liu, Xiaonan Si, Zilin Fan, Sulan Zhai
Cross-Modal Object Tracking: Modality-Aware Representations and A Unified Benchmark
In Submission
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In many visual systems, visual tracking is often based on RGB image sequences, in which some targets are invalid in low-light conditions, and tracking performance is thus affected significantly. Introducing other modalities such as depth and infrared data is an effective way to handle imaging limitations of individual sources, but multi-modal imaging platforms usually require elaborate designs and cannot be applied in many real-world applications at present. Near-infrared (NIR) imaging has become an essential part of many surveillance cameras, whose imaging is switchable between RGB and NIR based on the light intensity. These two modalities are heterogeneous, with very different visual properties, and thus pose significant challenges for visual tracking. However, existing works have not studied this challenging problem. In this work, we address the cross-modal object tracking problem and contribute a new video dataset, including 654 cross-modal image sequences with over 481K frames in total, and the average video length is more than 735 frames. To promote the research and development of cross-modal object tracking, we propose a new algorithm, which learns the modality-aware target representation to mitigate the appearance gap between RGB and NIR modalities in the tracking process. It is plug-and-play and could thus be flexibly embedded into different tracking frameworks. Extensive experiments on the dataset are conducted, and we demonstrate the effectiveness of the proposed algorithm in two representative tracking frameworks against 17 state-of-the-art tracking methods. We will release the dataset for free academic usage; the dataset download link and code will be released soon.
[ { "created": "Mon, 8 Nov 2021 03:58:55 GMT", "version": "v1" }, { "created": "Thu, 11 Nov 2021 08:30:58 GMT", "version": "v2" } ]
2021-11-12
[ [ "Li", "Chenglong", "" ], [ "Zhu", "Tianhao", "" ], [ "Liu", "Lei", "" ], [ "Si", "Xiaonan", "" ], [ "Fan", "Zilin", "" ], [ "Zhai", "Sulan", "" ] ]
In many visual systems, visual tracking is often based on RGB image sequences, in which some targets are invalid in low-light conditions, and tracking performance is thus affected significantly. Introducing other modalities such as depth and infrared data is an effective way to handle imaging limitations of individual sources, but multi-modal imaging platforms usually require elaborate designs and cannot be applied in many real-world applications at present. Near-infrared (NIR) imaging has become an essential part of many surveillance cameras, whose imaging is switchable between RGB and NIR based on the light intensity. These two modalities are heterogeneous, with very different visual properties, and thus pose significant challenges for visual tracking. However, existing works have not studied this challenging problem. In this work, we address the cross-modal object tracking problem and contribute a new video dataset, including 654 cross-modal image sequences with over 481K frames in total, and the average video length is more than 735 frames. To promote the research and development of cross-modal object tracking, we propose a new algorithm, which learns the modality-aware target representation to mitigate the appearance gap between RGB and NIR modalities in the tracking process. It is plug-and-play and could thus be flexibly embedded into different tracking frameworks. Extensive experiments on the dataset are conducted, and we demonstrate the effectiveness of the proposed algorithm in two representative tracking frameworks against 17 state-of-the-art tracking methods. We will release the dataset for free academic usage; the dataset download link and code will be released soon.
1907.10322
Simon Tamayo Giraldo
Sarah Manard (CAOR), Nicolas Vergos (CAOR), Simon Tamayo (CAOR), Fr\'ed\'eric Fontane (CAOR)
Electronic health record in the era of industry 4.0: the French example
null
International Conference e-Health 2019, Jul 2019, Porto, Portugal
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent implementation of the Electronic Health Record (EHR) in France is part of a more general process of digitizing information flows, as the world enters the fourth industrial revolution in a phenomenon known as Industry 4.0. Behind this concept lies the concern to allow Man to remain permanently in control of his destiny, despite an increasingly interconnected world (Internet of Things, cooperative robots, augmented reality, etc.). Accordingly, the implementation of EHR must guarantee the respect for the private life of each citizen. From this perspective, healthcare professionals will therefore have to constantly ensure the protection of medical confidentiality during Electronic Data Interchange (EDI). This paper summarises the current state of the use of EHR in France. Based on a survey conducted by the European Commission to assess the deployment of digitalisation in the health sector in EU countries, this article aims to highlight the opportunities and perspectives that Industry 4.0 could bring to the health sector in France. However, this study also identifies a number of limits related to the application of such a system, the first of which is cyber threat or transhumanism. To this end, a SWOT matrix identifies the strengths and weaknesses related to the implementation of the French EHR.
[ { "created": "Wed, 24 Jul 2019 09:24:24 GMT", "version": "v1" }, { "created": "Thu, 25 Jul 2019 09:00:12 GMT", "version": "v2" } ]
2019-07-26
[ [ "Manard", "Sarah", "", "CAOR" ], [ "Vergos", "Nicolas", "", "CAOR" ], [ "Tamayo", "Simon", "", "CAOR" ], [ "Fontane", "Frédéric", "", "CAOR" ] ]
The recent implementation of the Electronic Health Record (EHR) in France is part of a more general process of digitizing information flows, as the world enters the fourth industrial revolution in a phenomenon known as Industry 4.0. Behind this concept lies the concern to allow Man to remain permanently in control of his destiny, despite an increasingly interconnected world (Internet of Things, cooperative robots, augmented reality, etc.). Accordingly, the implementation of EHR must guarantee the respect for the private life of each citizen. From this perspective, healthcare professionals will therefore have to constantly ensure the protection of medical confidentiality during Electronic Data Interchange (EDI). This paper summarises the current state of the use of EHR in France. Based on a survey conducted by the European Commission to assess the deployment of digitalisation in the health sector in EU countries, this article aims to highlight the opportunities and perspectives that Industry 4.0 could bring to the health sector in France. However, this study also identifies a number of limits related to the application of such a system, the first of which is cyber threat or transhumanism. To this end, a SWOT matrix identifies the strengths and weaknesses related to the implementation of the French EHR.
2109.01879
Anindya Mondal
Anindya Mondal, Mayukhmali Das
Moving Object Detection for Event-based Vision using k-means Clustering
Nine pages, five figures, Published in 2021 IEEE 8th Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON)
null
10.1109/UPCON52273.2021.9667636
null
cs.CV cs.AI eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Moving object detection is important in computer vision. Event-based cameras are bio-inspired cameras that work by mimicking the working of the human eye. These cameras have multiple advantages over conventional frame-based cameras, like reduced latency, HDR, reduced motion blur during high motion, low power consumption, etc. In spite of these advantages, event-based cameras are noise-sensitive and have low resolution. Moreover, the task of moving object detection in these cameras is difficult, as event-based sensors lack useful visual features like texture and color. In this paper, we investigate the application of the k-means clustering technique in detecting moving objects in event-based data.
[ { "created": "Sat, 4 Sep 2021 14:43:14 GMT", "version": "v1" }, { "created": "Fri, 1 Oct 2021 16:06:17 GMT", "version": "v2" }, { "created": "Mon, 8 Nov 2021 08:24:19 GMT", "version": "v3" }, { "created": "Tue, 11 Jan 2022 21:03:51 GMT", "version": "v4" } ]
2022-01-13
[ [ "Mondal", "Anindya", "" ], [ "Das", "Mayukhmali", "" ] ]
Moving object detection is important in computer vision. Event-based cameras are bio-inspired cameras that work by mimicking the working of the human eye. These cameras have multiple advantages over conventional frame-based cameras, like reduced latency, HDR, reduced motion blur during high motion, low power consumption, etc. In spite of these advantages, event-based cameras are noise-sensitive and have low resolution. Moreover, the task of moving object detection in these cameras is difficult, as event-based sensors lack useful visual features like texture and color. In this paper, we investigate the application of the k-means clustering technique in detecting moving objects in event-based data.
0806.4553
Viorica Sofronie-Stokkermans
Viorica Sofronie-Stokkermans
Interpolation in local theory extensions
31 pages, 1 figure
Logical Methods in Computer Science, Volume 4, Issue 4 (October 17, 2008) lmcs:1143
10.2168/LMCS-4(4:1)2008
null
cs.LO cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we study interpolation in local extensions of a base theory. We identify situations in which it is possible to obtain interpolants in a hierarchical manner, by using a prover and a procedure for generating interpolants in the base theory as black-boxes. We present several examples of theory extensions in which interpolants can be computed this way, and discuss applications in verification, knowledge representation, and modular reasoning in combinations of local theories.
[ { "created": "Fri, 27 Jun 2008 15:51:02 GMT", "version": "v1" }, { "created": "Thu, 16 Oct 2008 22:01:02 GMT", "version": "v2" } ]
2015-07-01
[ [ "Sofronie-Stokkermans", "Viorica", "" ] ]
In this paper we study interpolation in local extensions of a base theory. We identify situations in which it is possible to obtain interpolants in a hierarchical manner, by using a prover and a procedure for generating interpolants in the base theory as black-boxes. We present several examples of theory extensions in which interpolants can be computed this way, and discuss applications in verification, knowledge representation, and modular reasoning in combinations of local theories.
2303.07740
Yang Bai
Min Cao, Yang Bai, Jingyao Wang, Ziqiang Cao, Liqiang Nie, Min Zhang
Efficient Image-Text Retrieval via Keyword-Guided Pre-Screening
11 pages, 7 figures, 6 tables
null
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by/4.0/
Despite the flourishing development in performance, current image-text retrieval methods suffer from $N$-related time complexity, which hinders their application in practice. Targeting efficiency improvement, this paper presents a simple and effective keyword-guided pre-screening framework for image-text retrieval. Specifically, we convert the image and text data into keywords and perform keyword matching across modalities to exclude a large number of irrelevant gallery samples prior to the retrieval network. For the keyword prediction, we transform it into a multi-label classification problem and propose a multi-task learning scheme by appending the multi-label classifiers to the image-text retrieval network to achieve lightweight and high-performance keyword prediction. For the keyword matching, we introduce the inverted index from the search engine and create a win-win situation on both time and space complexities for the pre-screening. Extensive experiments on two widely-used datasets, i.e., Flickr30K and MS-COCO, verify the effectiveness of the proposed framework. The proposed framework, equipped with only two embedding layers, achieves $O(1)$ querying time complexity, while improving the retrieval efficiency and keeping its performance, when applied prior to the common image-text retrieval methods. Our code will be released.
[ { "created": "Tue, 14 Mar 2023 09:36:42 GMT", "version": "v1" } ]
2023-03-15
[ [ "Cao", "Min", "" ], [ "Bai", "Yang", "" ], [ "Wang", "Jingyao", "" ], [ "Cao", "Ziqiang", "" ], [ "Nie", "Liqiang", "" ], [ "Zhang", "Min", "" ] ]
Despite the flourishing development in performance, current image-text retrieval methods suffer from $N$-related time complexity, which hinders their application in practice. Targeting efficiency improvement, this paper presents a simple and effective keyword-guided pre-screening framework for image-text retrieval. Specifically, we convert the image and text data into keywords and perform keyword matching across modalities to exclude a large number of irrelevant gallery samples prior to the retrieval network. For the keyword prediction, we transform it into a multi-label classification problem and propose a multi-task learning scheme by appending the multi-label classifiers to the image-text retrieval network to achieve lightweight and high-performance keyword prediction. For the keyword matching, we introduce the inverted index from the search engine and create a win-win situation on both time and space complexities for the pre-screening. Extensive experiments on two widely-used datasets, i.e., Flickr30K and MS-COCO, verify the effectiveness of the proposed framework. The proposed framework, equipped with only two embedding layers, achieves $O(1)$ querying time complexity, while improving the retrieval efficiency and keeping its performance, when applied prior to the common image-text retrieval methods. Our code will be released.
0911.3343
Nicolai Kuntze
Nicolai Kuntze, Juergen Repp, Hervais Simo Fhom, Andreas Fuchs, Ine-Saf Benaissa
Final Architecture Specification of security, privacy, and incentive mechanisms
Delieverable of the EU FP7 project Nanodatacenters
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this document, we define the NADA security architecture based on refined use case scenarios, a derived high-level model, and security analysis. For the architecture design and verification, we apply the well-known STRIDE model.
[ { "created": "Tue, 17 Nov 2009 15:58:10 GMT", "version": "v1" } ]
2009-11-18
[ [ "Kuntze", "Nicolai", "" ], [ "Repp", "Juergen", "" ], [ "Fhom", "Hervais Simo", "" ], [ "Fuchs", "Andreas", "" ], [ "Benaissa", "Ine-Saf", "" ] ]
In this document, we define the NADA security architecture based on refined use case scenarios, a derived high-level model, and security analysis. For the architecture design and verification, we apply the well-known STRIDE model.
1807.01545
Christian H\"ager
Christian H\"ager, Henry D. Pfister
Wideband Time-Domain Digital Backpropagation via Subband Processing and Deep Learning
3 pages, 3 figures
null
null
null
cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a low-complexity sub-banded DSP architecture for digital backpropagation where the walk-off effect is compensated using simple delay elements. For a simulated 96-Gbaud signal and 2500 km optical link, our method achieves a 2.8 dB SNR improvement over linear equalization.
[ { "created": "Wed, 4 Jul 2018 12:39:25 GMT", "version": "v1" } ]
2018-07-05
[ [ "Häger", "Christian", "" ], [ "Pfister", "Henry D.", "" ] ]
We propose a low-complexity sub-banded DSP architecture for digital backpropagation where the walk-off effect is compensated using simple delay elements. For a simulated 96-Gbaud signal and 2500 km optical link, our method achieves a 2.8 dB SNR improvement over linear equalization.
2202.05347
T\'ulio Marcondes Moreira
T\'ulio Marcondes Moreira, Jackson Geraldo de Faria Jr, Pedro O.S. Vaz-de-Melo and Gilberto Medeiros-Ribeiro
Development and Validation of an AI-Driven Model for the La Rance Tidal Barrage: A Generalisable Case Study
30 pages, 22 figures and 6 tables
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this work, an AI-Driven (autonomous) model representation of the La Rance tidal barrage was developed using novel parametrisation and Deep Reinforcement Learning (DRL) techniques. Our model results were validated with experimental measurements, yielding the first Tidal Range Structure (TRS) model validated against a constructed tidal barrage and made available to academics. In order to properly model La Rance, parametrisation methodologies were developed for simulating (i) turbines (in pumping and power generation modes), (ii) transition ramp functions (for opening and closing hydraulic structures) and (iii) equivalent lagoon wetted area. Furthermore, an updated DRL method was implemented for optimising the operation of the hydraulic structures that compose La Rance. The objective of this work was to verify the capabilities of an AI-Driven TRS model to appropriately predict (i) turbine power and (ii) lagoon water level variations. In addition, the observed operational strategy and yearly energy output of our AI-Driven model appeared to be comparable with those reported for the La Rance tidal barrage. The outcomes of this work (developed methodologies and DRL implementations) are generalisable and can be applied to other TRS projects. Furthermore, this work provided insights which allow for more realistic simulation of TRS operation, enabled through our AI-Driven model.
[ { "created": "Thu, 10 Feb 2022 22:02:52 GMT", "version": "v1" } ]
2022-02-14
[ [ "Moreira", "Túlio Marcondes", "" ], [ "Faria", "Jackson Geraldo de", "Jr" ], [ "Vaz-de-Melo", "Pedro O. S.", "" ], [ "Medeiros-Ribeiro", "Gilberto", "" ] ]
In this work, an AI-Driven (autonomous) model representation of the La Rance tidal barrage was developed using novel parametrisation and Deep Reinforcement Learning (DRL) techniques. Our model results were validated with experimental measurements, yielding the first Tidal Range Structure (TRS) model validated against a constructed tidal barrage and made available to academics. In order to properly model La Rance, parametrisation methodologies were developed for simulating (i) turbines (in pumping and power generation modes), (ii) transition ramp functions (for opening and closing hydraulic structures) and (iii) equivalent lagoon wetted area. Furthermore, an updated DRL method was implemented for optimising the operation of the hydraulic structures that compose La Rance. The objective of this work was to verify the capabilities of an AI-Driven TRS model to appropriately predict (i) turbine power and (ii) lagoon water level variations. In addition, the observed operational strategy and yearly energy output of our AI-Driven model appeared to be comparable with those reported for the La Rance tidal barrage. The outcomes of this work (developed methodologies and DRL implementations) are generalisable and can be applied to other TRS projects. Furthermore, this work provided insights which allow for more realistic simulation of TRS operation, enabled through our AI-Driven model.
2207.05267
Bo Wang
Haiqing Hao (1), Zhongwang Pang (1 and 2), Guan Wang (1 and 2) and Bo Wang (1 and 2) ((1) State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing, China, (2) Key Laboratory of Photonic Control Technology (Tsinghua University), Ministry of Education, Beijing, China)
Indoor optical fiber eavesdropping approach and its avoidance
8 pages, 4 figures, submitted to Optics Express
null
10.1364/OE.470529
null
cs.SD eess.AS physics.ins-det physics.optics
http://creativecommons.org/licenses/by-nc-nd/4.0/
The optical fiber network has become a worldwide infrastructure. In addition to its basic functions in telecommunication, its sensing ability has attracted more and more attention. In this paper, we discuss the risk of household fiber being used for eavesdropping and demonstrate its performance in the lab. Using a 3-meter tail fiber in front of the household optical modem, voices of normal human speech can be eavesdropped on with a laser interferometer and recovered 1.1 km away. The detection distance limit and system noise are analyzed quantitatively. We also give some practical ways to prevent eavesdropping through household fiber.
[ { "created": "Tue, 12 Jul 2022 02:31:34 GMT", "version": "v1" }, { "created": "Wed, 3 Aug 2022 13:58:31 GMT", "version": "v2" } ]
2022-10-05
[ [ "Hao", "Haiqing", "", "1 and 2" ], [ "Pang", "Zhongwang", "", "1 and 2" ], [ "Wang", "Guan", "", "1 and 2" ], [ "Wang", "Bo", "", "1 and 2" ] ]
The optical fiber network has become a worldwide infrastructure. In addition to its basic functions in telecommunication, its sensing ability has attracted more and more attention. In this paper, we discuss the risk of household fiber being used for eavesdropping and demonstrate its performance in the lab. Using a 3-meter tail fiber in front of the household optical modem, voices of normal human speech can be eavesdropped on with a laser interferometer and recovered 1.1 km away. The detection distance limit and system noise are analyzed quantitatively. We also give some practical ways to prevent eavesdropping through household fiber.
2007.11849
Chen-Yu Wei
Chen-Yu Wei, Mehdi Jafarnia-Jahromi, Haipeng Luo, Rahul Jain
Learning Infinite-horizon Average-reward MDPs with Linear Function Approximation
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop several new algorithms for learning Markov Decision Processes in an infinite-horizon average-reward setting with linear function approximation. Using the optimism principle and assuming that the MDP has a linear structure, we first propose a computationally inefficient algorithm with optimal $\widetilde{O}(\sqrt{T})$ regret and another computationally efficient variant with $\widetilde{O}(T^{3/4})$ regret, where $T$ is the number of interactions. Next, taking inspiration from adversarial linear bandits, we develop yet another efficient algorithm with $\widetilde{O}(\sqrt{T})$ regret under a different set of assumptions, improving the best existing result by Hao et al. (2020) with $\widetilde{O}(T^{2/3})$ regret. Moreover, we draw a connection between this algorithm and the Natural Policy Gradient algorithm proposed by Kakade (2002), and show that our analysis improves the sample complexity bound recently given by Agarwal et al. (2020).
[ { "created": "Thu, 23 Jul 2020 08:23:44 GMT", "version": "v1" }, { "created": "Mon, 26 Apr 2021 09:12:03 GMT", "version": "v2" } ]
2021-04-27
[ [ "Wei", "Chen-Yu", "" ], [ "Jafarnia-Jahromi", "Mehdi", "" ], [ "Luo", "Haipeng", "" ], [ "Jain", "Rahul", "" ] ]
We develop several new algorithms for learning Markov Decision Processes in an infinite-horizon average-reward setting with linear function approximation. Using the optimism principle and assuming that the MDP has a linear structure, we first propose a computationally inefficient algorithm with optimal $\widetilde{O}(\sqrt{T})$ regret and another computationally efficient variant with $\widetilde{O}(T^{3/4})$ regret, where $T$ is the number of interactions. Next, taking inspiration from adversarial linear bandits, we develop yet another efficient algorithm with $\widetilde{O}(\sqrt{T})$ regret under a different set of assumptions, improving the best existing result by Hao et al. (2020) with $\widetilde{O}(T^{2/3})$ regret. Moreover, we draw a connection between this algorithm and the Natural Policy Gradient algorithm proposed by Kakade (2002), and show that our analysis improves the sample complexity bound recently given by Agarwal et al. (2020).
2010.09409
Juan Jos\'e G\'omez Rodr\'iguez
Juan J. G\'omez Rodr\'iguez, Jos\'e Lamarca, Javier Morlana, Juan D. Tard\'os, Jos\'e M. M. Montiel
SD-DefSLAM: Semi-Direct Monocular SLAM for Deformable and Intracorporeal Scenes
10 pages, 8 figures. Submitted to RA-L with option to ICRA 2021. Associated video: https://youtu.be/gkcC0IR3X6A
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional SLAM techniques strongly rely on scene rigidity to solve data association, ignoring dynamic parts of the scene. In this work we present Semi-Direct DefSLAM (SD-DefSLAM), a novel monocular deformable SLAM method able to map highly deforming environments, built on top of DefSLAM. To robustly solve data association in challenging deforming scenes, SD-DefSLAM combines direct and indirect methods: an enhanced illumination-invariant Lucas-Kanade tracker for data association, geometric Bundle Adjustment for pose and deformable map estimation, and bag-of-words based on feature descriptors for camera relocation. Dynamic objects are detected and segmented-out using a CNN trained for the specific application domain. We thoroughly evaluate our system in two public datasets. The mandala dataset is a SLAM benchmark with increasingly aggressive deformations. The Hamlyn dataset contains intracorporeal sequences that pose serious real-life challenges beyond deformation like weak texture, specular reflections, surgical tools and occlusions. Our results show that SD-DefSLAM outperforms DefSLAM in point tracking, reconstruction accuracy and scale drift thanks to the improvement in all the data association steps, being the first system able to robustly perform SLAM inside the human body.
[ { "created": "Mon, 19 Oct 2020 12:07:07 GMT", "version": "v1" } ]
2020-10-20
[ [ "Rodríguez", "Juan J. Gómez", "" ], [ "Lamarca", "José", "" ], [ "Morlana", "Javier", "" ], [ "Tardós", "Juan D.", "" ], [ "Montiel", "José M. M.", "" ] ]
Conventional SLAM techniques strongly rely on scene rigidity to solve data association, ignoring dynamic parts of the scene. In this work we present Semi-Direct DefSLAM (SD-DefSLAM), a novel monocular deformable SLAM method able to map highly deforming environments, built on top of DefSLAM. To robustly solve data association in challenging deforming scenes, SD-DefSLAM combines direct and indirect methods: an enhanced illumination-invariant Lucas-Kanade tracker for data association, geometric Bundle Adjustment for pose and deformable map estimation, and bag-of-words based on feature descriptors for camera relocation. Dynamic objects are detected and segmented-out using a CNN trained for the specific application domain. We thoroughly evaluate our system in two public datasets. The mandala dataset is a SLAM benchmark with increasingly aggressive deformations. The Hamlyn dataset contains intracorporeal sequences that pose serious real-life challenges beyond deformation like weak texture, specular reflections, surgical tools and occlusions. Our results show that SD-DefSLAM outperforms DefSLAM in point tracking, reconstruction accuracy and scale drift thanks to the improvement in all the data association steps, being the first system able to robustly perform SLAM inside the human body.
2403.06189
Dai Yuqin
Yuqin Dai, Wanlu Zhu, Ronghui Li, Zeping Ren, Xiangzheng Zhou, Xiu Li, Jun Li, Jian Yang
Harmonious Group Choreography with Trajectory-Controllable Diffusion
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Creating group choreography from music has gained attention in cultural entertainment and virtual reality, aiming to coordinate visually cohesive and diverse group movements. Despite increasing interest, recent works face challenges in achieving aesthetically appealing choreography, primarily due to two key issues: multi-dancer collision and single-dancer foot slide. To address these issues, we propose Trajectory-Controllable Diffusion (TCDiff), a novel approach that harnesses non-overlapping trajectories to facilitate coherent dance movements. Specifically, to tackle dancer collisions, we introduce a Dance-Beat Navigator capable of generating trajectories for multiple dancers based on the music, complemented by a Distance-Consistency loss to maintain appropriate spacing among trajectories within a reasonable threshold. To mitigate foot sliding, we present a Footwork Adaptor that utilizes trajectory displacement from adjacent frames to enable flexible footwork, coupled with a Relative Forward-Kinematic loss to adjust the positioning of individual dancers' root nodes and joints. Extensive experiments demonstrate that our method achieves state-of-the-art results.
[ { "created": "Sun, 10 Mar 2024 12:11:34 GMT", "version": "v1" }, { "created": "Thu, 6 Jun 2024 08:19:12 GMT", "version": "v2" }, { "created": "Wed, 14 Aug 2024 02:38:55 GMT", "version": "v3" } ]
2024-08-15
[ [ "Dai", "Yuqin", "" ], [ "Zhu", "Wanlu", "" ], [ "Li", "Ronghui", "" ], [ "Ren", "Zeping", "" ], [ "Zhou", "Xiangzheng", "" ], [ "Li", "Xiu", "" ], [ "Li", "Jun", "" ], [ "Yang", "Jian", "" ] ]
Creating group choreography from music has gained attention in cultural entertainment and virtual reality, aiming to coordinate visually cohesive and diverse group movements. Despite increasing interest, recent works face challenges in achieving aesthetically appealing choreography, primarily due to two key issues: multi-dancer collision and single-dancer foot slide. To address these issues, we propose Trajectory-Controllable Diffusion (TCDiff), a novel approach that harnesses non-overlapping trajectories to facilitate coherent dance movements. Specifically, to tackle dancer collisions, we introduce a Dance-Beat Navigator capable of generating trajectories for multiple dancers based on the music, complemented by a Distance-Consistency loss to maintain appropriate spacing among trajectories within a reasonable threshold. To mitigate foot sliding, we present a Footwork Adaptor that utilizes trajectory displacement from adjacent frames to enable flexible footwork, coupled with a Relative Forward-Kinematic loss to adjust the positioning of individual dancers' root nodes and joints. Extensive experiments demonstrate that our method achieves state-of-the-art results.
2304.02572
Zheqing Zhu
Hongbo Guo, Ruben Naeff, Alex Nikulkov, Zheqing Zhu
Evaluating Online Bandit Exploration In Large-Scale Recommender System
null
null
null
null
cs.IR cs.AI cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
Bandit learning has been an increasingly popular design choice for recommender systems. Despite the strong interest in bandit learning from the community, there remain multiple bottlenecks that prevent many bandit learning approaches from productionalization. One major bottleneck is how to test the effectiveness of a bandit algorithm with fairness and without data leakage. Different from supervised learning algorithms, bandit learning algorithms place great emphasis on the data collection process through their explorative nature. Such explorative behavior may induce unfair evaluation in a classic A/B test setting. In this work, we apply the upper confidence bound (UCB) algorithm to our large-scale short video recommender system and present a test framework for the production bandit learning life-cycle with a new set of metrics. Extensive experiment results show that our experiment design is able to fairly evaluate the performance of bandit learning in the recommender system.
[ { "created": "Wed, 5 Apr 2023 16:44:36 GMT", "version": "v1" }, { "created": "Thu, 22 Jun 2023 03:41:43 GMT", "version": "v2" }, { "created": "Sun, 30 Jul 2023 08:29:55 GMT", "version": "v3" } ]
2023-08-01
[ [ "Guo", "Hongbo", "" ], [ "Naeff", "Ruben", "" ], [ "Nikulkov", "Alex", "" ], [ "Zhu", "Zheqing", "" ] ]
Bandit learning has been an increasingly popular design choice for recommender systems. Despite the strong interest in bandit learning from the community, there remain multiple bottlenecks that prevent many bandit learning approaches from productionalization. One major bottleneck is how to test the effectiveness of a bandit algorithm with fairness and without data leakage. Different from supervised learning algorithms, bandit learning algorithms place great emphasis on the data collection process through their explorative nature. Such explorative behavior may induce unfair evaluation in a classic A/B test setting. In this work, we apply the upper confidence bound (UCB) algorithm to our large-scale short video recommender system and present a test framework for the production bandit learning life-cycle with a new set of metrics. Extensive experiment results show that our experiment design is able to fairly evaluate the performance of bandit learning in the recommender system.
1308.0037
Ryan Williams
Ryan K. Williams, Andrea Gasparri, and Bhaskar Krishnamachari
Route Swarm: Wireless Network Optimization through Mobility
9 pages, 4 figures, submitted to the IEEE International Conference on Intelligent Robots and Systems (IROS) 2014
null
10.1109/IROS.2014.6943092
null
cs.SY cs.MA cs.NI cs.RO math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we demonstrate a novel hybrid architecture for coordinating networked robots in sensing and information routing applications. The proposed INformation and Sensing driven PhysIcally REconfigurable robotic network (INSPIRE) consists of a Physical Control Plane (PCP), which commands agent position, and an Information Control Plane (ICP), which regulates information flow towards communication/sensing objectives. We describe an instantiation where a mobile robotic network is dynamically reconfigured to ensure high-quality routes between static wireless nodes, which act as source/destination pairs for information flow. The ICP commands the robots towards evenly distributed inter-flow allocations, with intra-flow configurations that maximize route quality. The PCP then guides the robots via potential-based control to reconfigure according to ICP commands. This formulation, deemed Route Swarm, decouples information flow and physical control, generating a feedback between routing and sensing needs and robotic configuration. We demonstrate our propositions through simulation under a realistic wireless network regime.
[ { "created": "Wed, 31 Jul 2013 20:47:14 GMT", "version": "v1" }, { "created": "Tue, 3 Sep 2013 21:16:47 GMT", "version": "v2" }, { "created": "Fri, 7 Feb 2014 02:24:13 GMT", "version": "v3" } ]
2016-11-18
[ [ "Williams", "Ryan K.", "" ], [ "Gasparri", "Andrea", "" ], [ "Krishnamachari", "Bhaskar", "" ] ]
In this paper, we demonstrate a novel hybrid architecture for coordinating networked robots in sensing and information routing applications. The proposed INformation and Sensing driven PhysIcally REconfigurable robotic network (INSPIRE) consists of a Physical Control Plane (PCP), which commands agent position, and an Information Control Plane (ICP), which regulates information flow towards communication/sensing objectives. We describe an instantiation where a mobile robotic network is dynamically reconfigured to ensure high-quality routes between static wireless nodes, which act as source/destination pairs for information flow. The ICP commands the robots towards evenly distributed inter-flow allocations, with intra-flow configurations that maximize route quality. The PCP then guides the robots via potential-based control to reconfigure according to ICP commands. This formulation, deemed Route Swarm, decouples information flow and physical control, generating a feedback between routing and sensing needs and robotic configuration. We demonstrate our propositions through simulation under a realistic wireless network regime.
1705.08362
Thorsten Wi{\ss}mann
Ulrich Dorsch, Stefan Milius, Lutz Schr\"oder, Thorsten Wi{\ss}mann
Efficient Coalgebraic Partition Refinement
null
null
null
null
cs.DS cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a generic partition refinement algorithm that quotients coalgebraic systems by behavioural equivalence, an important task in reactive verification; coalgebraic generality implies in particular that we cover not only classical relational systems but also various forms of weighted systems. Under assumptions on the type functor that allow representing its finite coalgebras in terms of nodes and edges, our algorithm runs in time $\mathcal{O}(m\cdot \log n)$ where $n$ and $m$ are the numbers of nodes and edges, respectively. Instances of our generic algorithm thus match the runtime of the best known algorithms for unlabelled transition systems, Markov chains, and deterministic automata (with fixed alphabets), and improve the best known algorithms for Segala systems.
[ { "created": "Tue, 23 May 2017 15:31:59 GMT", "version": "v1" }, { "created": "Sat, 8 Jul 2017 09:53:47 GMT", "version": "v2" }, { "created": "Thu, 13 Jul 2017 10:49:21 GMT", "version": "v3" }, { "created": "Mon, 9 Oct 2017 10:19:12 GMT", "version": "v4" } ]
2017-10-10
[ [ "Dorsch", "Ulrich", "" ], [ "Milius", "Stefan", "" ], [ "Schröder", "Lutz", "" ], [ "Wißmann", "Thorsten", "" ] ]
We present a generic partition refinement algorithm that quotients coalgebraic systems by behavioural equivalence, an important task in reactive verification; coalgebraic generality implies in particular that we cover not only classical relational systems but also various forms of weighted systems. Under assumptions on the type functor that allow representing its finite coalgebras in terms of nodes and edges, our algorithm runs in time $\mathcal{O}(m\cdot \log n)$ where $n$ and $m$ are the numbers of nodes and edges, respectively. Instances of our generic algorithm thus match the runtime of the best known algorithms for unlabelled transition systems, Markov chains, and deterministic automata (with fixed alphabets), and improve the best known algorithms for Segala systems.
1204.4909
M. Rizwan Jameel Qureshi Dr.
M. Rizwan Jameel Qureshi and Waseem Qureshi
Evaluation of the Design Metric to Reduce the Number of Defects in Software Development
9 Pages
International Journal of Information Technology and Computer Science (IJITCS), Vol. 4/4, pp. 9-17, April 2012
10.5815/ijitcs.2012.04.02
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software design is one of the most important activities in the system development life cycle (SDLC) that ensures the quality of software. Several key areas of design are vital to take into consideration while designing software. Software design describes how the software system is decomposed and managed in smaller components. The object-oriented (OO) paradigm has provided the software industry with more reliable and manageable software and designs. The quality of a software design can be measured through different metrics, such as the Chidamber and Kemerer (CK) design metrics, the MOOD metrics, and the Lorenz and Kidd metrics. The CK metrics suite is one of the oldest and most reliable metrics available to the software industry to evaluate OO design. This paper presents an evaluation of the CK metrics to propose improved CK design metric values that reduce defects during the software design phase. This paper also describes whether a significant effect of any CK design metric exists on the total number of defects per module. This is achieved by conducting a survey in two software development companies.
[ { "created": "Sun, 22 Apr 2012 16:35:41 GMT", "version": "v1" } ]
2012-04-24
[ [ "Qureshi", "M. Rizwan Jameel", "" ], [ "Qureshi", "Waseem", "" ] ]
Software design is one of the most important activities in the system development life cycle (SDLC) that ensures the quality of software. Several key areas of design are vital to take into consideration while designing software. Software design describes how the software system is decomposed and managed in smaller components. The object-oriented (OO) paradigm has provided the software industry with more reliable and manageable software and designs. The quality of a software design can be measured through different metrics, such as the Chidamber and Kemerer (CK) design metrics, the MOOD metrics, and the Lorenz and Kidd metrics. The CK metrics suite is one of the oldest and most reliable metrics available to the software industry to evaluate OO design. This paper presents an evaluation of the CK metrics to propose improved CK design metric values that reduce defects during the software design phase. This paper also describes whether a significant effect of any CK design metric exists on the total number of defects per module. This is achieved by conducting a survey in two software development companies.
2004.02002
Tiancheng Zhao
Tianchang Zhao and Kyusong Lee
Talk to Papers: Bringing Neural Question Answering to Academic Search
demo paper accepted at ACL 2020
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Talk to Papers, which exploits recent open-domain question answering (QA) techniques to improve the current experience of academic search. It is designed to enable researchers to use natural language queries to find precise answers and extract insights from a massive amount of academic papers. We present a large improvement over a classic search engine baseline on several standard QA datasets and provide the community with a collaborative data collection tool to curate the first natural language processing research QA dataset via a community effort.
[ { "created": "Sat, 4 Apr 2020 19:19:55 GMT", "version": "v1" }, { "created": "Mon, 13 Apr 2020 14:38:11 GMT", "version": "v2" }, { "created": "Thu, 21 May 2020 20:26:28 GMT", "version": "v3" } ]
2020-05-25
[ [ "Zhao", "Tianchang", "" ], [ "Lee", "Kyusong", "" ] ]
We introduce Talk to Papers, which exploits recent open-domain question answering (QA) techniques to improve the current experience of academic search. It is designed to enable researchers to use natural language queries to find precise answers and extract insights from a massive amount of academic papers. We present a large improvement over a classic search engine baseline on several standard QA datasets and provide the community with a collaborative data collection tool to curate the first natural language processing research QA dataset via a community effort.
2312.03322
Zhimiao Yu
Zhimiao Yu, Tiancheng Lin, Yi Xu
Background Clustering Pre-training for Few-shot Segmentation
6 pages, 2 figures, ICIP 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent few-shot segmentation (FSS) methods introduce an extra pre-training stage before meta-training to obtain a stronger backbone, which has become a standard step in few-shot learning. Despite its effectiveness, the current pre-training scheme suffers from the merged background problem: only base classes are labelled as foregrounds, making it hard to distinguish between novel classes and the actual background. In this paper, we propose a new pre-training scheme for FSS via decoupling the novel classes from background, called Background Clustering Pre-Training (BCPT). Specifically, we adopt online clustering on the pixel embeddings of the merged background to explore the underlying semantic structures, bridging the gap between pre-training and adaptation to novel classes. Given the clustering results, we further propose a background mining loss and leverage base classes to guide the clustering process, improving the quality and stability of clustering results. Experiments on PASCAL-5i and COCO-20i show that BCPT yields advanced performance. Code will be available.
[ { "created": "Wed, 6 Dec 2023 07:16:32 GMT", "version": "v1" } ]
2023-12-07
[ [ "Yu", "Zhimiao", "" ], [ "Lin", "Tiancheng", "" ], [ "Xu", "Yi", "" ] ]
Recent few-shot segmentation (FSS) methods introduce an extra pre-training stage before meta-training to obtain a stronger backbone, which has become a standard step in few-shot learning. Despite its effectiveness, the current pre-training scheme suffers from the merged background problem: only base classes are labelled as foregrounds, making it hard to distinguish between novel classes and the actual background. In this paper, we propose a new pre-training scheme for FSS via decoupling the novel classes from background, called Background Clustering Pre-Training (BCPT). Specifically, we adopt online clustering on the pixel embeddings of the merged background to explore the underlying semantic structures, bridging the gap between pre-training and adaptation to novel classes. Given the clustering results, we further propose a background mining loss and leverage base classes to guide the clustering process, improving the quality and stability of clustering results. Experiments on PASCAL-5i and COCO-20i show that BCPT yields advanced performance. Code will be available.
1609.03355
Zhou Zhou
Zhou Zhou, Jun Fang, Linxiao Yang, Hongbin Li, Zhi Chen and Rick S. Blum
Low-Rank Tensor Decomposition-Aided Channel Estimation for Millimeter Wave MIMO-OFDM Systems
arXiv admin note: text overlap with arXiv:1602.07955
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of downlink channel estimation for millimeter wave (mmWave) MIMO-OFDM systems, where both the base station (BS) and the mobile station (MS) employ large antenna arrays for directional precoding/beamforming. Hybrid analog and digital beamforming structures are employed in order to offer a compromise between hardware complexity and system performance. Different from most existing studies that are concerned with narrowband channels, we consider estimation of wideband mmWave channels with frequency selectivity, which is more appropriate for mmWave MIMO-OFDM systems. By exploiting the sparse scattering nature of mmWave channels, we propose a CANDECOMP/PARAFAC (CP) decomposition-based method for channel parameter estimation (including angles of arrival/departure, time delays, and fading coefficients). In our proposed method, the received signal at the BS is expressed as a third-order tensor. We show that the tensor has the form of a low-rank CP decomposition, and the channel parameters can be estimated from the associated factor matrices. Our analysis reveals that the uniqueness of the CP decomposition can be guaranteed even when the size of the tensor is small. Hence the proposed method has the potential to achieve substantial training overhead reduction. We also develop Cramer-Rao bound (CRB) results for channel parameters, and compare our proposed method with a compressed sensing-based method. Simulation results show that the proposed method attains mean square errors that are very close to their associated CRBs, and presents a clear advantage over the compressed sensing-based method in terms of both estimation accuracy and computational complexity.
[ { "created": "Mon, 12 Sep 2016 11:52:48 GMT", "version": "v1" }, { "created": "Tue, 1 Nov 2016 08:45:05 GMT", "version": "v2" } ]
2016-11-02
[ [ "Zhou", "Zhou", "" ], [ "Fang", "Jun", "" ], [ "Yang", "Linxiao", "" ], [ "Li", "Hongbin", "" ], [ "Chen", "Zhi", "" ], [ "Blum", "Rick S.", "" ] ]
We consider the problem of downlink channel estimation for millimeter wave (mmWave) MIMO-OFDM systems, where both the base station (BS) and the mobile station (MS) employ large antenna arrays for directional precoding/beamforming. Hybrid analog and digital beamforming structures are employed in order to offer a compromise between hardware complexity and system performance. Different from most existing studies that are concerned with narrowband channels, we consider estimation of wideband mmWave channels with frequency selectivity, which is more appropriate for mmWave MIMO-OFDM systems. By exploiting the sparse scattering nature of mmWave channels, we propose a CANDECOMP/PARAFAC (CP) decomposition-based method for channel parameter estimation (including angles of arrival/departure, time delays, and fading coefficients). In our proposed method, the received signal at the BS is expressed as a third-order tensor. We show that the tensor has the form of a low-rank CP decomposition, and the channel parameters can be estimated from the associated factor matrices. Our analysis reveals that the uniqueness of the CP decomposition can be guaranteed even when the size of the tensor is small. Hence the proposed method has the potential to achieve substantial training overhead reduction. We also develop Cramer-Rao bound (CRB) results for channel parameters, and compare our proposed method with a compressed sensing-based method. Simulation results show that the proposed method attains mean square errors that are very close to their associated CRBs, and presents a clear advantage over the compressed sensing-based method in terms of both estimation accuracy and computational complexity.
2304.07825
Samuel Epstein
Samuel Epstein
Regression and Algorithmic Information Theory
null
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we prove a theorem about regression, in that the shortest description of a function consistent with a finite sample of data is less than the combined conditional Kolmogorov complexities over the data in the sample.
[ { "created": "Sun, 16 Apr 2023 16:30:38 GMT", "version": "v1" } ]
2023-04-18
[ [ "Epstein", "Samuel", "" ] ]
In this paper we prove a theorem about regression, in that the shortest description of a function consistent with a finite sample of data is less than the combined conditional Kolmogorov complexities over the data in the sample.
2209.01641
Souvik Datta
Souvik Datta, Mangolik Kundu, Ratnadeep Das Choudhury, Sriramalakshmi P, Sreedevi VT
IoT Book Bot
2022 IEEE India Council International Subsections Conference (INDISCON)
null
10.1109/INDISCON54605.2022.9862937
null
cs.HC cs.RO
http://creativecommons.org/licenses/by/4.0/
In order to ease the process of library management, many technologies have been adopted, but most of them focus on inventory management. There has hardly been any progress in automating the timely issuing and returning of library books. In colleges and schools, hostellers often forget to return issued books to the library on time. To solve this issue and to ensure timely submission of issued books, this work develops a Book-Bot which addresses these complexities. The bot can commute from point A to point B and scan and verify QR codes and barcodes. The bot has a certain payload capacity for carrying books. QR code and barcode scanning is enabled by a Pi Camera, OpenCV, and a Raspberry Pi, making the exchange of books safe and secure. The odometry maneuvers of the bot are controlled manually via the Blynk app. This paper focuses on how human intervention can be reduced and how the book-issuing part of a library management system can be automated with the help of a bot.
[ { "created": "Sun, 4 Sep 2022 15:30:17 GMT", "version": "v1" } ]
2022-09-07
[ [ "Datta", "Souvik", "" ], [ "Kundu", "Mangolik", "" ], [ "Choudhury", "Ratnadeep Das", "" ], [ "P", "Sriramalakshmi", "" ], [ "VT", "Sreedevi", "" ] ]
In order to ease the process of library management, many technologies have been adopted, but most of them focus on inventory management. There has hardly been any progress in automating the timely issuing and returning of library books. In colleges and schools, hostellers often forget to return issued books to the library on time. To solve this issue and to ensure timely submission of issued books, this work develops a Book-Bot which addresses these complexities. The bot can commute from point A to point B and scan and verify QR codes and barcodes. The bot has a certain payload capacity for carrying books. QR code and barcode scanning is enabled by a Pi Camera, OpenCV, and a Raspberry Pi, making the exchange of books safe and secure. The odometry maneuvers of the bot are controlled manually via the Blynk app. This paper focuses on how human intervention can be reduced and how the book-issuing part of a library management system can be automated with the help of a bot.
2407.19199
Ryosuke Motegi
Ryosuke Motegi and Yoichi Seki
A simulation study of cluster search algorithms in data set generated by Gaussian mixture models
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Determining the number of clusters is a fundamental issue in data clustering. Several algorithms have been proposed, including centroid-based algorithms using the Euclidean distance and model-based algorithms using a mixture of probability distributions. Among these, greedy algorithms that search for the number of clusters by repeatedly splitting or merging clusters have advantages in terms of computation time for problems with large sample sizes. However, systematic evaluation experiments comparing these methods are still lacking. This study examines centroid- and model-based cluster search algorithms in various cases that Gaussian mixture models (GMMs) can generate. The cases are generated by combining five factors: dimensionality, sample size, the number of clusters, cluster overlap, and covariance type. The results show that some cluster-splitting criteria based on Euclidean distance make unreasonable decisions when clusters overlap. The results also show that model-based algorithms are insensitive to covariance type and cluster overlap compared to the centroid-based method if the sample size is sufficient. Our cluster search implementation codes are available at https://github.com/lipryou/searchClustK
[ { "created": "Sat, 27 Jul 2024 07:47:25 GMT", "version": "v1" } ]
2024-07-30
[ [ "Motegi", "Ryosuke", "" ], [ "Seki", "Yoichi", "" ] ]
Determining the number of clusters is a fundamental issue in data clustering. Several algorithms have been proposed, including centroid-based algorithms using the Euclidean distance and model-based algorithms using a mixture of probability distributions. Among these, greedy algorithms that search for the number of clusters by repeatedly splitting or merging clusters have advantages in terms of computation time for problems with large sample sizes. However, systematic evaluation experiments comparing these methods are still lacking. This study examines centroid- and model-based cluster search algorithms in various cases that Gaussian mixture models (GMMs) can generate. The cases are generated by combining five factors: dimensionality, sample size, the number of clusters, cluster overlap, and covariance type. The results show that some cluster-splitting criteria based on Euclidean distance make unreasonable decisions when clusters overlap. The results also show that model-based algorithms are insensitive to covariance type and cluster overlap compared to the centroid-based method if the sample size is sufficient. Our cluster search implementation codes are available at https://github.com/lipryou/searchClustK
2302.12987
Yi Gao
Yi Gao, Miao Xu, Min-Ling Zhang
Complementary to Multiple Labels: A Correlation-Aware Correction Approach
null
null
10.1109/TPAMI.2024.3416384
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
\textit{Complementary label learning} (CLL) requires annotators to give \emph{irrelevant} labels instead of relevant labels for instances. Currently, CLL has shown its promising performance on multi-class data by estimating a transition matrix. However, current multi-class CLL techniques cannot work well on multi-labeled data since they assume each instance is associated with one label while each multi-labeled instance is relevant to multiple labels. Here, we show theoretically how the estimated transition matrix in multi-class CLL could be distorted in multi-labeled cases as they ignore co-existing relevant labels. Moreover, theoretical findings reveal that calculating a transition matrix from label correlations in \textit{multi-labeled CLL} (ML-CLL) needs multi-labeled data, while this is unavailable for ML-CLL. To solve this issue, we propose a two-step method to estimate the transition matrix from candidate labels. Specifically, we first estimate an initial transition matrix by decomposing the multi-label problem into a series of binary classification problems, then the initial transition matrix is corrected by label correlations to enforce the addition of relationships among labels. We further show that the proposal is classifier-consistent, and additionally introduce an MSE-based regularizer to alleviate the tendency of the BCE loss to overfit to noise. Experimental results have demonstrated the effectiveness of the proposed method.
[ { "created": "Sat, 25 Feb 2023 04:48:48 GMT", "version": "v1" } ]
2024-06-25
[ [ "Gao", "Yi", "" ], [ "Xu", "Miao", "" ], [ "Zhang", "Min-Ling", "" ] ]
\textit{Complementary label learning} (CLL) requires annotators to give \emph{irrelevant} labels instead of relevant labels for instances. Currently, CLL has shown its promising performance on multi-class data by estimating a transition matrix. However, current multi-class CLL techniques cannot work well on multi-labeled data since they assume each instance is associated with one label while each multi-labeled instance is relevant to multiple labels. Here, we show theoretically how the estimated transition matrix in multi-class CLL could be distorted in multi-labeled cases as they ignore co-existing relevant labels. Moreover, theoretical findings reveal that calculating a transition matrix from label correlations in \textit{multi-labeled CLL} (ML-CLL) needs multi-labeled data, while this is unavailable for ML-CLL. To solve this issue, we propose a two-step method to estimate the transition matrix from candidate labels. Specifically, we first estimate an initial transition matrix by decomposing the multi-label problem into a series of binary classification problems, then the initial transition matrix is corrected by label correlations to enforce the addition of relationships among labels. We further show that the proposal is classifier-consistent, and additionally introduce an MSE-based regularizer to alleviate the tendency of the BCE loss to overfit to noise. Experimental results have demonstrated the effectiveness of the proposed method.
2306.03894
Todd Schmid
Todd Schmid and Victoria Noquez and Lawrence S. Moss
Fractals from Regular Behaviours
Expanded and edited into a journal version. Submitted to the CALCO 2023 special issue of LMCS. (31 pages, 5 figures.)
null
null
null
cs.LO cs.FL
http://creativecommons.org/licenses/by/4.0/
We are interested in connections between the theory of fractal sets obtained as attractors of iterated function systems and process calculi. To this end, we reinterpret Milner's expressions for processes as contraction operators on a complete metric space. When the space is, for example, the plane, the denotations of fixed point terms correspond to familiar fractal sets. We give a sound and complete axiomatization of fractal equivalence, the congruence on terms consisting of pairs that construct identical self-similar sets in all interpretations. We further make connections to labelled Markov chains and to invariant measures. In all of this work, we use important results from process calculi. For example, we use Rabinovich's completeness theorem for trace equivalence in our own completeness theorem. In addition to our results, we also raise several questions related to both fractals and process calculi.
[ { "created": "Tue, 6 Jun 2023 17:55:12 GMT", "version": "v1" }, { "created": "Thu, 1 Feb 2024 16:18:26 GMT", "version": "v2" } ]
2024-02-02
[ [ "Schmid", "Todd", "" ], [ "Noquez", "Victoria", "" ], [ "Moss", "Lawrence S.", "" ] ]
We are interested in connections between the theory of fractal sets obtained as attractors of iterated function systems and process calculi. To this end, we reinterpret Milner's expressions for processes as contraction operators on a complete metric space. When the space is, for example, the plane, the denotations of fixed point terms correspond to familiar fractal sets. We give a sound and complete axiomatization of fractal equivalence, the congruence on terms consisting of pairs that construct identical self-similar sets in all interpretations. We further make connections to labelled Markov chains and to invariant measures. In all of this work, we use important results from process calculi. For example, we use Rabinovich's completeness theorem for trace equivalence in our own completeness theorem. In addition to our results, we also raise several questions related to both fractals and process calculi.
2208.04347
Jason Phang
Jason Phang, Yao Zhao, Peter J. Liu
Investigating Efficiently Extending Transformers for Long Input Summarization
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs continues to be a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most pretrained models. Through an extensive set of experiments, we investigate what model architectural changes and pretraining paradigms can most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that an additional pretraining phase on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K tokens. PEGASUS-X achieves strong performance on long input summarization tasks comparable with much larger models while adding few additional parameters and not requiring model parallelism to train.
[ { "created": "Mon, 8 Aug 2022 18:10:58 GMT", "version": "v1" } ]
2022-08-10
[ [ "Phang", "Jason", "" ], [ "Zhao", "Yao", "" ], [ "Liu", "Peter J.", "" ] ]
While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs continues to be a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most pretrained models. Through an extensive set of experiments, we investigate what model architectural changes and pretraining paradigms can most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that an additional pretraining phase on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K tokens. PEGASUS-X achieves strong performance on long input summarization tasks comparable with much larger models while adding few additional parameters and not requiring model parallelism to train.
1404.5144
Sacha Gomez
M. Konomi and G. M. Sacha
Influence of the learning method in the performance of feedforward neural networks when the activity of neurons is modified
null
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A method that allows us to give a different treatment to any neuron inside feedforward neural networks is presented. The algorithm has been implemented with two very different learning methods: a standard Back-propagation (BP) procedure and an evolutionary algorithm. First, we have demonstrated that the EA training method converges faster and gives more accurate results than BP. Then we have made a full analysis of the effects of turning off different combinations of neurons after the training phase. We demonstrate that EA is much more robust than BP for all the cases under study. Even in the case when two hidden neurons are lost, EA training is still able to give good average results. This difference implies that we must be very careful when pruning or redundancy effects are being studied, since the network performance when losing neurons strongly depends on the training method. Moreover, the influence of the individual inputs will also depend on the training algorithm. Since EA keeps a good classification performance when units are lost, this method could be a good way to simulate biological learning systems, since they must be robust against deficient neuron performance. Although biological systems are much more complex than the simulations shown in this article, we propose that a smart training strategy such as the one shown here could be considered as a first protection against the loss of a certain number of neurons.
[ { "created": "Mon, 21 Apr 2014 09:00:19 GMT", "version": "v1" } ]
2014-04-22
[ [ "Konomi", "M.", "" ], [ "Sacha", "G. M.", "" ] ]
A method that allows us to give a different treatment to any neuron inside feedforward neural networks is presented. The algorithm has been implemented with two very different learning methods: a standard Back-propagation (BP) procedure and an evolutionary algorithm. First, we have demonstrated that the EA training method converges faster and gives more accurate results than BP. Then we have made a full analysis of the effects of turning off different combinations of neurons after the training phase. We demonstrate that EA is much more robust than BP for all the cases under study. Even in the case when two hidden neurons are lost, EA training is still able to give good average results. This difference implies that we must be very careful when pruning or redundancy effects are being studied, since the network performance when losing neurons strongly depends on the training method. Moreover, the influence of the individual inputs will also depend on the training algorithm. Since EA keeps a good classification performance when units are lost, this method could be a good way to simulate biological learning systems, since they must be robust against deficient neuron performance. Although biological systems are much more complex than the simulations shown in this article, we propose that a smart training strategy such as the one shown here could be considered as a first protection against the loss of a certain number of neurons.
2407.00142
Christopher Irwin
Christopher Irwin, Flavio Mignone, Stefania Montani, Luigi Portinale
Graph Neural Networks for Gut Microbiome Metaomic data: A preliminary work
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
The gut microbiome, crucial for human health, presents challenges in analyzing its complex metaomic data due to high dimensionality and sparsity. Traditional methods struggle to capture its intricate relationships. We investigate graph neural networks (GNNs) for this task, aiming to derive meaningful representations of individual gut microbiomes. Unlike methods relying solely on taxa abundance, we directly leverage phylogenetic relationships, in order to obtain a generalized encoder for taxa networks. The representations learnt from the encoder are then used to train a model for phenotype prediction, such as Inflammatory Bowel Disease (IBD).
[ { "created": "Fri, 28 Jun 2024 15:53:36 GMT", "version": "v1" } ]
2024-07-02
[ [ "Irwin", "Christopher", "" ], [ "Mignone", "Flavio", "" ], [ "Montani", "Stefania", "" ], [ "Portinale", "Luigi", "" ] ]
The gut microbiome, crucial for human health, presents challenges in analyzing its complex metaomic data due to high dimensionality and sparsity. Traditional methods struggle to capture its intricate relationships. We investigate graph neural networks (GNNs) for this task, aiming to derive meaningful representations of individual gut microbiomes. Unlike methods relying solely on taxa abundance, we directly leverage phylogenetic relationships, in order to obtain a generalized encoder for taxa networks. The representations learnt from the encoder are then used to train a model for phenotype prediction, such as Inflammatory Bowel Disease (IBD).
2402.11835
Hugh Zhang
Luca D'Amico-Wong, Hugh Zhang, Marc Lanctot, David C. Parkes
Easy as ABCs: Unifying Boltzmann Q-Learning and Counterfactual Regret Minimization
null
null
null
null
cs.LG cs.GT cs.MA
http://creativecommons.org/licenses/by/4.0/
We propose ABCs (Adaptive Branching through Child stationarity), a best-of-both-worlds algorithm combining Boltzmann Q-learning (BQL), a classic reinforcement learning algorithm for single-agent domains, and counterfactual regret minimization (CFR), a central algorithm for learning in multi-agent domains. ABCs adaptively chooses what fraction of the environment to explore each iteration by measuring the stationarity of the environment's reward and transition dynamics. In Markov decision processes, ABCs converges to the optimal policy with at most an O(A) factor slowdown compared to BQL, where A is the number of actions in the environment. In two-player zero-sum games, ABCs is guaranteed to converge to a Nash equilibrium (assuming access to a perfect oracle for detecting stationarity), while BQL has no such guarantees. Empirically, ABCs demonstrates strong performance when benchmarked across environments drawn from the OpenSpiel game library and OpenAI Gym and exceeds all prior methods in environments which are neither fully stationary nor fully nonstationary.
[ { "created": "Mon, 19 Feb 2024 04:58:39 GMT", "version": "v1" } ]
2024-02-20
[ [ "D'Amico-Wong", "Luca", "" ], [ "Zhang", "Hugh", "" ], [ "Lanctot", "Marc", "" ], [ "Parkes", "David C.", "" ] ]
We propose ABCs (Adaptive Branching through Child stationarity), a best-of-both-worlds algorithm combining Boltzmann Q-learning (BQL), a classic reinforcement learning algorithm for single-agent domains, and counterfactual regret minimization (CFR), a central algorithm for learning in multi-agent domains. ABCs adaptively chooses what fraction of the environment to explore each iteration by measuring the stationarity of the environment's reward and transition dynamics. In Markov decision processes, ABCs converges to the optimal policy with at most an O(A) factor slowdown compared to BQL, where A is the number of actions in the environment. In two-player zero-sum games, ABCs is guaranteed to converge to a Nash equilibrium (assuming access to a perfect oracle for detecting stationarity), while BQL has no such guarantees. Empirically, ABCs demonstrates strong performance when benchmarked across environments drawn from the OpenSpiel game library and OpenAI Gym and exceeds all prior methods in environments which are neither fully stationary nor fully nonstationary.
2307.01646
Qi Yan
Qi Yan, Zhengyang Liang, Yang Song, Renjie Liao, Lele Wang
SwinGNN: Rethinking Permutation Invariance in Diffusion Models for Graph Generation
TMLR 2024
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion models based on permutation-equivariant networks can learn permutation-invariant distributions for graph data. However, in comparison to their non-invariant counterparts, we have found that these invariant models encounter greater learning challenges since 1) their effective target distributions exhibit more modes; 2) their optimal one-step denoising scores are the score functions of Gaussian mixtures with more components. Motivated by this analysis, we propose a non-invariant diffusion model, called $\textit{SwinGNN}$, which employs an efficient edge-to-edge 2-WL message passing network and utilizes shifted window based self-attention inspired by SwinTransformers. Further, through systematic ablations, we identify several critical training and sampling techniques that significantly improve the sample quality of graph generation. At last, we introduce a simple post-processing trick, $\textit{i.e.}$, randomly permuting the generated graphs, which provably converts any graph generative model to a permutation-invariant one. Extensive experiments on synthetic and real-world protein and molecule datasets show that our SwinGNN achieves state-of-the-art performances. Our code is released at https://github.com/qiyan98/SwinGNN.
[ { "created": "Tue, 4 Jul 2023 10:58:42 GMT", "version": "v1" }, { "created": "Wed, 19 Jul 2023 04:59:35 GMT", "version": "v2" }, { "created": "Tue, 18 Jun 2024 05:55:32 GMT", "version": "v3" }, { "created": "Wed, 19 Jun 2024 04:48:13 GMT", "version": "v4" } ]
2024-06-21
[ [ "Yan", "Qi", "" ], [ "Liang", "Zhengyang", "" ], [ "Song", "Yang", "" ], [ "Liao", "Renjie", "" ], [ "Wang", "Lele", "" ] ]
Diffusion models based on permutation-equivariant networks can learn permutation-invariant distributions for graph data. However, in comparison to their non-invariant counterparts, we have found that these invariant models encounter greater learning challenges since 1) their effective target distributions exhibit more modes; 2) their optimal one-step denoising scores are the score functions of Gaussian mixtures with more components. Motivated by this analysis, we propose a non-invariant diffusion model, called $\textit{SwinGNN}$, which employs an efficient edge-to-edge 2-WL message passing network and utilizes shifted window based self-attention inspired by SwinTransformers. Further, through systematic ablations, we identify several critical training and sampling techniques that significantly improve the sample quality of graph generation. At last, we introduce a simple post-processing trick, $\textit{i.e.}$, randomly permuting the generated graphs, which provably converts any graph generative model to a permutation-invariant one. Extensive experiments on synthetic and real-world protein and molecule datasets show that our SwinGNN achieves state-of-the-art performances. Our code is released at https://github.com/qiyan98/SwinGNN.
1706.07535
Hemanth Venkateswara
Hemanth Venkateswara, Prasanth Lade, Binbin Lin, Jieping Ye, Sethuraman Panchanathan
Efficient Approximate Solutions to Mutual Information Based Global Feature Selection
ICDM 2015 Conference
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mutual Information (MI) is often used for feature selection when developing classifier models. Estimating the MI for a subset of features is often intractable. We demonstrate that, under the assumption of conditional independence, the MI between a subset of features can be expressed as the Conditional Mutual Information (CMI) between pairs of features. However, selecting features with the highest CMI turns out to be a hard combinatorial problem. In this work, we have applied two unique global methods, Truncated Power Method (TPower) and Low Rank Bilinear Approximation (LowRank), to solve the feature selection problem. These algorithms provide very good approximations to the NP-hard CMI-based feature selection problem. We experimentally demonstrate the effectiveness of these procedures across multiple datasets and compare them with existing MI-based global and iterative feature selection procedures.
[ { "created": "Fri, 23 Jun 2017 01:08:59 GMT", "version": "v1" } ]
2017-06-26
[ [ "Venkateswara", "Hemanth", "" ], [ "Lade", "Prasanth", "" ], [ "Lin", "Binbin", "" ], [ "Ye", "Jieping", "" ], [ "Panchanathan", "Sethuraman", "" ] ]
Mutual Information (MI) is often used for feature selection when developing classifier models. Estimating the MI for a subset of features is often intractable. We demonstrate that, under the assumption of conditional independence, the MI between a subset of features can be expressed as the Conditional Mutual Information (CMI) between pairs of features. However, selecting features with the highest CMI turns out to be a hard combinatorial problem. In this work, we have applied two unique global methods, Truncated Power Method (TPower) and Low Rank Bilinear Approximation (LowRank), to solve the feature selection problem. These algorithms provide very good approximations to the NP-hard CMI-based feature selection problem. We experimentally demonstrate the effectiveness of these procedures across multiple datasets and compare them with existing MI-based global and iterative feature selection procedures.
2109.05234
Zezhong Wang Mr.
Zezhong Wang, Hongru Wang, Kwan Wai Chung, Jia Zhu, Gabriel Pui Cheong Fung, Kam-Fai Wong
Prior Omission of Dissimilar Source Domain(s) for Cost-Effective Few-Shot Learning
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Few-shot slot tagging is an emerging research topic in the field of Natural Language Understanding (NLU). With sufficient annotated data from source domains, the key challenge is how to train and adapt the model to another target domain which has only a few labels. Conventional few-shot approaches use all the data from the source domains without considering inter-domain relations and implicitly assume each sample in the domain contributes equally. However, our experiments show that the data distribution bias among different domains significantly affects the adaptation performance. Moreover, transferring knowledge from dissimilar domains even introduces extra noise that degrades model performance. To tackle this problem, we propose an effective similarity-based method to select data from the source domains. In addition, we propose a Shared-Private Network (SP-Net) for the few-shot slot tagging task. Words from the same class share some common features. We extract those shared features from the limited annotated data on the target domain and merge them together as the label embedding to help us predict other unlabelled data on the target domain. The experiments show that our method outperforms the state-of-the-art approaches with less source data. The results also prove that some training data from dissimilar sources are redundant and even detrimental to the adaptation.
[ { "created": "Sat, 11 Sep 2021 09:30:59 GMT", "version": "v1" } ]
2021-09-14
[ [ "Wang", "Zezhong", "" ], [ "Wang", "Hongru", "" ], [ "Chung", "Kwan Wai", "" ], [ "Zhu", "Jia", "" ], [ "Fung", "Gabriel Pui Cheong", "" ], [ "Wong", "Kam-Fai", "" ] ]
Few-shot slot tagging is an emerging research topic in the field of Natural Language Understanding (NLU). With sufficient annotated data from source domains, the key challenge is how to train and adapt the model to another target domain which has only a few labels. Conventional few-shot approaches use all the data from the source domains without considering inter-domain relations and implicitly assume each sample in the domain contributes equally. However, our experiments show that the data distribution bias among different domains significantly affects the adaptation performance. Moreover, transferring knowledge from dissimilar domains even introduces extra noise that degrades model performance. To tackle this problem, we propose an effective similarity-based method to select data from the source domains. In addition, we propose a Shared-Private Network (SP-Net) for the few-shot slot tagging task. Words from the same class share some common features. We extract those shared features from the limited annotated data on the target domain and merge them together as the label embedding to help us predict other unlabelled data on the target domain. The experiments show that our method outperforms the state-of-the-art approaches with less source data. The results also prove that some training data from dissimilar sources are redundant and even detrimental to the adaptation.
2104.01040
Igor Halperin
Igor Halperin
Distributional Offline Continuous-Time Reinforcement Learning with Neural Physics-Informed PDEs (SciPhy RL for DOCTR-L)
24 pages, 5 figures
null
null
null
cs.LG cs.AI physics.comp-ph q-fin.CP
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper addresses distributional offline continuous-time reinforcement learning (DOCTR-L) with stochastic policies for high-dimensional optimal control. A soft distributional version of the classical Hamilton-Jacobi-Bellman (HJB) equation is given by a semilinear partial differential equation (PDE). This `soft HJB equation' can be learned from offline data without assuming that the latter correspond to a previous optimal or near-optimal policy. A data-driven solution of the soft HJB equation uses methods of Neural PDEs and Physics-Informed Neural Networks developed in the field of Scientific Machine Learning (SciML). The suggested approach, dubbed `SciPhy RL', thus reduces DOCTR-L to solving neural PDEs from data. Our algorithm called Deep DOCTR-L converts offline high-dimensional data into an optimal policy in one step by reducing it to supervised learning, instead of relying on value iteration or policy iteration methods. The method enables a computable approach to the quality control of obtained policies in terms of both their expected returns and uncertainties about their values.
[ { "created": "Fri, 2 Apr 2021 13:22:14 GMT", "version": "v1" } ]
2021-04-05
[ [ "Halperin", "Igor", "" ] ]
This paper addresses distributional offline continuous-time reinforcement learning (DOCTR-L) with stochastic policies for high-dimensional optimal control. A soft distributional version of the classical Hamilton-Jacobi-Bellman (HJB) equation is given by a semilinear partial differential equation (PDE). This `soft HJB equation' can be learned from offline data without assuming that the latter correspond to a previous optimal or near-optimal policy. A data-driven solution of the soft HJB equation uses methods of Neural PDEs and Physics-Informed Neural Networks developed in the field of Scientific Machine Learning (SciML). The suggested approach, dubbed `SciPhy RL', thus reduces DOCTR-L to solving neural PDEs from data. Our algorithm called Deep DOCTR-L converts offline high-dimensional data into an optimal policy in one step by reducing it to supervised learning, instead of relying on value iteration or policy iteration methods. The method enables a computable approach to the quality control of obtained policies in terms of both their expected returns and uncertainties about their values.
1006.2022
Haim Permuter Henry
Haim Permuter, Shlomo (Shitz) Shamai, and Anelia Somekh-Baruch
Message and state cooperation in multiple access channels
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the capacity of a multiple access channel with cooperating encoders where partial state information is known to each encoder and full state information is known to the decoder. The cooperation between the encoders has a two-fold purpose: to generate empirical state coordination between the encoders, and to share information about the private messages that each encoder has. For two-way cooperation, this two-fold purpose is achieved by double-binning, where the first layer of binning is used to generate the state coordination similarly to the two-way source coding, and the second layer of binning is used to transmit information about the private messages. The complete result provides the framework and perspective for addressing a complex level of cooperation that mixes states and messages in an optimal way.
[ { "created": "Thu, 10 Jun 2010 13:15:44 GMT", "version": "v1" } ]
2010-06-11
[ [ "Permuter", "Haim", "" ], [ "Shamai", "Shlomo", "", "Shitz" ], [ "Somekh-Baruch", "Anelia", "" ] ]
We investigate the capacity of a multiple access channel with cooperating encoders where partial state information is known to each encoder and full state information is known to the decoder. The cooperation between the encoders has a two-fold purpose: to generate empirical state coordination between the encoders, and to share information about the private messages that each encoder has. For two-way cooperation, this two-fold purpose is achieved by double-binning, where the first layer of binning is used to generate the state coordination similarly to the two-way source coding, and the second layer of binning is used to transmit information about the private messages. The complete result provides the framework and perspective for addressing a complex level of cooperation that mixes states and messages in an optimal way.
2303.11899
Hankang Gu
Hankang Gu, Shangbo Wang, Xiaoguang Ma, Dongyao Jia, Guoqiang Mao, Eng Gee Lim, Cheuk Pong Ryan Wong
Large-Scale Traffic Signal Control Using Constrained Network Partition and Adaptive Deep Reinforcement Learning
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-agent Deep Reinforcement Learning (MADRL) based traffic signal control has become a popular research topic in recent years. To alleviate the scalability issue of completely centralized RL techniques and the non-stationarity issue of completely decentralized RL techniques on large-scale traffic networks, some studies adopt a regional control approach in which the whole network is first partitioned into multiple disjoint regions, after which the centralized RL approach is applied to each region. However, the existing partitioning rules either place no constraints on the topology of regions or require the same topology for all regions. Meanwhile, no existing regional control approach explores the performance of the optimal joint action in an exponentially growing regional action space when intersections are controlled by 4-phase traffic signals (EW, EWL, NS, NSL). In this paper, we propose a novel RL training framework named RegionLight to tackle the above limitations. Specifically, the topology of each region is first constrained to a star network comprising one center and an arbitrary number of leaves. Next, the network partitioning problem is modeled as an optimization problem that minimizes the number of regions. Then, an Adaptive Branching Dueling Q-Network (ABDQ) model is proposed to decompose the regional control task into several joint signal control sub-tasks corresponding to particular intersections. Subsequently, these sub-tasks maximize the regional benefits cooperatively. Finally, the global control strategy for the whole network is obtained by concatenating the optimal joint actions of all regions. Experimental results demonstrate the superiority of our proposed framework over all baselines on both real and synthetic datasets in all evaluation metrics.
[ { "created": "Tue, 21 Mar 2023 14:42:58 GMT", "version": "v1" }, { "created": "Wed, 22 Mar 2023 07:34:22 GMT", "version": "v2" }, { "created": "Fri, 7 Apr 2023 06:38:44 GMT", "version": "v3" }, { "created": "Mon, 26 Jun 2023 04:08:48 GMT", "version": "v4" }, { "created": "Thu, 7 Sep 2023 04:42:45 GMT", "version": "v5" } ]
2023-09-08
[ [ "Gu", "Hankang", "" ], [ "Wang", "Shangbo", "" ], [ "Ma", "Xiaoguang", "" ], [ "Jia", "Dongyao", "" ], [ "Mao", "Guoqiang", "" ], [ "Lim", "Eng Gee", "" ], [ "Wong", "Cheuk Pong Ryan", "" ] ]
Multi-agent Deep Reinforcement Learning (MADRL) based traffic signal control has become a popular research topic in recent years. To alleviate the scalability issue of completely centralized RL techniques and the non-stationarity issue of completely decentralized RL techniques on large-scale traffic networks, some studies adopt a regional control approach in which the whole network is first partitioned into multiple disjoint regions, after which the centralized RL approach is applied to each region. However, the existing partitioning rules either place no constraints on the topology of regions or require the same topology for all regions. Meanwhile, no existing regional control approach explores the performance of the optimal joint action in an exponentially growing regional action space when intersections are controlled by 4-phase traffic signals (EW, EWL, NS, NSL). In this paper, we propose a novel RL training framework named RegionLight to tackle the above limitations. Specifically, the topology of each region is first constrained to a star network comprising one center and an arbitrary number of leaves. Next, the network partitioning problem is modeled as an optimization problem that minimizes the number of regions. Then, an Adaptive Branching Dueling Q-Network (ABDQ) model is proposed to decompose the regional control task into several joint signal control sub-tasks corresponding to particular intersections. Subsequently, these sub-tasks maximize the regional benefits cooperatively. Finally, the global control strategy for the whole network is obtained by concatenating the optimal joint actions of all regions. Experimental results demonstrate the superiority of our proposed framework over all baselines on both real and synthetic datasets in all evaluation metrics.
2210.07806
Luca Canalini
Luca Canalini, Jan Klein, Nuno Pedrosa de Barros, Diana Maria Sima, Dorothea Miller, Horst Hahn
Comparison of different automatic solutions for resection cavity segmentation in postoperative MRI volumes including longitudinal acquisitions
null
SPIE Proceedings Vol. 11598 - Medical Imaging 2021: Image-Guided Procedures, Robotic Interventions, and Modeling
10.1117/12.2580889
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
In this work, we compare five deep learning solutions for automatically segmenting the resection cavity in postoperative MRI. The proposed methods are based on the same 3D U-Net architecture. We use a dataset of postoperative MRI volumes, each including four MRI sequences and the ground truth of the corresponding resection cavity. Four solutions are each trained with a different MRI sequence. In addition, a method trained with all the available sequences is also presented. Our experiments show that the method trained only with the T1-weighted contrast-enhanced MRI sequence achieves the best results, with a median DICE index of 0.81.
[ { "created": "Fri, 14 Oct 2022 13:37:35 GMT", "version": "v1" } ]
2022-10-17
[ [ "Canalini", "Luca", "" ], [ "Klein", "Jan", "" ], [ "de Barros", "Nuno Pedrosa", "" ], [ "Sima", "Diana Maria", "" ], [ "Miller", "Dorothea", "" ], [ "Hahn", "Horst", "" ] ]
In this work, we compare five deep learning solutions for automatically segmenting the resection cavity in postoperative MRI. The proposed methods are based on the same 3D U-Net architecture. We use a dataset of postoperative MRI volumes, each including four MRI sequences and the ground truth of the corresponding resection cavity. Four solutions are each trained with a different MRI sequence. In addition, a method trained with all the available sequences is also presented. Our experiments show that the method trained only with the T1-weighted contrast-enhanced MRI sequence achieves the best results, with a median DICE index of 0.81.
2405.03287
Mehedi Hasan Raju
Mehedi Hasan Raju, Dillon J Lohr, Oleg V Komogortsev
Evaluating Eye Movement Biometrics in Virtual Reality: A Comparative Analysis of VR Headset and High-End Eye-Tracker Collected Dataset
9 pages, 6 figures
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Previous studies have shown that eye movement data recorded at 1000 Hz can be used to authenticate individuals. This study explores the effectiveness of eye movement-based biometrics (EMB) by utilizing data from an eye-tracking (ET)-enabled virtual reality (VR) headset (GazeBaseVR) and compares it to the performance achieved using data from a high-end eye tracker (GazeBase) that has been downsampled to 250 Hz. The research also aims to assess the biometric potential of both binocular and monocular eye movement data. The GazeBaseVR dataset achieves an equal error rate (EER) of 1.67% and a false rejection rate (FRR) at 10^-4 false acceptance rate (FAR) of 22.73% in a binocular configuration. This study underscores the biometric viability of data obtained from an eye-tracking-enabled VR headset.
[ { "created": "Mon, 6 May 2024 09:05:06 GMT", "version": "v1" } ]
2024-05-07
[ [ "Raju", "Mehedi Hasan", "" ], [ "Lohr", "Dillon J", "" ], [ "Komogortsev", "Oleg V", "" ] ]
Previous studies have shown that eye movement data recorded at 1000 Hz can be used to authenticate individuals. This study explores the effectiveness of eye movement-based biometrics (EMB) by utilizing data from an eye-tracking (ET)-enabled virtual reality (VR) headset (GazeBaseVR) and compares it to the performance achieved using data from a high-end eye tracker (GazeBase) that has been downsampled to 250 Hz. The research also aims to assess the biometric potential of both binocular and monocular eye movement data. The GazeBaseVR dataset achieves an equal error rate (EER) of 1.67% and a false rejection rate (FRR) at 10^-4 false acceptance rate (FAR) of 22.73% in a binocular configuration. This study underscores the biometric viability of data obtained from an eye-tracking-enabled VR headset.
2211.14923
Jiayu Song
Jiayu Song, Iman Munire Bilal, Adam Tsakalidis, Rob Procter, Maria Liakata
Unsupervised Opinion Summarisation in the Wasserstein Space
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Opinion summarisation synthesises opinions expressed in a group of documents discussing the same topic to produce a single summary. Recent work has looked at opinion summarisation of clusters of social media posts. Such posts are noisy and have unpredictable structure, posing additional challenges for the construction of the summary distribution and the preservation of meaning compared to online reviews, which have so far been the focus of opinion summarisation. To address these challenges we present \textit{WassOS}, an unsupervised abstractive summarisation model which makes use of the Wasserstein distance. A Variational Autoencoder is used to obtain the distribution of documents/posts, and the distributions are disentangled into separate semantic and syntactic spaces. The summary distribution is obtained using the Wasserstein barycenter of the semantic and syntactic distributions. A latent variable sampled from the summary distribution is fed into a GRU decoder with a transformer layer to produce the final summary. Our experiments on multiple datasets including Twitter clusters, Reddit threads, and reviews show that WassOS almost always outperforms the state-of-the-art on ROUGE metrics and consistently produces the best summaries with respect to meaning preservation according to human evaluations.
[ { "created": "Sun, 27 Nov 2022 19:45:38 GMT", "version": "v1" } ]
2022-11-29
[ [ "Song", "Jiayu", "" ], [ "Bilal", "Iman Munire", "" ], [ "Tsakalidis", "Adam", "" ], [ "Procter", "Rob", "" ], [ "Liakata", "Maria", "" ] ]
Opinion summarisation synthesises opinions expressed in a group of documents discussing the same topic to produce a single summary. Recent work has looked at opinion summarisation of clusters of social media posts. Such posts are noisy and have unpredictable structure, posing additional challenges for the construction of the summary distribution and the preservation of meaning compared to online reviews, which have so far been the focus of opinion summarisation. To address these challenges we present \textit{WassOS}, an unsupervised abstractive summarisation model which makes use of the Wasserstein distance. A Variational Autoencoder is used to obtain the distribution of documents/posts, and the distributions are disentangled into separate semantic and syntactic spaces. The summary distribution is obtained using the Wasserstein barycenter of the semantic and syntactic distributions. A latent variable sampled from the summary distribution is fed into a GRU decoder with a transformer layer to produce the final summary. Our experiments on multiple datasets including Twitter clusters, Reddit threads, and reviews show that WassOS almost always outperforms the state-of-the-art on ROUGE metrics and consistently produces the best summaries with respect to meaning preservation according to human evaluations.
2012.08489
Valerio Perrone
Valerio Perrone, Huibin Shen, Aida Zolic, Iaroslav Shcherbatyi, Amr Ahmed, Tanya Bansal, Michele Donini, Fela Winkelmolen, Rodolphe Jenatton, Jean Baptiste Faddoul, Barbara Pogorzelska, Miroslav Miladinovic, Krishnaram Kenthapadi, Matthias Seeger, C\'edric Archambeau
Amazon SageMaker Automatic Model Tuning: Scalable Gradient-Free Optimization
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tuning complex machine learning systems is challenging. Machine learning typically requires setting hyperparameters, be they regularization, architecture, or optimization parameters, whose tuning is critical to achieve good predictive performance. To democratize access to machine learning systems, it is essential to automate the tuning. This paper presents Amazon SageMaker Automatic Model Tuning (AMT), a fully managed system for gradient-free optimization at scale. AMT finds the best version of a trained machine learning model by repeatedly evaluating it with different hyperparameter configurations. It leverages either random search or Bayesian optimization to choose the hyperparameter values resulting in the best model, as measured by the metric chosen by the user. AMT can be used with built-in algorithms, custom algorithms, and Amazon SageMaker pre-built containers for machine learning frameworks. We discuss the core functionality, system architecture, our design principles, and lessons learned. We also describe more advanced features of AMT, such as automated early stopping and warm-starting, showing in experiments their benefits to users.
[ { "created": "Tue, 15 Dec 2020 18:34:34 GMT", "version": "v1" }, { "created": "Fri, 18 Jun 2021 19:41:09 GMT", "version": "v2" } ]
2021-06-22
[ [ "Perrone", "Valerio", "" ], [ "Shen", "Huibin", "" ], [ "Zolic", "Aida", "" ], [ "Shcherbatyi", "Iaroslav", "" ], [ "Ahmed", "Amr", "" ], [ "Bansal", "Tanya", "" ], [ "Donini", "Michele", "" ], [ "Winkelmolen", "Fela", "" ], [ "Jenatton", "Rodolphe", "" ], [ "Faddoul", "Jean Baptiste", "" ], [ "Pogorzelska", "Barbara", "" ], [ "Miladinovic", "Miroslav", "" ], [ "Kenthapadi", "Krishnaram", "" ], [ "Seeger", "Matthias", "" ], [ "Archambeau", "Cédric", "" ] ]
Tuning complex machine learning systems is challenging. Machine learning typically requires setting hyperparameters, be they regularization, architecture, or optimization parameters, whose tuning is critical to achieve good predictive performance. To democratize access to machine learning systems, it is essential to automate the tuning. This paper presents Amazon SageMaker Automatic Model Tuning (AMT), a fully managed system for gradient-free optimization at scale. AMT finds the best version of a trained machine learning model by repeatedly evaluating it with different hyperparameter configurations. It leverages either random search or Bayesian optimization to choose the hyperparameter values resulting in the best model, as measured by the metric chosen by the user. AMT can be used with built-in algorithms, custom algorithms, and Amazon SageMaker pre-built containers for machine learning frameworks. We discuss the core functionality, system architecture, our design principles, and lessons learned. We also describe more advanced features of AMT, such as automated early stopping and warm-starting, showing in experiments their benefits to users.
1906.10724
Rahul Aralikatte
Rahul Aralikatte and Anders S{\o}gaard
Model-based annotation of coreference
To appear in LREC 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans do not make inferences over texts, but over models of what texts are about. When annotators are asked to annotate coreferent spans of text, it is therefore a somewhat unnatural task. This paper presents an alternative in which we preprocess documents, linking entities to a knowledge base, and turn the coreference annotation task -- in our case limited to pronouns -- into an annotation task where annotators are asked to assign pronouns to entities. Model-based annotation is shown to lead to faster annotation and higher inter-annotator agreement, and we argue that it also opens up for an alternative approach to coreference resolution. We present two new coreference benchmark datasets, for English Wikipedia and English teacher-student dialogues, and evaluate state-of-the-art coreference resolvers on them.
[ { "created": "Tue, 25 Jun 2019 18:56:36 GMT", "version": "v1" }, { "created": "Fri, 30 Aug 2019 08:25:13 GMT", "version": "v2" }, { "created": "Sun, 1 Mar 2020 23:17:56 GMT", "version": "v3" } ]
2020-03-03
[ [ "Aralikatte", "Rahul", "" ], [ "Søgaard", "Anders", "" ] ]
Humans do not make inferences over texts, but over models of what texts are about. When annotators are asked to annotate coreferent spans of text, it is therefore a somewhat unnatural task. This paper presents an alternative in which we preprocess documents, linking entities to a knowledge base, and turn the coreference annotation task -- in our case limited to pronouns -- into an annotation task where annotators are asked to assign pronouns to entities. Model-based annotation is shown to lead to faster annotation and higher inter-annotator agreement, and we argue that it also opens up for an alternative approach to coreference resolution. We present two new coreference benchmark datasets, for English Wikipedia and English teacher-student dialogues, and evaluate state-of-the-art coreference resolvers on them.
1609.07190
Rishab Nithyanand
Narseo Vallina-Rodriguez, Srikanth Sundaresan, Abbas Razaghpanah, Rishab Nithyanand, Mark Allman, Christian Kreibich, Phillipa Gill
Tracking the Trackers: Towards Understanding the Mobile Advertising and Tracking Ecosystem
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Third-party services form an integral part of the mobile ecosystem: they allow app developers to add features such as performance analytics and social network integration, and to monetize their apps by enabling user tracking and targeted ad delivery. At present users, researchers, and regulators all have at best limited understanding of this third-party ecosystem. In this paper we seek to shrink this gap. Using data from users of our ICSI Haystack app we gain a rich view of the mobile ecosystem: we identify and characterize domains associated with mobile advertising and user tracking, thereby taking an important step towards greater transparency. We furthermore outline our steps towards a public catalog and census of analytics services, their behavior, their personal data collection processes, and their use across mobile apps.
[ { "created": "Thu, 22 Sep 2016 23:45:20 GMT", "version": "v1" }, { "created": "Wed, 26 Oct 2016 15:50:14 GMT", "version": "v2" } ]
2016-10-27
[ [ "Vallina-Rodriguez", "Narseo", "" ], [ "Sundaresan", "Srikanth", "" ], [ "Razaghpanah", "Abbas", "" ], [ "Nithyanand", "Rishab", "" ], [ "Allman", "Mark", "" ], [ "Kreibich", "Christian", "" ], [ "Gill", "Phillipa", "" ] ]
Third-party services form an integral part of the mobile ecosystem: they allow app developers to add features such as performance analytics and social network integration, and to monetize their apps by enabling user tracking and targeted ad delivery. At present users, researchers, and regulators all have at best limited understanding of this third-party ecosystem. In this paper we seek to shrink this gap. Using data from users of our ICSI Haystack app we gain a rich view of the mobile ecosystem: we identify and characterize domains associated with mobile advertising and user tracking, thereby taking an important step towards greater transparency. We furthermore outline our steps towards a public catalog and census of analytics services, their behavior, their personal data collection processes, and their use across mobile apps.
2201.03472
Peter Nightingale
Peter Nightingale
Savile Row Manual
arXiv admin note: substantial text overlap with arXiv:1601.02865
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
We describe the constraint modelling tool Savile Row, its input language and its main features. Savile Row translates a solver-independent constraint modelling language to the input languages for various solvers including constraint, SAT, and SMT solvers. After a brief introduction, the manual describes the Essence Prime language, which is the input language of Savile Row. Then we describe the functions of the tool, its main features and options and how to install and use it.
[ { "created": "Fri, 12 Nov 2021 09:47:55 GMT", "version": "v1" }, { "created": "Tue, 30 Jul 2024 13:31:56 GMT", "version": "v2" } ]
2024-07-31
[ [ "Nightingale", "Peter", "" ] ]
We describe the constraint modelling tool Savile Row, its input language and its main features. Savile Row translates a solver-independent constraint modelling language to the input languages for various solvers including constraint, SAT, and SMT solvers. After a brief introduction, the manual describes the Essence Prime language, which is the input language of Savile Row. Then we describe the functions of the tool, its main features and options and how to install and use it.
1802.00507
Manuel Ortega-Rodr\'iguez
Diana Valverde-M\'endez, Manuel Ortega-Rodr\'iguez, Hugo Sol\'is-S\'anchez, Ariadna Venegas-Li
The effects of anger on automated long-term-spectra based speaker-identification
11 pages
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Forensic speaker identification has traditionally considered approaches based on long term spectra analysis as especially robust, given that they work well for short recordings, are not sensitive to changes in the intensity of the sample, and continue to function in the presence of noise and limited passband. We find, however, that anger induces a significant distortion of the acoustic signal for long term spectra analysis purposes. Even moderate anger offsets speaker identification results by 33% in the direction of a different speaker altogether. Thus, caution should be exercised when applying this tool.
[ { "created": "Sat, 20 Jan 2018 03:51:30 GMT", "version": "v1" } ]
2018-02-05
[ [ "Valverde-Méndez", "Diana", "" ], [ "Ortega-Rodríguez", "Manuel", "" ], [ "Solís-Sánchez", "Hugo", "" ], [ "Venegas-Li", "Ariadna", "" ] ]
Forensic speaker identification has traditionally considered approaches based on long term spectra analysis as especially robust, given that they work well for short recordings, are not sensitive to changes in the intensity of the sample, and continue to function in the presence of noise and limited passband. We find, however, that anger induces a significant distortion of the acoustic signal for long term spectra analysis purposes. Even moderate anger offsets speaker identification results by 33% in the direction of a different speaker altogether. Thus, caution should be exercised when applying this tool.
2310.14129
Pan Xu
Tianyuan Jin, Yu Yang, Jing Tang, Xiaokui Xiao, Pan Xu
Optimal Batched Best Arm Identification
32 pages, 1 figure, 3 tables
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the batched best arm identification (BBAI) problem, where the learner's goal is to identify the best arm while switching the policy as little as possible. In particular, we aim to find the best arm with probability $1-\delta$ for some small constant $\delta>0$ while minimizing both the sample complexity (total number of arm pulls) and the batch complexity (total number of batches). We propose the three-batch best arm identification (Tri-BBAI) algorithm, which is the first batched algorithm that achieves the optimal sample complexity in the asymptotic setting (i.e., $\delta\rightarrow 0$) and runs in at most $3$ batches. Based on Tri-BBAI, we further propose the almost optimal batched best arm identification (Opt-BBAI) algorithm, which is the first algorithm that achieves the near-optimal sample and batch complexity in the non-asymptotic setting (i.e., $\delta>0$ is arbitrarily fixed), while enjoying the same batch and sample complexity as Tri-BBAI when $\delta$ tends to zero. Moreover, in the non-asymptotic setting, the complexity of previous batch algorithms is usually conditioned on the event that the best arm is returned (with a probability of at least $1-\delta$), which is potentially unbounded in cases where a sub-optimal arm is returned. In contrast, the complexity of Opt-BBAI does not rely on such an event. This is achieved through a novel procedure that we design for checking whether the best arm is eliminated, which is of independent interest.
[ { "created": "Sat, 21 Oct 2023 22:55:50 GMT", "version": "v1" } ]
2023-10-24
[ [ "Jin", "Tianyuan", "" ], [ "Yang", "Yu", "" ], [ "Tang", "Jing", "" ], [ "Xiao", "Xiaokui", "" ], [ "Xu", "Pan", "" ] ]
We study the batched best arm identification (BBAI) problem, where the learner's goal is to identify the best arm while switching the policy as little as possible. In particular, we aim to find the best arm with probability $1-\delta$ for some small constant $\delta>0$ while minimizing both the sample complexity (total number of arm pulls) and the batch complexity (total number of batches). We propose the three-batch best arm identification (Tri-BBAI) algorithm, which is the first batched algorithm that achieves the optimal sample complexity in the asymptotic setting (i.e., $\delta\rightarrow 0$) and runs in at most $3$ batches. Based on Tri-BBAI, we further propose the almost optimal batched best arm identification (Opt-BBAI) algorithm, which is the first algorithm that achieves the near-optimal sample and batch complexity in the non-asymptotic setting (i.e., $\delta>0$ is arbitrarily fixed), while enjoying the same batch and sample complexity as Tri-BBAI when $\delta$ tends to zero. Moreover, in the non-asymptotic setting, the complexity of previous batch algorithms is usually conditioned on the event that the best arm is returned (with a probability of at least $1-\delta$), which is potentially unbounded in cases where a sub-optimal arm is returned. In contrast, the complexity of Opt-BBAI does not rely on such an event. This is achieved through a novel procedure that we design for checking whether the best arm is eliminated, which is of independent interest.
2407.12170
Sean MacAvaney
Xuejun Chang, Debabrata Mishra, Craig Macdonald, Sean MacAvaney
Neural Passage Quality Estimation for Static Pruning
SIGIR 2024
null
10.1145/3626772.3657765
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Neural networks -- especially those that use large, pre-trained language models -- have improved search engines in various ways. Most prominently, they can estimate the relevance of a passage or document to a user's query. In this work, we depart from this direction by exploring whether neural networks can effectively predict which of a document's passages are unlikely to be relevant to any query submitted to the search engine. We refer to this query-agnostic estimation of passage relevance as a passage's quality. We find that our novel methods for estimating passage quality allow passage corpora to be pruned considerably while maintaining statistically equivalent effectiveness; our best methods can consistently prune >25% of passages in a corpus, across various retrieval pipelines. Such substantial pruning reduces the operating costs of neural search engines in terms of computing resources, power usage, and carbon footprint -- both when processing queries (thanks to a smaller index size) and when indexing (lightweight models can prune low-quality passages prior to the costly dense or learned sparse encoding step). This work sets the stage for developing more advanced neural "learning-what-to-index" methods.
[ { "created": "Tue, 16 Jul 2024 20:47:54 GMT", "version": "v1" } ]
2024-07-18
[ [ "Chang", "Xuejun", "" ], [ "Mishra", "Debabrata", "" ], [ "Macdonald", "Craig", "" ], [ "MacAvaney", "Sean", "" ] ]
Neural networks -- especially those that use large, pre-trained language models -- have improved search engines in various ways. Most prominently, they can estimate the relevance of a passage or document to a user's query. In this work, we depart from this direction by exploring whether neural networks can effectively predict which of a document's passages are unlikely to be relevant to any query submitted to the search engine. We refer to this query-agnostic estimation of passage relevance as a passage's quality. We find that our novel methods for estimating passage quality allow passage corpora to be pruned considerably while maintaining statistically equivalent effectiveness; our best methods can consistently prune >25% of passages in a corpus, across various retrieval pipelines. Such substantial pruning reduces the operating costs of neural search engines in terms of computing resources, power usage, and carbon footprint -- both when processing queries (thanks to a smaller index size) and when indexing (lightweight models can prune low-quality passages prior to the costly dense or learned sparse encoding step). This work sets the stage for developing more advanced neural "learning-what-to-index" methods.
2205.15906
Malsha V Perera
Malsha V. Perera, Wele Gedara Chaminda Bandara, Jeya Maria Jose Valanarasu, and Vishal M. Patel
SAR Despeckling Using Overcomplete Convolutional Networks
Accepted to International Geoscience and Remote Sensing Symposium (IGARSS), 2022. Our code is available at https://github.com/malshaV/sar_overcomplete
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synthetic Aperture Radar (SAR) despeckling is an important problem in remote sensing, as speckle degrades SAR images, affecting downstream tasks like detection and segmentation. Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods. Traditional CNNs try to increase the receptive field size as the network goes deeper, thus extracting global features. However, speckle is relatively small, and increasing the receptive field does not help in extracting speckle features. This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field. The proposed network consists of an overcomplete branch that focuses on the local structures and an undercomplete branch that focuses on the global structures. We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
[ { "created": "Tue, 31 May 2022 15:55:37 GMT", "version": "v1" } ]
2022-06-01
[ [ "Perera", "Malsha V.", "" ], [ "Bandara", "Wele Gedara Chaminda", "" ], [ "Valanarasu", "Jeya Maria Jose", "" ], [ "Patel", "Vishal M.", "" ] ]
Synthetic Aperture Radar (SAR) despeckling is an important problem in remote sensing, as speckle degrades SAR images, affecting downstream tasks like detection and segmentation. Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods. Traditional CNNs try to increase the receptive field size as the network goes deeper, thus extracting global features. However, speckle is relatively small, and increasing the receptive field does not help in extracting speckle features. This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field. The proposed network consists of an overcomplete branch that focuses on the local structures and an undercomplete branch that focuses on the global structures. We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
1805.00443
Lucie Jacquin
Pierre-Michel Riccio (LGI2P)
Une approche pour mieux appr{\'e}hender l'alt{\'e}rit{\'e} en SIC
in French
Colloque Communication, Organisation, Soci{\'e}t{\'e} du Savoir et Information (COSSI) 15-17 juin 2016 , Jun 2016, Montpellier, France
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel approach that aims to: facilitate the building of teams based on notions of skills, motivation, or potential; identify the effort required for an individual to improve their knowledge and join a team; and create devices better suited to a target population. Together, these form a toolbox that appears well suited to facilitating better recognition of otherness.
[ { "created": "Fri, 20 Apr 2018 11:20:51 GMT", "version": "v1" } ]
2018-05-02
[ [ "Riccio", "Pierre-Michel", "", "LGI2P" ] ]
In this paper, we propose a novel approach that aims to: facilitate the building of teams based on notions of skills, motivation, or potential; identify the effort required for an individual to improve their knowledge and join a team; and create devices better suited to a target population. Together, these form a toolbox that appears well suited to facilitating better recognition of otherness.
2401.14079
Tobias Eisenreich
Tobias Eisenreich, Sandro Speth, Stefan Wagner
From Requirements to Architecture: An AI-Based Journey to Semi-Automatically Generate Software Architectures
4 pages, vision paper, submitted to the ICSE workshop Designing2024
null
10.1145/3643660.3643942
null
cs.SE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Designing domain models and software architectures represents a significant challenge in software development, as the resulting architectures play a vital role in fulfilling the system's quality of service. Due to time pressure, architects often model only one architecture based on their known limited domain understanding, patterns, and experience instead of thoroughly analyzing the domain and evaluating multiple candidates to select the best-fitting one. Existing approaches try to generate domain models based on requirements, but still require time-consuming manual effort to achieve good results. Therefore, in this vision paper, we propose a method to generate software architecture candidates semi-automatically based on requirements using artificial intelligence techniques. We further envision an automatic evaluation and trade-off analysis of the generated architecture candidates using, e.g., the architecture trade-off analysis method combined with large language models and quantitative analyses. To evaluate this approach, we aim to analyze the quality of the generated architecture models and the efficiency and effectiveness of our proposed process by conducting qualitative studies.
[ { "created": "Thu, 25 Jan 2024 10:56:58 GMT", "version": "v1" } ]
2024-02-02
[ [ "Eisenreich", "Tobias", "" ], [ "Speth", "Sandro", "" ], [ "Wagner", "Stefan", "" ] ]
Designing domain models and software architectures represents a significant challenge in software development, as the resulting architectures play a vital role in fulfilling the system's quality of service. Due to time pressure, architects often model only one architecture based on their known limited domain understanding, patterns, and experience instead of thoroughly analyzing the domain and evaluating multiple candidates to select the best-fitting one. Existing approaches try to generate domain models based on requirements, but still require time-consuming manual effort to achieve good results. Therefore, in this vision paper, we propose a method to generate software architecture candidates semi-automatically based on requirements using artificial intelligence techniques. We further envision an automatic evaluation and trade-off analysis of the generated architecture candidates using, e.g., the architecture trade-off analysis method combined with large language models and quantitative analyses. To evaluate this approach, we aim to analyze the quality of the generated architecture models and the efficiency and effectiveness of our proposed process by conducting qualitative studies.
1406.6114
Sakthithasan Sripirakas
Sakthithasan Sripirakas and Russel Pears
Mining Recurrent Concepts in Data Streams using the Discrete Fourier Transform
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this research we address the problem of capturing recurring concepts in a data stream environment. Recurrence capture enables the re-use of previously learned classifiers without the need for re-learning while providing for better accuracy during the concept recurrence interval. We capture concepts by applying the Discrete Fourier Transform (DFT) to Decision Tree classifiers to obtain highly compressed versions of the trees at concept drift points in the stream and store such trees in a repository for future use. Our empirical results on real world and synthetic data exhibiting varying degrees of recurrence show that the Fourier compressed trees are more robust to noise and are able to capture recurring concepts with higher precision than a meta learning approach that chooses to re-use classifiers in their originally occurring form.
[ { "created": "Tue, 24 Jun 2014 00:48:23 GMT", "version": "v1" } ]
2014-06-25
[ [ "Sripirakas", "Sakthithasan", "" ], [ "Pears", "Russel", "" ] ]
In this research we address the problem of capturing recurring concepts in a data stream environment. Recurrence capture enables the re-use of previously learned classifiers without the need for re-learning while providing for better accuracy during the concept recurrence interval. We capture concepts by applying the Discrete Fourier Transform (DFT) to Decision Tree classifiers to obtain highly compressed versions of the trees at concept drift points in the stream and store such trees in a repository for future use. Our empirical results on real world and synthetic data exhibiting varying degrees of recurrence show that the Fourier compressed trees are more robust to noise and are able to capture recurring concepts with higher precision than a meta learning approach that chooses to re-use classifiers in their originally occurring form.
2402.03396
Yifeng He
Yifeng He, Jiabo Huang, Yuyang Rong, Yiwen Guo, Ethan Wang, Hao Chen
UniTSyn: A Large-Scale Dataset Capable of Enhancing the Prowess of Large Language Models for Program Testing
8 pages, 5 figures
null
null
null
cs.SE cs.AI cs.CL cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
The remarkable capability of large language models (LLMs) in generating high-quality code has drawn increasing attention in the software testing community. However, existing code LLMs often demonstrate unsatisfactory capabilities in generating accurate and complete tests since they were trained on code snippets collected without differentiating between code for testing purposes and other code. In this paper, we present a large-scale dataset UniTSyn, which is capable of enhancing the prowess of LLMs for Unit Test Synthesis. Associating tests with the tested functions is crucial for LLMs to infer the expected behavior and the logic paths to be verified. By leveraging the Language Server Protocol, UniTSyn achieves the challenging goal of collecting focal-test pairs without per-project execution setups or per-language heuristics that tend to be fragile and difficult to scale. It contains 2.7 million focal-test pairs across five mainstream programming languages, making it suitable for enhancing the test generation ability of LLMs. The details of UniTSyn can be found in Table 1. Our experiments demonstrate that, by building an autoregressive model based on UniTSyn, we can achieve significant benefits in learning and understanding unit test representations, resulting in improved generation accuracy and code coverage across all evaluated programming languages. Code and data will be publicly available.
[ { "created": "Sun, 4 Feb 2024 22:48:05 GMT", "version": "v1" } ]
2024-02-07
[ [ "He", "Yifeng", "" ], [ "Huang", "Jiabo", "" ], [ "Rong", "Yuyang", "" ], [ "Guo", "Yiwen", "" ], [ "Wang", "Ethan", "" ], [ "Chen", "Hao", "" ] ]
The remarkable capability of large language models (LLMs) in generating high-quality code has drawn increasing attention in the software testing community. However, existing code LLMs often demonstrate unsatisfactory capabilities in generating accurate and complete tests since they were trained on code snippets collected without differentiating between code for testing purposes and other code. In this paper, we present a large-scale dataset UniTSyn, which is capable of enhancing the prowess of LLMs for Unit Test Synthesis. Associating tests with the tested functions is crucial for LLMs to infer the expected behavior and the logic paths to be verified. By leveraging the Language Server Protocol, UniTSyn achieves the challenging goal of collecting focal-test pairs without per-project execution setups or per-language heuristics that tend to be fragile and difficult to scale. It contains 2.7 million focal-test pairs across five mainstream programming languages, making it suitable for enhancing the test generation ability of LLMs. The details of UniTSyn can be found in Table 1. Our experiments demonstrate that, by building an autoregressive model based on UniTSyn, we can achieve significant benefits in learning and understanding unit test representations, resulting in improved generation accuracy and code coverage across all evaluated programming languages. Code and data will be publicly available.
2008.04701
Matthijs Maas
John-Clark Levin and Matthijs M. Maas
Roadmap to a Roadmap: How Could We Tell When AGI is a 'Manhattan Project' Away?
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper argues that at a certain point in research toward AGI, the problem may become well-enough theorized that a clear roadmap exists for achieving it, such that a Manhattan Project-like effort could greatly shorten the time to completion. If state actors perceive that this threshold has been crossed, their incentives around openness and international cooperation may shift rather suddenly, with serious implications for AI risks and the stability of international AI governance regimes. The paper characterizes how such a 'runway' period would be qualitatively different from preceding stages of AI research, and accordingly proposes a research program aimed at assessing how close the field of AI is to such a threshold - that is, it calls for the formulation of a 'roadmap to the roadmap.'
[ { "created": "Thu, 6 Aug 2020 06:07:47 GMT", "version": "v1" } ]
2020-08-12
[ [ "Levin", "John-Clark", "" ], [ "Maas", "Matthijs M.", "" ] ]
This paper argues that at a certain point in research toward AGI, the problem may become well-enough theorized that a clear roadmap exists for achieving it, such that a Manhattan Project-like effort could greatly shorten the time to completion. If state actors perceive that this threshold has been crossed, their incentives around openness and international cooperation may shift rather suddenly, with serious implications for AI risks and the stability of international AI governance regimes. The paper characterizes how such a 'runway' period would be qualitatively different from preceding stages of AI research, and accordingly proposes a research program aimed at assessing how close the field of AI is to such a threshold - that is, it calls for the formulation of a 'roadmap to the roadmap.'
cs/0701134
Wenbing Zhao
Wenbing Zhao
Byzantine Fault Tolerance for Nondeterministic Applications
To appear in the proceedings of the 3rd IEEE International Symposium on Dependable, Autonomic and Secure Computing, 2007
null
10.1109/DASC.2007.11
null
cs.DC
null
All practical applications contain some degree of nondeterminism. When such applications are replicated to achieve Byzantine fault tolerance (BFT), their nondeterministic operations must be controlled to ensure replica consistency. To the best of our knowledge, only the most simplistic types of replica nondeterminism have been dealt with. Furthermore, a systematic approach to handling common types of nondeterminism is lacking. In this paper, we propose a classification of common types of replica nondeterminism with respect to the requirement of achieving Byzantine fault tolerance, and describe the design and implementation of the core mechanisms necessary to handle such nondeterminism within a Byzantine fault tolerance framework.
[ { "created": "Sun, 21 Jan 2007 20:44:52 GMT", "version": "v1" }, { "created": "Wed, 1 Aug 2007 04:51:53 GMT", "version": "v2" } ]
2016-11-15
[ [ "Zhao", "Wenbing", "" ] ]
All practical applications contain some degree of nondeterminism. When such applications are replicated to achieve Byzantine fault tolerance (BFT), their nondeterministic operations must be controlled to ensure replica consistency. To the best of our knowledge, only the most simplistic types of replica nondeterminism have been dealt with. Furthermore, a systematic approach to handling common types of nondeterminism is lacking. In this paper, we propose a classification of common types of replica nondeterminism with respect to the requirement of achieving Byzantine fault tolerance, and describe the design and implementation of the core mechanisms necessary to handle such nondeterminism within a Byzantine fault tolerance framework.
1801.03911
Sahil Garg
Sahil Garg and Greg Ver Steeg and Aram Galstyan
Stochastic Learning of Nonstationary Kernels for Natural Language Modeling
null
null
null
null
cs.CL cs.IR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Natural language processing often involves computations with semantic or syntactic graphs to facilitate sophisticated reasoning based on structural relationships. While convolution kernels provide a powerful tool for comparing graph structure based on node (word) level relationships, they are difficult to customize and can be computationally expensive. We propose a generalization of convolution kernels, with a nonstationary model, for better expressibility of natural languages in supervised settings. For a scalable learning of the parameters introduced with our model, we propose a novel algorithm that leverages stochastic sampling on k-nearest neighbor graphs, along with approximations based on locality-sensitive hashing. We demonstrate the advantages of our approach on a challenging real-world (structured inference) problem of automatically extracting biological models from the text of scientific papers.
[ { "created": "Thu, 11 Jan 2018 18:24:02 GMT", "version": "v1" }, { "created": "Thu, 1 Feb 2018 21:41:27 GMT", "version": "v2" } ]
2018-02-13
[ [ "Garg", "Sahil", "" ], [ "Steeg", "Greg Ver", "" ], [ "Galstyan", "Aram", "" ] ]
Natural language processing often involves computations with semantic or syntactic graphs to facilitate sophisticated reasoning based on structural relationships. While convolution kernels provide a powerful tool for comparing graph structure based on node (word) level relationships, they are difficult to customize and can be computationally expensive. We propose a generalization of convolution kernels, with a nonstationary model, for better expressibility of natural languages in supervised settings. For a scalable learning of the parameters introduced with our model, we propose a novel algorithm that leverages stochastic sampling on k-nearest neighbor graphs, along with approximations based on locality-sensitive hashing. We demonstrate the advantages of our approach on a challenging real-world (structured inference) problem of automatically extracting biological models from the text of scientific papers.
1912.12430
Theofilos Triommatis
Theofilos Triommatis and Aris Pagourtzis
Approximate #Knapsack Computations to Count Semi-Fair Allocations
null
null
null
null
cs.CC
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we study the problem of counting the number of different knapsack solutions with a prescribed cardinality. We present an FPTAS for this problem, based on dynamic programming. We also introduce two different types of semi-fair allocations of indivisible goods between two players. By semi-fair allocations, we mean allocations that ensure that at least one of the two players will be free of envy. We study the problem of counting such allocations and we provide FPTASs for both types, by employing our FPTAS for the prescribed cardinality knapsack problem.
[ { "created": "Sat, 28 Dec 2019 09:48:32 GMT", "version": "v1" } ]
2020-01-01
[ [ "Triommatis", "Theofilos", "" ], [ "Pagourtzis", "Aris", "" ] ]
In this paper, we study the problem of counting the number of different knapsack solutions with a prescribed cardinality. We present an FPTAS for this problem, based on dynamic programming. We also introduce two different types of semi-fair allocations of indivisible goods between two players. By semi-fair allocations, we mean allocations that ensure that at least one of the two players will be free of envy. We study the problem of counting such allocations and we provide FPTASs for both types, by employing our FPTAS for the prescribed cardinality knapsack problem.
2406.04823
David Samuel
David Samuel
BERTs are Generative In-Context Learners
21 pages, preprint
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper explores the in-context learning capabilities of masked language models, challenging the common view that this ability does not 'emerge' in them. We present an embarrassingly simple inference technique that enables DeBERTa to operate as a generative model without any additional training. Our findings demonstrate that DeBERTa can match and even surpass GPT-3, its contemporary that famously introduced the paradigm of in-context learning. The comparative analysis reveals that the masked and causal language models behave very differently, as they clearly outperform each other on different categories of tasks. This suggests that there is great potential for a hybrid training approach that takes advantage of the strengths of both training objectives.
[ { "created": "Fri, 7 Jun 2024 10:48:45 GMT", "version": "v1" } ]
2024-06-10
[ [ "Samuel", "David", "" ] ]
This paper explores the in-context learning capabilities of masked language models, challenging the common view that this ability does not 'emerge' in them. We present an embarrassingly simple inference technique that enables DeBERTa to operate as a generative model without any additional training. Our findings demonstrate that DeBERTa can match and even surpass GPT-3, its contemporary that famously introduced the paradigm of in-context learning. The comparative analysis reveals that the masked and causal language models behave very differently, as they clearly outperform each other on different categories of tasks. This suggests that there is great potential for a hybrid training approach that takes advantage of the strengths of both training objectives.
cs/0511063
Michele Finelli
Michele Finelli
Pathwords: a user-friendly schema for common passwords management
null
null
null
null
cs.CR
null
Many computer-based authentication schemata are based on passwords. Logging on a computer, reading email, and accessing content on a web server are all examples of applications where the identification of the user is usually accomplished by matching the data provided by the user with data known by the application. Such a widespread approach relies on some assumptions, whose satisfaction is of foremost importance to guarantee the robustness of the solution. Some of these assumptions, like having a "secure" channel to transmit data, or having sound algorithms to check the correctness of the data, are not addressed by this paper. We will focus on two simple issues: the problem of using adequate passwords and the problem of managing passwords. The proposed solution, the pathword, is a method that guarantees: (1) that the passwords generated with the help of a pathword are adequate (i.e., that they are not easy to guess), and (2) that managing pathwords is more user-friendly than managing passwords and that pathwords are less amenable to problems typical of passwords.
[ { "created": "Wed, 16 Nov 2005 17:46:48 GMT", "version": "v1" } ]
2007-05-23
[ [ "Finelli", "Michele", "" ] ]
Many computer-based authentication schemata are based on passwords. Logging on a computer, reading email, and accessing content on a web server are all examples of applications where the identification of the user is usually accomplished by matching the data provided by the user with data known by the application. Such a widespread approach relies on some assumptions, whose satisfaction is of foremost importance to guarantee the robustness of the solution. Some of these assumptions, like having a "secure" channel to transmit data, or having sound algorithms to check the correctness of the data, are not addressed by this paper. We will focus on two simple issues: the problem of using adequate passwords and the problem of managing passwords. The proposed solution, the pathword, is a method that guarantees: (1) that the passwords generated with the help of a pathword are adequate (i.e., that they are not easy to guess), and (2) that managing pathwords is more user-friendly than managing passwords and that pathwords are less amenable to problems typical of passwords.
2402.01386
Zeeshan Rasheed Mr
Zeeshan Rasheed, Muhammad Waseem, Aakash Ahmad, Kai-Kristian Kemell, Wang Xiaofeng, Anh Nguyen Duc, Pekka Abrahamsson
Can Large Language Models Serve as Data Analysts? A Multi-Agent Assisted Approach for Qualitative Data Analysis
9 pages and 2 figures
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advancements in Large Language Models (LLMs) have enabled collaborative human-bot interactions in Software Engineering (SE), similar to many other professions. However, the potential benefits and implications of incorporating LLMs into qualitative data analysis in SE have not been completely explored. For instance, conducting qualitative data analysis manually can be a time-consuming, effort-intensive, and error-prone task for researchers. LLM-based solutions, such as generative AI models trained on massive datasets, can be utilized to automate tasks in software development as well as in qualitative data analysis. To this end, we utilized LLMs to automate and expedite the qualitative data analysis processes. We employed a multi-agent model, where each agent was tasked with executing distinct, individual research-related activities. Our proposed model interpreted large quantities of textual documents and interview transcripts to perform several common tasks used in qualitative analysis. The results show that this technical assistant significantly speeds up the data analysis process, enabling researchers to manage larger datasets much more effectively. Furthermore, this approach introduces a new dimension of scalability and accuracy in qualitative research, potentially transforming data interpretation methodologies in SE.
[ { "created": "Fri, 2 Feb 2024 13:10:46 GMT", "version": "v1" } ]
2024-02-05
[ [ "Rasheed", "Zeeshan", "" ], [ "Waseem", "Muhammad", "" ], [ "Ahmad", "Aakash", "" ], [ "Kemell", "Kai-Kristian", "" ], [ "Xiaofeng", "Wang", "" ], [ "Duc", "Anh Nguyen", "" ], [ "Abrahamsson", "Pekka", "" ] ]
Recent advancements in Large Language Models (LLMs) have enabled collaborative human-bot interactions in Software Engineering (SE), similar to many other professions. However, the potential benefits and implications of incorporating LLMs into qualitative data analysis in SE have not been completely explored. For instance, conducting qualitative data analysis manually can be a time-consuming, effort-intensive, and error-prone task for researchers. LLM-based solutions, such as generative AI models trained on massive datasets, can be utilized to automate tasks in software development as well as in qualitative data analysis. To this end, we utilized LLMs to automate and expedite the qualitative data analysis processes. We employed a multi-agent model, where each agent was tasked with executing distinct, individual research-related activities. Our proposed model interpreted large quantities of textual documents and interview transcripts to perform several common tasks used in qualitative analysis. The results show that this technical assistant significantly speeds up the data analysis process, enabling researchers to manage larger datasets much more effectively. Furthermore, this approach introduces a new dimension of scalability and accuracy in qualitative research, potentially transforming data interpretation methodologies in SE.
1703.08206
Manuel Peuster
Manuel Peuster and Holger Karl
Understand Your Chains: Towards Performance Profile-based Network Service Management
Submitted to and accepted by the European Workshop on Software Defined Networks (EWSDN) 2016
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Allocating resources to virtualized network functions and services to meet service level agreements is a challenging task for NFV management and orchestration systems. This becomes even more challenging when agile development methodologies, like DevOps, are applied. In such scenarios, management and orchestration systems are continuously facing new versions of functions and services, which makes it hard to decide how many resources have to be allocated to them to provide the expected service performance. One solution for this problem is to support resource allocation decisions with performance behavior information obtained by profiling techniques applied to such network functions and services. In this position paper, we analyze and discuss the components needed to generate such performance behavior information within the NFV DevOps workflow. We also outline research questions that identify open issues and missing pieces for a fully integrated NFV profiling solution. Further, we introduce a novel profiling mechanism that is able to profile virtualized network functions and entire network service chains under different resource constraints before they are deployed on production infrastructure.
[ { "created": "Thu, 23 Mar 2017 19:12:09 GMT", "version": "v1" } ]
2017-03-27
[ [ "Peuster", "Manuel", "" ], [ "Karl", "Holger", "" ] ]
Allocating resources to virtualized network functions and services to meet service level agreements is a challenging task for NFV management and orchestration systems. This becomes even more challenging when agile development methodologies, like DevOps, are applied. In such scenarios, management and orchestration systems are continuously facing new versions of functions and services, which makes it hard to decide how many resources have to be allocated to them to provide the expected service performance. One solution for this problem is to support resource allocation decisions with performance behavior information obtained by profiling techniques applied to such network functions and services. In this position paper, we analyze and discuss the components needed to generate such performance behavior information within the NFV DevOps workflow. We also outline research questions that identify open issues and missing pieces for a fully integrated NFV profiling solution. Further, we introduce a novel profiling mechanism that is able to profile virtualized network functions and entire network service chains under different resource constraints before they are deployed on production infrastructure.
2403.19449
Marcin Hoffmann
Marcin Hoffmann, Pawe{\l} Kryszkiewicz
O-RAN for Energy-Efficient Serving Cluster Formulation in User-Centric Cell-Free MMIMO
Accepted for presentation during The 2nd Workshop on Next-generation Open and Programmable Radio Access Networks (NG-OPERA), organized in conjunction with IEEE International Conference on Computer Communications, May 20, 2024
null
null
null
cs.IT cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The 6G Massive Multiple-Input Multiple-Output (MMIMO) networks can follow the so-called User-Centric Cell-Free (UCCF) architecture, where a single user is served by multiple Access Points (APs) coordinated by the Central Processing Unit (CPU). In this paper, we propose how O-RAN functionalities, i.e., an rApp-xApp pair, can be used for energy-efficient Serving Cluster Formulation (SCF). Simulation studies show up to 37% gain in Energy Efficiency (EE) of the proposed solution over the state-of-the-art Network-Centric (NC) designs.
[ { "created": "Thu, 28 Mar 2024 14:17:19 GMT", "version": "v1" } ]
2024-03-29
[ [ "Hoffmann", "Marcin", "" ], [ "Kryszkiewicz", "Paweł", "" ] ]
The 6G Massive Multiple-Input Multiple-Output (MMIMO) networks can follow the so-called User-Centric Cell-Free (UCCF) architecture, where a single user is served by multiple Access Points (APs) coordinated by the Central Processing Unit (CPU). In this paper, we propose how O-RAN functionalities, i.e., an rApp-xApp pair, can be used for energy-efficient Serving Cluster Formulation (SCF). Simulation studies show up to 37% gain in Energy Efficiency (EE) of the proposed solution over the state-of-the-art Network-Centric (NC) designs.
2301.06923
Zhibo Zhang
Zhibo Zhang, Sani Umar, Ahmed Y. Al Hammadi, Sangyoung Yoon, Ernesto Damiani, Claudio Agostino Ardagna, Nicola Bena, and Chan Yeob Yeun
Explainable Data Poison Attacks on Human Emotion Evaluation Systems based on EEG Signals
null
IEEE Access 2023
10.1109/ACCESS.2023.3245813
null
cs.LG eess.SP
http://creativecommons.org/licenses/by/4.0/
The major aim of this paper is to explain, from the attackers' perspective, data poisoning attacks that use label flipping during the training stage of electroencephalogram (EEG) signal-based human emotion evaluation systems deploying Machine Learning models. Human emotion evaluation using EEG signals has consistently attracted a lot of research attention. The identification of human emotional states based on EEG signals is effective for detecting potential internal threats caused by insider individuals. Nevertheless, EEG signal-based human emotion evaluation systems have shown several vulnerabilities to data poisoning attacks. The findings of the experiments demonstrate that the suggested data poisoning attacks succeed independently of the model, although different models exhibit varying levels of resilience to the attacks. In addition, the data poisoning attacks on the EEG signal-based human emotion evaluation systems are explained with several Explainable Artificial Intelligence (XAI) methods, including Shapley Additive Explanation (SHAP) values, Local Interpretable Model-agnostic Explanations (LIME), and Generated Decision Trees. The code for this paper is publicly available on GitHub.
[ { "created": "Tue, 17 Jan 2023 14:44:46 GMT", "version": "v1" } ]
2023-03-15
[ [ "Zhang", "Zhibo", "" ], [ "Umar", "Sani", "" ], [ "Hammadi", "Ahmed Y. Al", "" ], [ "Yoon", "Sangyoung", "" ], [ "Damiani", "Ernesto", "" ], [ "Ardagna", "Claudio Agostino", "" ], [ "Bena", "Nicola", "" ], [ "Yeun", "Chan Yeob", "" ] ]
The major aim of this paper is to explain, from the attackers' perspective, data poisoning attacks that use label flipping during the training stage of electroencephalogram (EEG) signal-based human emotion evaluation systems deploying Machine Learning models. Human emotion evaluation using EEG signals has consistently attracted a lot of research attention. The identification of human emotional states based on EEG signals is effective for detecting potential internal threats caused by insider individuals. Nevertheless, EEG signal-based human emotion evaluation systems have shown several vulnerabilities to data poisoning attacks. The findings of the experiments demonstrate that the suggested data poisoning attacks succeed independently of the model, although different models exhibit varying levels of resilience to the attacks. In addition, the data poisoning attacks on the EEG signal-based human emotion evaluation systems are explained with several Explainable Artificial Intelligence (XAI) methods, including Shapley Additive Explanation (SHAP) values, Local Interpretable Model-agnostic Explanations (LIME), and Generated Decision Trees. The code for this paper is publicly available on GitHub.
2005.08625
Na Li
Na Li, Xinbo Zhao, Chong Ma
JointsGait:A model-based Gait Recognition Method based on Gait Graph Convolutional Networks and Joints Relationship Pyramid Mapping
The paper format was changed and experiments on other databases were added. The format and page layout were changed greatly
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gait, as a unique biometric feature, has the advantage of being recognizable from a long distance away and can be widely used in public security. Considering that 3D pose estimation is more challenging than 2D pose estimation in practice, this paper investigates using 2D joints to recognize gait, and a new model-based gait recognition method, JointsGait, is put forward to extract gait information from 2D human body joints. Appearance-based gait recognition algorithms have been prevalent, but appearance features suffer from external factors that can cause drastic appearance variations, e.g. clothing. Unlike previous approaches, JointsGait first extracts spatio-temporal features from 2D joints using gait graph convolutional networks, which are less affected by external factors. Second, Joints Relationship Pyramid Mapping (JRPM) is proposed to map spatio-temporal gait features into a discriminative feature space with biological advantages, according to the relationship of human joints at various scales when people are walking. Finally, we design a fusion loss strategy to help the joint features be insensitive to view changes. Our method is evaluated on two large datasets, the Kinect Gait Biometry Dataset and CASIA-B. On the Kinect Gait Biometry Dataset, JointsGait uses only the corresponding 2D coordinates of joints, yet achieves satisfactory recognition accuracy compared with model-based algorithms using 3D joints. On the CASIA-B database, the proposed method greatly outperforms advanced model-based methods in all walking conditions, and even performs better than state-of-the-art appearance-based methods when clothing seriously affects people's appearance. The experimental results demonstrate that JointsGait achieves state-of-the-art performance despite the low-dimensional features (2D body joints) and is less affected by view and clothing variations.
[ { "created": "Mon, 27 Apr 2020 08:30:37 GMT", "version": "v1" }, { "created": "Wed, 9 Dec 2020 09:12:03 GMT", "version": "v2" } ]
2020-12-10
[ [ "Li", "Na", "" ], [ "Zhao", "Xinbo", "" ], [ "Ma", "Chong", "" ] ]
Gait, as a unique biometric feature, has the advantage of being recognizable from a long distance away and can be widely used in public security. Considering that 3D pose estimation is more challenging than 2D pose estimation in practice, this paper investigates using 2D joints to recognize gait, and a new model-based gait recognition method, JointsGait, is put forward to extract gait information from 2D human body joints. Appearance-based gait recognition algorithms have been prevalent, but appearance features suffer from external factors that can cause drastic appearance variations, e.g. clothing. Unlike previous approaches, JointsGait first extracts spatio-temporal features from 2D joints using gait graph convolutional networks, which are less affected by external factors. Second, Joints Relationship Pyramid Mapping (JRPM) is proposed to map spatio-temporal gait features into a discriminative feature space with biological advantages, according to the relationship of human joints at various scales when people are walking. Finally, we design a fusion loss strategy to help the joint features be insensitive to view changes. Our method is evaluated on two large datasets, the Kinect Gait Biometry Dataset and CASIA-B. On the Kinect Gait Biometry Dataset, JointsGait uses only the corresponding 2D coordinates of joints, yet achieves satisfactory recognition accuracy compared with model-based algorithms using 3D joints. On the CASIA-B database, the proposed method greatly outperforms advanced model-based methods in all walking conditions, and even performs better than state-of-the-art appearance-based methods when clothing seriously affects people's appearance. The experimental results demonstrate that JointsGait achieves state-of-the-art performance despite the low-dimensional features (2D body joints) and is less affected by view and clothing variations.
2403.14679
Lorenzo Pellegrini
Davide Maltoni, Lorenzo Pellegrini
Continual Learning by Three-Phase Consolidation
13 pages, 2 figures, 8 tables. Preprint under review
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
TPC (Three-Phase Consolidation) is here introduced as a simple but effective approach to continually learn new classes (and/or instances of known classes) while controlling forgetting of previous knowledge. Each experience (a.k.a. task) is learned in three phases characterized by different rules and learning dynamics, aimed at removing the class-bias problem (due to class imbalance) and limiting gradient-based corrections to prevent forgetting of underrepresented classes. Several experiments on complex datasets demonstrate its accuracy and efficiency advantages over competitive existing approaches. The algorithm and all the results presented in this paper are fully reproducible thanks to its publication on the Avalanche open framework for continual learning.
[ { "created": "Tue, 12 Mar 2024 15:31:14 GMT", "version": "v1" } ]
2024-03-25
[ [ "Maltoni", "Davide", "" ], [ "Pellegrini", "Lorenzo", "" ] ]
TPC (Three-Phase Consolidation) is here introduced as a simple but effective approach to continually learn new classes (and/or instances of known classes) while controlling forgetting of previous knowledge. Each experience (a.k.a. task) is learned in three phases characterized by different rules and learning dynamics, aimed at removing the class-bias problem (due to class imbalance) and limiting gradient-based corrections to prevent forgetting of underrepresented classes. Several experiments on complex datasets demonstrate its accuracy and efficiency advantages over competitive existing approaches. The algorithm and all the results presented in this paper are fully reproducible thanks to its publication on the Avalanche open framework for continual learning.
1210.1357
Bojin Zheng
Jun Qin, Hongrun Wu, Xiaonian Tong, Bojin Zheng
A quantitative method for determining the robustness of complex networks
null
Physica D 2013 253 85--90
10.1016/j.physd.2013.03.002
null
cs.SI nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most current studies estimate the invulnerability of complex networks using a qualitative method that analyzes the inaccurate decay rate of network efficiency. This method results in confusion over the invulnerability of various types of complex networks. By normalizing network efficiency and defining a baseline, this paper defines the invulnerability index as the integral of the difference between the normalized network efficiency curve and the baseline. This quantitative method seeks to establish a benchmark for the robustness and fragility of networks and to measure network invulnerability under both edge and node attacks. To validate the reliability of the proposed method, three small-world networks were selected as test beds. The simulation results indicate that the proposed invulnerability index can effectively and accurately quantify network resilience. The index should provide a valuable reference for determining network invulnerability in future research.
[ { "created": "Thu, 4 Oct 2012 09:46:26 GMT", "version": "v1" }, { "created": "Sat, 23 Mar 2013 09:33:00 GMT", "version": "v2" } ]
2014-02-18
[ [ "Qin", "Jun", "" ], [ "Wu", "Hongrun", "" ], [ "Tong", "Xiaonian", "" ], [ "Zheng", "Bojin", "" ] ]
Most current studies estimate the invulnerability of complex networks using a qualitative method that analyzes the inaccurate decay rate of network efficiency. This method results in confusion over the invulnerability of various types of complex networks. By normalizing network efficiency and defining a baseline, this paper defines the invulnerability index as the integral of the difference between the normalized network efficiency curve and the baseline. This quantitative method seeks to establish a benchmark for the robustness and fragility of networks and to measure network invulnerability under both edge and node attacks. To validate the reliability of the proposed method, three small-world networks were selected as test beds. The simulation results indicate that the proposed invulnerability index can effectively and accurately quantify network resilience. The index should provide a valuable reference for determining network invulnerability in future research.
2107.13180
Javier Naranjo-Alcazar
Javier Naranjo-Alcazar, Sergi Perez-Castanos, Aaron Lopez-Garcia, Pedro Zuccarello, Maximo Cobos, Francesc J. Ferri
Squeeze-Excitation Convolutional Recurrent Neural Networks for Audio-Visual Scene Classification
null
null
null
null
cs.MM cs.CV cs.SD eess.AS eess.IV
http://creativecommons.org/publicdomain/zero/1.0/
The use of multiple and semantically correlated sources can provide complementary information that may not be evident when working with individual modalities on their own. In this context, multi-modal models can help produce more accurate and robust predictions in machine learning tasks where audio-visual data is available. This paper presents a multi-modal model for automatic scene classification that simultaneously exploits auditory and visual information. The proposed approach makes use of two separate networks which are respectively trained in isolation on audio and visual data, so that each network specializes in a given modality. The visual subnetwork is a pre-trained VGG16 model followed by a bidirectional recurrent layer, while the residual audio subnetwork is based on stacked squeeze-excitation convolutional blocks trained from scratch. After training each subnetwork, the fusion of information from the audio and visual streams is performed at two different stages. The early fusion stage combines features resulting from the last convolutional block of the respective subnetworks at different time steps to feed a bidirectional recurrent structure. The late fusion stage combines the output of the early fusion stage with the independent predictions provided by the two subnetworks, resulting in the final prediction. We evaluate the method using the recently published TAU Audio-Visual Urban Scenes 2021 dataset, which contains synchronized audio and video recordings from 12 European cities in 10 different scene classes. The proposed model has been shown to provide an excellent trade-off between prediction performance (86.5%) and system complexity (15M parameters) in the evaluation results of the DCASE 2021 Challenge.
[ { "created": "Wed, 28 Jul 2021 06:10:10 GMT", "version": "v1" } ]
2021-07-29
[ [ "Naranjo-Alcazar", "Javier", "" ], [ "Perez-Castanos", "Sergi", "" ], [ "Lopez-Garcia", "Aaron", "" ], [ "Zuccarello", "Pedro", "" ], [ "Cobos", "Maximo", "" ], [ "Ferri", "Francesc J.", "" ] ]
The use of multiple and semantically correlated sources can provide complementary information that may not be evident when working with individual modalities on their own. In this context, multi-modal models can help produce more accurate and robust predictions in machine learning tasks where audio-visual data is available. This paper presents a multi-modal model for automatic scene classification that simultaneously exploits auditory and visual information. The proposed approach makes use of two separate networks which are respectively trained in isolation on audio and visual data, so that each network specializes in a given modality. The visual subnetwork is a pre-trained VGG16 model followed by a bidirectional recurrent layer, while the residual audio subnetwork is based on stacked squeeze-excitation convolutional blocks trained from scratch. After training each subnetwork, the fusion of information from the audio and visual streams is performed at two different stages. The early fusion stage combines features resulting from the last convolutional block of the respective subnetworks at different time steps to feed a bidirectional recurrent structure. The late fusion stage combines the output of the early fusion stage with the independent predictions provided by the two subnetworks, resulting in the final prediction. We evaluate the method using the recently published TAU Audio-Visual Urban Scenes 2021 dataset, which contains synchronized audio and video recordings from 12 European cities in 10 different scene classes. The proposed model has been shown to provide an excellent trade-off between prediction performance (86.5%) and system complexity (15M parameters) in the evaluation results of the DCASE 2021 Challenge.
2009.04416
Karl Cobbe
Karl Cobbe, Jacob Hilton, Oleg Klimov, John Schulman
Phasic Policy Gradient
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Phasic Policy Gradient (PPG), a reinforcement learning framework which modifies traditional on-policy actor-critic methods by separating policy and value function training into distinct phases. In prior methods, one must choose between using a shared network or separate networks to represent the policy and value function. Using separate networks avoids interference between objectives, while using a shared network allows useful features to be shared. PPG is able to achieve the best of both worlds by splitting optimization into two phases, one that advances training and one that distills features. PPG also enables the value function to be more aggressively optimized with a higher level of sample reuse. Compared to PPO, we find that PPG significantly improves sample efficiency on the challenging Procgen Benchmark.
[ { "created": "Wed, 9 Sep 2020 16:52:53 GMT", "version": "v1" } ]
2020-09-10
[ [ "Cobbe", "Karl", "" ], [ "Hilton", "Jacob", "" ], [ "Klimov", "Oleg", "" ], [ "Schulman", "John", "" ] ]
We introduce Phasic Policy Gradient (PPG), a reinforcement learning framework which modifies traditional on-policy actor-critic methods by separating policy and value function training into distinct phases. In prior methods, one must choose between using a shared network or separate networks to represent the policy and value function. Using separate networks avoids interference between objectives, while using a shared network allows useful features to be shared. PPG is able to achieve the best of both worlds by splitting optimization into two phases, one that advances training and one that distills features. PPG also enables the value function to be more aggressively optimized with a higher level of sample reuse. Compared to PPO, we find that PPG significantly improves sample efficiency on the challenging Procgen Benchmark.
1303.5855
Zhong-Yuan Zhang
Zhong-Yuan Zhang and Yong Wang and Yong-Yeol Ahn
Overlapping Community Detection in Complex Networks using Symmetric Binary Matrix Factorization
null
null
10.1103/PhysRevE.87.062803
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Discovering overlapping community structures is a crucial step to understanding the structure and dynamics of many networks. In this paper we develop a symmetric binary matrix factorization model (SBMF) to identify overlapping communities. Our model allows us not only to assign community memberships explicitly to nodes, but also to distinguish outliers from overlapping nodes. In addition, we propose a modified partition density to evaluate the quality of community structures. We use this to determine the most appropriate number of communities. We evaluate our methods using both synthetic benchmarks and real world networks, demonstrating the effectiveness of our approach.
[ { "created": "Sat, 23 Mar 2013 15:16:44 GMT", "version": "v1" } ]
2015-06-15
[ [ "Zhang", "Zhong-Yuan", "" ], [ "Wang", "Yong", "" ], [ "Ahn", "Yong-Yeol", "" ] ]
Discovering overlapping community structures is a crucial step to understanding the structure and dynamics of many networks. In this paper we develop a symmetric binary matrix factorization model (SBMF) to identify overlapping communities. Our model allows us not only to assign community memberships explicitly to nodes, but also to distinguish outliers from overlapping nodes. In addition, we propose a modified partition density to evaluate the quality of community structures. We use this to determine the most appropriate number of communities. We evaluate our methods using both synthetic benchmarks and real world networks, demonstrating the effectiveness of our approach.
0909.4767
Christine Bachoc
Christine Bachoc (IMB)
Semidefinite programming, harmonic analysis and coding theory
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
These lecture notes were presented as a course at the CIMPA summer school "Semidefinite programming in algebraic combinatorics", Manila, July 20-30, 2009. This version is an update from June 2010.
[ { "created": "Fri, 25 Sep 2009 19:04:18 GMT", "version": "v1" }, { "created": "Wed, 8 Sep 2010 11:37:01 GMT", "version": "v2" } ]
2010-09-09
[ [ "Bachoc", "Christine", "", "IMB" ] ]
These lecture notes were presented as a course at the CIMPA summer school "Semidefinite programming in algebraic combinatorics", Manila, July 20-30, 2009. This version is an update from June 2010.
1401.3075
Qifu Sun
Qifu (Tyler) Sun, Xunrui Yin, Zongpeng Li, and Keping Long
Multicast Network Coding and Field Sizes
null
null
10.1109/TIT.2015.2473863
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In an acyclic multicast network, it is well known that a linear network coding solution over GF($q$) exists when $q$ is sufficiently large. In particular, for each prime power $q$ no smaller than the number of receivers, a linear solution over GF($q$) can be efficiently constructed. In this work, we reveal that a linear solution over a given finite field does \emph{not} necessarily imply the existence of a linear solution over all larger finite fields. Specifically, we prove by construction that: (i) For every source dimension no smaller than 3, there is a multicast network linearly solvable over GF(7) but not over GF(8), and another multicast network linearly solvable over GF(16) but not over GF(17); (ii) There is a multicast network linearly solvable over GF(5) but not over any GF($q$) such that $q > 5$ is a Mersenne prime plus 1, which can be extremely large; (iii) A multicast network linearly solvable over GF($q^{m_1}$) and over GF($q^{m_2}$) is \emph{not} necessarily linearly solvable over GF($q^{m_1+m_2}$); (iv) There exists a class of multicast networks with a set $T$ of receivers such that the minimum field size $q_{min}$ for a linear solution over GF($q_{min}$) is lower bounded by $\Theta(\sqrt{|T|})$, but not every field larger than GF($q_{min}$) suffices to yield a linear solution. The insight brought by this work is that not only the field size, but also the order of subgroups in the multiplicative group of a finite field, affects the linear solvability of a multicast network.
[ { "created": "Tue, 14 Jan 2014 06:03:54 GMT", "version": "v1" }, { "created": "Fri, 13 Feb 2015 06:34:47 GMT", "version": "v2" } ]
2016-09-26
[ [ "Qifu", "", "", "Tyler" ], [ "Sun", "", "" ], [ "Yin", "Xunrui", "" ], [ "Li", "Zongpeng", "" ], [ "Long", "Keping", "" ] ]
In an acyclic multicast network, it is well known that a linear network coding solution over GF($q$) exists when $q$ is sufficiently large. In particular, for each prime power $q$ no smaller than the number of receivers, a linear solution over GF($q$) can be efficiently constructed. In this work, we reveal that a linear solution over a given finite field does \emph{not} necessarily imply the existence of a linear solution over all larger finite fields. Specifically, we prove by construction that: (i) For every source dimension no smaller than 3, there is a multicast network linearly solvable over GF(7) but not over GF(8), and another multicast network linearly solvable over GF(16) but not over GF(17); (ii) There is a multicast network linearly solvable over GF(5) but not over any GF($q$) such that $q > 5$ is a Mersenne prime plus 1, which can be extremely large; (iii) A multicast network linearly solvable over GF($q^{m_1}$) and over GF($q^{m_2}$) is \emph{not} necessarily linearly solvable over GF($q^{m_1+m_2}$); (iv) There exists a class of multicast networks with a set $T$ of receivers such that the minimum field size $q_{min}$ for a linear solution over GF($q_{min}$) is lower bounded by $\Theta(\sqrt{|T|})$, but not every field larger than GF($q_{min}$) suffices to yield a linear solution. The insight brought by this work is that not only the field size, but also the order of subgroups in the multiplicative group of a finite field, affects the linear solvability of a multicast network.
2112.09631
Archan Ray
Archan Ray, Nicholas Monath, Andrew McCallum, Cameron Musco
Sublinear Time Approximation of Text Similarity Matrices
25 pages, 10 figures
null
null
null
cs.LG cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study algorithms for approximating pairwise similarity matrices that arise in natural language processing. Generally, computing a similarity matrix for $n$ data points requires $\Omega(n^2)$ similarity computations. This quadratic scaling is a significant bottleneck, especially when similarities are computed via expensive functions, e.g., via transformer models. Approximation methods reduce this quadratic complexity, often by using a small subset of exactly computed similarities to approximate the remainder of the complete pairwise similarity matrix. Significant work focuses on the efficient approximation of positive semidefinite (PSD) similarity matrices, which arise e.g., in kernel methods. However, much less is understood about indefinite (non-PSD) similarity matrices, which often arise in NLP. Motivated by the observation that many of these matrices are still somewhat close to PSD, we introduce a generalization of the popular Nystr\"{o}m method to the indefinite setting. Our algorithm can be applied to any similarity matrix and runs in sublinear time in the size of the matrix, producing a rank-$s$ approximation with just $O(ns)$ similarity computations. We show that our method, along with a simple variant of CUR decomposition, performs very well in approximating a variety of similarity matrices arising in NLP tasks. We demonstrate high accuracy of the approximated similarity matrices in the downstream tasks of document classification, sentence similarity, and cross-document coreference.
[ { "created": "Fri, 17 Dec 2021 17:04:34 GMT", "version": "v1" }, { "created": "Wed, 16 Feb 2022 19:18:56 GMT", "version": "v2" }, { "created": "Wed, 27 Apr 2022 13:56:51 GMT", "version": "v3" } ]
2022-04-28
[ [ "Ray", "Archan", "" ], [ "Monath", "Nicholas", "" ], [ "McCallum", "Andrew", "" ], [ "Musco", "Cameron", "" ] ]
We study algorithms for approximating pairwise similarity matrices that arise in natural language processing. Generally, computing a similarity matrix for $n$ data points requires $\Omega(n^2)$ similarity computations. This quadratic scaling is a significant bottleneck, especially when similarities are computed via expensive functions, e.g., via transformer models. Approximation methods reduce this quadratic complexity, often by using a small subset of exactly computed similarities to approximate the remainder of the complete pairwise similarity matrix. Significant work focuses on the efficient approximation of positive semidefinite (PSD) similarity matrices, which arise e.g., in kernel methods. However, much less is understood about indefinite (non-PSD) similarity matrices, which often arise in NLP. Motivated by the observation that many of these matrices are still somewhat close to PSD, we introduce a generalization of the popular Nystr\"{o}m method to the indefinite setting. Our algorithm can be applied to any similarity matrix and runs in sublinear time in the size of the matrix, producing a rank-$s$ approximation with just $O(ns)$ similarity computations. We show that our method, along with a simple variant of CUR decomposition, performs very well in approximating a variety of similarity matrices arising in NLP tasks. We demonstrate high accuracy of the approximated similarity matrices in the downstream tasks of document classification, sentence similarity, and cross-document coreference.
2406.17826
Krzysztof Kotowski PhD
Krzysztof Kotowski, Christoph Haskamp, Jacek Andrzejewski, Bogdan Ruszczak, Jakub Nalepa, Daniel Lakey, Peter Collins, Aybike Kolmas, Mauro Bartesaghi, Jose Martinez-Heras, Gabriele De Canio
European Space Agency Benchmark for Anomaly Detection in Satellite Telemetry
87 pages, 24 figures, 19 tables
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning has vast potential to improve anomaly detection in satellite telemetry which is a crucial task for spacecraft operations. This potential is currently hampered by a lack of comprehensible benchmarks for multivariate time series anomaly detection, especially for the challenging case of satellite telemetry. The European Space Agency Benchmark for Anomaly Detection in Satellite Telemetry (ESA-ADB) aims to address this challenge and establish a new standard in the domain. It is a result of close cooperation between spacecraft operations engineers from the European Space Agency (ESA) and machine learning experts. The newly introduced ESA Anomalies Dataset contains annotated real-life telemetry from three different ESA missions, out of which two are included in ESA-ADB. Results of typical anomaly detection algorithms assessed in our novel hierarchical evaluation pipeline show that new approaches are necessary to address operators' needs. All elements of ESA-ADB are publicly available to ensure its full reproducibility.
[ { "created": "Tue, 25 Jun 2024 13:23:37 GMT", "version": "v1" } ]
2024-06-27
[ [ "Kotowski", "Krzysztof", "" ], [ "Haskamp", "Christoph", "" ], [ "Andrzejewski", "Jacek", "" ], [ "Ruszczak", "Bogdan", "" ], [ "Nalepa", "Jakub", "" ], [ "Lakey", "Daniel", "" ], [ "Collins", "Peter", "" ], [ "Kolmas", "Aybike", "" ], [ "Bartesaghi", "Mauro", "" ], [ "Martinez-Heras", "Jose", "" ], [ "De Canio", "Gabriele", "" ] ]
Machine learning has vast potential to improve anomaly detection in satellite telemetry which is a crucial task for spacecraft operations. This potential is currently hampered by a lack of comprehensible benchmarks for multivariate time series anomaly detection, especially for the challenging case of satellite telemetry. The European Space Agency Benchmark for Anomaly Detection in Satellite Telemetry (ESA-ADB) aims to address this challenge and establish a new standard in the domain. It is a result of close cooperation between spacecraft operations engineers from the European Space Agency (ESA) and machine learning experts. The newly introduced ESA Anomalies Dataset contains annotated real-life telemetry from three different ESA missions, out of which two are included in ESA-ADB. Results of typical anomaly detection algorithms assessed in our novel hierarchical evaluation pipeline show that new approaches are necessary to address operators' needs. All elements of ESA-ADB are publicly available to ensure its full reproducibility.
2207.01234
Vishnu Raj
Vishnu Raj, Tianyu Cui, Markus Heinonen and Pekka Marttinen
Incorporating functional summary information in Bayesian neural networks using a Dirichlet process likelihood approach
Accepted in AISTATS 2023
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian neural networks (BNNs) can account for both aleatoric and epistemic uncertainty. However, in BNNs the priors are often specified over the weights which rarely reflects true prior knowledge in large and complex neural network architectures. We present a simple approach to incorporate prior knowledge in BNNs based on external summary information about the predicted classification probabilities for a given dataset. The available summary information is incorporated as augmented data and modeled with a Dirichlet process, and we derive the corresponding \emph{Summary Evidence Lower BOund}. The approach is founded on Bayesian principles, and all hyperparameters have a proper probabilistic interpretation. We show how the method can inform the model about task difficulty and class imbalance. Extensive experiments show that, with negligible computational overhead, our method parallels and in many cases outperforms popular alternatives in accuracy, uncertainty calibration, and robustness against corruptions with both balanced and imbalanced data.
[ { "created": "Mon, 4 Jul 2022 07:06:45 GMT", "version": "v1" }, { "created": "Tue, 24 Jan 2023 08:08:11 GMT", "version": "v2" } ]
2023-01-25
[ [ "Raj", "Vishnu", "" ], [ "Cui", "Tianyu", "" ], [ "Heinonen", "Markus", "" ], [ "Marttinen", "Pekka", "" ] ]
Bayesian neural networks (BNNs) can account for both aleatoric and epistemic uncertainty. However, in BNNs the priors are often specified over the weights, which rarely reflects true prior knowledge in large and complex neural network architectures. We present a simple approach to incorporate prior knowledge in BNNs based on external summary information about the predicted classification probabilities for a given dataset. The available summary information is incorporated as augmented data and modeled with a Dirichlet process, and we derive the corresponding \emph{Summary Evidence Lower BOund}. The approach is founded on Bayesian principles, and all hyperparameters have a proper probabilistic interpretation. We show how the method can inform the model about task difficulty and class imbalance. Extensive experiments show that, with negligible computational overhead, our method parallels and in many cases outperforms popular alternatives in accuracy, uncertainty calibration, and robustness against corruptions with both balanced and imbalanced data.
2210.07564
Xin Tian
Xin Tian, Yingzhan Lin, Mengfei Song, Siqi Bao, Fan Wang, Huang He, Shuqi Sun, Hua Wu
Q-TOD: A Query-driven Task-oriented Dialogue System
Accepted to EMNLP 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing pipelined task-oriented dialogue systems usually have difficulties adapting to unseen domains, whereas end-to-end systems are plagued by large-scale knowledge bases in practice. In this paper, we introduce a novel query-driven task-oriented dialogue system, namely Q-TOD. The essential information from the dialogue context is extracted into a query, which is further employed to retrieve relevant knowledge records for response generation. Firstly, as the query is in the form of natural language and not confined to the schema of the knowledge base, the issue of domain adaptation is alleviated remarkably in Q-TOD. Secondly, as the query enables the decoupling of knowledge retrieval from generation, Q-TOD gets rid of the issue of knowledge base scalability. To evaluate the effectiveness of the proposed Q-TOD, we collect query annotations for three publicly available task-oriented dialogue datasets. Comprehensive experiments verify that Q-TOD outperforms strong baselines and establishes a new state-of-the-art performance on these datasets.
[ { "created": "Fri, 14 Oct 2022 06:38:19 GMT", "version": "v1" } ]
2022-10-17
[ [ "Tian", "Xin", "" ], [ "Lin", "Yingzhan", "" ], [ "Song", "Mengfei", "" ], [ "Bao", "Siqi", "" ], [ "Wang", "Fan", "" ], [ "He", "Huang", "" ], [ "Sun", "Shuqi", "" ], [ "Wu", "Hua", "" ] ]
Existing pipelined task-oriented dialogue systems usually have difficulties adapting to unseen domains, whereas end-to-end systems are plagued by large-scale knowledge bases in practice. In this paper, we introduce a novel query-driven task-oriented dialogue system, namely Q-TOD. The essential information from the dialogue context is extracted into a query, which is further employed to retrieve relevant knowledge records for response generation. Firstly, as the query is in the form of natural language and not confined to the schema of the knowledge base, the issue of domain adaptation is alleviated remarkably in Q-TOD. Secondly, as the query enables the decoupling of knowledge retrieval from generation, Q-TOD gets rid of the issue of knowledge base scalability. To evaluate the effectiveness of the proposed Q-TOD, we collect query annotations for three publicly available task-oriented dialogue datasets. Comprehensive experiments verify that Q-TOD outperforms strong baselines and establishes a new state-of-the-art performance on these datasets.