id
stringlengths
9
10
submitter
stringlengths
1
64
authors
stringlengths
4
20.7k
title
stringlengths
4
246
comments
stringlengths
1
523
journal-ref
stringlengths
4
404
doi
stringlengths
11
153
report-no
stringlengths
2
254
categories
stringlengths
5
98
license
stringclasses
9 values
orig_abstract
stringlengths
14
3.35k
versions
listlengths
1
60
update_date
stringlengths
10
10
authors_parsed
listlengths
1
1.35k
abstract
stringlengths
11
3.34k
1910.04277
Daniel Campos
Daniel Campos, Zoe Konrad
Experiments in Inferring Social Networks of Diffusion
null
null
null
null
cs.SI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information diffusion is a fundamental process that takes place over networks. While it is rarely realistic to observe the individual transmissions of the information diffusion process, it is typically possible to observe when individuals first publish the information. We look specifically at the previously published NETINF algorithm, which probabilistically identifies the optimal network that best explains the observed infection times. We explore how the algorithm could perform on a range of intrinsically different social and information network topologies, from news blogs and websites to Twitter to Reddit.
[ { "created": "Wed, 9 Oct 2019 22:13:25 GMT", "version": "v1" } ]
2019-10-11
[ [ "Campos", "Daniel", "" ], [ "Konrad", "Zoe", "" ] ]
Information diffusion is a fundamental process that takes place over networks. While it is rarely realistic to observe the individual transmissions of the information diffusion process, it is typically possible to observe when individuals first publish the information. We look specifically at the previously published NETINF algorithm, which probabilistically identifies the optimal network that best explains the observed infection times. We explore how the algorithm could perform on a range of intrinsically different social and information network topologies, from news blogs and websites to Twitter to Reddit.
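The abstract above describes inferring a diffusion network from observed infection times. A toy sketch of that idea (not the actual NETINF implementation, which selects edges greedily to maximize a likelihood over propagation trees) scores candidate edges under an assumed exponential transmission-time model; all names and the scoring rule here are illustrative:

```python
import math
from itertools import permutations

def score_edges(cascades, alpha=1.0):
    """cascades: list of {node: infection_time} dicts.
    Score each candidate edge (u, v) by how consistently u is infected
    shortly before v, using the log-likelihood of the delay under Exp(alpha)."""
    scores = {}
    for c in cascades:
        for u, v in permutations(c, 2):
            dt = c[v] - c[u]
            if dt > 0:  # u could have infected v
                scores[(u, v)] = scores.get((u, v), 0.0) + math.log(alpha) - alpha * dt
    return scores

def infer_network(cascades, k):
    """Greedily keep the k highest-scoring candidate edges."""
    scores = score_edges(cascades)
    return sorted(scores, key=scores.get, reverse=True)[:k]

cascades = [{"a": 0.0, "b": 1.0, "c": 2.0}, {"a": 0.0, "b": 1.0}]
edges = infer_network(cascades, 2)
```

NETINF's contribution over this naive selection is performing the combinatorial optimization near-optimally, with guarantees following from submodularity.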
1602.02174
Haris Aziz
Haris Aziz
Participation Incentives in Randomized Social Choice
corrected one proposition from previous version
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When aggregating preferences of agents via voting, two desirable goals are to identify outcomes that are Pareto optimal and to incentivize agents to participate in the voting process. We consider participation notions as formalized by Brandl, Brandt, and Hofbauer (2015) and study how far efficiency and participation are achievable by randomized social choice functions, in particular when agents' preferences are downward lexicographic (DL) or satisfy stochastic dominance (SD). Our results include the following: we prove formal relations between the participation notions with respect to SD and DL, and we show that the maximal recursive rule satisfies very strong participation with respect to both SD and DL.
[ { "created": "Fri, 5 Feb 2016 22:00:07 GMT", "version": "v1" }, { "created": "Tue, 8 Nov 2016 22:39:39 GMT", "version": "v2" } ]
2016-11-10
[ [ "Aziz", "Haris", "" ] ]
When aggregating preferences of agents via voting, two desirable goals are to identify outcomes that are Pareto optimal and to incentivize agents to participate in the voting process. We consider participation notions as formalized by Brandl, Brandt, and Hofbauer (2015) and study how far efficiency and participation are achievable by randomized social choice functions, in particular when agents' preferences are downward lexicographic (DL) or satisfy stochastic dominance (SD). Our results include the following: we prove formal relations between the participation notions with respect to SD and DL, and we show that the maximal recursive rule satisfies very strong participation with respect to both SD and DL.
2002.03491
Xiaoming Chen
Xiaoming Chen, Derrick Wing Kwan Ng, Wei Yu, Erik G. Larsson, Naofal Al-Dhahir, Robert Schober
Massive Access for 5G and Beyond
22 pages, 8 figures, 6 tables
IEEE Journal on Selected Areas in Communications, 2020
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Massive access, also known as massive connectivity or massive machine-type communication (mMTC), is one of the main use cases of the fifth-generation (5G) and beyond 5G (B5G) wireless networks. A typical application of massive access is the cellular Internet of Things (IoT). Different from conventional human-type communication, massive access aims at realizing efficient and reliable communications for a massive number of IoT devices. Hence, the main characteristics of massive access include low power, massive connectivity, and broad coverage, which require new concepts, theories, and paradigms for the design of next-generation cellular networks. This paper presents a comprehensive survey of aspects of massive access design for B5G wireless networks. Specifically, we provide a detailed review of massive access from the perspectives of theory, protocols, techniques, coverage, energy, and security. Furthermore, several future research directions and challenges are identified.
[ { "created": "Mon, 10 Feb 2020 01:31:22 GMT", "version": "v1" }, { "created": "Mon, 3 Aug 2020 03:21:26 GMT", "version": "v2" } ]
2020-08-04
[ [ "Chen", "Xiaoming", "" ], [ "Ng", "Derrick Wing Kwan", "" ], [ "Yu", "Wei", "" ], [ "Larsson", "Erik G.", "" ], [ "Al-Dhahir", "Naofal", "" ], [ "Schober", "Robert", "" ] ]
Massive access, also known as massive connectivity or massive machine-type communication (mMTC), is one of the main use cases of the fifth-generation (5G) and beyond 5G (B5G) wireless networks. A typical application of massive access is the cellular Internet of Things (IoT). Different from conventional human-type communication, massive access aims at realizing efficient and reliable communications for a massive number of IoT devices. Hence, the main characteristics of massive access include low power, massive connectivity, and broad coverage, which require new concepts, theories, and paradigms for the design of next-generation cellular networks. This paper presents a comprehensive survey of aspects of massive access design for B5G wireless networks. Specifically, we provide a detailed review of massive access from the perspectives of theory, protocols, techniques, coverage, energy, and security. Furthermore, several future research directions and challenges are identified.
2312.04193
Adri\'an Bazaga
Adri\'an Bazaga, Pietro Li\`o, Gos Micklem
Language Model Knowledge Distillation for Efficient Question Answering in Spanish
ICLR 2024 Tiny Paper (6 pages, 2 tables)
null
null
null
cs.CL cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Recent advances in the development of pre-trained Spanish language models have led to significant progress in many Natural Language Processing (NLP) tasks, such as question answering. However, the lack of efficient models imposes a barrier to the adoption of such models in resource-constrained environments. Therefore, smaller distilled models for the Spanish language could prove highly scalable and facilitate their further adoption on a variety of tasks and scenarios. In this work, we take one step in this direction by developing SpanishTinyRoBERTa, a compressed language model based on RoBERTa for efficient question answering in Spanish. To achieve this, we employ knowledge distillation from a large model onto a lighter model that allows for wider deployment, even in areas with limited computational resources, whilst attaining a negligible sacrifice in performance. Our experiments show that the dense distilled model can still preserve the performance of its larger counterpart, while achieving a significant inference speedup. This work serves as a starting point for further research and investigation of model compression efforts for Spanish language models across various NLP tasks.
[ { "created": "Thu, 7 Dec 2023 10:21:22 GMT", "version": "v1" }, { "created": "Sat, 16 Mar 2024 17:44:27 GMT", "version": "v2" } ]
2024-03-19
[ [ "Bazaga", "Adrián", "" ], [ "Liò", "Pietro", "" ], [ "Micklem", "Gos", "" ] ]
Recent advances in the development of pre-trained Spanish language models have led to significant progress in many Natural Language Processing (NLP) tasks, such as question answering. However, the lack of efficient models imposes a barrier to the adoption of such models in resource-constrained environments. Therefore, smaller distilled models for the Spanish language could prove highly scalable and facilitate their further adoption on a variety of tasks and scenarios. In this work, we take one step in this direction by developing SpanishTinyRoBERTa, a compressed language model based on RoBERTa for efficient question answering in Spanish. To achieve this, we employ knowledge distillation from a large model onto a lighter model that allows for wider deployment, even in areas with limited computational resources, whilst attaining a negligible sacrifice in performance. Our experiments show that the dense distilled model can still preserve the performance of its larger counterpart, while achieving a significant inference speedup. This work serves as a starting point for further research and investigation of model compression efforts for Spanish language models across various NLP tasks.
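The distillation step described in the abstract above can be illustrated with the standard soft-label objective (temperature-softened KL divergence, scaled by T^2 as is conventional). The temperature and logits below are illustrative, not the SpanishTinyRoBERTa training configuration:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    scaled by T^2 so gradients keep their magnitude as T grows."""
    p = softmax(teacher_logits, T)   # soft teacher targets
    q = softmax(student_logits, T)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl
```

Identical logits yield zero loss; the further the student's distribution drifts from the teacher's, the larger the penalty.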
1610.05725
Anatoly Plotnikov
Anatoly D. Plotnikov
Polynomial-time algorithm for determining the graph isomorphism (v.2)
13 pages, 11 figures
American Journal of Information Science and Computer Engineering, Vol. 3, No. 6, 2017, pp. 71-76
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a methodology of positioning graph vertices relative to each other to solve the problem of determining whether two undirected graphs are isomorphic. Based on the position of a vertex in one of the graphs, the corresponding vertex in the other graph is determined. For the selected vertex of the undirected graph, we define the neighborhoods of the vertices. Next, we construct an auxiliary directed graph spawned by the selected vertex. The vertices of the digraph are positioned by special characteristics --- vectors --- that locate each vertex of the digraph relative to the found neighborhoods. This enables an algorithm for determining graph isomorphism with running time $O(n^4)$.
[ { "created": "Wed, 27 Apr 2016 20:06:12 GMT", "version": "v1" }, { "created": "Thu, 27 Oct 2016 21:03:20 GMT", "version": "v2" } ]
2018-02-13
[ [ "Plotnikov", "Anatoly D.", "" ] ]
We develop a methodology of positioning graph vertices relative to each other to solve the problem of determining whether two undirected graphs are isomorphic. Based on the position of a vertex in one of the graphs, the corresponding vertex in the other graph is determined. For the selected vertex of the undirected graph, we define the neighborhoods of the vertices. Next, we construct an auxiliary directed graph spawned by the selected vertex. The vertices of the digraph are positioned by special characteristics --- vectors --- that locate each vertex of the digraph relative to the found neighborhoods. This enables an algorithm for determining graph isomorphism with running time $O(n^4)$.
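As an illustration of positioning vertices by their neighborhoods, the sketch below applies iterative colour refinement, a classical necessary-condition test in the same spirit: mismatched colour histograms disprove isomorphism, while a match leaves it undecided. This is an assumption-laden stand-in, not the paper's $O(n^4)$ algorithm:

```python
def refine(adj, rounds=3):
    """adj: {vertex: set of neighbours}. Repeatedly re-colour each vertex by
    its current colour plus the sorted colours of its neighbourhood; return
    the sorted multiset of final colours."""
    color = {v: len(adj[v]) for v in adj}          # start from degrees
    for _ in range(rounds):
        color = {v: hash((color[v], tuple(sorted(color[u] for u in adj[v]))))
                 for v in adj}
    return sorted(color.values())

def maybe_isomorphic(adj1, adj2):
    """True if the colour histograms agree (isomorphism not ruled out)."""
    return refine(adj1) == refine(adj2)
```

A path on three vertices and a triangle already differ at the degree stage, so `maybe_isomorphic` rejects them; relabelled copies of the same graph always pass.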
2304.07261
Qingyue Yang
Qingyue Yang, Hongjing Niu, Pengfei Xia, Wei Zhang, Bin Li
Frequency Decomposition to Tap the Potential of Single Domain for Generalization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Domain generalization (DG), which aims at models able to work on multiple unseen domains, is a must-have characteristic of general artificial intelligence. DG based on single-source-domain training data is more challenging due to the lack of comparable information to help identify domain-invariant features. In this paper, we argue that domain-invariant features may already be contained in the single source domain's training samples; the task is then to find proper ways to extract them. We assume that the domain-invariant features are closely related to frequency, and propose a new method that learns through multiple frequency domains. The key idea is to divide the frequency domain of each original image into multiple subdomains and to learn features in each subdomain with a designed two-branch network. In this way, the model is forced to learn features from more samples of a specifically limited spectrum, which increases the possibility of obtaining domain-invariant features that might previously have been concealed by easily learned ones. Extensive experimental investigation reveals that 1) frequency decomposition can help the model learn features that are difficult to learn, and 2) the proposed method outperforms state-of-the-art single-source domain generalization methods.
[ { "created": "Fri, 14 Apr 2023 17:15:47 GMT", "version": "v1" } ]
2023-04-17
[ [ "Yang", "Qingyue", "" ], [ "Niu", "Hongjing", "" ], [ "Xia", "Pengfei", "" ], [ "Zhang", "Wei", "" ], [ "Li", "Bin", "" ] ]
Domain generalization (DG), which aims at models able to work on multiple unseen domains, is a must-have characteristic of general artificial intelligence. DG based on single-source-domain training data is more challenging due to the lack of comparable information to help identify domain-invariant features. In this paper, we argue that domain-invariant features may already be contained in the single source domain's training samples; the task is then to find proper ways to extract them. We assume that the domain-invariant features are closely related to frequency, and propose a new method that learns through multiple frequency domains. The key idea is to divide the frequency domain of each original image into multiple subdomains and to learn features in each subdomain with a designed two-branch network. In this way, the model is forced to learn features from more samples of a specifically limited spectrum, which increases the possibility of obtaining domain-invariant features that might previously have been concealed by easily learned ones. Extensive experimental investigation reveals that 1) frequency decomposition can help the model learn features that are difficult to learn, and 2) the proposed method outperforms state-of-the-art single-source domain generalization methods.
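The band-splitting step behind the abstract above can be sketched in one dimension: split a signal's spectrum into a low-frequency and a high-frequency subdomain and reconstruct one training view from each. The paper operates on 2-D image spectra with a two-branch network; this toy version only illustrates the decomposition itself:

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform, enough for a demo."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def band_views(x, cutoff):
    """Split x into a low-frequency view and a high-frequency view that
    sum back to the original signal."""
    X = dft(x)
    N = len(X)
    low = [X[k] if min(k, N - k) <= cutoff else 0j for k in range(N)]
    high = [X[k] - low[k] for k in range(N)]
    return idft(low), idft(high)
```

With `cutoff=0` the low-pass view keeps only the DC component (the mean), and the high-pass view holds everything else; the two views always sum to the original samples.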
1510.01098
Boshra Rajaei
Boshra Rajaei, Eric W. Tramel, Sylvain Gigan, Florent Krzakala, Laurent Daudet
Intensity-only optical compressive imaging using a multiply scattering material and a double phase retrieval approach
null
Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) pages: 4054 - 4058
10.1109/ICASSP.2016.7472439
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the problem of compressive imaging is addressed using natural randomization by means of a multiply scattering medium. To utilize the medium in this way, its corresponding transmission matrix must be estimated. To calibrate the imager, we use a digital micromirror device (DMD) as a simple, cheap, and high-resolution binary intensity modulator. We propose a phase retrieval algorithm which is well adapted to intensity-only measurements on the camera and to the input binary intensity patterns, both to estimate the complex transmission matrix and to reconstruct images. We demonstrate promising experimental results for the proposed algorithm using the MNIST dataset of handwritten digits as example images.
[ { "created": "Mon, 5 Oct 2015 11:07:30 GMT", "version": "v1" }, { "created": "Mon, 25 Jan 2016 14:35:44 GMT", "version": "v2" } ]
2016-08-26
[ [ "Rajaei", "Boshra", "" ], [ "Tramel", "Eric W.", "" ], [ "Gigan", "Sylvain", "" ], [ "Krzakala", "Florent", "" ], [ "Daudet", "Laurent", "" ] ]
In this paper, the problem of compressive imaging is addressed using natural randomization by means of a multiply scattering medium. To utilize the medium in this way, its corresponding transmission matrix must be estimated. To calibrate the imager, we use a digital micromirror device (DMD) as a simple, cheap, and high-resolution binary intensity modulator. We propose a phase retrieval algorithm which is well adapted to intensity-only measurements on the camera and to the input binary intensity patterns, both to estimate the complex transmission matrix and to reconstruct images. We demonstrate promising experimental results for the proposed algorithm using the MNIST dataset of handwritten digits as example images.
2407.05419
Nafisa Hussain
Nafisa Hussain
Multimodal Language Models for Domain-Specific Procedural Video Summarization
6 pages, 3 figures
null
null
null
cs.CV cs.IR
http://creativecommons.org/publicdomain/zero/1.0/
Videos serve as a powerful medium to convey ideas, tell stories, and provide detailed instructions, especially through long-format tutorials. Such tutorials are valuable for learning new skills at one's own pace, yet they can be overwhelming due to their length and dense content. Viewers often seek specific information, like precise measurements or step-by-step execution details, making it essential to extract and summarize key segments efficiently. An intelligent, time-sensitive video assistant capable of summarizing and detecting highlights in long videos is highly sought after. Recent advancements in Multimodal Large Language Models offer promising solutions to develop such an assistant. Our research explores the use of multimodal models to enhance video summarization and step-by-step instruction generation within specific domains. These models need to understand temporal events and relationships among actions across video frames. Our approach focuses on fine-tuning TimeChat to improve its performance in specific domains: cooking and medical procedures. By training the model on domain-specific datasets like Tasty for cooking and MedVidQA for medical procedures, we aim to enhance its ability to generate concise, accurate summaries of instructional videos. We curate and restructure these datasets to create high-quality video-centric instruction data. Our findings indicate that when fine-tuned on domain-specific procedural data, TimeChat can significantly improve the extraction and summarization of key instructional steps in long-format videos. This research demonstrates the potential of specialized multimodal models to assist with practical tasks by providing personalized, step-by-step guidance tailored to the unique aspects of each domain.
[ { "created": "Sun, 7 Jul 2024 15:50:46 GMT", "version": "v1" } ]
2024-07-09
[ [ "Hussain", "Nafisa", "" ] ]
Videos serve as a powerful medium to convey ideas, tell stories, and provide detailed instructions, especially through long-format tutorials. Such tutorials are valuable for learning new skills at one's own pace, yet they can be overwhelming due to their length and dense content. Viewers often seek specific information, like precise measurements or step-by-step execution details, making it essential to extract and summarize key segments efficiently. An intelligent, time-sensitive video assistant capable of summarizing and detecting highlights in long videos is highly sought after. Recent advancements in Multimodal Large Language Models offer promising solutions to develop such an assistant. Our research explores the use of multimodal models to enhance video summarization and step-by-step instruction generation within specific domains. These models need to understand temporal events and relationships among actions across video frames. Our approach focuses on fine-tuning TimeChat to improve its performance in specific domains: cooking and medical procedures. By training the model on domain-specific datasets like Tasty for cooking and MedVidQA for medical procedures, we aim to enhance its ability to generate concise, accurate summaries of instructional videos. We curate and restructure these datasets to create high-quality video-centric instruction data. Our findings indicate that when fine-tuned on domain-specific procedural data, TimeChat can significantly improve the extraction and summarization of key instructional steps in long-format videos. This research demonstrates the potential of specialized multimodal models to assist with practical tasks by providing personalized, step-by-step guidance tailored to the unique aspects of each domain.
2310.16866
Benjamin Chung
Benjamin Chung
A Type System for Julia
PhD thesis
null
null
null
cs.PL
http://creativecommons.org/licenses/by-sa/4.0/
The Julia programming language was designed to fill the needs of scientific computing by combining the benefits of productivity and performance languages. Julia allows users to write untyped scripts easily without needing to worry about many implementation details, as do other productivity languages. If one just wants to get the work done -- regardless of how efficient or general the program might be -- such a paradigm is ideal. Simultaneously, Julia also allows library developers to write efficient generic code that can run as fast as implementations in performance languages such as C or Fortran. This combination of user-facing ease and library developer-facing performance has proven quite attractive, and the language has increasing adoption. With adoption come combinatorial challenges to correctness. Multiple dispatch -- Julia's key mechanism for abstraction -- allows many libraries to compose "out of the box." However, it creates bugs where one library's requirements do not match what another provides. Typing could address this at the cost of Julia's flexibility for scripting. I developed a "best of both worlds" solution: gradual typing for Julia. My system forms the core of a gradual type system for Julia, laying the foundation for improving the correctness of Julia programs while not getting in the way of script writers. My framework allows methods to be individually typed or untyped, allowing users to write untyped code that interacts with typed library code and vice versa. Typed methods then get a soundness guarantee that is robust in the presence of both dynamically typed code and dynamically generated definitions. I additionally describe protocols, a mechanism for typing abstraction over concrete implementation that accommodates one common pattern in Julia libraries, and describe its implementation in my typed Julia framework.
[ { "created": "Wed, 25 Oct 2023 10:55:21 GMT", "version": "v1" } ]
2023-10-27
[ [ "Chung", "Benjamin", "" ] ]
The Julia programming language was designed to fill the needs of scientific computing by combining the benefits of productivity and performance languages. Julia allows users to write untyped scripts easily without needing to worry about many implementation details, as do other productivity languages. If one just wants to get the work done -- regardless of how efficient or general the program might be -- such a paradigm is ideal. Simultaneously, Julia also allows library developers to write efficient generic code that can run as fast as implementations in performance languages such as C or Fortran. This combination of user-facing ease and library developer-facing performance has proven quite attractive, and the language has increasing adoption. With adoption come combinatorial challenges to correctness. Multiple dispatch -- Julia's key mechanism for abstraction -- allows many libraries to compose "out of the box." However, it creates bugs where one library's requirements do not match what another provides. Typing could address this at the cost of Julia's flexibility for scripting. I developed a "best of both worlds" solution: gradual typing for Julia. My system forms the core of a gradual type system for Julia, laying the foundation for improving the correctness of Julia programs while not getting in the way of script writers. My framework allows methods to be individually typed or untyped, allowing users to write untyped code that interacts with typed library code and vice versa. Typed methods then get a soundness guarantee that is robust in the presence of both dynamically typed code and dynamically generated definitions. I additionally describe protocols, a mechanism for typing abstraction over concrete implementation that accommodates one common pattern in Julia libraries, and describe its implementation in my typed Julia framework.
2307.00936
Xiaoshuang Liang
Yunyou Huang, Xianglong Guan, Xiangjiang Lu, Xiaoshuang Liang, Xiuxia Miao, Jiyue Xie, Wenjing Liu, Li Ma, Suqin Tang, Zhifei Zhang, and Jianfeng Zhan
OpenAPMax: Abnormal Patterns-based Model for Real-World Alzheimer's Disease Diagnosis
Alzheimer's Disease, Abnormal Patterns, Open-set Recognition, OpenAPMax
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Alzheimer's disease (AD) cannot be reversed, but early diagnosis will significantly benefit patients' medical treatment and care. Recent works on AD diagnosis make the primary assumption that all categories are known a priori -- a closed-set classification problem, which contrasts with the open-set recognition problem. This assumption hinders the application of such models in natural clinical settings. Although many open-set recognition technologies have been proposed in other fields, they are challenging to use for AD diagnosis directly, since 1) AD is a degenerative disease of the nervous system with similar symptoms at each stage, making it difficult to distinguish from its pre-state, and 2) diversified strategies for AD diagnosis are challenging to model uniformly. In this work, inspired by the concerns of clinicians during diagnosis, we propose an open-set recognition model, OpenAPMax, based on abnormal patterns, to address AD diagnosis in real-world settings. OpenAPMax first obtains the abnormal pattern of each patient relative to each known category through statistics or a literature search, clusters the patients' abnormal patterns, and finally uses extreme value theory (EVT) to model the distance between each patient's abnormal pattern and the center of their category, modifying the classification probability accordingly. We evaluate the performance of the proposed method against recent open-set recognition methods and obtain state-of-the-art results.
[ { "created": "Mon, 3 Jul 2023 11:21:09 GMT", "version": "v1" } ]
2023-07-04
[ [ "Huang", "Yunyou", "" ], [ "Guan", "Xianglong", "" ], [ "Lu", "Xiangjiang", "" ], [ "Liang", "Xiaoshuang", "" ], [ "Miao", "Xiuxia", "" ], [ "Xie", "Jiyue", "" ], [ "Liu", "Wenjing", "" ], [ "Ma", "Li", "" ], [ "Tang", "Suqin", "" ], [ "Zhang", "Zhifei", "" ], [ "Zhan", "Jianfeng", "" ] ]
Alzheimer's disease (AD) cannot be reversed, but early diagnosis will significantly benefit patients' medical treatment and care. Recent works on AD diagnosis make the primary assumption that all categories are known a priori -- a closed-set classification problem, which contrasts with the open-set recognition problem. This assumption hinders the application of such models in natural clinical settings. Although many open-set recognition technologies have been proposed in other fields, they are challenging to use for AD diagnosis directly, since 1) AD is a degenerative disease of the nervous system with similar symptoms at each stage, making it difficult to distinguish from its pre-state, and 2) diversified strategies for AD diagnosis are challenging to model uniformly. In this work, inspired by the concerns of clinicians during diagnosis, we propose an open-set recognition model, OpenAPMax, based on abnormal patterns, to address AD diagnosis in real-world settings. OpenAPMax first obtains the abnormal pattern of each patient relative to each known category through statistics or a literature search, clusters the patients' abnormal patterns, and finally uses extreme value theory (EVT) to model the distance between each patient's abnormal pattern and the center of their category, modifying the classification probability accordingly. We evaluate the performance of the proposed method against recent open-set recognition methods and obtain state-of-the-art results.
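The distance-and-recalibration step described above can be sketched as follows. A real OpenAPMax fits an EVT tail model (e.g. a Weibull) to the distances; this hedged toy version substitutes an empirical tail quantile, and all names and numbers are illustrative:

```python
def centre(patterns):
    """Component-wise mean of equal-length feature vectors (class centre)."""
    return [sum(col) / len(patterns) for col in zip(*patterns)]

def dist(a, b):
    """Euclidean distance between two pattern vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def tail_weight(d, train_dists, q=0.9):
    """~1 inside the bulk of training distances, shrinking beyond the
    empirical q-quantile (a stand-in for a fitted EVT tail model)."""
    cut = sorted(train_dists)[int(q * (len(train_dists) - 1))]
    return 1.0 if d <= 0 else min(1.0, cut / d)
```

A class probability p would then be recalibrated to p * w, with the remaining mass p * (1 - w) routed to an "unknown" class, which is how far-from-centre patients get flagged as outside the known categories.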
2012.14642
Le Qi
Le Qi, Yu Zhang, Qingyu Yin, Ting Liu
Multiple Structural Priors Guided Self Attention Network for Language Understanding
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Self-attention networks (SANs) have been widely utilized in recent NLP studies. Unlike CNNs or RNNs, standard SANs are usually position-independent, and thus are incapable of capturing the structural priors between sequences of words. Existing studies commonly apply a single mask strategy to SANs for incorporating structural priors, failing to model the more abundant structural information of texts. In this paper, we aim at introducing multiple types of structural priors into SAN models, proposing the Multiple Structural Priors Guided Self Attention Network (MS-SAN), which transforms different structural priors into different attention heads by using a novel multi-mask based multi-head attention mechanism. In particular, we integrate two categories of structural priors: the sequential order and the relative position of words. To capture the latent hierarchical structure of the texts, we extract this information not only from the word contexts but also from the dependency syntax trees. Experimental results on two tasks show that MS-SAN achieves significant improvements over other strong baselines.
[ { "created": "Tue, 29 Dec 2020 07:30:03 GMT", "version": "v1" } ]
2021-01-01
[ [ "Qi", "Le", "" ], [ "Zhang", "Yu", "" ], [ "Yin", "Qingyu", "" ], [ "Liu", "Ting", "" ] ]
Self-attention networks (SANs) have been widely utilized in recent NLP studies. Unlike CNNs or RNNs, standard SANs are usually position-independent, and thus are incapable of capturing the structural priors between sequences of words. Existing studies commonly apply a single mask strategy to SANs for incorporating structural priors, failing to model the more abundant structural information of texts. In this paper, we aim at introducing multiple types of structural priors into SAN models, proposing the Multiple Structural Priors Guided Self Attention Network (MS-SAN), which transforms different structural priors into different attention heads by using a novel multi-mask based multi-head attention mechanism. In particular, we integrate two categories of structural priors: the sequential order and the relative position of words. To capture the latent hierarchical structure of the texts, we extract this information not only from the word contexts but also from the dependency syntax trees. Experimental results on two tasks show that MS-SAN achieves significant improvements over other strong baselines.
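The multi-mask idea behind MS-SAN can be sketched as follows: each attention head applies a different structural mask (here, a causal "sequential order" mask and a local-window "relative position" mask) before the softmax. The masks and sizes are illustrative, not the paper's exact design:

```python
import math

def masked_attention(scores, mask):
    """scores: n x n attention logits; mask[i][j] = 1 keeps, 0 blocks.
    Returns row-normalized attention weights over the unmasked positions."""
    out = []
    for i, row in enumerate(scores):
        logits = [s if mask[i][j] else float("-inf") for j, s in enumerate(row)]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out

n = 4
scores = [[0.0] * n for _ in range(n)]                                # uniform logits
causal = [[1 if j <= i else 0 for j in range(n)] for i in range(n)]   # sequential-order prior
window = [[1 if abs(i - j) <= 1 else 0 for j in range(n)] for i in range(n)]  # relative-position prior

head1 = masked_attention(scores, causal)
head2 = masked_attention(scores, window)
```

With uniform logits, the causal head spreads each row's weight over the preceding positions only, while the window head attends to a local neighbourhood; concatenating such heads is the multi-mask multi-head mechanism in miniature.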
2202.09221
Stefan Scherzinger
Stefan Scherzinger, Pascal Becker, Arne Roennau and R\"udiger Dillmann
Motion Macro Programming on Assistive Robotic Manipulators: Three Skill Types for Everyday Tasks
8 pages, 10 figures, accepted to the IEEE 20th International Conference on Ubiquitous Robots (UR 2023), Honolulu, USA
null
null
null
cs.RO
http://creativecommons.org/licenses/by-sa/4.0/
Assistive robotic manipulators are becoming increasingly important for people with disabilities. Teleoperating the manipulator in mundane tasks is part of their daily lives. Instead of steering the robot through all actions, applying self-recorded motion macros could greatly facilitate repetitive tasks. Dynamic Movement Primitives (DMPs) are a powerful method for skill learning via teleoperation. For this use case, however, they need simple heuristics for specifying where a skill starts and stops and how it is parameterized, usable without a background in computer science and without academic sensor setups for autonomous perception. To achieve this goal, this paper provides the concept of local, global, and hybrid skills that form a modular basis for composing single-handed tasks of daily living. These skills are specified implicitly and can easily be programmed by users themselves, requiring only their basic robotic manipulator. The paper contributes all details for robot-agnostic implementations. Experiments validate the developed methods on exemplary tasks, such as scratching an itchy spot, sorting objects on a desk, and feeding a piggy bank with coins. The paper is accompanied by an open-source implementation at https://github.com/fzi-forschungszentrum-informatik/ArNe
[ { "created": "Fri, 18 Feb 2022 14:41:20 GMT", "version": "v1" }, { "created": "Sun, 16 Apr 2023 11:47:28 GMT", "version": "v2" }, { "created": "Fri, 12 May 2023 14:14:09 GMT", "version": "v3" } ]
2023-05-15
[ [ "Scherzinger", "Stefan", "" ], [ "Becker", "Pascal", "" ], [ "Roennau", "Arne", "" ], [ "Dillmann", "Rüdiger", "" ] ]
Assistive robotic manipulators are becoming increasingly important for people with disabilities. Teleoperating the manipulator in mundane tasks is part of their daily lives. Instead of steering the robot through all actions, applying self-recorded motion macros could greatly facilitate repetitive tasks. Dynamic Movement Primitives (DMP) are a powerful method for skill learning via teleoperation. For this use case, however, they need simple heuristics to specify where to start, stop, and parameterize a skill without a background in computer science and academic sensor setups for autonomous perception. To achieve this goal, this paper provides the concept of local, global, and hybrid skills that form a modular basis for composing single-handed tasks of daily living. These skills are specified implicitly and can easily be programmed by users themselves, requiring only their basic robotic manipulator. The paper contributes all details for robot-agnostic implementations. Experiments validate the developed methods for exemplary tasks, such as scratching an itchy spot, sorting objects on a desk, and feeding a piggy bank with coins. The paper is accompanied by an open-source implementation at https://github.com/fzi-forschungszentrum-informatik/ArNe
1809.00832
Eunji Jeong
Eunji Jeong, Joo Seong Jeong, Soojeong Kim, Gyeong-In Yu, Byung-Gon Chun
Improving the Expressiveness of Deep Learning Frameworks with Recursion
Appeared in EuroSys 2018. 13 pages, 11 figures
EuroSys 2018: Thirteenth EuroSys Conference, April 23-26, 2018, Porto, Portugal
10.1145/3190508.3190530
null
cs.LG cs.AI cs.CL stat.ML
http://creativecommons.org/licenses/by/4.0/
Recursive neural networks have widely been used by researchers to handle applications with recursively or hierarchically structured data. However, embedded control flow deep learning frameworks such as TensorFlow, Theano, Caffe2, and MXNet fail to efficiently represent and execute such neural networks, due to lack of support for recursion. In this paper, we add recursion to the programming model of existing frameworks by complementing their design with recursive execution of dataflow graphs as well as additional APIs for recursive definitions. Unlike iterative implementations, which can only understand the topological index of each node in recursive data structures, our recursive implementation is able to exploit the recursive relationships between nodes for efficient execution based on parallel computation. We present an implementation on TensorFlow and evaluation results with various recursive neural network models, showing that our recursive implementation not only conveys the recursive nature of recursive neural networks better than other implementations, but also uses given resources more effectively to reduce training and inference time.
[ { "created": "Tue, 4 Sep 2018 08:31:21 GMT", "version": "v1" } ]
2018-09-05
[ [ "Jeong", "Eunji", "" ], [ "Jeong", "Joo Seong", "" ], [ "Kim", "Soojeong", "" ], [ "Yu", "Gyeong-In", "" ], [ "Chun", "Byung-Gon", "" ] ]
Recursive neural networks have widely been used by researchers to handle applications with recursively or hierarchically structured data. However, embedded control flow deep learning frameworks such as TensorFlow, Theano, Caffe2, and MXNet fail to efficiently represent and execute such neural networks, due to lack of support for recursion. In this paper, we add recursion to the programming model of existing frameworks by complementing their design with recursive execution of dataflow graphs as well as additional APIs for recursive definitions. Unlike iterative implementations, which can only understand the topological index of each node in recursive data structures, our recursive implementation is able to exploit the recursive relationships between nodes for efficient execution based on parallel computation. We present an implementation on TensorFlow and evaluation results with various recursive neural network models, showing that our recursive implementation not only conveys the recursive nature of recursive neural networks better than other implementations, but also uses given resources more effectively to reduce training and inference time.
2102.08818
Priyanshu Kumar
Aadarsh Singh and Priyanshu Kumar
SciDr at SDU-2020: IDEAS -- Identifying and Disambiguating Everyday Acronyms for Scientific Domain
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present our systems submitted for the shared tasks of Acronym Identification (AI) and Acronym Disambiguation (AD) held under Workshop on SDU. We mainly experiment with BERT and SciBERT. In addition, we assess the effectiveness of "BIOless" tagging and blending along with the prowess of ensembling in AI. For AD, we formulate the problem as a span prediction task, experiment with different training techniques and also leverage the use of external data. Our systems rank 11th and 3rd in AI and AD tasks respectively.
[ { "created": "Wed, 17 Feb 2021 15:24:50 GMT", "version": "v1" }, { "created": "Mon, 8 Mar 2021 13:34:34 GMT", "version": "v2" } ]
2021-03-09
[ [ "Singh", "Aadarsh", "" ], [ "Kumar", "Priyanshu", "" ] ]
We present our systems submitted for the shared tasks of Acronym Identification (AI) and Acronym Disambiguation (AD) held under Workshop on SDU. We mainly experiment with BERT and SciBERT. In addition, we assess the effectiveness of "BIOless" tagging and blending along with the prowess of ensembling in AI. For AD, we formulate the problem as a span prediction task, experiment with different training techniques and also leverage the use of external data. Our systems rank 11th and 3rd in AI and AD tasks respectively.
2405.15570
Jakob Struye
Jakob Struye, Filip Lemic, Jeroen Famaey
Multi-Gigabit Interactive Extended Reality over Millimeter-Wave: An End-to-End System Approach
Accepted at IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC) 2024
null
null
null
cs.NI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Achieving high-quality wireless interactive Extended Reality (XR) will require multi-gigabit throughput at extremely low latency. The Millimeter-Wave (mmWave) frequency bands, between 24 and 300 GHz, can achieve such extreme performance. However, maintaining a consistently high Quality of Experience with highly mobile users is challenging, as mmWave communications are inherently directional. In this work, we present and evaluate an end-to-end approach to such a mmWave-based mobile XR system. We perform a highly realistic simulation of the system, incorporating accurate XR data traffic, detailed mmWave propagation models and actual user motion. We evaluate the impact of the beamforming strategy and frequency on the overall performance. In addition, we provide the first system-level evaluation of the CoVRage algorithm, a proactive and spatially aware user-side beamforming approach designed specifically for highly mobile XR environments.
[ { "created": "Fri, 24 May 2024 14:03:16 GMT", "version": "v1" } ]
2024-05-27
[ [ "Struye", "Jakob", "" ], [ "Lemic", "Filip", "" ], [ "Famaey", "Jeroen", "" ] ]
Achieving high-quality wireless interactive Extended Reality (XR) will require multi-gigabit throughput at extremely low latency. The Millimeter-Wave (mmWave) frequency bands, between 24 and 300 GHz, can achieve such extreme performance. However, maintaining a consistently high Quality of Experience with highly mobile users is challenging, as mmWave communications are inherently directional. In this work, we present and evaluate an end-to-end approach to such a mmWave-based mobile XR system. We perform a highly realistic simulation of the system, incorporating accurate XR data traffic, detailed mmWave propagation models and actual user motion. We evaluate the impact of the beamforming strategy and frequency on the overall performance. In addition, we provide the first system-level evaluation of the CoVRage algorithm, a proactive and spatially aware user-side beamforming approach designed specifically for highly mobile XR environments.
2210.16352
Thomas Plagemann
Thomas Plagemann (1), Vera Goebel (1), Matthias Hollick (2), Boris Koldehofe (3) ((1) University of Oslo, (2) Technical University of Darmstadt, (3) University of Groningen)
Towards Privacy Engineering for Real-Time Analytics in the Human-Centered Internet of Things
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Big data applications offer smart solutions to many urgent societal challenges, such as health care, traffic coordination, energy management, etc. The basic premise for these applications is "the more data the better". The focus often lies on sensing infrastructures in the public realm that produce an ever-increasing amount of data. Yet, any smartphone and smartwatch owner could be a continuous source of valuable data and contribute to many useful big data applications. However, such data can reveal a lot of sensitive information, like the current location or the heart rate of the owner of such devices. Protection of personal data is important in our society and is, for example, manifested in the EU General Data Protection Regulation (GDPR). However, privacy protection and useful big data applications are hard to bring together, particularly in the human-centered IoT. Implementing proper privacy protection requires skills that are typically not in the focus of data analysts and big data developers. Thus, many individuals tend to share none of their data if in doubt whether it will be properly protected. Excellent privacy solutions exist between these "all or nothing" extremes. For example, instead of continuously publishing the current location of individuals, one might aggregate this data and only publish how many individuals are in a certain area of the city. Thus, personal data is not revealed, while useful information for applications like traffic coordination is retained. The goal of the Parrot project is to provide tools for real-time data analysis applications that leverage this "middle ground". Data analysts should only be required to specify their data needs, and end-users can select the privacy requirements for their data as well as the applications and end-users they want to share their data with.
[ { "created": "Fri, 28 Oct 2022 18:39:51 GMT", "version": "v1" } ]
2022-11-01
[ [ "Plagemann", "Thomas", "" ], [ "Goebel", "Vera", "" ], [ "Hollick", "Matthias", "" ], [ "Koldehofe", "Boris", "" ] ]
Big data applications offer smart solutions to many urgent societal challenges, such as health care, traffic coordination, energy management, etc. The basic premise for these applications is "the more data the better". The focus often lies on sensing infrastructures in the public realm that produce an ever-increasing amount of data. Yet, any smartphone and smartwatch owner could be a continuous source of valuable data and contribute to many useful big data applications. However, such data can reveal a lot of sensitive information, like the current location or the heart rate of the owner of such devices. Protection of personal data is important in our society and is, for example, manifested in the EU General Data Protection Regulation (GDPR). However, privacy protection and useful big data applications are hard to bring together, particularly in the human-centered IoT. Implementing proper privacy protection requires skills that are typically not in the focus of data analysts and big data developers. Thus, many individuals tend to share none of their data if in doubt whether it will be properly protected. Excellent privacy solutions exist between these "all or nothing" extremes. For example, instead of continuously publishing the current location of individuals, one might aggregate this data and only publish how many individuals are in a certain area of the city. Thus, personal data is not revealed, while useful information for applications like traffic coordination is retained. The goal of the Parrot project is to provide tools for real-time data analysis applications that leverage this "middle ground". Data analysts should only be required to specify their data needs, and end-users can select the privacy requirements for their data as well as the applications and end-users they want to share their data with.
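The location-aggregation example in the abstract above can be sketched in a few lines: publish per-grid-cell counts instead of raw positions. The grid size, function names, and coordinates are illustrative assumptions, not part of the Parrot project.

```python
from collections import Counter

def area_counts(locations, cell=0.01):
    """Aggregate GPS fixes into coarse grid-cell counts.

    Publishing only how many devices are in each cell retains utility for
    traffic coordination without revealing any individual's position.
    """
    def cell_id(lat, lon):
        return (round(lat / cell), round(lon / cell))
    return Counter(cell_id(lat, lon) for lat, lon in locations)

# Two fixes in the same city block, one far away (illustrative coordinates).
fixes = [(59.913, 10.752), (59.914, 10.753), (48.137, 11.575)]
counts = area_counts(fixes)
```

The published `counts` reveal crowd density per cell but no individual trajectory.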
2002.07287
Andrej Sajenko
Frank Kammer, Johannes Meintrup, and Andrej Sajenko
Sorting and Ranking of Self-Delimiting Numbers with Applications to Outerplanar Graph Isomorphism
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Assume that an $N$-bit sequence $S$ of $k$ numbers encoded as Elias gamma codes is given as input. We present space-efficient algorithms for sorting, dense ranking and competitive ranking on $S$ in the word RAM model with word size $\Omega(\log N)$ bits. Our algorithms run in $O(k + \frac{N}{\log N})$ time and use $O(N)$ bits. The sorting algorithm returns the given numbers in sorted order, stored within a bit-vector of $N$ bits, whereas our ranking algorithms construct data structures that allow us subsequently to return the dense/competitive rank of each number $x$ in $S$ in constant time. For numbers $x \in \mathbb{N}$ with $x > N$ we require the position $p_x$ of $x$ as the input for our dense-/competitive-rank data structure. As an application of our algorithms above we give an algorithm for tree isomorphism, which runs in $O(n)$ time and uses $O(n)$ bits on $n$-node trees. Finally, we generalize our result for tree isomorphism to forests and outerplanar graphs, while maintaining a space-usage of $O(n)$ bits. The previous best linear-time algorithms for trees, forests and outerplanar graph isomorphism all use $\Theta(n \log n)$ bits.
[ { "created": "Mon, 17 Feb 2020 22:39:00 GMT", "version": "v1" }, { "created": "Mon, 15 Jun 2020 10:17:08 GMT", "version": "v2" }, { "created": "Thu, 2 May 2024 15:22:21 GMT", "version": "v3" } ]
2024-05-03
[ [ "Kammer", "Frank", "" ], [ "Meintrup", "Johannes", "" ], [ "Sajenko", "Andrej", "" ] ]
Assume that an $N$-bit sequence $S$ of $k$ numbers encoded as Elias gamma codes is given as input. We present space-efficient algorithms for sorting, dense ranking and competitive ranking on $S$ in the word RAM model with word size $\Omega(\log N)$ bits. Our algorithms run in $O(k + \frac{N}{\log N})$ time and use $O(N)$ bits. The sorting algorithm returns the given numbers in sorted order, stored within a bit-vector of $N$ bits, whereas our ranking algorithms construct data structures that allow us subsequently to return the dense/competitive rank of each number $x$ in $S$ in constant time. For numbers $x \in \mathbb{N}$ with $x > N$ we require the position $p_x$ of $x$ as the input for our dense-/competitive-rank data structure. As an application of our algorithms above we give an algorithm for tree isomorphism, which runs in $O(n)$ time and uses $O(n)$ bits on $n$-node trees. Finally, we generalize our result for tree isomorphism to forests and outerplanar graphs, while maintaining a space-usage of $O(n)$ bits. The previous best linear-time algorithms for trees, forests and outerplanar graph isomorphism all use $\Theta(n \log n)$ bits.
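The paper's space-efficient data structures are beyond a short sketch, but the distinction between dense and competitive rank that they support can be illustrated directly (a naive Python sketch, not the paper's $O(N)$-bit word-RAM construction):

```python
def dense_and_competitive_ranks(nums):
    """Return dense and competitive ranks for each number.

    Dense rank: tied values share a rank and the next distinct value gets
    the next consecutive rank. Competitive rank: tied values share a rank
    and the following value's rank skips over the tied entries.
    """
    sorted_nums = sorted(nums)
    dense, competitive = {}, {}
    d = 0
    for i, x in enumerate(sorted_nums):
        if x not in dense:
            d += 1
            dense[x] = d            # consecutive ranks over distinct values
            competitive[x] = i + 1  # rank accounts for preceding ties
    return [dense[x] for x in nums], [competitive[x] for x in nums]

d, c = dense_and_competitive_ranks([30, 10, 20, 20])
# d == [3, 1, 2, 2]; c == [4, 1, 2, 2]
```

The paper's contribution is answering these queries in constant time from an $O(N)$-bit structure built over Elias-gamma-coded input, rather than this naive dictionary-based version.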
2308.13785
Minheng Ni
Minheng Ni, Chenfei Wu, Xiaodong Wang, Shengming Yin, Lijuan Wang, Zicheng Liu, Nan Duan
ORES: Open-vocabulary Responsible Visual Synthesis
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Avoiding synthesizing specific visual concepts is an essential challenge in responsible visual synthesis. However, the visual concepts that need to be avoided for responsible visual synthesis tend to be diverse, depending on the region, context, and usage scenarios. In this work, we formalize a new task, Open-vocabulary Responsible Visual Synthesis (ORES), where the synthesis model is able to avoid forbidden visual concepts while allowing users to input any desired content. To address this problem, we present a Two-stage Intervention (TIN) framework. By introducing 1) rewriting with learnable instruction through a large-scale language model (LLM) and 2) synthesizing with prompt intervention on a diffusion synthesis model, it can effectively synthesize images avoiding forbidden concepts while following the user's query as much as possible. To evaluate on ORES, we provide a publicly available dataset, baseline models, and benchmark. Experimental results demonstrate the effectiveness of our method in reducing the risks of image generation. Our work highlights the potential of LLMs in responsible visual synthesis. Our code and dataset are publicly available.
[ { "created": "Sat, 26 Aug 2023 06:47:34 GMT", "version": "v1" } ]
2023-08-29
[ [ "Ni", "Minheng", "" ], [ "Wu", "Chenfei", "" ], [ "Wang", "Xiaodong", "" ], [ "Yin", "Shengming", "" ], [ "Wang", "Lijuan", "" ], [ "Liu", "Zicheng", "" ], [ "Duan", "Nan", "" ] ]
Avoiding synthesizing specific visual concepts is an essential challenge in responsible visual synthesis. However, the visual concepts that need to be avoided for responsible visual synthesis tend to be diverse, depending on the region, context, and usage scenarios. In this work, we formalize a new task, Open-vocabulary Responsible Visual Synthesis (ORES), where the synthesis model is able to avoid forbidden visual concepts while allowing users to input any desired content. To address this problem, we present a Two-stage Intervention (TIN) framework. By introducing 1) rewriting with learnable instruction through a large-scale language model (LLM) and 2) synthesizing with prompt intervention on a diffusion synthesis model, it can effectively synthesize images avoiding forbidden concepts while following the user's query as much as possible. To evaluate on ORES, we provide a publicly available dataset, baseline models, and benchmark. Experimental results demonstrate the effectiveness of our method in reducing the risks of image generation. Our work highlights the potential of LLMs in responsible visual synthesis. Our code and dataset are publicly available.
2001.05288
Joseph Tassone
Joseph Tassone, Salimur Choudhury
A Comprehensive Survey on the Ambulance Routing and Location Problems
30 pages,7 figures,16 tables
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this research, an extensive literature review was performed on recent developments in the ambulance routing problem (ARP) and the ambulance location problem (ALP). Both are respective modifications of the vehicle routing problem (VRP) and the maximum covering problem (MCP), with modifications to objective functions and constraints. Although alike, a key distinction is that emergency service systems (EMS) are considered critical, and their optimization has become all the more important as a result. Like their parent problems, these are NP-hard and must resort to approximations if the search space is too large. Much of the current work has simply been on modifying existing systems through simulation to achieve a more acceptable result. There have been attempts to use meta-heuristics, though practical experimentation is lacking compared to the VRP or MCP. The contributions of this work are a comprehensive survey of current methodologies, summarized models, and suggested future improvements.
[ { "created": "Fri, 10 Jan 2020 05:33:11 GMT", "version": "v1" } ]
2020-01-16
[ [ "Tassone", "Joseph", "" ], [ "Choudhury", "Salimur", "" ] ]
In this research, an extensive literature review was performed on recent developments in the ambulance routing problem (ARP) and the ambulance location problem (ALP). Both are respective modifications of the vehicle routing problem (VRP) and the maximum covering problem (MCP), with modifications to objective functions and constraints. Although alike, a key distinction is that emergency service systems (EMS) are considered critical, and their optimization has become all the more important as a result. Like their parent problems, these are NP-hard and must resort to approximations if the search space is too large. Much of the current work has simply been on modifying existing systems through simulation to achieve a more acceptable result. There have been attempts to use meta-heuristics, though practical experimentation is lacking compared to the VRP or MCP. The contributions of this work are a comprehensive survey of current methodologies, summarized models, and suggested future improvements.
2312.16552
Jakub Mosinski
Jakub Mosi\'nski, Piotr Bili\'nski, Thomas Merritt, Abdelhamid Ezzerg, Daniel Korzekwa
AE-Flow: AutoEncoder Normalizing Flow
ICASSP 2023
null
null
null
cs.SD cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, normalizing flows have been gaining traction in text-to-speech (TTS) and voice conversion (VC) due to their state-of-the-art (SOTA) performance. Normalizing flows are unsupervised generative models. In this paper, we introduce supervision to the training process of normalizing flows, without the need for parallel data. We call this training paradigm AutoEncoder Normalizing Flow (AE-Flow). It adds a reconstruction loss forcing the model to use information from the conditioning to reconstruct an audio sample. Our goal is to understand the impact of each component and find the right combination of the negative log-likelihood (NLL) and the reconstruction loss in training normalizing flows with coupling blocks. For that reason, we compare flow-based mapping models trained with: (i) the NLL loss, (ii) the NLL and reconstruction losses, as well as (iii) the reconstruction loss only. Additionally, we compare our model with a SOTA VC baseline. The models are evaluated in terms of naturalness, speaker similarity, and intelligibility in many-to-many and many-to-any VC settings. The results show that the proposed training paradigm systematically improves speaker similarity and naturalness when compared to regular training methods of normalizing flows. Furthermore, we show that our method improves speaker similarity and intelligibility over the state-of-the-art.
[ { "created": "Wed, 27 Dec 2023 12:29:21 GMT", "version": "v1" } ]
2023-12-29
[ [ "Mosiński", "Jakub", "" ], [ "Biliński", "Piotr", "" ], [ "Merritt", "Thomas", "" ], [ "Ezzerg", "Abdelhamid", "" ], [ "Korzekwa", "Daniel", "" ] ]
Recently, normalizing flows have been gaining traction in text-to-speech (TTS) and voice conversion (VC) due to their state-of-the-art (SOTA) performance. Normalizing flows are unsupervised generative models. In this paper, we introduce supervision to the training process of normalizing flows, without the need for parallel data. We call this training paradigm AutoEncoder Normalizing Flow (AE-Flow). It adds a reconstruction loss forcing the model to use information from the conditioning to reconstruct an audio sample. Our goal is to understand the impact of each component and find the right combination of the negative log-likelihood (NLL) and the reconstruction loss in training normalizing flows with coupling blocks. For that reason, we compare flow-based mapping models trained with: (i) the NLL loss, (ii) the NLL and reconstruction losses, as well as (iii) the reconstruction loss only. Additionally, we compare our model with a SOTA VC baseline. The models are evaluated in terms of naturalness, speaker similarity, and intelligibility in many-to-many and many-to-any VC settings. The results show that the proposed training paradigm systematically improves speaker similarity and naturalness when compared to regular training methods of normalizing flows. Furthermore, we show that our method improves speaker similarity and intelligibility over the state-of-the-art.
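The combination of the flow NLL and a reconstruction loss described above can be sketched for a one-dimensional affine flow; the weighting and the source of the reconstructed sample `x_recon` (decoded from the conditioning in the paper) are illustrative assumptions, not the paper's coupling-block architecture.

```python
import numpy as np

def affine_flow_nll(x, mu, log_sigma):
    """NLL of x under an affine flow z = (x - mu) / sigma into N(0, 1).

    NLL = 0.5 z^2 + 0.5 log(2*pi) + log sigma (the log-det term).
    """
    z = (x - mu) * np.exp(-log_sigma)
    return 0.5 * z**2 + 0.5 * np.log(2 * np.pi) + log_sigma

def ae_flow_objective(x, x_recon, mu, log_sigma, w_recon=1.0):
    """Hedged sketch of the AE-Flow idea: flow NLL plus a reconstruction
    term; the weight w_recon is illustrative (the paper studies the right
    combination empirically)."""
    nll = affine_flow_nll(x, mu, log_sigma)
    recon = (x - x_recon) ** 2
    return nll + w_recon * recon

loss = ae_flow_objective(x=1.0, x_recon=0.8, mu=0.0, log_sigma=0.0)
```

Setting `w_recon=0` recovers variant (i) of the paper's comparison; dropping the NLL term recovers variant (iii).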
1211.2361
Hossein Jahandideh
Hossein Jahandideh, Ardavan Asef-Vaziri, Mohammad Modarres
Genetic Algorithm for Designing a Convenient Facility Layout for a Circular Flow Path
Accepted to the 2013 IEEE Symposium Series on Computational Intelligence: Swarm Intelligence Symposium. This paper has been withdrawn by the author, by the request of the supervisor, to be updated, fixed, and combined with other papers
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a heuristic for designing facility layouts that are convenient for designing a unidirectional loop for material handling. We use genetic algorithm where the objective function and crossover and mutation operators have all been designed specifically for this purpose. Our design is made under flexible bay structure and comparisons are made with other layouts from the literature that were designed under flexible bay structure.
[ { "created": "Sun, 11 Nov 2012 00:26:22 GMT", "version": "v1" }, { "created": "Fri, 22 Mar 2013 06:09:29 GMT", "version": "v2" } ]
2013-03-25
[ [ "Jahandideh", "Hossein", "" ], [ "Asef-Vaziri", "Ardavan", "" ], [ "Modarres", "Mohammad", "" ] ]
In this paper, we present a heuristic for designing facility layouts that are convenient for designing a unidirectional loop for material handling. We use genetic algorithm where the objective function and crossover and mutation operators have all been designed specifically for this purpose. Our design is made under flexible bay structure and comparisons are made with other layouts from the literature that were designed under flexible bay structure.
2304.10611
Minghui Zhang
Minghui Zhang, Alex Sokolov, Weixin Cai, Si-Qing Chen
Joint Repetition Suppression and Content Moderation of Large Language Models
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Natural language generation (NLG) is one of the most impactful fields in NLP, and recent years have witnessed its evolution brought about by large language models (LLMs). As the key instrument for writing assistance applications, they are generally prone to replicating or extending offensive content provided in the input. In the low-resource data regime, they can also lead to repetitive outputs. Usually, offensive content and repetitions are mitigated with post-hoc methods, including n-gram level blocklists and top-k and nucleus sampling. In this paper, we apply non-exact repetition suppression using token- and sequence-level unlikelihood loss, and further explore the framework of the unlikelihood training objective in order to jointly endow the model with the ability to avoid generating offensive words and phrases from the beginning. Finally, with comprehensive experiments, we demonstrate that our proposed methods work exceptionally well in controlling the repetition and content quality of LLM outputs.
[ { "created": "Thu, 20 Apr 2023 19:17:49 GMT", "version": "v1" }, { "created": "Mon, 5 Jun 2023 18:16:29 GMT", "version": "v2" } ]
2023-06-07
[ [ "Zhang", "Minghui", "" ], [ "Sokolov", "Alex", "" ], [ "Cai", "Weixin", "" ], [ "Chen", "Si-Qing", "" ] ]
Natural language generation (NLG) is one of the most impactful fields in NLP, and recent years have witnessed its evolution brought about by large language models (LLMs). As the key instrument for writing assistance applications, they are generally prone to replicating or extending offensive content provided in the input. In the low-resource data regime, they can also lead to repetitive outputs. Usually, offensive content and repetitions are mitigated with post-hoc methods, including n-gram level blocklists and top-k and nucleus sampling. In this paper, we apply non-exact repetition suppression using token- and sequence-level unlikelihood loss, and further explore the framework of the unlikelihood training objective in order to jointly endow the model with the ability to avoid generating offensive words and phrases from the beginning. Finally, with comprehensive experiments, we demonstrate that our proposed methods work exceptionally well in controlling the repetition and content quality of LLM outputs.
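The token-level unlikelihood term referenced above can be sketched as follows: penalize the probability mass the model assigns to undesired tokens (repeats or offensive words) by summing -log(1 - p) over a negative candidate set. The distribution and candidate set here are illustrative, not from the paper.

```python
import numpy as np

def unlikelihood_loss(probs, negative_ids):
    """Token-level unlikelihood term over a negative candidate set.

    Unlike a hard blocklist, this keeps training differentiable: the loss
    grows as the model puts more mass on the blocked tokens.
    """
    return -np.sum(np.log(1.0 - probs[negative_ids]))

probs = np.array([0.7, 0.2, 0.1])    # model's next-token distribution
loss = unlikelihood_loss(probs, [0])  # token 0 is a blocked/repeated token
```

This term is added to the usual likelihood objective during training, which is what lets the model learn to avoid the tokens rather than filtering them post-hoc.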
1111.7051
Arup Pal
Arup Kumar Pal, G.P. Biswas and S. Mukhopadhyay
Design of Image Cryptosystem by Simultaneous VQ-Compression and Shuffling of Codebook and Index Matrix
null
The International journal of Multimedia & Its Applications (IJMA), Vol.1, No.1, November 2009
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although the popularity of Internet usage increases exponentially, the Internet is incapable of providing security for the exchange of confidential data between users. As a result, several cryptosystems for the encryption of data and images have been developed for secure transmission over the Internet. In this work, a scheme for image encryption/decryption based on Vector Quantization (VQ) has been proposed that concurrently encodes images for compression and shuffles the codebook and the index matrix using pseudorandom sequences for encryption. The processing time of the proposed scheme is much less than that of other cryptosystems, because it does not use any traditional cryptographic operations; instead, it swaps the contents of the codebook with respect to a random sequence, which results in an indirect shuffling of the contents of the index matrix. It may be noted that the security of the proposed cryptosystem depends on the generation and exchange of the random sequences used. Since the generation of truly random sequences is not practically feasible, we simulate the proposed scheme using MATLAB, where its operators rand(method, seed) and randperm(n) have been used to generate pseudorandom sequences, and the proposed cryptosystem shows the expected performance.
[ { "created": "Wed, 30 Nov 2011 05:36:51 GMT", "version": "v1" } ]
2011-12-01
[ [ "Pal", "Arup Kumar", "" ], [ "Biswas", "G. P.", "" ], [ "Mukhopadhyay", "S.", "" ] ]
Although the popularity of Internet usage increases exponentially, the Internet is incapable of providing security for the exchange of confidential data between users. As a result, several cryptosystems for the encryption of data and images have been developed for secure transmission over the Internet. In this work, a scheme for image encryption/decryption based on Vector Quantization (VQ) has been proposed that concurrently encodes images for compression and shuffles the codebook and the index matrix using pseudorandom sequences for encryption. The processing time of the proposed scheme is much less than that of other cryptosystems, because it does not use any traditional cryptographic operations; instead, it swaps the contents of the codebook with respect to a random sequence, which results in an indirect shuffling of the contents of the index matrix. It may be noted that the security of the proposed cryptosystem depends on the generation and exchange of the random sequences used. Since the generation of truly random sequences is not practically feasible, we simulate the proposed scheme using MATLAB, where its operators rand(method, seed) and randperm(n) have been used to generate pseudorandom sequences, and the proposed cryptosystem shows the expected performance.
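The key idea, permuting the VQ codebook with a key-seeded pseudorandom permutation so that the index matrix is indirectly scrambled, can be sketched in Python. The paper uses MATLAB's rand/randperm; this NumPy analogue and its function names are assumptions.

```python
import numpy as np

def shuffle_codebook(codebook, key_seed):
    """Permute VQ codebook rows with a key-seeded pseudorandom permutation.

    Decoding VQ indices against the shuffled codebook yields a scrambled
    image; only a holder of the seed can undo the permutation.
    """
    rng = np.random.default_rng(key_seed)
    perm = rng.permutation(len(codebook))
    return codebook[perm], perm

def unshuffle_codebook(shuffled, perm):
    """Invert the permutation to recover the original codebook."""
    inverse = np.argsort(perm)
    return shuffled[inverse]

cb = np.arange(12).reshape(4, 3)  # toy 4-entry codebook of 3-dim codewords
enc, perm = shuffle_codebook(cb, key_seed=42)
dec = unshuffle_codebook(enc, perm)
assert np.array_equal(dec, cb)    # round trip recovers the codebook
```

As in the paper, the security rests entirely on keeping the seed (key) secret, since the permutation itself is deterministic given the seed.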
2103.12198
Jacob Nogas
Joseph Jay Williams, Jacob Nogas, Nina Deliu, Hammad Shaikh, Sofia S. Villar, Audrey Durand, Anna Rafferty
Challenges in Statistical Analysis of Data Collected by a Bandit Algorithm: An Empirical Exploration in Applications to Adaptively Randomized Experiments
null
null
null
null
cs.LG stat.AP
http://creativecommons.org/licenses/by/4.0/
Multi-armed bandit algorithms have been argued for decades as useful for adaptively randomized experiments. In such experiments, an algorithm varies which arms (e.g. alternative interventions to help students learn) are assigned to participants, with the goal of assigning higher-reward arms to as many participants as possible. We applied the bandit algorithm Thompson Sampling (TS) to run adaptive experiments in three university classes. Instructors saw great value in trying to rapidly use data to give their students in the experiments better arms (e.g. better explanations of a concept). Our deployment, however, illustrated a major barrier for scientists and practitioners to use such adaptive experiments: a lack of quantifiable insight into how much statistical analysis of specific real-world experiments is impacted (Pallmann et al, 2018; FDA, 2019), compared to traditional uniform random assignment. We therefore use our case study of the ubiquitous two-arm binary reward setting to empirically investigate the impact of using Thompson Sampling instead of uniform random assignment. In this setting, using common statistical hypothesis tests, we show that collecting data with TS can as much as double the False Positive Rate (FPR; incorrectly reporting differences when none exist) and the False Negative Rate (FNR; failing to report differences when they exist)...
[ { "created": "Mon, 22 Mar 2021 22:05:18 GMT", "version": "v1" }, { "created": "Fri, 26 Mar 2021 14:44:02 GMT", "version": "v2" } ]
2021-03-29
[ [ "Williams", "Joseph Jay", "" ], [ "Nogas", "Jacob", "" ], [ "Deliu", "Nina", "" ], [ "Shaikh", "Hammad", "" ], [ "Villar", "Sofia S.", "" ], [ "Durand", "Audrey", "" ], [ "Rafferty", "Anna", "" ] ]
Multi-armed bandit algorithms have been argued for decades as useful for adaptively randomized experiments. In such experiments, an algorithm varies which arms (e.g. alternative interventions to help students learn) are assigned to participants, with the goal of assigning higher-reward arms to as many participants as possible. We applied the bandit algorithm Thompson Sampling (TS) to run adaptive experiments in three university classes. Instructors saw great value in trying to rapidly use data to give their students in the experiments better arms (e.g. better explanations of a concept). Our deployment, however, illustrated a major barrier for scientists and practitioners to use such adaptive experiments: a lack of quantifiable insight into how much statistical analysis of specific real-world experiments is impacted (Pallmann et al, 2018; FDA, 2019), compared to traditional uniform random assignment. We therefore use our case study of the ubiquitous two-arm binary reward setting to empirically investigate the impact of using Thompson Sampling instead of uniform random assignment. In this setting, using common statistical hypothesis tests, we show that collecting data with TS can as much as double the False Positive Rate (FPR; incorrectly reporting differences when none exist) and the False Negative Rate (FNR; failing to report differences when they exist)...
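The two-arm binary-reward Thompson Sampling setup described above can be sketched with Beta-Bernoulli posteriors. This is a minimal illustration of the assignment rule only, not the authors' experimental code, and the function name is hypothetical:

```python
import random

def thompson_sampling_trial(p_arm0, p_arm1, n_participants, rng):
    """One adaptively randomized experiment: assign each participant to the
    arm whose posterior draw is higher, then update that arm's Beta posterior."""
    alpha = [1, 1]               # Beta(1, 1) priors per arm
    beta = [1, 1]
    stats = [[0, 0], [0, 0]]     # [successes, assignments] per arm
    for _ in range(n_participants):
        # Sample a plausible success rate for each arm from its posterior
        draws = [rng.betavariate(alpha[a], beta[a]) for a in (0, 1)]
        arm = 0 if draws[0] >= draws[1] else 1
        reward = 1 if rng.random() < (p_arm0 if arm == 0 else p_arm1) else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        stats[arm][0] += reward
        stats[arm][1] += 1
    return stats
```

Running many such trials with `p_arm0 == p_arm1` and applying a hypothesis test to the resulting per-arm counts is how the FPR inflation discussed above would be measured empirically.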
1911.04020
Ya Xiao
Ya Xiao, Qingying Hao, Danfeng (Daphne) Yao
Neural Cryptanalysis: Metrics, Methodology, and Applications in CPS Ciphers
8 pages, 8 figures, The 2019 IEEE Conference on Dependable and Secure Computing
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many real-world cyber-physical systems (CPS) use proprietary cipher algorithms. In this work, we describe an easy-to-use black-box security evaluation approach to measure the strength of proprietary ciphers without having to know the algorithms. We quantify the strength of a cipher by measuring how difficult it is for a neural network to mimic the cipher algorithm. We define new metrics (e.g., cipher match rate, training data complexity and training time complexity) that are computed from neural networks to quantitatively represent the cipher strength. This measurement approach allows us to directly compare the security of ciphers. Our experimental demonstration utilizes fully connected neural networks with multiple parallel binary classifiers at the output layer. The results show that when compared with round-reduced DES, the security strength of Hitag2 (a popular stream cipher used in the keyless entry of modern cars) is weaker than 3-round DES.
[ { "created": "Mon, 11 Nov 2019 00:36:38 GMT", "version": "v1" }, { "created": "Fri, 22 Nov 2019 19:45:11 GMT", "version": "v2" }, { "created": "Tue, 26 Nov 2019 02:05:35 GMT", "version": "v3" } ]
2019-11-27
[ [ "Xiao", "Ya", "" ], [ "Hao", "Qingying", "" ], [ "Yao", "Danfeng (Daphne)", "" ] ]
Many real-world cyber-physical systems (CPS) use proprietary cipher algorithms. In this work, we describe an easy-to-use black-box security evaluation approach to measure the strength of proprietary ciphers without having to know the algorithms. We quantify the strength of a cipher by measuring how difficult it is for a neural network to mimic the cipher algorithm. We define new metrics (e.g., cipher match rate, training data complexity and training time complexity) that are computed from neural networks to quantitatively represent the cipher strength. This measurement approach allows us to directly compare the security of ciphers. Our experimental demonstration utilizes fully connected neural networks with multiple parallel binary classifiers at the output layer. The results show that when compared with round-reduced DES, the security strength of Hitag2 (a popular stream cipher used in the keyless entry of modern cars) is weaker than 3-round DES.
2312.01432
Andrzej Ruszczy\'nski
Zhengqi Lin and Andrzej Ruszczynski
Fast Dual Subgradient Optimization of the Integrated Transportation Distance Between Stochastic Kernels
arXiv admin note: text overlap with arXiv:2311.06645
null
null
null
cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A generalization of the Wasserstein metric, the integrated transportation distance, establishes a novel distance between probability kernels of Markov systems. This metric serves as the foundation for an efficient approximation technique, enabling the replacement of the original system's kernel with a kernel with a discrete support of limited cardinality. To facilitate practical implementation, we present a specialized dual algorithm capable of constructing these approximate kernels quickly and efficiently, without requiring computationally expensive matrix operations. Finally, we demonstrate the efficacy of our method through several illustrative examples, showcasing its utility in practical scenarios. This advancement offers new possibilities for the streamlined analysis and manipulation of stochastic systems represented by kernels.
[ { "created": "Sun, 3 Dec 2023 15:44:17 GMT", "version": "v1" } ]
2023-12-07
[ [ "Lin", "Zhengqi", "" ], [ "Ruszczynski", "Andrzej", "" ] ]
A generalization of the Wasserstein metric, the integrated transportation distance, establishes a novel distance between probability kernels of Markov systems. This metric serves as the foundation for an efficient approximation technique, enabling the replacement of the original system's kernel with a kernel with a discrete support of limited cardinality. To facilitate practical implementation, we present a specialized dual algorithm capable of constructing these approximate kernels quickly and efficiently, without requiring computationally expensive matrix operations. Finally, we demonstrate the efficacy of our method through several illustrative examples, showcasing its utility in practical scenarios. This advancement offers new possibilities for the streamlined analysis and manipulation of stochastic systems represented by kernels.
1810.02452
Elham Havvaei
David Eppstein and Elham Havvaei
Parameterized Leaf Power Recognition via Embedding into Graph Products
null
Algorithmica 82 (8): 2337-2359, 2020
10.1007/s00453-020-00720-8
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The $k$-leaf power graph $G$ of a tree $T$ is a graph whose vertices are the leaves of $T$ and whose edges connect pairs of leaves at unweighted distance at most~$k$ in $T$. Recognition of the $k$-leaf power graphs for $k \geq 7$ is still an open problem. In this paper, we provide two algorithms for this problem for sparse leaf power graphs. Our results show that the problem of recognizing these graphs is fixed-parameter tractable when parameterized both by $k$ and by the degeneracy of the given graph. To prove this, we first describe how to embed the leaf root of a leaf power graph into a product of the graph with a cycle graph. We bound the treewidth of the resulting product in terms of $k$ and the degeneracy of $G$. The first algorithm uses methods based on monadic second-order logic (MSO$_2$) to recognize the existence of a leaf power as a subgraph of the product graph. Using the same embedding in the product graph, the second algorithm applies a dynamic programming approach to solve the problem and provides a better dependence on the parameters.
[ { "created": "Thu, 4 Oct 2018 23:08:03 GMT", "version": "v1" }, { "created": "Thu, 10 Oct 2019 22:49:21 GMT", "version": "v2" }, { "created": "Sun, 31 May 2020 22:43:32 GMT", "version": "v3" } ]
2020-08-11
[ [ "Eppstein", "David", "" ], [ "Havvaei", "Elham", "" ] ]
The $k$-leaf power graph $G$ of a tree $T$ is a graph whose vertices are the leaves of $T$ and whose edges connect pairs of leaves at unweighted distance at most~$k$ in $T$. Recognition of the $k$-leaf power graphs for $k \geq 7$ is still an open problem. In this paper, we provide two algorithms for this problem for sparse leaf power graphs. Our results show that the problem of recognizing these graphs is fixed-parameter tractable when parameterized both by $k$ and by the degeneracy of the given graph. To prove this, we first describe how to embed the leaf root of a leaf power graph into a product of the graph with a cycle graph. We bound the treewidth of the resulting product in terms of $k$ and the degeneracy of $G$. The first algorithm uses methods based on monadic second-order logic (MSO$_2$) to recognize the existence of a leaf power as a subgraph of the product graph. Using the same embedding in the product graph, the second algorithm applies a dynamic programming approach to solve the problem and provides a better dependence on the parameters.
2106.12978
Georgios Damaskinos
Alessandro Solbiati, Kevin Heffernan, Georgios Damaskinos, Shivani Poddar, Shubham Modi, Jacques Cali
Unsupervised Topic Segmentation of Meetings with BERT Embeddings
null
null
null
null
cs.LG cs.CL
http://creativecommons.org/licenses/by/4.0/
Topic segmentation of meetings is the task of dividing multi-person meeting transcripts into topic blocks. Supervised approaches to the problem have proven intractable due to the difficulties in collecting and accurately annotating large datasets. In this paper we show how previous unsupervised topic segmentation methods can be improved using pre-trained neural architectures. We introduce an unsupervised approach based on BERT embeddings that achieves a 15.5% reduction in error rate over existing unsupervised approaches applied to two popular datasets for meeting transcripts.
[ { "created": "Thu, 24 Jun 2021 12:54:43 GMT", "version": "v1" } ]
2021-06-25
[ [ "Solbiati", "Alessandro", "" ], [ "Heffernan", "Kevin", "" ], [ "Damaskinos", "Georgios", "" ], [ "Poddar", "Shivani", "" ], [ "Modi", "Shubham", "" ], [ "Cali", "Jacques", "" ] ]
Topic segmentation of meetings is the task of dividing multi-person meeting transcripts into topic blocks. Supervised approaches to the problem have proven intractable due to the difficulties in collecting and accurately annotating large datasets. In this paper we show how previous unsupervised topic segmentation methods can be improved using pre-trained neural architectures. We introduce an unsupervised approach based on BERT embeddings that achieves a 15.5% reduction in error rate over existing unsupervised approaches applied to two popular datasets for meeting transcripts.
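A minimal sketch of unsupervised boundary detection from sentence embeddings, in the spirit of the approach above: a TextTiling-style depth score flags gaps where adjacent-sentence similarity dips. The toy vectors in the test stand in for BERT embeddings, and the threshold is an arbitrary choice:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def depth_scores(embeddings):
    """Depth score at each gap between adjacent sentences: how far the
    local similarity dips below its neighboring peaks on each side."""
    sims = [cosine(embeddings[i], embeddings[i + 1])
            for i in range(len(embeddings) - 1)]
    depths = []
    for i, s in enumerate(sims):
        left = max(sims[:i + 1])    # highest similarity up to this gap
        right = max(sims[i:])       # highest similarity from this gap on
        depths.append((left - s) + (right - s))
    return sims, depths

def boundaries(embeddings, threshold=0.5):
    """Indices of sentences that start a new topic block."""
    _, depths = depth_scores(embeddings)
    return [i + 1 for i, d in enumerate(depths) if d > threshold]
```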
2002.05245
Shengxin Liu
Xiaohui Bei, Shengxin Liu, Xinhang Lu, Hongao Wang
Maximin Fairness with Mixed Divisible and Indivisible Goods
Appears in the 35th AAAI Conference on Artificial Intelligence (AAAI), 2021
Autonomous Agents and Multi-Agent Systems, 35(2):34 (2021)
10.1007/s10458-021-09517-7
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study fair resource allocation when the resources contain a mixture of divisible and indivisible goods, focusing on the well-studied fairness notion of maximin share fairness (MMS). With only indivisible goods, a full MMS allocation may not exist, but a constant multiplicative approximate allocation always does. We analyze how the MMS approximation guarantee would be affected when the resources to be allocated also contain divisible goods. In particular, we show that the worst-case MMS approximation guarantee with mixed goods is no worse than that with only indivisible goods. However, there exist problem instances to which adding some divisible resources would strictly decrease the MMS approximation ratio of the instance. On the algorithmic front, we propose a constructive algorithm that will always produce an $\alpha$-MMS allocation for any number of agents, where $\alpha$ takes values between $1/2$ and $1$ and is a monotone increasing function determined by how agents value the divisible goods relative to their MMS values.
[ { "created": "Wed, 12 Feb 2020 21:37:38 GMT", "version": "v1" }, { "created": "Fri, 11 Dec 2020 15:32:36 GMT", "version": "v2" }, { "created": "Thu, 1 Jul 2021 13:05:31 GMT", "version": "v3" } ]
2021-07-02
[ [ "Bei", "Xiaohui", "" ], [ "Liu", "Shengxin", "" ], [ "Lu", "Xinhang", "" ], [ "Wang", "Hongao", "" ] ]
We study fair resource allocation when the resources contain a mixture of divisible and indivisible goods, focusing on the well-studied fairness notion of maximin share fairness (MMS). With only indivisible goods, a full MMS allocation may not exist, but a constant multiplicative approximate allocation always does. We analyze how the MMS approximation guarantee would be affected when the resources to be allocated also contain divisible goods. In particular, we show that the worst-case MMS approximation guarantee with mixed goods is no worse than that with only indivisible goods. However, there exist problem instances to which adding some divisible resources would strictly decrease the MMS approximation ratio of the instance. On the algorithmic front, we propose a constructive algorithm that will always produce an $\alpha$-MMS allocation for any number of agents, where $\alpha$ takes values between $1/2$ and $1$ and is a monotone increasing function determined by how agents value the divisible goods relative to their MMS values.
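The maximin share value that the guarantees above are measured against can be computed by brute force on tiny instances with only indivisible goods. This enumeration is purely illustrative; it is exponential in the number of goods:

```python
from itertools import product

def maximin_share(values, n_agents):
    """An agent's maximin share over indivisible goods: the best worst-bundle
    value achievable by partitioning the goods into n_agents bundles."""
    best = 0
    for assignment in product(range(n_agents), repeat=len(values)):
        bundles = [0] * n_agents
        for good, agent in zip(values, assignment):
            bundles[agent] += good
        best = max(best, min(bundles))   # maximize the worst bundle
    return best
```

For example, four goods valued 1, 2, 3, 4 split between two agents give an MMS of 5 (bundles {1, 4} and {2, 3}); with fewer goods than agents some bundle is empty, so the MMS is 0.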
1301.3641
Ryan Kiros
Ryan Kiros
Training Neural Networks with Stochastic Hessian-Free Optimization
11 pages, ICLR 2013
null
null
null
cs.LG cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with gradient and curvature mini-batches independent of the dataset size. We modify Martens' HF for these settings and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. Stochastic Hessian-free optimization gives an intermediary between SGD and HF that achieves competitive performance on both classification and deep autoencoder experiments.
[ { "created": "Wed, 16 Jan 2013 10:10:23 GMT", "version": "v1" }, { "created": "Mon, 18 Mar 2013 05:51:37 GMT", "version": "v2" }, { "created": "Wed, 1 May 2013 06:57:50 GMT", "version": "v3" } ]
2013-05-02
[ [ "Kiros", "Ryan", "" ] ]
Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with gradient and curvature mini-batches independent of the dataset size. We modify Martens' HF for these settings and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. Stochastic Hessian-free optimization gives an intermediary between SGD and HF that achieves competitive performance on both classification and deep autoencoder experiments.
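The core of Hessian-free optimization referenced above, solving the Newton system by conjugate gradient using only curvature-vector products, can be sketched as follows. The finite-difference Hessian-vector product is a generic stand-in for illustration, not Martens' exact R-operator machinery:

```python
import numpy as np

def conjugate_gradient(matvec, b, iters=50, tol=1e-10):
    """Solve A x = b using only matrix-vector products (the heart of HF)."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def hvp(grad_fn, w, v, eps=1e-5):
    """Hessian-vector product via central differences of the gradient,
    costing about two gradient evaluations -- the 'same order as gradients'
    property the abstract relies on."""
    return (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2 * eps)
```

In stochastic HF, `grad_fn` and the curvature products would be evaluated on independent mini-batches rather than the full dataset.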
2403.17064
Stefan Andreas Baumann
Stefan Andreas Baumann and Felix Krause and Michael Neumayr and Nick Stracke and Vincent Tao Hu and Bj\"orn Ommer
Continuous, Subject-Specific Attribute Control in T2I Models by Identifying Semantic Directions
Project page: https://compvis.github.io/attribute-control
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, advances in text-to-image (T2I) diffusion models have substantially elevated the quality of their generated images. However, achieving fine-grained control over attributes remains a challenge due to the limitations of natural language prompts (such as no continuous set of intermediate descriptions existing between ``person'' and ``old person''). Even though many methods were introduced that augment the model or generation process to enable such control, methods that do not require a fixed reference image are limited to either enabling global fine-grained attribute expression control or coarse attribute expression control localized to specific subjects, not both simultaneously. We show that there exist directions in the commonly used token-level CLIP text embeddings that enable fine-grained subject-specific control of high-level attributes in text-to-image models. Based on this observation, we introduce one efficient optimization-free and one robust optimization-based method to identify these directions for specific attributes from contrastive text prompts. We demonstrate that these directions can be used to augment the prompt text input with fine-grained control over attributes of specific subjects in a compositional manner (control over multiple attributes of a single subject) without having to adapt the diffusion model. Project page: https://compvis.github.io/attribute-control. Code is available at https://github.com/CompVis/attribute-control.
[ { "created": "Mon, 25 Mar 2024 18:00:42 GMT", "version": "v1" } ]
2024-03-27
[ [ "Baumann", "Stefan Andreas", "" ], [ "Krause", "Felix", "" ], [ "Neumayr", "Michael", "" ], [ "Stracke", "Nick", "" ], [ "Hu", "Vincent Tao", "" ], [ "Ommer", "Björn", "" ] ]
In recent years, advances in text-to-image (T2I) diffusion models have substantially elevated the quality of their generated images. However, achieving fine-grained control over attributes remains a challenge due to the limitations of natural language prompts (such as no continuous set of intermediate descriptions existing between ``person'' and ``old person''). Even though many methods were introduced that augment the model or generation process to enable such control, methods that do not require a fixed reference image are limited to either enabling global fine-grained attribute expression control or coarse attribute expression control localized to specific subjects, not both simultaneously. We show that there exist directions in the commonly used token-level CLIP text embeddings that enable fine-grained subject-specific control of high-level attributes in text-to-image models. Based on this observation, we introduce one efficient optimization-free and one robust optimization-based method to identify these directions for specific attributes from contrastive text prompts. We demonstrate that these directions can be used to augment the prompt text input with fine-grained control over attributes of specific subjects in a compositional manner (control over multiple attributes of a single subject) without having to adapt the diffusion model. Project page: https://compvis.github.io/attribute-control. Code is available at https://github.com/CompVis/attribute-control.
2007.11246
Mehdi Teimouri
Mehdi Teimouri, Zahra Seyedghorban, Fatemeh Amirjani
Fragments-Expert: A Graphical User Interface MATLAB Toolbox for Classification of File Fragments
47 Pages, 34 Figures, and 3 Tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The classification of file fragments of various file formats is an essential task in applications such as firewalls, intrusion detection systems, anti-viruses, web content filtering, and digital forensics. However, the community lacks a suitable software tool that can integrate the major methods for feature extraction from file fragments and classification among various file formats. In this paper, we present Fragments-Expert, a graphical user interface MATLAB toolbox for the classification of file fragments. It provides users with 22 categories of features extracted from file fragments. These features can be employed by 7 categories of machine learning algorithms for the task of classification among various file formats.
[ { "created": "Wed, 22 Jul 2020 08:03:02 GMT", "version": "v1" } ]
2020-07-23
[ [ "Teimouri", "Mehdi", "" ], [ "Seyedghorban", "Zahra", "" ], [ "Amirjani", "Fatemeh", "" ] ]
The classification of file fragments of various file formats is an essential task in applications such as firewalls, intrusion detection systems, anti-viruses, web content filtering, and digital forensics. However, the community lacks a suitable software tool that can integrate the major methods for feature extraction from file fragments and classification among various file formats. In this paper, we present Fragments-Expert, a graphical user interface MATLAB toolbox for the classification of file fragments. It provides users with 22 categories of features extracted from file fragments. These features can be employed by 7 categories of machine learning algorithms for the task of classification among various file formats.
2305.04429
Yang Wu
Yang Wu, Yanyan Zhao, Zhongyang Li, Bing Qin, Kai Xiong
Improving Cross-Task Generalization with Step-by-Step Instructions
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Instruction tuning has been shown to improve the cross-task generalization of language models. However, it is still challenging for language models to complete target tasks by following instructions, as the instructions are general and lack intermediate steps. To address this problem, we propose incorporating step-by-step instructions to help language models decompose the tasks, providing detailed and specific procedures for completing the target tasks. The step-by-step instructions are obtained automatically by prompting ChatGPT and are further combined with the original instructions to tune language models. Extensive experiments on SUP-NATINST show that high-quality step-by-step instructions can improve cross-task generalization across different model sizes. Moreover, further analysis indicates the importance of the order of steps in the step-by-step instructions for the improvement. To facilitate future research, we release the step-by-step instructions and their human quality evaluation results.
[ { "created": "Mon, 8 May 2023 02:50:41 GMT", "version": "v1" } ]
2023-05-09
[ [ "Wu", "Yang", "" ], [ "Zhao", "Yanyan", "" ], [ "Li", "Zhongyang", "" ], [ "Qin", "Bing", "" ], [ "Xiong", "Kai", "" ] ]
Instruction tuning has been shown to improve the cross-task generalization of language models. However, it is still challenging for language models to complete target tasks by following instructions, as the instructions are general and lack intermediate steps. To address this problem, we propose incorporating step-by-step instructions to help language models decompose the tasks, providing detailed and specific procedures for completing the target tasks. The step-by-step instructions are obtained automatically by prompting ChatGPT and are further combined with the original instructions to tune language models. Extensive experiments on SUP-NATINST show that high-quality step-by-step instructions can improve cross-task generalization across different model sizes. Moreover, further analysis indicates the importance of the order of steps in the step-by-step instructions for the improvement. To facilitate future research, we release the step-by-step instructions and their human quality evaluation results.
2310.04020
Anand Kulkarni Dr
Anand J Kulkarni, Ishaan R Kale, Apoorva Shastri, Aayush Khandekar
Snail Homing and Mating Search Algorithm: A Novel Bio-Inspired Metaheuristic Algorithm
46 Pages, 11 Figures, 24 Tables
null
null
null
cs.NE
http://creativecommons.org/licenses/by/4.0/
In this paper, a novel Snail Homing and Mating Search (SHMS) algorithm is proposed. It is inspired by the biological behaviour of snails. Snails continuously travel to find food and a mate, leaving behind a trail of mucus that serves as a guide for their return. Snails tend to navigate by following the available trails on the ground and responding to cues from nearby shelter homes. The proposed SHMS algorithm is investigated by solving several unimodal and multimodal functions. The solutions are validated using standard statistical tests such as the two-sided and pairwise signed rank Wilcoxon test and the Friedman rank test. The solutions obtained from the SHMS algorithm exhibited superior robustness as well as search space exploration capabilities at a lower computational cost. The real-world applicability of the SHMS algorithm is successfully demonstrated in the engineering design domain by solving three cases of the design and economic optimization of a shell and tube heat exchanger. The objective function values and other statistical results obtained using the SHMS algorithm are compared with those of other well-known metaheuristic algorithms.
[ { "created": "Fri, 6 Oct 2023 05:18:48 GMT", "version": "v1" } ]
2023-10-09
[ [ "Kulkarni", "Anand J", "" ], [ "Kale", "Ishaan R", "" ], [ "Shastri", "Apoorva", "" ], [ "Khandekar", "Aayush", "" ] ]
In this paper, a novel Snail Homing and Mating Search (SHMS) algorithm is proposed. It is inspired by the biological behaviour of snails. Snails continuously travel to find food and a mate, leaving behind a trail of mucus that serves as a guide for their return. Snails tend to navigate by following the available trails on the ground and responding to cues from nearby shelter homes. The proposed SHMS algorithm is investigated by solving several unimodal and multimodal functions. The solutions are validated using standard statistical tests such as the two-sided and pairwise signed rank Wilcoxon test and the Friedman rank test. The solutions obtained from the SHMS algorithm exhibited superior robustness as well as search space exploration capabilities at a lower computational cost. The real-world applicability of the SHMS algorithm is successfully demonstrated in the engineering design domain by solving three cases of the design and economic optimization of a shell and tube heat exchanger. The objective function values and other statistical results obtained using the SHMS algorithm are compared with those of other well-known metaheuristic algorithms.
1705.07460
Min Xu
Min Xu
Experience enrichment based task independent reward model
4 pages, 1 figure
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For most reinforcement learning approaches, learning is performed by maximizing the expectation of an accumulative reward that is manually defined for specific tasks. However, in the real world, rewards are emergent phenomena arising from the complex interactions between agents and environments. In this paper, we propose an implicit generic reward model for reinforcement learning. Unlike rewards that are manually defined for specific tasks, such an implicit reward is task independent. It comes only from the deviation from the agents' previous experiences.
[ { "created": "Sun, 21 May 2017 15:19:20 GMT", "version": "v1" } ]
2017-05-23
[ [ "Xu", "Min", "" ] ]
For most reinforcement learning approaches, learning is performed by maximizing the expectation of an accumulative reward that is manually defined for specific tasks. However, in the real world, rewards are emergent phenomena arising from the complex interactions between agents and environments. In this paper, we propose an implicit generic reward model for reinforcement learning. Unlike rewards that are manually defined for specific tasks, such an implicit reward is task independent. It comes only from the deviation from the agents' previous experiences.
2303.14321
Daniel Lemire
Daniel Lemire
Exact Short Products From Truncated Multipliers
Software at https://github.com/lemire/exactshortlib
Computer Journal 67 (4), 2024
10.1093/comjnl/bxad077
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
We sometimes need to compute the most significant digits of the product of small integers with a multiplier requiring much storage: for example, a large integer (e.g., $5^{100}$) or an irrational number ($\pi$). We only need to access the most significant digits of the multiplier, as long as the integers are sufficiently small. We provide an efficient algorithm to compute the range of integers given a truncated multiplier and a desired number of digits.
[ { "created": "Sat, 25 Mar 2023 01:26:00 GMT", "version": "v1" } ]
2024-05-07
[ [ "Lemire", "Daniel", "" ] ]
We sometimes need to compute the most significant digits of the product of small integers with a multiplier requiring much storage: for example, a large integer (e.g., $5^{100}$) or an irrational number ($\pi$). We only need to access the most significant digits of the multiplier, as long as the integers are sufficiently small. We provide an efficient algorithm to compute the range of integers given a truncated multiplier and a desired number of digits.
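The interval reasoning behind the abstract above can be illustrated directly: truncating the multiplier to its top bits brackets the true product, and the leading bits of the product are exact whenever both interval endpoints agree. The function below is an illustrative sketch under that framing, not the paper's algorithm (which bounds how many multiplier bits are needed for a whole range of integers):

```python
def exact_from_truncated(x, top, s, out_shift):
    """Given a nonnegative multiplier truncated as M = top * 2**s + r with
    0 <= r < 2**s, return floor(x * M / 2**out_shift) if the truncation
    alone determines it, else None (more multiplier bits are needed)."""
    lo = (x * top) << s        # smallest possible x * M  (r = 0)
    hi = lo + (x << s) - x     # largest possible x * M   (r = 2**s - 1)
    if (lo >> out_shift) == (hi >> out_shift):
        return lo >> out_shift
    return None
```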
2206.06836
ali hassan
Amine Mrabet, Ali Hassan, Patrice Darmon (Umanis)
"hasSignification()": a new distance function to support the detection of personal data
in French language
null
null
null
cs.CL cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today, with Big Data and data lakes, we are faced with a mass of data that is very difficult to manage manually. The protection of personal data in this context requires automatic analysis for data discovery. Storing the names of attributes already analyzed in a knowledge base could optimize this automatic discovery. To build a better knowledge base, we should not store any attribute whose name does not carry meaning. In this article, to check whether the name of an attribute has a meaning, we propose a solution that calculates the distances between this name and the words in a dictionary. Our studies of distance functions such as N-Gram, Jaro-Winkler and Levenshtein show their limitations for setting an acceptance threshold for an attribute in the knowledge base. To overcome these limitations, our solution strengthens the score calculation with an exponential function based on the longest sequence. In addition, a double scan of the dictionary is proposed in order to process attributes that have a compound name.
[ { "created": "Tue, 14 Jun 2022 13:31:26 GMT", "version": "v1" } ]
2022-06-15
[ [ "Mrabet", "Amine", "", "Umanis" ], [ "Hassan", "Ali", "", "Umanis" ], [ "Darmon", "Patrice", "", "Umanis" ] ]
Today, with Big Data and data lakes, we are faced with a mass of data that is very difficult to manage manually. The protection of personal data in this context requires automatic analysis for data discovery. Storing the names of attributes already analyzed in a knowledge base could optimize this automatic discovery. To build a better knowledge base, we should not store any attribute whose name does not carry meaning. In this article, to check whether the name of an attribute has a meaning, we propose a solution that calculates the distances between this name and the words in a dictionary. Our studies of distance functions such as N-Gram, Jaro-Winkler and Levenshtein show their limitations for setting an acceptance threshold for an attribute in the knowledge base. To overcome these limitations, our solution strengthens the score calculation with an exponential function based on the longest sequence. In addition, a double scan of the dictionary is proposed in order to process attributes that have a compound name.
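The scoring idea described above can be sketched as follows. The abstract does not give the exact formula, so the exponential boost below is a labeled guess for illustration, paired with a standard Levenshtein distance and a longest-common-substring length:

```python
import math

def levenshtein(a, b):
    """Classic dynamic-programming edit distance, computed row by row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def longest_common_substring(a, b):
    """Length of the longest contiguous character run shared by a and b."""
    best, prev = 0, [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb else 0)
            best = max(best, cur[-1])
        prev = cur
    return best

def significance_score(attr, word):
    """Levenshtein similarity boosted exponentially by the longest shared
    run; this exact combination is illustrative, not the paper's formula."""
    n = max(len(attr), len(word)) or 1
    base = 1 - levenshtein(attr, word) / n
    return base * (1 + math.expm1(longest_common_substring(attr, word) / n))
```

An attribute name would be accepted into the knowledge base when its best score against the dictionary exceeds some threshold.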
1403.5618
P\"ar-Ola Zander
Shahadat Hossein, Par-Ola Zander, Md. Kamal, Linkon Chowdhury
Belief-Rule-Based Expert Systems for Evaluation of E- Government: A Case Study
Accepted with no Changes for Wiley Expert Systems
null
null
null
cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Little knowledge exists on the impact and results associated with e-government projects in many specific use domains. Therefore it is necessary to evaluate the efficiency and effectiveness of e-government systems. Since the development of e-government is a continuous process of improvement, it requires continuous evaluation of the overall e-government system as well as evaluation of its various dimensions such as determinants, characteristics and results. E-government development is often complex with multiple stakeholders, large user bases and complex goals. Consequently, even experts have difficulties in evaluating these systems, especially in an integrated and comprehensive way as well as on an aggregate level. Expert systems are a candidate solution to evaluate such complex e-government systems. However, it is difficult for expert systems to cope with uncertain evaluation data that are vague, inconsistent, highly subjective or in other ways challenging to formalize. This paper presents an approach that can handle uncertainty in e-government evaluation: the combination of Belief Rule Base (BRB) knowledge representation and Evidential Reasoning (ER). This approach is illustrated with a concrete prototype, known as Belief Rule Based Expert System (BRBES), and put to use in the local e-government of Bangladesh. The results have been compared with a recently developed method of evaluating e-government, and it is shown that the results of BRBES are more accurate and reliable. BRBES can be used to identify the factors that need to be improved to achieve the overall aim of an e-government project. In addition, various "what if" scenarios can be generated, and developers and managers can get a forecast of the outcomes. In this way, the system can be used to facilitate decision making processes under uncertainty.
[ { "created": "Sat, 22 Mar 2014 05:56:26 GMT", "version": "v1" }, { "created": "Mon, 9 Mar 2015 09:35:48 GMT", "version": "v2" } ]
2015-03-10
[ [ "Hossein", "Shahadat", "" ], [ "Zander", "Par-Ola", "" ], [ "Kamal", "Md.", "" ], [ "Chowdhury", "Linkon", "" ] ]
Little knowledge exists on the impact and results associated with e-government projects in many specific use domains. Therefore it is necessary to evaluate the efficiency and effectiveness of e-government systems. Since the development of e-government is a continuous process of improvement, it requires continuous evaluation of the overall e-government system as well as evaluation of its various dimensions such as determinants, characteristics and results. E-government development is often complex with multiple stakeholders, large user bases and complex goals. Consequently, even experts have difficulties in evaluating these systems, especially in an integrated and comprehensive way as well as on an aggregate level. Expert systems are a candidate solution to evaluate such complex e-government systems. However, it is difficult for expert systems to cope with uncertain evaluation data that are vague, inconsistent, highly subjective or in other ways challenging to formalize. This paper presents an approach that can handle uncertainty in e-government evaluation: The combination of Belief Rule Base (BRB) knowledge representation and Evidential Reasoning (ER). This approach is illustrated with a concrete prototype, known as Belief Rule Based Expert System (BRBES) and put to use in the local e-government of Bangladesh. The results have been compared with a recently developed method of evaluating e-Government, and it is shown that the results of BRBES are more accurate and reliable. BRBES can be used to identify the factors that need to be improved to achieve the overall aim of an e-government project. In addition, various "what if" scenarios can be generated and developers and managers can get a forecast of the outcomes. In this way, the system can be used to facilitate decision making processes under uncertainty.
2307.01689
Yuval Dagan
Angelos Assos, Idan Attias, Yuval Dagan, Constantinos Daskalakis, Maxwell Fishelson
Online Learning and Solving Infinite Games with an ERM Oracle
In COLT2023
null
null
null
cs.LG cs.AI cs.GT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While ERM suffices to attain near-optimal generalization error in the stochastic learning setting, this is not known to be the case in the online learning setting, where algorithms for general concept classes rely on computationally inefficient oracles such as the Standard Optimal Algorithm (SOA). In this work, we propose an algorithm for the online binary classification setting that relies solely on ERM oracle calls, and show that it has finite regret in the realizable setting and sublinearly growing regret in the agnostic setting. We bound the regret in terms of the Littlestone and threshold dimensions of the underlying concept class. We obtain similar results for nonparametric games, where the ERM oracle can be interpreted as a best response oracle, finding the best response of a player to a given history of play of the other players. In this setting, we provide learning algorithms that only rely on best response oracles and converge to approximate-minimax equilibria in two-player zero-sum games and approximate coarse correlated equilibria in multi-player general-sum games, as long as the game has a bounded fat-threshold dimension. Our algorithms apply to both binary-valued and real-valued games and can be viewed as providing justification for the wide use of double oracle and multiple oracle algorithms in the practice of solving large games.
[ { "created": "Tue, 4 Jul 2023 12:51:21 GMT", "version": "v1" }, { "created": "Mon, 10 Jul 2023 11:16:54 GMT", "version": "v2" } ]
2023-07-11
[ [ "Assos", "Angelos", "" ], [ "Attias", "Idan", "" ], [ "Dagan", "Yuval", "" ], [ "Daskalakis", "Constantinos", "" ], [ "Fishelson", "Maxwell", "" ] ]
While ERM suffices to attain near-optimal generalization error in the stochastic learning setting, this is not known to be the case in the online learning setting, where algorithms for general concept classes rely on computationally inefficient oracles such as the Standard Optimal Algorithm (SOA). In this work, we propose an algorithm for the online binary classification setting that relies solely on ERM oracle calls, and show that it has finite regret in the realizable setting and sublinearly growing regret in the agnostic setting. We bound the regret in terms of the Littlestone and threshold dimensions of the underlying concept class. We obtain similar results for nonparametric games, where the ERM oracle can be interpreted as a best response oracle, finding the best response of a player to a given history of play of the other players. In this setting, we provide learning algorithms that only rely on best response oracles and converge to approximate-minimax equilibria in two-player zero-sum games and approximate coarse correlated equilibria in multi-player general-sum games, as long as the game has a bounded fat-threshold dimension. Our algorithms apply to both binary-valued and real-valued games and can be viewed as providing justification for the wide use of double oracle and multiple oracle algorithms in the practice of solving large games.
1508.03725
Mirco Musolesi
Veljko Pejovic, Neal Lathia, Cecilia Mascolo, Mirco Musolesi
Mobile-Based Experience Sampling for Behaviour Research
20 pages, 2 figures
null
null
null
cs.HC cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Experience Sampling Method (ESM) introduces in-situ sampling of human behaviour, and provides researchers and behavioural therapists with ecologically valid and timely assessments of a person's psychological state. This, in turn, opens up new opportunities for understanding behaviour at a scale and granularity that was not possible just a few years ago. The practical applications are many, such as the delivery of personalised and agile behaviour interventions. Mobile computing devices represent a revolutionary platform for improving ESM. They are an inseparable part of our daily lives, context-aware, and can interact with people at suitable moments. Furthermore, these devices are equipped with sensors, and can thus take part of the reporting burden off the participant, and collect data automatically. The goal of this survey is to discuss recent advancements in using mobile technologies for ESM (mESM), and present our vision of the future of mobile experience sampling.
[ { "created": "Sat, 15 Aug 2015 12:15:38 GMT", "version": "v1" } ]
2015-08-18
[ [ "Pejovic", "Veljko", "" ], [ "Lathia", "Neal", "" ], [ "Mascolo", "Cecilia", "" ], [ "Musolesi", "Mirco", "" ] ]
The Experience Sampling Method (ESM) introduces in-situ sampling of human behaviour, and provides researchers and behavioural therapists with ecologically valid and timely assessments of a person's psychological state. This, in turn, opens up new opportunities for understanding behaviour at a scale and granularity that was not possible just a few years ago. The practical applications are many, such as the delivery of personalised and agile behaviour interventions. Mobile computing devices represent a revolutionary platform for improving ESM. They are an inseparable part of our daily lives, context-aware, and can interact with people at suitable moments. Furthermore, these devices are equipped with sensors, and can thus take part of the reporting burden off the participant, and collect data automatically. The goal of this survey is to discuss recent advancements in using mobile technologies for ESM (mESM), and present our vision of the future of mobile experience sampling.
1601.03278
Longqi Yang
Longqi Yang, Diana Freed, Alex Wu, Judy Wu, JP Pollak, Deborah Estrin
Your Activities of Daily Living (YADL): An Image-based Survey Technique for Patients with Arthritis
null
null
null
null
cs.CY cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Healthcare professionals use Activities of Daily Living (ADL) to characterize a patient's functional status and to evaluate the effectiveness of treatment plans. ADLs are traditionally measured using standardized text-based questionnaires and the only form of personalization is in the form of question branching logic. Pervasive smartphone adoption makes it feasible to consider more frequent patient-reporting on ADLs. However, asking generic sets of questions repeatedly introduces user burden and fatigue that threatens to interfere with their utility. We introduce an approach called YADL (Your Activities of Daily Living) which uses images of ADLs and personalization to improve survey efficiency and the patient-experience. It offers several potential benefits: wider coverage of ADLs, improved engagement, and accurate capture of individual health situations. In this paper, we discuss our system design and the wide applicability of the design process for survey tools in healthcare and beyond. Interactions with a small number of patients with Arthritis throughout the design process have been promising and we share detailed insights.
[ { "created": "Wed, 13 Jan 2016 15:27:58 GMT", "version": "v1" } ]
2016-01-14
[ [ "Yang", "Longqi", "" ], [ "Freed", "Diana", "" ], [ "Wu", "Alex", "" ], [ "Wu", "Judy", "" ], [ "Pollak", "JP", "" ], [ "Estrin", "Deborah", "" ] ]
Healthcare professionals use Activities of Daily Living (ADL) to characterize a patient's functional status and to evaluate the effectiveness of treatment plans. ADLs are traditionally measured using standardized text-based questionnaires and the only form of personalization is in the form of question branching logic. Pervasive smartphone adoption makes it feasible to consider more frequent patient-reporting on ADLs. However, asking generic sets of questions repeatedly introduces user burden and fatigue that threatens to interfere with their utility. We introduce an approach called YADL (Your Activities of Daily Living) which uses images of ADLs and personalization to improve survey efficiency and the patient-experience. It offers several potential benefits: wider coverage of ADLs, improved engagement, and accurate capture of individual health situations. In this paper, we discuss our system design and the wide applicability of the design process for survey tools in healthcare and beyond. Interactions with a small number of patients with Arthritis throughout the design process have been promising and we share detailed insights.
2210.10207
Denizalp Goktas
Denizalp Goktas and Amy Greenwald
Exploitability Minimization in Games and Beyond
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pseudo-games are a natural and well-known generalization of normal-form games, in which the actions taken by each player affect not only the other players' payoffs, as in games, but also the other players' strategy sets. The solution concept par excellence for pseudo-games is the generalized Nash equilibrium (GNE), i.e., a strategy profile at which each player's strategy is feasible and no player can improve their payoffs by unilaterally deviating to another strategy in the strategy set determined by the other players' strategies. The computation of GNE in pseudo-games has long been a problem of interest, due to applications in a wide variety of fields, from environmental protection to logistics to telecommunications. Although computing GNE is PPAD-hard in general, it is still of interest to try to compute them in restricted classes of pseudo-games. One approach is to search for a strategy profile that minimizes exploitability, i.e., the sum of the regrets across all players. As exploitability is nondifferentiable in general, developing efficient first-order methods that minimize it might not seem possible at first glance. We observe, however, that the exploitability-minimization problem can be recast as a min-max optimization problem, and thereby obtain polynomial-time first-order methods to compute a refinement of GNE, namely the variational equilibria (VE), in convex-concave cumulative regret pseudo-games with jointly convex constraints. More generally, we also show that our methods find the stationary points of the exploitability in polynomial time in Lipschitz-smooth pseudo-games with jointly convex constraints. Finally, we demonstrate in experiments that our methods not only outperform known algorithms, but that even in pseudo-games where they are not guaranteed to converge to a GNE, they may do so nonetheless, with proper initialization.
[ { "created": "Tue, 18 Oct 2022 23:21:57 GMT", "version": "v1" } ]
2022-10-20
[ [ "Goktas", "Denizalp", "" ], [ "Greenwald", "Amy", "" ] ]
Pseudo-games are a natural and well-known generalization of normal-form games, in which the actions taken by each player affect not only the other players' payoffs, as in games, but also the other players' strategy sets. The solution concept par excellence for pseudo-games is the generalized Nash equilibrium (GNE), i.e., a strategy profile at which each player's strategy is feasible and no player can improve their payoffs by unilaterally deviating to another strategy in the strategy set determined by the other players' strategies. The computation of GNE in pseudo-games has long been a problem of interest, due to applications in a wide variety of fields, from environmental protection to logistics to telecommunications. Although computing GNE is PPAD-hard in general, it is still of interest to try to compute them in restricted classes of pseudo-games. One approach is to search for a strategy profile that minimizes exploitability, i.e., the sum of the regrets across all players. As exploitability is nondifferentiable in general, developing efficient first-order methods that minimize it might not seem possible at first glance. We observe, however, that the exploitability-minimization problem can be recast as a min-max optimization problem, and thereby obtain polynomial-time first-order methods to compute a refinement of GNE, namely the variational equilibria (VE), in convex-concave cumulative regret pseudo-games with jointly convex constraints. More generally, we also show that our methods find the stationary points of the exploitability in polynomial time in Lipschitz-smooth pseudo-games with jointly convex constraints. Finally, we demonstrate in experiments that our methods not only outperform known algorithms, but that even in pseudo-games where they are not guaranteed to converge to a GNE, they may do so nonetheless, with proper initialization.
2309.16789
Jayati Deshmukh
Balambiga Ayappane, Rohith Vaidyanathan, Srinath Srinivasa, Jayati Deshmukh
Extensible Consent Management Architectures for Data Trusts
An earlier version of this paper was published in ISIC 2021
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Sensitive personal information of individuals and non-personal information of organizations or communities often needs to be legitimately exchanged among different stakeholders, to provide services, maintain public health, law and order, and so on. While such exchanges are necessary, they also impose enormous privacy and security challenges. Data protection laws like GDPR for personal data and Indian Non-personal data protection draft specify conditions and the \textit{legal capacity} in which personal and non-personal information can be solicited and disseminated further. But there is a dearth of formalisms for specifying legal capacities and jurisdictional boundaries, so that open-ended exchange of such data can be implemented. This paper proposes an extensible framework for consent management in Data Trusts in which data can flow across a network through "role tunnels" established based on corresponding legal capacities.
[ { "created": "Thu, 28 Sep 2023 18:28:50 GMT", "version": "v1" } ]
2023-10-02
[ [ "Ayappane", "Balambiga", "" ], [ "Vaidyanathan", "Rohith", "" ], [ "Srinivasa", "Srinath", "" ], [ "Deshmukh", "Jayati", "" ] ]
Sensitive personal information of individuals and non-personal information of organizations or communities often needs to be legitimately exchanged among different stakeholders, to provide services, maintain public health, law and order, and so on. While such exchanges are necessary, they also impose enormous privacy and security challenges. Data protection laws like GDPR for personal data and Indian Non-personal data protection draft specify conditions and the \textit{legal capacity} in which personal and non-personal information can be solicited and disseminated further. But there is a dearth of formalisms for specifying legal capacities and jurisdictional boundaries, so that open-ended exchange of such data can be implemented. This paper proposes an extensible framework for consent management in Data Trusts in which data can flow across a network through "role tunnels" established based on corresponding legal capacities.
1908.11431
Yuxing Ma
Yuxing Ma, Audris Mockus, Beth Milhollin, Russel Zaretzki, Randy Bradley, Bogdan Bichescu
A Methodology for Analyzing Uptake of Software Technologies Among Developers
5 figures, 15 pages
null
null
null
cs.SE
http://creativecommons.org/publicdomain/zero/1.0/
Motivation: The question of what combination of attributes drives the adoption of a particular software technology is critical to developers. It determines both those technologies that receive wide support from the community and those which may be abandoned, thus rendering developers' investments worthless. Aim and Context: We model software technology adoption by developers and provide insights on specific technology attributes that are associated with better visibility among alternative technologies. Approach: We leverage social contagion theory and statistical modeling to identify, define, and test empirically measures that are likely to affect software adoption. More specifically, we leverage a large collection of open source version control repositories to construct a software dependency chain for a specific set of R language source-code files. We formulate logistic regression models to investigate the combination of technological attributes that drive adoption among competing data frame implementations in the R language: tidy and data.table. We quantify key project attributes that might affect adoption and also characteristics of developers making the selection. Results: We find that a quick response to raised issues, a larger number of overall deployments, and a larger number of high-quality StackExchange questions are associated with higher adoption. Decision makers tend to adopt the technology that is closer to them in the technical dependency network and in author collaborations networks while meeting their performance needs. Future work: We hope that our methodology encompassing social contagion that captures both rational and irrational preferences and the elucidation of key measures from large collections of version control data provides a general path toward increasing visibility, driving better informed decisions, and producing more sustainable and widely adopted software.
[ { "created": "Thu, 29 Aug 2019 19:36:28 GMT", "version": "v1" }, { "created": "Wed, 4 Sep 2019 15:06:13 GMT", "version": "v2" } ]
2019-09-05
[ [ "Ma", "Yuxing", "" ], [ "Mockus", "Audris", "" ], [ "Milhollin", "Beth", "" ], [ "Zaretzki", "Russel", "" ], [ "Bradley", "Randy", "" ], [ "Bichescu", "Bogdan", "" ] ]
Motivation: The question of what combination of attributes drives the adoption of a particular software technology is critical to developers. It determines both those technologies that receive wide support from the community and those which may be abandoned, thus rendering developers' investments worthless. Aim and Context: We model software technology adoption by developers and provide insights on specific technology attributes that are associated with better visibility among alternative technologies. Approach: We leverage social contagion theory and statistical modeling to identify, define, and test empirically measures that are likely to affect software adoption. More specifically, we leverage a large collection of open source version control repositories to construct a software dependency chain for a specific set of R language source-code files. We formulate logistic regression models to investigate the combination of technological attributes that drive adoption among competing data frame implementations in the R language: tidy and data.table. We quantify key project attributes that might affect adoption and also characteristics of developers making the selection. Results: We find that a quick response to raised issues, a larger number of overall deployments, and a larger number of high-quality StackExchange questions are associated with higher adoption. Decision makers tend to adopt the technology that is closer to them in the technical dependency network and in author collaborations networks while meeting their performance needs. Future work: We hope that our methodology encompassing social contagion that captures both rational and irrational preferences and the elucidation of key measures from large collections of version control data provides a general path toward increasing visibility, driving better informed decisions, and producing more sustainable and widely adopted software.
2408.06814
Vaghawan Prasad Ojha
Bishwash Khanal, Sanjay Rijal, Manish Awale and Vaghawan Ojha
Structure-preserving Planar Simplification for Indoor Environments
null
null
null
null
cs.CV cs.CG
http://creativecommons.org/licenses/by/4.0/
This paper presents a novel approach for structure-preserving planar simplification of indoor scene point clouds for both simulated and real-world environments. Initially, the scene point cloud undergoes preprocessing steps, including noise reduction and Manhattan world alignment, to ensure robustness and coherence in subsequent analyses. We segment each captured scene into structured (walls-ceiling-floor) and non-structured (indoor objects) scenes. Leveraging a RANSAC algorithm, we extract primitive planes from the input point cloud, facilitating the segmentation and simplification of the structured scene. The best-fitting wall meshes are then generated from the primitives, followed by adjacent mesh merging with the vertex-translation algorithm which preserves the mesh layout. To accurately represent ceilings and floors, we employ the mesh clipping algorithm which clips the ceiling and floor meshes with respect to wall normals. In the case of indoor scenes, we apply a surface reconstruction technique to enhance the fidelity. This paper focuses on the intricate steps of the proposed scene simplification methodology, addressing complex scenarios such as multi-story and slanted walls and ceilings. We also conduct qualitative and quantitative performance comparisons against popular surface reconstruction, shape approximation, and floorplan generation approaches.
[ { "created": "Tue, 13 Aug 2024 11:10:26 GMT", "version": "v1" } ]
2024-08-14
[ [ "Khanal", "Bishwash", "" ], [ "Rijal", "Sanjay", "" ], [ "Awale", "Manish", "" ], [ "Ojha", "Vaghawan", "" ] ]
This paper presents a novel approach for structure-preserving planar simplification of indoor scene point clouds for both simulated and real-world environments. Initially, the scene point cloud undergoes preprocessing steps, including noise reduction and Manhattan world alignment, to ensure robustness and coherence in subsequent analyses. We segment each captured scene into structured (walls-ceiling-floor) and non-structured (indoor objects) scenes. Leveraging a RANSAC algorithm, we extract primitive planes from the input point cloud, facilitating the segmentation and simplification of the structured scene. The best-fitting wall meshes are then generated from the primitives, followed by adjacent mesh merging with the vertex-translation algorithm which preserves the mesh layout. To accurately represent ceilings and floors, we employ the mesh clipping algorithm which clips the ceiling and floor meshes with respect to wall normals. In the case of indoor scenes, we apply a surface reconstruction technique to enhance the fidelity. This paper focuses on the intricate steps of the proposed scene simplification methodology, addressing complex scenarios such as multi-story and slanted walls and ceilings. We also conduct qualitative and quantitative performance comparisons against popular surface reconstruction, shape approximation, and floorplan generation approaches.
2310.02227
Parshin Shojaee
Kazem Meidani, Parshin Shojaee, Chandan K. Reddy, Amir Barati Farimani
SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training
ICLR 2024 Spotlight Paper
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
In an era where symbolic mathematical equations are indispensable for modeling complex natural phenomena, scientific inquiry often involves collecting observations and translating them into mathematical expressions. Recently, deep learning has emerged as a powerful tool for extracting insights from data. However, existing models typically specialize in either numeric or symbolic domains, and are usually trained in a supervised manner tailored to specific tasks. This approach neglects the substantial benefits that could arise from a task-agnostic multi-modal understanding between symbolic equations and their numeric counterparts. To bridge the gap, we introduce SNIP, a Symbolic-Numeric Integrated Pre-training model, which employs contrastive learning between symbolic and numeric domains, enhancing their mutual similarities in the embeddings. By performing latent space analysis, we observe that SNIP provides cross-domain insights into the representations, revealing that symbolic supervision enhances the embeddings of numeric data and vice versa. We evaluate SNIP across diverse tasks, including symbolic-to-numeric mathematical property prediction and numeric-to-symbolic equation discovery, commonly known as symbolic regression. Results show that SNIP effectively transfers to various tasks, consistently outperforming fully supervised baselines and competing strongly with established task-specific methods, especially in the low data regime scenarios where available data is limited. Code and model are available at: https://github.com/deep-symbolic-mathematics/Multimodal-Math-Pretraining
[ { "created": "Tue, 3 Oct 2023 17:32:44 GMT", "version": "v1" }, { "created": "Thu, 19 Oct 2023 13:53:04 GMT", "version": "v2" }, { "created": "Fri, 15 Mar 2024 06:00:29 GMT", "version": "v3" } ]
2024-03-18
[ [ "Meidani", "Kazem", "" ], [ "Shojaee", "Parshin", "" ], [ "Reddy", "Chandan K.", "" ], [ "Farimani", "Amir Barati", "" ] ]
In an era where symbolic mathematical equations are indispensable for modeling complex natural phenomena, scientific inquiry often involves collecting observations and translating them into mathematical expressions. Recently, deep learning has emerged as a powerful tool for extracting insights from data. However, existing models typically specialize in either numeric or symbolic domains, and are usually trained in a supervised manner tailored to specific tasks. This approach neglects the substantial benefits that could arise from a task-agnostic multi-modal understanding between symbolic equations and their numeric counterparts. To bridge the gap, we introduce SNIP, a Symbolic-Numeric Integrated Pre-training model, which employs contrastive learning between symbolic and numeric domains, enhancing their mutual similarities in the embeddings. By performing latent space analysis, we observe that SNIP provides cross-domain insights into the representations, revealing that symbolic supervision enhances the embeddings of numeric data and vice versa. We evaluate SNIP across diverse tasks, including symbolic-to-numeric mathematical property prediction and numeric-to-symbolic equation discovery, commonly known as symbolic regression. Results show that SNIP effectively transfers to various tasks, consistently outperforming fully supervised baselines and competing strongly with established task-specific methods, especially in the low data regime scenarios where available data is limited. Code and model are available at: https://github.com/deep-symbolic-mathematics/Multimodal-Math-Pretraining
1908.06148
Govind Mittal
Govind Mittal, Pawel Korus, Nasir Memon
FiFTy: Large-scale File Fragment Type Identification using Neural Networks
Paper accepted for publication in the IEEE Transactions on Information Forensics and Security
null
null
null
cs.CR cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present FiFTy, a modern file type identification tool for memory forensics and data carving. In contrast to previous approaches based on hand-crafted features, we design a compact neural network architecture, which uses a trainable embedding space, akin to successful natural language processing models. Our approach dispenses with explicit feature extraction which is a bottleneck in legacy systems. We evaluate the proposed method on a novel dataset with 75 file types - the most diverse and balanced dataset reported to date. FiFTy consistently outperforms all baselines in terms of speed, accuracy and individual misclassification rates. We achieved an average accuracy of 77.5% with processing speed of approx 38 sec/GB, which is better and more than an order of magnitude faster than the previous state-of-the-art tool - Sceadan (69% at 9 min/GB). Our tool and the corresponding dataset are available publicly online.
[ { "created": "Fri, 16 Aug 2019 19:53:46 GMT", "version": "v1" }, { "created": "Sun, 7 Jun 2020 05:13:26 GMT", "version": "v2" } ]
2020-06-09
[ [ "Mittal", "Govind", "" ], [ "Korus", "Pawel", "" ], [ "Memon", "Nasir", "" ] ]
We present FiFTy, a modern file type identification tool for memory forensics and data carving. In contrast to previous approaches based on hand-crafted features, we design a compact neural network architecture, which uses a trainable embedding space, akin to successful natural language processing models. Our approach dispenses with explicit feature extraction which is a bottleneck in legacy systems. We evaluate the proposed method on a novel dataset with 75 file types - the most diverse and balanced dataset reported to date. FiFTy consistently outperforms all baselines in terms of speed, accuracy and individual misclassification rates. We achieved an average accuracy of 77.5% with processing speed of approx 38 sec/GB, which is better and more than an order of magnitude faster than the previous state-of-the-art tool - Sceadan (69% at 9 min/GB). Our tool and the corresponding dataset are available publicly online.
1910.09495
Saeed Reza Kheradpisheh
Saeed Reza Kheradpisheh and Timoth\'ee Masquelier
S4NN: temporal backpropagation for spiking neural networks with one spike per neuron
null
International Journal of Neural Systems 2020
10.1142/S0129065720500276
null
cs.NE cs.CV cs.LG q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new supervised learning rule for multilayer spiking neural networks (SNNs) that use a form of temporal coding known as rank-order-coding. With this coding scheme, all neurons fire exactly one spike per stimulus, but the firing order carries information. In particular, in the readout layer, the first neuron to fire determines the class of the stimulus. We derive a new learning rule for this sort of network, named S4NN, akin to traditional error backpropagation, yet based on latencies. We show how approximated error gradients can be computed backward in a feedforward network with any number of layers. This approach reaches state-of-the-art performance with supervised multi fully-connected layer SNNs: test accuracy of 97.4% for the MNIST dataset, and 99.2% for the Caltech Face/Motorbike dataset. Yet, the neuron model that we use, non-leaky integrate-and-fire, is much simpler than the one used in all previous works. The source codes of the proposed S4NN are publicly available at https://github.com/SRKH/S4NN.
[ { "created": "Mon, 21 Oct 2019 16:39:42 GMT", "version": "v1" }, { "created": "Thu, 5 Mar 2020 15:43:30 GMT", "version": "v2" }, { "created": "Mon, 13 Apr 2020 09:23:11 GMT", "version": "v3" }, { "created": "Sat, 13 Jun 2020 10:33:19 GMT", "version": "v4" } ]
2020-06-16
[ [ "Kheradpisheh", "Saeed Reza", "" ], [ "Masquelier", "Timothée", "" ] ]
We propose a new supervised learning rule for multilayer spiking neural networks (SNNs) that use a form of temporal coding known as rank-order coding. With this coding scheme, all neurons fire exactly one spike per stimulus, but the firing order carries information. In particular, in the readout layer, the first neuron to fire determines the class of the stimulus. We derive a new learning rule for this sort of network, named S4NN, akin to traditional error backpropagation, yet based on latencies. We show how approximated error gradients can be computed backward in a feedforward network with any number of layers. This approach reaches state-of-the-art performance for supervised multi-layer fully-connected SNNs: test accuracy of 97.4% on the MNIST dataset, and 99.2% on the Caltech Face/Motorbike dataset. Yet, the neuron model that we use, non-leaky integrate-and-fire, is much simpler than the ones used in all previous works. The source code of the proposed S4NN is publicly available at https://github.com/SRKH/S4NN.
2207.11365
Tushar Nagarajan
Tushar Nagarajan, Santhosh Kumar Ramakrishnan, Ruta Desai, James Hillis, Kristen Grauman
EgoEnv: Human-centric environment representations from egocentric video
Published in NeurIPS 2023 (Oral)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
First-person video highlights a camera-wearer's activities in the context of their persistent environment. However, current video understanding approaches reason over visual features from short video clips that are detached from the underlying physical space and capture only what is immediately visible. To facilitate human-centric environment understanding, we present an approach that links egocentric video and the environment by learning representations that are predictive of the camera-wearer's (potentially unseen) local surroundings. We train such models using videos from agents in simulated 3D environments where the environment is fully observable, and test them on human-captured real-world videos from unseen environments. On two human-centric video tasks, we show that models equipped with our environment-aware features consistently outperform their counterparts with traditional clip features. Moreover, despite being trained exclusively on simulated videos, our approach successfully handles real-world videos from HouseTours and Ego4D, and achieves state-of-the-art results on the Ego4D NLQ challenge. Project page: https://vision.cs.utexas.edu/projects/ego-env/
[ { "created": "Fri, 22 Jul 2022 22:39:57 GMT", "version": "v1" }, { "created": "Thu, 22 Dec 2022 16:39:40 GMT", "version": "v2" }, { "created": "Thu, 9 Nov 2023 19:13:18 GMT", "version": "v3" } ]
2023-11-13
[ [ "Nagarajan", "Tushar", "" ], [ "Ramakrishnan", "Santhosh Kumar", "" ], [ "Desai", "Ruta", "" ], [ "Hillis", "James", "" ], [ "Grauman", "Kristen", "" ] ]
First-person video highlights a camera-wearer's activities in the context of their persistent environment. However, current video understanding approaches reason over visual features from short video clips that are detached from the underlying physical space and capture only what is immediately visible. To facilitate human-centric environment understanding, we present an approach that links egocentric video and the environment by learning representations that are predictive of the camera-wearer's (potentially unseen) local surroundings. We train such models using videos from agents in simulated 3D environments where the environment is fully observable, and test them on human-captured real-world videos from unseen environments. On two human-centric video tasks, we show that models equipped with our environment-aware features consistently outperform their counterparts with traditional clip features. Moreover, despite being trained exclusively on simulated videos, our approach successfully handles real-world videos from HouseTours and Ego4D, and achieves state-of-the-art results on the Ego4D NLQ challenge. Project page: https://vision.cs.utexas.edu/projects/ego-env/
2306.07650
Yuchen Han
Yuchen Han, Chen Xu, Tong Xiao and Jingbo Zhu
Modality Adaption or Regularization? A Case Study on End-to-End Speech Translation
ACL 2023 Main Conference
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pre-training and fine-tuning is a paradigm for alleviating the data scarcity problem in end-to-end speech translation (E2E ST). The commonplace "modality gap" between speech and text data often leads to inconsistent inputs between pre-training and fine-tuning. However, we observe that this gap occurs in the early stages of fine-tuning, but does not have a major impact on the final performance. On the other hand, we find that there is another gap, which we call the "capacity gap": high-resource tasks (such as ASR and MT) always require a large model to fit; when the model is reused for a low-resource task (E2E ST), it yields sub-optimal performance due to over-fitting. In a case study, we find that regularization plays a more important role than the well-designed modality adaption method, achieving 29.0 for en-de and 40.3 for en-fr on the MuST-C dataset. Code and models are available at https://github.com/hannlp/TAB.
[ { "created": "Tue, 13 Jun 2023 09:42:48 GMT", "version": "v1" } ]
2023-06-14
[ [ "Han", "Yuchen", "" ], [ "Xu", "Chen", "" ], [ "Xiao", "Tong", "" ], [ "Zhu", "Jingbo", "" ] ]
Pre-training and fine-tuning is a paradigm for alleviating the data scarcity problem in end-to-end speech translation (E2E ST). The commonplace "modality gap" between speech and text data often leads to inconsistent inputs between pre-training and fine-tuning. However, we observe that this gap occurs in the early stages of fine-tuning, but does not have a major impact on the final performance. On the other hand, we find that there is another gap, which we call the "capacity gap": high-resource tasks (such as ASR and MT) always require a large model to fit; when the model is reused for a low-resource task (E2E ST), it yields sub-optimal performance due to over-fitting. In a case study, we find that regularization plays a more important role than the well-designed modality adaption method, achieving 29.0 for en-de and 40.3 for en-fr on the MuST-C dataset. Code and models are available at https://github.com/hannlp/TAB.
2211.16104
Gianluca Curzi
Gianluca Curzi and Anupam Das
Non-uniform complexity via non-wellfounded proofs
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Cyclic and non-wellfounded proofs are now increasingly employed to establish metalogical results in a variety of settings, in particular for type systems with forms of (co)induction. Under the Curry-Howard correspondence, a cyclic proof can be seen as a typing derivation 'with loops', closer to low-level machine models, and so comprise a highly expressive computational model that nonetheless enjoys excellent metalogical properties. In recent work, we showed how the cyclic proof setting can be further employed to model computational complexity, yielding characterisations of the polynomial time and elementary computable functions. These characterisations are 'implicit', inspired by Bellantoni and Cook's famous algebra of safe recursion, but exhibit greater expressivity thanks to the looping capacity of cyclic proofs. In this work we investigate the capacity for non-wellfounded proofs, where finite presentability is relaxed, to model non-uniformity in complexity theory. In particular, we present a characterisation of the class $\mathsf{FP/poly}$ of functions computed by polynomial-size circuits. While relating non-wellfoundedness to non-uniformity is a natural idea, the precise amount of irregularity, informally speaking, required to capture $\mathsf{FP/poly}$ is given by proof-level conditions novel to cyclic proof theory. Along the way, we formalise some (presumably) folklore techniques for characterising non-uniform classes in relativised function algebras with appropriate oracles.
[ { "created": "Tue, 29 Nov 2022 11:26:50 GMT", "version": "v1" } ]
2022-11-30
[ [ "Curzi", "Gianluca", "" ], [ "Das", "Anupam", "" ] ]
Cyclic and non-wellfounded proofs are now increasingly employed to establish metalogical results in a variety of settings, in particular for type systems with forms of (co)induction. Under the Curry-Howard correspondence, a cyclic proof can be seen as a typing derivation 'with loops', closer to low-level machine models, and so comprise a highly expressive computational model that nonetheless enjoys excellent metalogical properties. In recent work, we showed how the cyclic proof setting can be further employed to model computational complexity, yielding characterisations of the polynomial time and elementary computable functions. These characterisations are 'implicit', inspired by Bellantoni and Cook's famous algebra of safe recursion, but exhibit greater expressivity thanks to the looping capacity of cyclic proofs. In this work we investigate the capacity for non-wellfounded proofs, where finite presentability is relaxed, to model non-uniformity in complexity theory. In particular, we present a characterisation of the class $\mathsf{FP/poly}$ of functions computed by polynomial-size circuits. While relating non-wellfoundedness to non-uniformity is a natural idea, the precise amount of irregularity, informally speaking, required to capture $\mathsf{FP/poly}$ is given by proof-level conditions novel to cyclic proof theory. Along the way, we formalise some (presumably) folklore techniques for characterising non-uniform classes in relativised function algebras with appropriate oracles.
2310.14782
Alexandra Volokhova
Alexandra Volokhova, Micha{\l} Koziarski, Alex Hern\'andez-Garc\'ia, Cheng-Hao Liu, Santiago Miret, Pablo Lemos, Luca Thiede, Zichao Yan, Al\'an Aspuru-Guzik, Yoshua Bengio
Towards equilibrium molecular conformation generation with GFlowNets
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Sampling diverse, thermodynamically feasible molecular conformations plays a crucial role in predicting properties of a molecule. In this paper we propose to use GFlowNet for sampling conformations of small molecules from the Boltzmann distribution, as determined by the molecule's energy. The proposed approach can be used in combination with energy estimation methods of different fidelity and discovers a diverse set of low-energy conformations for highly flexible drug-like molecules. We demonstrate that GFlowNet can reproduce molecular potential energy surfaces by sampling proportionally to the Boltzmann distribution.
[ { "created": "Fri, 20 Oct 2023 15:41:50 GMT", "version": "v1" } ]
2023-10-24
[ [ "Volokhova", "Alexandra", "" ], [ "Koziarski", "Michał", "" ], [ "Hernández-García", "Alex", "" ], [ "Liu", "Cheng-Hao", "" ], [ "Miret", "Santiago", "" ], [ "Lemos", "Pablo", "" ], [ "Thiede", "Luca", "" ], [ "Yan", "Zichao", "" ], [ "Aspuru-Guzik", "Alán", "" ], [ "Bengio", "Yoshua", "" ] ]
Sampling diverse, thermodynamically feasible molecular conformations plays a crucial role in predicting properties of a molecule. In this paper we propose to use GFlowNet for sampling conformations of small molecules from the Boltzmann distribution, as determined by the molecule's energy. The proposed approach can be used in combination with energy estimation methods of different fidelity and discovers a diverse set of low-energy conformations for highly flexible drug-like molecules. We demonstrate that GFlowNet can reproduce molecular potential energy surfaces by sampling proportionally to the Boltzmann distribution.
1811.08772
Sean MacAvaney
Sean MacAvaney, Andrew Yates, Arman Cohan, Luca Soldaini, Kai Hui, Nazli Goharian, Ophir Frieder
Overcoming low-utility facets for complex answer retrieval
This is a pre-print of an article published in Information Retrieval Journal. The final authenticated version (including additional experimental results, analysis, etc.) is available online at: https://doi.org/10.1007/s10791-018-9343-0
Information Retrieval Journal 2018
10.1007/s10791-018-9343-0
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many questions cannot be answered simply; their answers must include numerous nuanced details and additional context. Complex Answer Retrieval (CAR) is the retrieval of answers to such questions. In their simplest form, these questions are constructed from a topic entity (e.g., `cheese') and a facet (e.g., `health effects'). While topic matching has been thoroughly explored, we observe that some facets use general language that is unlikely to appear verbatim in answers. We call these low-utility facets. In this work, we present an approach to CAR that identifies and addresses low-utility facets. We propose two estimators of facet utility. These include exploiting the hierarchical structure of CAR queries and using facet frequency information from training data. To improve the retrieval performance on low-utility headings, we also include entity similarity scores using knowledge graph embeddings. We apply our approaches to a leading neural ranking technique, and evaluate using the TREC CAR dataset. We find that our approaches perform significantly better than the unmodified neural ranker and other leading CAR techniques. We also provide a detailed analysis of our results, and verify that low-utility facets are indeed more difficult to match, and that our approach improves the performance for these difficult queries.
[ { "created": "Wed, 21 Nov 2018 15:09:00 GMT", "version": "v1" } ]
2018-11-22
[ [ "MacAvaney", "Sean", "" ], [ "Yates", "Andrew", "" ], [ "Cohan", "Arman", "" ], [ "Soldaini", "Luca", "" ], [ "Hui", "Kai", "" ], [ "Goharian", "Nazli", "" ], [ "Frieder", "Ophir", "" ] ]
Many questions cannot be answered simply; their answers must include numerous nuanced details and additional context. Complex Answer Retrieval (CAR) is the retrieval of answers to such questions. In their simplest form, these questions are constructed from a topic entity (e.g., `cheese') and a facet (e.g., `health effects'). While topic matching has been thoroughly explored, we observe that some facets use general language that is unlikely to appear verbatim in answers. We call these low-utility facets. In this work, we present an approach to CAR that identifies and addresses low-utility facets. We propose two estimators of facet utility. These include exploiting the hierarchical structure of CAR queries and using facet frequency information from training data. To improve the retrieval performance on low-utility headings, we also include entity similarity scores using knowledge graph embeddings. We apply our approaches to a leading neural ranking technique, and evaluate using the TREC CAR dataset. We find that our approaches perform significantly better than the unmodified neural ranker and other leading CAR techniques. We also provide a detailed analysis of our results, and verify that low-utility facets are indeed more difficult to match, and that our approach improves the performance for these difficult queries.
2012.03682
Elnaz Soleimani
Elnaz Soleimani, Ghazaleh Khodabandelou, Abdelghani Chibani, Yacine Amirat
Generic Semi-Supervised Adversarial Subject Translation for Sensor-Based Human Activity Recognition
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The performance of Human Activity Recognition (HAR) models, particularly deep neural networks, is highly contingent upon the availability of massive amounts of sufficiently annotated training data. However, data acquisition and manual annotation in the HAR domain are prohibitively expensive due to the skilled human resources required in both steps. Hence, domain adaptation techniques have been proposed to adapt the knowledge from existing sources of data. More recently, adversarial transfer learning methods have shown very promising results in image classification, yet remain limited for sensor-based HAR problems, which are still prone to the unfavorable effects of imbalanced sample distributions. This paper presents a novel generic and robust approach for semi-supervised domain adaptation in HAR, which capitalizes on the advantages of the adversarial framework to tackle these shortcomings by leveraging knowledge from annotated samples exclusively from the source subject and unlabeled ones of the target subject. Extensive subject translation experiments are conducted on three large-, medium-, and small-size datasets with different levels of imbalance to assess the robustness and effectiveness of the proposed model with respect to the scale of, as well as the imbalance in, the data. The results demonstrate the effectiveness of the proposed algorithms over state-of-the-art methods, leading to improvements of up to 13%, 4%, and 13% in high-level activity recognition metrics on the Opportunity, LISSI, and PAMAP2 datasets, respectively. The LISSI dataset is the most challenging one owing to its less populated and imbalanced distribution. Compared to the SA-GAN adversarial domain adaptation method, the proposed approach enhances the final classification performance by an average of 7.5% across the three datasets, which emphasizes the effectiveness of micro-mini-batch training.
[ { "created": "Wed, 11 Nov 2020 12:16:23 GMT", "version": "v1" } ]
2020-12-08
[ [ "Soleimani", "Elnaz", "" ], [ "Khodabandelou", "Ghazaleh", "" ], [ "Chibani", "Abdelghani", "" ], [ "Amirat", "Yacine", "" ] ]
The performance of Human Activity Recognition (HAR) models, particularly deep neural networks, is highly contingent upon the availability of massive amounts of sufficiently annotated training data. However, data acquisition and manual annotation in the HAR domain are prohibitively expensive due to the skilled human resources required in both steps. Hence, domain adaptation techniques have been proposed to adapt the knowledge from existing sources of data. More recently, adversarial transfer learning methods have shown very promising results in image classification, yet remain limited for sensor-based HAR problems, which are still prone to the unfavorable effects of imbalanced sample distributions. This paper presents a novel generic and robust approach for semi-supervised domain adaptation in HAR, which capitalizes on the advantages of the adversarial framework to tackle these shortcomings by leveraging knowledge from annotated samples exclusively from the source subject and unlabeled ones of the target subject. Extensive subject translation experiments are conducted on three large-, medium-, and small-size datasets with different levels of imbalance to assess the robustness and effectiveness of the proposed model with respect to the scale of, as well as the imbalance in, the data. The results demonstrate the effectiveness of the proposed algorithms over state-of-the-art methods, leading to improvements of up to 13%, 4%, and 13% in high-level activity recognition metrics on the Opportunity, LISSI, and PAMAP2 datasets, respectively. The LISSI dataset is the most challenging one owing to its less populated and imbalanced distribution. Compared to the SA-GAN adversarial domain adaptation method, the proposed approach enhances the final classification performance by an average of 7.5% across the three datasets, which emphasizes the effectiveness of micro-mini-batch training.
2211.06153
Sareena Karapoola
Sareena Karapoola, Nikhilesh Singh, Chester Rebeiro, Kamakoti V
SUNDEW: An Ensemble of Predictors for Case-Sensitive Detection of Malware
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Malware programs are diverse, with varying objectives, functionalities, and threat levels ranging from mere pop-ups to financial losses. Consequently, their run-time footprints across the system differ, impacting the optimal data source (Network, Operating system (OS), Hardware) and features that are instrumental to malware detection. Further, the variations in threat levels of malware classes affect the user requirements for detection. Thus, the optimal tuple of <data-source, features, user-requirements> is different for each malware class, impacting the state-of-the-art detection solutions that are agnostic to these subtle differences. This paper presents SUNDEW, a framework to detect malware classes using their optimal tuple of <data-source, features, user-requirements>. SUNDEW uses an ensemble of specialized predictors, each trained with a particular data source (network, OS, and hardware) and tuned for features and requirements of a specific class. While the specialized ensemble with a holistic view across the system improves detection, aggregating the independent conflicting inferences from the different predictors is challenging. SUNDEW resolves such conflicts with a hierarchical aggregation considering the threat-level, noise in the data sources, and prior domain knowledge. We evaluate SUNDEW on a real-world dataset of over 10,000 malware samples from 8 classes. It achieves an F1-Score of one for most classes, with an average of 0.93 and a limited performance overhead of 1.5%.
[ { "created": "Fri, 11 Nov 2022 12:13:41 GMT", "version": "v1" }, { "created": "Mon, 14 Nov 2022 08:49:24 GMT", "version": "v2" } ]
2022-11-15
[ [ "Karapoola", "Sareena", "" ], [ "Singh", "Nikhilesh", "" ], [ "Rebeiro", "Chester", "" ], [ "V", "Kamakoti", "" ] ]
Malware programs are diverse, with varying objectives, functionalities, and threat levels ranging from mere pop-ups to financial losses. Consequently, their run-time footprints across the system differ, impacting the optimal data source (Network, Operating system (OS), Hardware) and features that are instrumental to malware detection. Further, the variations in threat levels of malware classes affect the user requirements for detection. Thus, the optimal tuple of <data-source, features, user-requirements> is different for each malware class, impacting the state-of-the-art detection solutions that are agnostic to these subtle differences. This paper presents SUNDEW, a framework to detect malware classes using their optimal tuple of <data-source, features, user-requirements>. SUNDEW uses an ensemble of specialized predictors, each trained with a particular data source (network, OS, and hardware) and tuned for features and requirements of a specific class. While the specialized ensemble with a holistic view across the system improves detection, aggregating the independent conflicting inferences from the different predictors is challenging. SUNDEW resolves such conflicts with a hierarchical aggregation considering the threat-level, noise in the data sources, and prior domain knowledge. We evaluate SUNDEW on a real-world dataset of over 10,000 malware samples from 8 classes. It achieves an F1-Score of one for most classes, with an average of 0.93 and a limited performance overhead of 1.5%.
1507.02563
Wen Shen
Wen Shen and Cristina Lopes
Managing Autonomous Mobility on Demand Systems for Better Passenger Experience
null
Proceedings of the 18th International Conference on Principles and Practice of Multi-Agent Systems (PRIMA 2015). pp 20-35. Lecture Notes in Computer Science, vol 9387. Springer
10.1007/978-3-319-25524-8_2
null
cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous mobility on demand systems, though still in their infancy, have very promising prospects in providing urban populations with sustainable and safe personal mobility in the near future. While much research has been conducted on both autonomous vehicles and mobility on demand systems, to the best of our knowledge, this is the first work that shows how to manage autonomous mobility on demand systems for better passenger experience. We introduce the Expand and Target algorithm, which can be easily integrated with three different scheduling strategies for dispatching autonomous vehicles. We implement an agent-based simulation platform and empirically evaluate the proposed approaches with the New York City taxi data. Experimental results demonstrate that the algorithm significantly improves passengers' experience by reducing the average passenger waiting time by up to 29.82% and increasing the trip success rate by up to 7.65%.
[ { "created": "Thu, 9 Jul 2015 15:43:17 GMT", "version": "v1" } ]
2017-11-23
[ [ "Shen", "Wen", "" ], [ "Lopes", "Cristina", "" ] ]
Autonomous mobility on demand systems, though still in their infancy, have very promising prospects in providing urban populations with sustainable and safe personal mobility in the near future. While much research has been conducted on both autonomous vehicles and mobility on demand systems, to the best of our knowledge, this is the first work that shows how to manage autonomous mobility on demand systems for better passenger experience. We introduce the Expand and Target algorithm, which can be easily integrated with three different scheduling strategies for dispatching autonomous vehicles. We implement an agent-based simulation platform and empirically evaluate the proposed approaches with the New York City taxi data. Experimental results demonstrate that the algorithm significantly improves passengers' experience by reducing the average passenger waiting time by up to 29.82% and increasing the trip success rate by up to 7.65%.
2011.12713
Nima Safari
N. Safari, S.M. Mazhari, C.Y. Chung, S.B. Ko
A Secure Deep Probabilistic Dynamic Thermal Line Rating Prediction
The work is accepted for publication in Journal of Modern Power Systems and Clean Energy
null
null
null
cs.CR cs.LG eess.SP
http://creativecommons.org/licenses/by-nc-sa/4.0/
Accurate short-term prediction of overhead line (OHL) transmission ampacity can directly affect the efficiency of power system operation and planning. Any overestimation of the dynamic thermal line rating (DTLR) can lead to lifetime degradation and failure of OHLs, safety hazards, etc. This paper presents a secure yet sharp probabilistic prediction model for the hour-ahead forecasting of the DTLR. The security of the proposed DTLR limits the frequency of DTLR prediction exceeding the actual DTLR. The model is based on an augmented deep learning architecture that makes use of a wide range of predictors, including historical climatology data and latent variables obtained during DTLR calculation. Furthermore, by introducing a customized cost function, the deep neural network is trained to consider the DTLR security based on the required probability of exceedance while minimizing deviations of the predicted DTLRs from the actual values. The proposed probabilistic DTLR is developed and verified using recorded experimental data. The simulation results validate the superiority of the proposed DTLR compared to state-of-the-art prediction models using well-known evaluation metrics.
[ { "created": "Sat, 21 Nov 2020 23:20:58 GMT", "version": "v1" } ]
2020-11-26
[ [ "Safari", "N.", "" ], [ "Mazhari", "S. M.", "" ], [ "Chung", "C. Y.", "" ], [ "Ko", "S. B.", "" ] ]
Accurate short-term prediction of overhead line (OHL) transmission ampacity can directly affect the efficiency of power system operation and planning. Any overestimation of the dynamic thermal line rating (DTLR) can lead to lifetime degradation and failure of OHLs, safety hazards, etc. This paper presents a secure yet sharp probabilistic prediction model for the hour-ahead forecasting of the DTLR. The security of the proposed DTLR limits the frequency of DTLR prediction exceeding the actual DTLR. The model is based on an augmented deep learning architecture that makes use of a wide range of predictors, including historical climatology data and latent variables obtained during DTLR calculation. Furthermore, by introducing a customized cost function, the deep neural network is trained to consider the DTLR security based on the required probability of exceedance while minimizing deviations of the predicted DTLRs from the actual values. The proposed probabilistic DTLR is developed and verified using recorded experimental data. The simulation results validate the superiority of the proposed DTLR compared to state-of-the-art prediction models using well-known evaluation metrics.
2002.10732
Aamir Mahmood
Luca Beltramelli, Aamir Mahmood, Patrik \"Osterberg, and Mikael Gidlund
LoRa beyond ALOHA: An Investigation of Alternative Random Access Protocols
10 pages, 9 figures, final version to appear in IEEE Transactions on Industrial Informatics
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a stochastic geometry-based model to investigate alternative medium access choices for LoRaWAN---a widely adopted low-power wide-area networking (LPWAN) technology for the Internet-of-things (IoT). LoRaWAN adoption is driven by its simplified network architecture, air interface, and medium access. The physical layer, known as LoRa, provides quasi-orthogonal virtual channels through spreading factors (SFs) and time-power capture gains. However, the adopted pure ALOHA access mechanism suffers, in terms of scalability, under the same-channel same-SF transmissions from a large number of devices. In this paper, our objective is to explore access mechanisms beyond-ALOHA for LoRaWAN. Using recent results on time- and power-capture effects of LoRa, we develop a unified model for the comparative study of other choices, i.e., slotted ALOHA and carrier-sense multiple access (CSMA). The model includes the necessary design parameters of these access mechanisms, such as guard time and synchronization accuracy for slotted ALOHA, carrier sensing threshold for CSMA. It also accounts for the spatial interaction of devices in annular-shaped regions, characteristic of LoRa, for CSMA. The performance derived from the model in terms of coverage probability, channel throughput, and energy efficiency are validated using Monte-Carlo simulations. Our analysis shows that slotted ALOHA indeed has higher reliability than pure ALOHA but at the cost of lower energy efficiency for low device densities. Whereas, CSMA outperforms slotted ALOHA at smaller SFs in terms of reliability and energy efficiency, with its performance degrading to pure ALOHA at higher SFs.
[ { "created": "Tue, 25 Feb 2020 08:36:05 GMT", "version": "v1" } ]
2020-02-26
[ [ "Beltramelli", "Luca", "" ], [ "Mahmood", "Aamir", "" ], [ "Österberg", "Patrik", "" ], [ "Gidlund", "Mikael", "" ] ]
We present a stochastic geometry-based model to investigate alternative medium access choices for LoRaWAN---a widely adopted low-power wide-area networking (LPWAN) technology for the Internet-of-things (IoT). LoRaWAN adoption is driven by its simplified network architecture, air interface, and medium access. The physical layer, known as LoRa, provides quasi-orthogonal virtual channels through spreading factors (SFs) and time-power capture gains. However, the adopted pure ALOHA access mechanism suffers, in terms of scalability, under the same-channel same-SF transmissions from a large number of devices. In this paper, our objective is to explore access mechanisms beyond-ALOHA for LoRaWAN. Using recent results on time- and power-capture effects of LoRa, we develop a unified model for the comparative study of other choices, i.e., slotted ALOHA and carrier-sense multiple access (CSMA). The model includes the necessary design parameters of these access mechanisms, such as guard time and synchronization accuracy for slotted ALOHA, carrier sensing threshold for CSMA. It also accounts for the spatial interaction of devices in annular-shaped regions, characteristic of LoRa, for CSMA. The performance derived from the model in terms of coverage probability, channel throughput, and energy efficiency are validated using Monte-Carlo simulations. Our analysis shows that slotted ALOHA indeed has higher reliability than pure ALOHA but at the cost of lower energy efficiency for low device densities. Whereas, CSMA outperforms slotted ALOHA at smaller SFs in terms of reliability and energy efficiency, with its performance degrading to pure ALOHA at higher SFs.
2106.04835
Zichuan Lin
Zichuan Lin, Jing Huang, Bowen Zhou, Xiaodong He, Tengyu Ma
Joint System-Wise Optimization for Pipeline Goal-Oriented Dialog System
13 pages
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work (Takanobu et al., 2020) proposed the system-wise evaluation on dialog systems and found that improvement on individual components (e.g., NLU, policy) in prior work may not necessarily bring benefit to pipeline systems in system-wise evaluation. To improve the system-wise performance, in this paper, we propose new joint system-wise optimization techniques for the pipeline dialog system. First, we propose a new data augmentation approach which automates the labeling process for NLU training. Second, we propose a novel stochastic policy parameterization with Poisson distribution that enables better exploration and offers a principled way to compute policy gradient. Third, we propose a reward bonus to help policy explore successful dialogs. Our approaches outperform the competitive pipeline systems from Takanobu et al. (2020) by big margins of 12% success rate in automatic system-wise evaluation and of 16% success rate in human evaluation on the standard multi-domain benchmark dataset MultiWOZ 2.1, and also outperform the recent state-of-the-art end-to-end trained model from DSTC9.
[ { "created": "Wed, 9 Jun 2021 06:44:57 GMT", "version": "v1" } ]
2021-06-10
[ [ "Lin", "Zichuan", "" ], [ "Huang", "Jing", "" ], [ "Zhou", "Bowen", "" ], [ "He", "Xiaodong", "" ], [ "Ma", "Tengyu", "" ] ]
Recent work (Takanobu et al., 2020) proposed the system-wise evaluation on dialog systems and found that improvement on individual components (e.g., NLU, policy) in prior work may not necessarily bring benefit to pipeline systems in system-wise evaluation. To improve the system-wise performance, in this paper, we propose new joint system-wise optimization techniques for the pipeline dialog system. First, we propose a new data augmentation approach which automates the labeling process for NLU training. Second, we propose a novel stochastic policy parameterization with Poisson distribution that enables better exploration and offers a principled way to compute policy gradient. Third, we propose a reward bonus to help policy explore successful dialogs. Our approaches outperform the competitive pipeline systems from Takanobu et al. (2020) by big margins of 12% success rate in automatic system-wise evaluation and of 16% success rate in human evaluation on the standard multi-domain benchmark dataset MultiWOZ 2.1, and also outperform the recent state-of-the-art end-to-end trained model from DSTC9.
2310.16135
Chenghao Yang
Chenghao Yang, Allyson Ettinger
Can You Follow Me? Testing Situational Understanding in ChatGPT
EMNLP 2023 Main Paper (Camera Ready)
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Understanding sentence meanings and updating information states appropriately across time -- what we call "situational understanding" (SU) -- is a critical ability for human-like AI agents. SU is essential in particular for chat models, such as ChatGPT, to enable consistent, coherent, and effective dialogue between humans and AI. Previous works have identified certain SU limitations in non-chatbot Large Language models (LLMs), but the extent and causes of these limitations are not well understood, and capabilities of current chat-based models in this domain have not been explored. In this work we tackle these questions, proposing a novel synthetic environment for SU testing which allows us to do controlled and systematic testing of SU in chat-oriented models, through assessment of models' ability to track and enumerate environment states. Our environment also allows for close analysis of dynamics of model performance, to better understand underlying causes for performance patterns. We apply our test to ChatGPT, the state-of-the-art chatbot, and find that despite the fundamental simplicity of the task, the model's performance reflects an inability to retain correct environment states across time. Our follow-up analyses suggest that performance degradation is largely because ChatGPT has non-persistent in-context memory (although it can access the full dialogue history) and it is susceptible to hallucinated updates -- including updates that artificially inflate accuracies. Our findings suggest overall that ChatGPT is not currently equipped for robust tracking of situation states, and that trust in the impressive dialogue performance of ChatGPT comes with risks. We release the codebase for reproducing our test environment, as well as all prompts and API responses from ChatGPT, at https://github.com/yangalan123/SituationalTesting.
[ { "created": "Tue, 24 Oct 2023 19:22:01 GMT", "version": "v1" } ]
2023-10-26
[ [ "Yang", "Chenghao", "" ], [ "Ettinger", "Allyson", "" ] ]
Understanding sentence meanings and updating information states appropriately across time -- what we call "situational understanding" (SU) -- is a critical ability for human-like AI agents. SU is essential in particular for chat models, such as ChatGPT, to enable consistent, coherent, and effective dialogue between humans and AI. Previous works have identified certain SU limitations in non-chatbot Large Language models (LLMs), but the extent and causes of these limitations are not well understood, and capabilities of current chat-based models in this domain have not been explored. In this work we tackle these questions, proposing a novel synthetic environment for SU testing which allows us to do controlled and systematic testing of SU in chat-oriented models, through assessment of models' ability to track and enumerate environment states. Our environment also allows for close analysis of dynamics of model performance, to better understand underlying causes for performance patterns. We apply our test to ChatGPT, the state-of-the-art chatbot, and find that despite the fundamental simplicity of the task, the model's performance reflects an inability to retain correct environment states across time. Our follow-up analyses suggest that performance degradation is largely because ChatGPT has non-persistent in-context memory (although it can access the full dialogue history) and it is susceptible to hallucinated updates -- including updates that artificially inflate accuracies. Our findings suggest overall that ChatGPT is not currently equipped for robust tracking of situation states, and that trust in the impressive dialogue performance of ChatGPT comes with risks. We release the codebase for reproducing our test environment, as well as all prompts and API responses from ChatGPT, at https://github.com/yangalan123/SituationalTesting.
2209.08189
Andreas Mang
Naveen Himthani and Malte Brunn and Jae-Youn Kim and Miriam Schulte and Andreas Mang and George Biros
CLAIRE -- Parallelized Diffeomorphic Image Registration for Large-Scale Biomedical Imaging Applications
32 pages, 9 tables, 8 figures
null
null
null
cs.CV cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the performance of CLAIRE -- a diffeomorphic multi-node, multi-GPU image-registration algorithm, and software -- in large-scale biomedical imaging applications with billions of voxels. At such resolutions, most existing software packages for diffeomorphic image registration are prohibitively expensive. As a result, practitioners first significantly downsample the original images and then register them using existing tools. Our main contribution is an extensive analysis of the impact of downsampling on registration performance. We study this impact by comparing full-resolution registrations obtained with CLAIRE to lower-resolution registrations for synthetic and real-world imaging datasets. Our results suggest that registration at full resolution can yield a superior registration quality -- but not always. For example, downsampling a synthetic image from $1024^3$ to $256^3$ decreases the Dice coefficient from 92% to 79%. However, the differences are less pronounced for noisy or low-contrast high-resolution images. CLAIRE allows us not only to register images of clinically relevant size in a few seconds but also to register images at unprecedented resolution in a reasonable time. The highest resolution considered is CLARITY images of size $2816\times3016\times1162$. To the best of our knowledge, this is the first study on image registration quality at such resolutions.
[ { "created": "Fri, 16 Sep 2022 22:42:24 GMT", "version": "v1" } ]
2022-09-20
[ [ "Himthani", "Naveen", "" ], [ "Brunn", "Malte", "" ], [ "Kim", "Jae-Youn", "" ], [ "Schulte", "Miriam", "" ], [ "Mang", "Andreas", "" ], [ "Biros", "George", "" ] ]
We study the performance of CLAIRE -- a diffeomorphic multi-node, multi-GPU image-registration algorithm, and software -- in large-scale biomedical imaging applications with billions of voxels. At such resolutions, most existing software packages for diffeomorphic image registration are prohibitively expensive. As a result, practitioners first significantly downsample the original images and then register them using existing tools. Our main contribution is an extensive analysis of the impact of downsampling on registration performance. We study this impact by comparing full-resolution registrations obtained with CLAIRE to lower-resolution registrations for synthetic and real-world imaging datasets. Our results suggest that registration at full resolution can yield a superior registration quality -- but not always. For example, downsampling a synthetic image from $1024^3$ to $256^3$ decreases the Dice coefficient from 92% to 79%. However, the differences are less pronounced for noisy or low-contrast high-resolution images. CLAIRE allows us not only to register images of clinically relevant size in a few seconds but also to register images at unprecedented resolution in a reasonable time. The highest resolution considered is CLARITY images of size $2816\times3016\times1162$. To the best of our knowledge, this is the first study on image registration quality at such resolutions.
2208.08820
Jiawei Li
Jiawei Li, Ru Zhang, Jianyi Liu, Gongshen Liu
LogKernel A Threat Hunting Approach Based on Behaviour Provenance Graph and Graph Kernel Clustering
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Cyber threat hunting is a proactive search process for hidden threats in the organization's information system. It is a crucial component of active defense against advanced persistent threats (APTs). However, most of the current threat hunting methods rely on Cyber Threat Intelligence (CTI), which can find known attacks but cannot find unknown attacks that have not been disclosed by CTI. In this paper, we propose LogKernel, a threat hunting method based on graph kernel clustering which can effectively separate attack behaviour from benign activities. LogKernel first abstracts system audit logs into Behaviour Provenance Graphs (BPGs), and then clusters graphs by embedding them into a continuous space using a graph kernel. In particular, we design a new graph kernel clustering method based on the characteristics of BPGs, which can capture structure information and rich label information of the BPGs. To reduce false positives, LogKernel further quantifies the threat of abnormal behaviour. We evaluate LogKernel on the malicious dataset which includes seven simulated attack scenarios and the DARPA CADETS dataset which includes four attack scenarios. The results show that LogKernel can hunt all attack scenarios among them, and compared to the state-of-the-art methods, it can find unknown attacks.
[ { "created": "Thu, 18 Aug 2022 13:28:19 GMT", "version": "v1" } ]
2022-08-19
[ [ "Li", "Jiawei", "" ], [ "Zhang", "Ru", "" ], [ "Liu", "Jianyi", "" ], [ "Liu", "Gongshen", "" ] ]
Cyber threat hunting is a proactive search process for hidden threats in the organization's information system. It is a crucial component of active defense against advanced persistent threats (APTs). However, most of the current threat hunting methods rely on Cyber Threat Intelligence (CTI), which can find known attacks but cannot find unknown attacks that have not been disclosed by CTI. In this paper, we propose LogKernel, a threat hunting method based on graph kernel clustering which can effectively separate attack behaviour from benign activities. LogKernel first abstracts system audit logs into Behaviour Provenance Graphs (BPGs), and then clusters graphs by embedding them into a continuous space using a graph kernel. In particular, we design a new graph kernel clustering method based on the characteristics of BPGs, which can capture structure information and rich label information of the BPGs. To reduce false positives, LogKernel further quantifies the threat of abnormal behaviour. We evaluate LogKernel on the malicious dataset which includes seven simulated attack scenarios and the DARPA CADETS dataset which includes four attack scenarios. The results show that LogKernel can hunt all attack scenarios among them, and compared to the state-of-the-art methods, it can find unknown attacks.
0705.1364
Mustaq Ahmed
Mustaq Ahmed and Anna Lubiw
An Approximation Algorithm for Shortest Descending Paths
14 pages, 3 figures
null
null
CS-2007-14
cs.CG cs.DS
null
A path from s to t on a polyhedral terrain is descending if the height of a point p never increases while we move p along the path from s to t. No efficient algorithm is known to find a shortest descending path (SDP) from s to t in a polyhedral terrain. We give a simple approximation algorithm that solves the SDP problem on general terrains. Our algorithm discretizes the terrain with O(n^2 X / e) Steiner points so that after an O(n^2 X / e * log(n X /e))-time preprocessing phase for a given vertex s, we can determine a (1+e)-approximate SDP from s to any point v in O(n) time if v is either a vertex of the terrain or a Steiner point, and in O(n X /e) time otherwise. Here n is the size of the terrain, and X is a parameter of the geometry of the terrain.
[ { "created": "Wed, 9 May 2007 22:02:28 GMT", "version": "v1" } ]
2007-05-23
[ [ "Ahmed", "Mustaq", "" ], [ "Lubiw", "Anna", "" ] ]
A path from s to t on a polyhedral terrain is descending if the height of a point p never increases while we move p along the path from s to t. No efficient algorithm is known to find a shortest descending path (SDP) from s to t in a polyhedral terrain. We give a simple approximation algorithm that solves the SDP problem on general terrains. Our algorithm discretizes the terrain with O(n^2 X / e) Steiner points so that after an O(n^2 X / e * log(n X /e))-time preprocessing phase for a given vertex s, we can determine a (1+e)-approximate SDP from s to any point v in O(n) time if v is either a vertex of the terrain or a Steiner point, and in O(n X /e) time otherwise. Here n is the size of the terrain, and X is a parameter of the geometry of the terrain.
1906.07630
Ankur Kulkarni
Karan N. Chadha and Ankur A. Kulkarni
Aggregate Play and Welfare in Strategic Interactions on Networks
null
null
null
null
cs.GT math.CO math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent work by Bramoull\'{e} and Kranton, a model for the provision of public goods on a network was presented and relations between equilibria of such a game and properties of the network were established. This model was further extended to include games with imperfect substitutability in Bramoull\'{e} et al. The vast multiplicity of equilibria in such games, along with the drastic changes in equilibria with small changes in network structure, makes it challenging for a system planner to estimate the maximum social welfare of such a game or to devise interventions that enhance this welfare. Our main results address this challenge by providing close approximations to the maximum social welfare and the maximum aggregate play in terms of only network characteristics such as the maximum degree and independence number. For the special case when the underlying network is a tree, we derive formulae which use only the number of nodes and their degrees. These results allow a system planner to assess aggregate outcomes and design interventions for the game, directly from the underlying graph structure, without enumerating all equilibria of the game, thereby significantly simplifying the planner's problem. A part of our results can be viewed as a logical extension of [7] where the maximum weighted aggregate effort of the model in [2] was characterized as the weighted independence number of the graph.
[ { "created": "Tue, 18 Jun 2019 15:06:07 GMT", "version": "v1" } ]
2019-06-19
[ [ "Chadha", "Karan N.", "" ], [ "Kulkarni", "Ankur A.", "" ] ]
In recent work by Bramoull\'{e} and Kranton, a model for the provision of public goods on a network was presented and relations between equilibria of such a game and properties of the network were established. This model was further extended to include games with imperfect substitutability in Bramoull\'{e} et al. The vast multiplicity of equilibria in such games, along with the drastic changes in equilibria with small changes in network structure, makes it challenging for a system planner to estimate the maximum social welfare of such a game or to devise interventions that enhance this welfare. Our main results address this challenge by providing close approximations to the maximum social welfare and the maximum aggregate play in terms of only network characteristics such as the maximum degree and independence number. For the special case when the underlying network is a tree, we derive formulae which use only the number of nodes and their degrees. These results allow a system planner to assess aggregate outcomes and design interventions for the game, directly from the underlying graph structure, without enumerating all equilibria of the game, thereby significantly simplifying the planner's problem. A part of our results can be viewed as a logical extension of [7] where the maximum weighted aggregate effort of the model in [2] was characterized as the weighted independence number of the graph.
2206.07538
Javier Laplaza
Javier Laplaza, Joan Jaume Oliver, Ram\'on Romero, Alberto Sanfeliu and Ana\'is Garrell
Body Gesture Recognition to Control a Social Robot
null
null
null
null
cs.RO cs.CV cs.HC cs.LG
http://creativecommons.org/licenses/by/4.0/
In this work, we propose a gesture-based language to allow humans to interact with robots using their body in a natural way. We have created a new gesture detection model using neural networks and a custom dataset of humans performing a set of body gestures to train our network. Furthermore, we compare body gesture communication with other communication channels to acknowledge the importance of adding this knowledge to robots. The presented approach is extensively validated in diverse simulations and real-life experiments with non-trained volunteers, attaining remarkable results and showing that it is a valuable framework for social robotics applications, such as human-robot collaboration or human-robot interaction.
[ { "created": "Wed, 15 Jun 2022 13:49:22 GMT", "version": "v1" } ]
2022-06-16
[ [ "Laplaza", "Javier", "" ], [ "Oliver", "Joan Jaume", "" ], [ "Romero", "Ramón", "" ], [ "Sanfeliu", "Alberto", "" ], [ "Garrell", "Anaís", "" ] ]
In this work, we propose a gesture-based language to allow humans to interact with robots using their body in a natural way. We have created a new gesture detection model using neural networks and a custom dataset of humans performing a set of body gestures to train our network. Furthermore, we compare body gesture communication with other communication channels to acknowledge the importance of adding this knowledge to robots. The presented approach is extensively validated in diverse simulations and real-life experiments with non-trained volunteers, attaining remarkable results and showing that it is a valuable framework for social robotics applications, such as human-robot collaboration or human-robot interaction.
2004.11055
Alma Rahat PhD
Alma Rahat and Michael Wood
On Bayesian Search for the Feasible Space Under Computationally Expensive Constraints
Accepted at The Sixth International Conference on Machine Learning, Optimization, and Data Science. Main content 12 pages, a total of 19 pages with supplementary. 3 Figures and 2 tables. Python code for Bayesian search is available at: http://bitbucket.org/arahat/lod-2020
null
null
null
cs.LG cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are often interested in identifying the feasible subset of a decision space under multiple constraints to permit effective design exploration. If determining feasibility required computationally expensive simulations, the cost of exploration would be prohibitive. Bayesian search is data-efficient for such problems: starting from a small dataset, the central concept is to use Bayesian models of constraints with an acquisition function to locate promising solutions that may improve predictions of feasibility when the dataset is augmented. At the end of this sequential active learning approach with a limited number of expensive evaluations, the models can accurately predict the feasibility of any solution obviating the need for full simulations. In this paper, we propose a novel acquisition function that combines the probability that a solution lies at the boundary between feasible and infeasible spaces (representing exploitation) and the entropy in predictions (representing exploration). Experiments confirmed the efficacy of the proposed function.
[ { "created": "Thu, 23 Apr 2020 10:22:32 GMT", "version": "v1" }, { "created": "Wed, 24 Jun 2020 12:00:05 GMT", "version": "v2" } ]
2020-06-25
[ [ "Rahat", "Alma", "" ], [ "Wood", "Michael", "" ] ]
We are often interested in identifying the feasible subset of a decision space under multiple constraints to permit effective design exploration. If determining feasibility required computationally expensive simulations, the cost of exploration would be prohibitive. Bayesian search is data-efficient for such problems: starting from a small dataset, the central concept is to use Bayesian models of constraints with an acquisition function to locate promising solutions that may improve predictions of feasibility when the dataset is augmented. At the end of this sequential active learning approach with a limited number of expensive evaluations, the models can accurately predict the feasibility of any solution obviating the need for full simulations. In this paper, we propose a novel acquisition function that combines the probability that a solution lies at the boundary between feasible and infeasible spaces (representing exploitation) and the entropy in predictions (representing exploration). Experiments confirmed the efficacy of the proposed function.
2208.02235
Roman Orus
Raj Patel, Chia-Wei Hsing, Serkan Sahin, Saeed S. Jahromi, Samuel Palmer, Shivam Sharma, Christophe Michel, Vincent Porte, Mustafa Abid, Stephane Aubert, Pierre Castellani, Chi-Guhn Lee, Samuel Mugel, Roman Orus
Quantum-Inspired Tensor Neural Networks for Partial Differential Equations
14 pages, 11 figures, minimal changes
null
null
null
cs.LG cond-mat.str-el cs.AI physics.comp-ph quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Partial Differential Equations (PDEs) are used to model a variety of dynamical systems in science and engineering. Recent advances in deep learning have enabled us to solve them in a higher dimension by addressing the curse of dimensionality in new ways. However, deep learning methods are constrained by training time and memory. To tackle these shortcomings, we implement Tensor Neural Networks (TNN), a quantum-inspired neural network architecture that leverages Tensor Network ideas to improve upon deep learning approaches. We demonstrate that TNN provide significant parameter savings while attaining the same accuracy as compared to the classical Dense Neural Network (DNN). In addition, we also show how TNN can be trained faster than DNN for the same accuracy. We benchmark TNN by applying them to solve parabolic PDEs, specifically the Black-Scholes-Barenblatt equation, widely used in financial pricing theory, empirically showing the advantages of TNN over DNN. Further examples, such as the Hamilton-Jacobi-Bellman equation, are also discussed.
[ { "created": "Wed, 3 Aug 2022 17:41:11 GMT", "version": "v1" }, { "created": "Wed, 10 Aug 2022 08:07:10 GMT", "version": "v2" } ]
2022-08-11
[ [ "Patel", "Raj", "" ], [ "Hsing", "Chia-Wei", "" ], [ "Sahin", "Serkan", "" ], [ "Jahromi", "Saeed S.", "" ], [ "Palmer", "Samuel", "" ], [ "Sharma", "Shivam", "" ], [ "Michel", "Christophe", "" ], [ "Porte", "Vincent", "" ], [ "Abid", "Mustafa", "" ], [ "Aubert", "Stephane", "" ], [ "Castellani", "Pierre", "" ], [ "Lee", "Chi-Guhn", "" ], [ "Mugel", "Samuel", "" ], [ "Orus", "Roman", "" ] ]
Partial Differential Equations (PDEs) are used to model a variety of dynamical systems in science and engineering. Recent advances in deep learning have enabled us to solve them in a higher dimension by addressing the curse of dimensionality in new ways. However, deep learning methods are constrained by training time and memory. To tackle these shortcomings, we implement Tensor Neural Networks (TNN), a quantum-inspired neural network architecture that leverages Tensor Network ideas to improve upon deep learning approaches. We demonstrate that TNN provide significant parameter savings while attaining the same accuracy as compared to the classical Dense Neural Network (DNN). In addition, we also show how TNN can be trained faster than DNN for the same accuracy. We benchmark TNN by applying them to solve parabolic PDEs, specifically the Black-Scholes-Barenblatt equation, widely used in financial pricing theory, empirically showing the advantages of TNN over DNN. Further examples, such as the Hamilton-Jacobi-Bellman equation, are also discussed.
1905.05253
Alexander Kott
Michael J. De Lucia, Allison Newcomb, Alexander Kott
Features and Operation of an Autonomous Agent for Cyber Defense
null
CSIAC Journal, v.7, n.1, April 2019, pp.6-13
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An ever-increasing number of battlefield devices that are capable of collecting, processing, storing, and communicating information are rapidly becoming interconnected. The staggering number of connected devices on the battlefield greatly increases the possibility that an adversary could find ways to exploit hardware or software vulnerabilities, degrading or denying Warfighters the assured and secure use of those devices. Autonomous software agents will become necessities to manage, defend, and react to cyber threats in the future battlespace. The number of connected devices increases disproportionately to the number of cyber experts that could be available within an operational environment. In this paper, an autonomous agent capability and a scenario of how it could operate are proposed. The goal of developing such capability is to increase the security posture of the Internet of Battlefield Things and meet the challenges of an increasingly complex battlefield. This paper describes an illustrative scenario in a notional use case and discusses the challenges associated with such autonomous agents. We conclude by offering ideas for potential research into developing autonomous agents suitable for cyber defense in a battlefield environment.
[ { "created": "Mon, 13 May 2019 19:18:25 GMT", "version": "v1" } ]
2019-05-15
[ [ "De Lucia", "Michael J.", "" ], [ "Newcomb", "Allison", "" ], [ "Kott", "Alexander", "" ] ]
An ever-increasing number of battlefield devices that are capable of collecting, processing, storing, and communicating information are rapidly becoming interconnected. The staggering number of connected devices on the battlefield greatly increases the possibility that an adversary could find ways to exploit hardware or software vulnerabilities, degrading or denying Warfighters the assured and secure use of those devices. Autonomous software agents will become necessities to manage, defend, and react to cyber threats in the future battlespace. The number of connected devices increases disproportionately to the number of cyber experts that could be available within an operational environment. In this paper, an autonomous agent capability and a scenario of how it could operate are proposed. The goal of developing such capability is to increase the security posture of the Internet of Battlefield Things and meet the challenges of an increasingly complex battlefield. This paper describes an illustrative scenario in a notional use case and discusses the challenges associated with such autonomous agents. We conclude by offering ideas for potential research into developing autonomous agents suitable for cyber defense in a battlefield environment.
2306.01072
Niloy Saha
Niloy Saha, Nashid Shahriar, Raouf Boutaba and Aladdin Saleh
MonArch: Network Slice Monitoring Architecture for Cloud Native 5G Deployments
Accepted at IEEE/IFIP NOMS 2023
null
null
null
cs.NI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Automated decision making algorithms are expected to play a key role in management and orchestration of network slices in 5G and beyond networks. State-of-the-art algorithms for automated orchestration and management tend to rely on data-driven methods which require a timely and accurate view of the network. Accurately monitoring an end-to-end (E2E) network slice requires a scalable monitoring architecture that facilitates collection and correlation of data from various network segments comprising the slice. The state-of-the-art on 5G monitoring mostly focuses on scalability, falling short in providing explicit support for network slicing and computing network slice key performance indicators (KPIs). To fill this gap, in this paper, we present MonArch, a scalable monitoring architecture for 5G, which focuses on network slice monitoring, slice KPI computation, and an application programming interface (API) for specifying slice monitoring requests. We validate the proposed architecture by implementing MonArch on a 5G testbed, and demonstrate its capability to compute a network slice KPI (e.g., slice throughput). Our evaluations show that MonArch does not significantly increase data ingestion time when scaling the number of slices and that a 5-second monitoring interval offers a good balance between monitoring overhead and accuracy.
[ { "created": "Thu, 1 Jun 2023 18:19:12 GMT", "version": "v1" } ]
2023-06-05
[ [ "Saha", "Niloy", "" ], [ "Shahriar", "Nashid", "" ], [ "Boutaba", "Raouf", "" ], [ "Saleh", "Aladdin", "" ] ]
Automated decision making algorithms are expected to play a key role in management and orchestration of network slices in 5G and beyond networks. State-of-the-art algorithms for automated orchestration and management tend to rely on data-driven methods which require a timely and accurate view of the network. Accurately monitoring an end-to-end (E2E) network slice requires a scalable monitoring architecture that facilitates collection and correlation of data from various network segments comprising the slice. The state-of-the-art on 5G monitoring mostly focuses on scalability, falling short in providing explicit support for network slicing and computing network slice key performance indicators (KPIs). To fill this gap, in this paper, we present MonArch, a scalable monitoring architecture for 5G, which focuses on network slice monitoring, slice KPI computation, and an application programming interface (API) for specifying slice monitoring requests. We validate the proposed architecture by implementing MonArch on a 5G testbed, and demonstrate its capability to compute a network slice KPI (e.g., slice throughput). Our evaluations show that MonArch does not significantly increase data ingestion time when scaling the number of slices and that a 5-second monitoring interval offers a good balance between monitoring overhead and accuracy.
2201.01688
Daniel Diethei
Daniel Diethei, Ashley Colley, Julian Wienert, Johannes Sch\"oning
Different Length, Different Needs: Qualitative Analysis of Threads in Online Health Communities
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online health communities provide a knowledge exchange platform for a wide range of diseases and health conditions. Informational and emotional support helps forum participants orient around health issues beyond in-person doctor visits. So far, little is known about the relation between the level of participation and participants' contributions in online health communities. To gain insights on the issue, we analyzed 456 posts in 56 threads from the Dermatology sub-forum of an online health community. While low participation threads (short threads) revolved around solving an individual's health issue through diagnosis suggestions and medical advice, participants in high participation threads (long threads) built collective knowledge and a sense of community, typically discussing chronic and rare conditions that medical professionals were unfamiliar with or could not treat effectively. Our results suggest that in short threads an individual's health issue is addressed, while in long threads, sub-communities about specific rare and chronic diseases emerge. This has implications for the user interface design of health forums, which could be developed to better support community building elements, even in short threads.
[ { "created": "Wed, 5 Jan 2022 16:32:28 GMT", "version": "v1" }, { "created": "Thu, 20 Jan 2022 13:28:25 GMT", "version": "v2" } ]
2022-01-21
[ [ "Diethei", "Daniel", "" ], [ "Colley", "Ashley", "" ], [ "Wienert", "Julian", "" ], [ "Schöning", "Johannes", "" ] ]
Online health communities provide a knowledge exchange platform for a wide range of diseases and health conditions. Informational and emotional support helps forum participants orient around health issues beyond in-person doctor visits. So far, little is known about the relation between the level of participation and participants' contributions in online health communities. To gain insights on the issue, we analyzed 456 posts in 56 threads from the Dermatology sub-forum of an online health community. While low participation threads (short threads) revolved around solving an individual's health issue through diagnosis suggestions and medical advice, participants in high participation threads (long threads) built collective knowledge and a sense of community, typically discussing chronic and rare conditions that medical professionals were unfamiliar with or could not treat effectively. Our results suggest that in short threads an individual's health issue is addressed, while in long threads, sub-communities about specific rare and chronic diseases emerge. This has implications for the user interface design of health forums, which could be developed to better support community building elements, even in short threads.
2407.14695
Alejandro Leonardo Garc\'ia Navarro
Alejandro L. Garc\'ia Navarro, Nataliia Koneva, Alfonso S\'anchez-Maci\'an, Jos\'e Alberto Hern\'andez
A Comprehensive Guide to Combining R and Python code for Data Science, Machine Learning and Reinforcement Learning
null
null
null
null
cs.LG cs.PL
http://creativecommons.org/licenses/by/4.0/
Python has gained widespread popularity in the fields of machine learning, artificial intelligence, and data engineering due to its effectiveness and extensive libraries. R, for its part, remains a dominant language for statistical analysis and visualization. However, certain libraries have become outdated, limiting their functionality and performance. By combining these two programming languages, users can leverage Python's advanced machine learning and AI capabilities alongside R's robust statistical packages. This paper explores using R's reticulate package to call Python from R, providing practical examples and highlighting scenarios where this integration enhances productivity and analytical capabilities. With a few hello-world code snippets, we demonstrate how to run Python's scikit-learn, PyTorch, and OpenAI Gym libraries for easily building Machine Learning, Deep Learning, and Reinforcement Learning projects.
[ { "created": "Fri, 19 Jul 2024 23:01:48 GMT", "version": "v1" } ]
2024-07-23
[ [ "Navarro", "Alejandro L. García", "" ], [ "Koneva", "Nataliia", "" ], [ "Sánchez-Macián", "Alfonso", "" ], [ "Hernández", "José Alberto", "" ] ]
Python has gained widespread popularity in the fields of machine learning, artificial intelligence, and data engineering due to its effectiveness and extensive libraries. R, for its part, remains a dominant language for statistical analysis and visualization. However, certain libraries have become outdated, limiting their functionality and performance. By combining these two programming languages, users can leverage Python's advanced machine learning and AI capabilities alongside R's robust statistical packages. This paper explores using R's reticulate package to call Python from R, providing practical examples and highlighting scenarios where this integration enhances productivity and analytical capabilities. With a few hello-world code snippets, we demonstrate how to run Python's scikit-learn, PyTorch, and OpenAI Gym libraries for easily building Machine Learning, Deep Learning, and Reinforcement Learning projects.
2007.03988
Quanming Yao
Yu Liu and Quanming Yao and Yong Li
Generalizing Tensor Decomposition for N-ary Relational Knowledge Bases
WWW 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid development of knowledge bases (KBs), the link prediction task, which completes KBs with missing facts, has been broadly studied, especially in binary relational KBs (a.k.a. knowledge graphs), with powerful tensor decomposition related methods. However, the ubiquitous n-ary relational KBs, which contain higher-arity relational facts, have received less attention; existing translation-based and neural-network-based approaches have weak expressiveness and high complexity in modeling various relations. Tensor decomposition has not been considered for n-ary relational KBs, while directly extending tensor decomposition related methods from binary relational KBs to the n-ary case does not yield satisfactory results due to exponential model complexity and their strong assumptions on binary relations. To generalize tensor decomposition for n-ary relational KBs, in this work, we propose GETD, a generalized model based on Tucker decomposition and Tensor Ring decomposition. The existing negative sampling technique is also generalized to the n-ary case for GETD. In addition, we theoretically prove that GETD is fully expressive and can completely represent any KB. Extensive evaluations on two representative n-ary relational KB datasets demonstrate the superior performance of GETD, significantly improving over the state-of-the-art methods by more than 15\%. Moreover, GETD further obtains state-of-the-art results on the benchmark binary relational KB datasets.
[ { "created": "Wed, 8 Jul 2020 09:49:38 GMT", "version": "v1" } ]
2020-07-09
[ [ "Liu", "Yu", "" ], [ "Yao", "Quanming", "" ], [ "Li", "Yong", "" ] ]
With the rapid development of knowledge bases (KBs), the link prediction task, which completes KBs with missing facts, has been broadly studied, especially in binary relational KBs (a.k.a. knowledge graphs), with powerful tensor decomposition related methods. However, the ubiquitous n-ary relational KBs, which contain higher-arity relational facts, have received less attention; existing translation-based and neural-network-based approaches have weak expressiveness and high complexity in modeling various relations. Tensor decomposition has not been considered for n-ary relational KBs, while directly extending tensor decomposition related methods from binary relational KBs to the n-ary case does not yield satisfactory results due to exponential model complexity and their strong assumptions on binary relations. To generalize tensor decomposition for n-ary relational KBs, in this work, we propose GETD, a generalized model based on Tucker decomposition and Tensor Ring decomposition. The existing negative sampling technique is also generalized to the n-ary case for GETD. In addition, we theoretically prove that GETD is fully expressive and can completely represent any KB. Extensive evaluations on two representative n-ary relational KB datasets demonstrate the superior performance of GETD, significantly improving over the state-of-the-art methods by more than 15\%. Moreover, GETD further obtains state-of-the-art results on the benchmark binary relational KB datasets.
cs/0403041
Mingsheng Ying
Mingsheng Ying
A Theory of Computation Based on Quantum Logic (I)
null
Theoretical Computer Science 344(2-3): 134-207 (2005)
null
null
cs.LO
null
The (meta)logic underlying the classical theory of computation is Boolean (two-valued) logic. Quantum logic was proposed by Birkhoff and von Neumann as a logic of quantum mechanics more than sixty years ago. The major difference between Boolean logic and quantum logic is that the latter does not enjoy distributivity in general. The rapid development of quantum computation in recent years stimulates us to establish a theory of computation based on quantum logic. The present paper is the first step toward such a new theory, and it focuses on the simplest models of computation, namely finite automata. It is found that the universal validity of many properties of automata depends heavily upon the distributivity of the underlying logic. This indicates that these properties do not universally hold in the realm of quantum logic. On the other hand, we show that their local validity can be recovered by imposing a certain commutativity on the (atomic) statements about the automata under consideration. This reveals an essential difference between the classical theory of computation and the computation theory based on quantum logic.
[ { "created": "Mon, 29 Mar 2004 15:20:32 GMT", "version": "v1" } ]
2013-04-02
[ [ "Ying", "Mingsheng", "" ] ]
The (meta)logic underlying the classical theory of computation is Boolean (two-valued) logic. Quantum logic was proposed by Birkhoff and von Neumann as a logic of quantum mechanics more than sixty years ago. The major difference between Boolean logic and quantum logic is that the latter does not enjoy distributivity in general. The rapid development of quantum computation in recent years stimulates us to establish a theory of computation based on quantum logic. The present paper is the first step toward such a new theory, and it focuses on the simplest models of computation, namely finite automata. It is found that the universal validity of many properties of automata depends heavily upon the distributivity of the underlying logic. This indicates that these properties do not universally hold in the realm of quantum logic. On the other hand, we show that their local validity can be recovered by imposing a certain commutativity on the (atomic) statements about the automata under consideration. This reveals an essential difference between the classical theory of computation and the computation theory based on quantum logic.
2010.05514
Luca Bedogni
Luca Bedogni, Shakila Khan Rumi, Flora Salim
Modelling Memory for Individual Re-identification in Decentralised Mobile Contact Tracing Applications
null
null
null
null
cs.CY cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In 2020 the coronavirus outbreak changed the lives of people worldwide. After an initial period in which it was unclear how to battle the virus, social distancing was recognised globally as an effective method to mitigate the spread of the disease. This called for technological tools such as Mobile Contact Tracing Applications (MCTA), which digitally trace contacts among people; when a positive case is found, people who had been in contact and have the application installed are notified. De-centralised MCTA may suffer from a novel kind of privacy attack, based on human memory, in which a user who receives a notification can identify the positive individual responsible for it. Our results show that it is indeed possible to identify positive people among the contacts of an individual, and this is even easier when the sociability of the positive individual is low. In practice, our simulation results show that identification can be made with an accuracy of more than 90% depending on the scenario. We also provide three mitigation strategies which can be implemented in de-centralised MCTA and analyse which of the three is most effective in limiting this novel kind of attack.
[ { "created": "Mon, 12 Oct 2020 08:10:54 GMT", "version": "v1" }, { "created": "Fri, 13 Nov 2020 08:59:54 GMT", "version": "v2" } ]
2020-11-16
[ [ "Bedogni", "Luca", "" ], [ "Rumi", "Shakila Khan", "" ], [ "Salim", "Flora", "" ] ]
In 2020 the coronavirus outbreak changed the lives of people worldwide. After an initial period in which it was unclear how to battle the virus, social distancing was recognised globally as an effective method to mitigate the spread of the disease. This called for technological tools such as Mobile Contact Tracing Applications (MCTA), which digitally trace contacts among people; when a positive case is found, people who had been in contact and have the application installed are notified. De-centralised MCTA may suffer from a novel kind of privacy attack, based on human memory, in which a user who receives a notification can identify the positive individual responsible for it. Our results show that it is indeed possible to identify positive people among the contacts of an individual, and this is even easier when the sociability of the positive individual is low. In practice, our simulation results show that identification can be made with an accuracy of more than 90% depending on the scenario. We also provide three mitigation strategies which can be implemented in de-centralised MCTA and analyse which of the three is most effective in limiting this novel kind of attack.
0909.1870
David Eppstein
David Eppstein
Paired approximation problems and incompatible inapproximabilities
13 pages, 3 figures. To appear at 21st ACM-SIAM Symp. Discrete Algorithms (SODA 2010)
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers pairs of optimization problems that are defined from a single input and for which it is desired to find a good approximation to either one of the problems. In many instances, it is possible to efficiently find an approximation of this type that is better than known inapproximability lower bounds for either of the two individual optimization problems forming the pair. In particular, we find either a $(1+\epsilon)$-approximation to $(1,2)$-TSP or a $1/\epsilon$-approximation to maximum independent set, from a given graph, in linear time. We show a similar paired approximation result for finding either a coloring or a long path. However, no such tradeoff exists in some other cases: for set cover and hitting set problems defined from a single set family, and for clique and independent set problems on the same graph, it is not possible to find an approximation when both problems are combined that is better than the best approximation for either problem on its own.
[ { "created": "Thu, 10 Sep 2009 05:23:43 GMT", "version": "v1" } ]
2009-09-11
[ [ "Eppstein", "David", "" ] ]
This paper considers pairs of optimization problems that are defined from a single input and for which it is desired to find a good approximation to either one of the problems. In many instances, it is possible to efficiently find an approximation of this type that is better than known inapproximability lower bounds for either of the two individual optimization problems forming the pair. In particular, we find either a $(1+\epsilon)$-approximation to $(1,2)$-TSP or a $1/\epsilon$-approximation to maximum independent set, from a given graph, in linear time. We show a similar paired approximation result for finding either a coloring or a long path. However, no such tradeoff exists in some other cases: for set cover and hitting set problems defined from a single set family, and for clique and independent set problems on the same graph, it is not possible to find an approximation when both problems are combined that is better than the best approximation for either problem on its own.
2112.01917
Guillermo Ortiz-Jimenez
Gizem Y\"uce, Guillermo Ortiz-Jim\'enez, Beril Besbinar, Pascal Frossard
A Structured Dictionary Perspective on Implicit Neural Representations
Accepted to IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022 (26 pages, 16 figures)
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Implicit neural representations (INRs) have recently emerged as a promising alternative to classical discretized representations of signals. Nevertheless, despite their practical success, we still do not understand how INRs represent signals. We propose a novel unified perspective to theoretically analyse INRs. Leveraging results from harmonic analysis and deep learning theory, we show that most INR families are analogous to structured signal dictionaries whose atoms are integer harmonics of the set of initial mapping frequencies. This structure allows INRs to express signals with an exponentially increasing frequency support using a number of parameters that only grows linearly with depth. We also explore the inductive bias of INRs exploiting recent results about the empirical neural tangent kernel (NTK). Specifically, we show that the eigenfunctions of the NTK can be seen as dictionary atoms whose inner product with the target signal determines the final performance of their reconstruction. In this regard, we reveal that meta-learning has a reshaping effect on the NTK analogous to dictionary learning, building dictionary atoms as a combination of the examples seen during meta-training. Our results permit to design and tune novel INR architectures, but can also be of interest for the wider deep learning theory community.
[ { "created": "Fri, 3 Dec 2021 14:00:52 GMT", "version": "v1" }, { "created": "Fri, 25 Mar 2022 16:03:32 GMT", "version": "v2" } ]
2022-03-28
[ [ "Yüce", "Gizem", "" ], [ "Ortiz-Jiménez", "Guillermo", "" ], [ "Besbinar", "Beril", "" ], [ "Frossard", "Pascal", "" ] ]
Implicit neural representations (INRs) have recently emerged as a promising alternative to classical discretized representations of signals. Nevertheless, despite their practical success, we still do not understand how INRs represent signals. We propose a novel unified perspective to theoretically analyse INRs. Leveraging results from harmonic analysis and deep learning theory, we show that most INR families are analogous to structured signal dictionaries whose atoms are integer harmonics of the set of initial mapping frequencies. This structure allows INRs to express signals with an exponentially increasing frequency support using a number of parameters that only grows linearly with depth. We also explore the inductive bias of INRs exploiting recent results about the empirical neural tangent kernel (NTK). Specifically, we show that the eigenfunctions of the NTK can be seen as dictionary atoms whose inner product with the target signal determines the final performance of their reconstruction. In this regard, we reveal that meta-learning has a reshaping effect on the NTK analogous to dictionary learning, building dictionary atoms as a combination of the examples seen during meta-training. Our results permit to design and tune novel INR architectures, but can also be of interest for the wider deep learning theory community.
2401.02081
Tianchen Liu
Tianchen Liu, Liang Wu, Bo An, Zaichen Zhang, Jian Dang and Jiangzhou Wang
Performance Trade-off and Joint Waveform Design for MIMO-OFDM DFRC Systems
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dual-functional radar-communication (DFRC) has attracted considerable attention. This paper considers the frequency-selective multipath fading environment and proposes DFRC waveform design strategies based on multiple-input and multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) techniques. In the proposed waveform design strategies, the Cramer-Rao bound (CRB) of the radar system, the inter-stream interference (ISI) and the achievable rate of the communication system, are respectively considered as the performance metrics. In this paper, we focus on the performance trade-off between the radar system and the communication system, and the optimization problems are formulated. In the ISI minimization based waveform design strategy, the optimization problem is convex and can be easily solved. In the achievable rate maximization based waveform design strategy, we propose a water-filling (WF) and sequential quadratic programming (SQP) based algorithm to derive the covariance matrix and the precoding matrix. Simulation results validate the proposed DFRC waveform designs and show that the achievable rate maximization based strategy has a better performance than the ISI minimization based strategy.
[ { "created": "Thu, 4 Jan 2024 06:05:29 GMT", "version": "v1" } ]
2024-01-05
[ [ "Liu", "Tianchen", "" ], [ "Wu", "Liang", "" ], [ "An", "Bo", "" ], [ "Zhang", "Zaichen", "" ], [ "Dang", "Jian", "" ], [ "Wang", "Jiangzhou", "" ] ]
Dual-functional radar-communication (DFRC) has attracted considerable attention. This paper considers the frequency-selective multipath fading environment and proposes DFRC waveform design strategies based on multiple-input and multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) techniques. In the proposed waveform design strategies, the Cramer-Rao bound (CRB) of the radar system, the inter-stream interference (ISI) and the achievable rate of the communication system, are respectively considered as the performance metrics. In this paper, we focus on the performance trade-off between the radar system and the communication system, and the optimization problems are formulated. In the ISI minimization based waveform design strategy, the optimization problem is convex and can be easily solved. In the achievable rate maximization based waveform design strategy, we propose a water-filling (WF) and sequential quadratic programming (SQP) based algorithm to derive the covariance matrix and the precoding matrix. Simulation results validate the proposed DFRC waveform designs and show that the achievable rate maximization based strategy has a better performance than the ISI minimization based strategy.
1304.2683
Nan Yao
Yao Nan, Qian Feng and Sun Zuolei
Image Classification by Feature Dimension Reduction and Graph based Ranking
4 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dimensionality reduction (DR) of image features plays an important role in image retrieval and classification tasks. Recently, two types of methods have been proposed to improve both the accuracy and the efficiency of dimensionality reduction. One uses Non-negative Matrix Factorization (NMF) to describe the image distribution in the space of the base matrix. The other trains a subspace projection matrix to project the original data space into low-dimensional subspaces with a deep architecture, so that low-dimensional codes are learned. At the same time, graph-based similarity learning algorithms, which try to exploit contextual information to improve the effectiveness of image rankings, have also been proposed for image classification and retrieval problems. In this paper, after the two methods mentioned above are used to reduce the high-dimensional features of images, we learn a graph-based similarity for the image classification problem. This paper compares the proposed approach with other approaches on an image database.
[ { "created": "Tue, 9 Apr 2013 18:11:08 GMT", "version": "v1" } ]
2013-04-10
[ [ "Nan", "Yao", "" ], [ "Feng", "Qian", "" ], [ "Zuolei", "Sun", "" ] ]
Dimensionality reduction (DR) of image features plays an important role in image retrieval and classification tasks. Recently, two types of methods have been proposed to improve both the accuracy and the efficiency of dimensionality reduction. One uses Non-negative Matrix Factorization (NMF) to describe the image distribution in the space of the base matrix. The other trains a subspace projection matrix to project the original data space into low-dimensional subspaces with a deep architecture, so that low-dimensional codes are learned. At the same time, graph-based similarity learning algorithms, which try to exploit contextual information to improve the effectiveness of image rankings, have also been proposed for image classification and retrieval problems. In this paper, after the two methods mentioned above are used to reduce the high-dimensional features of images, we learn a graph-based similarity for the image classification problem. This paper compares the proposed approach with other approaches on an image database.
2002.06714
Qiang Wang
Qiang Wang, Fuxue Li, Tong Xiao, Yanyang Li, Yinqiao Li, Jingbo Zhu
Multi-layer Representation Fusion for Neural Machine Translation
COLING 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural machine translation systems require a number of stacked layers for deep models. But the prediction depends on the sentence representation of the top-most layer, with no access to low-level representations. This makes the model more difficult to train and poses a risk of information loss for prediction. In this paper, we propose a multi-layer representation fusion (MLRF) approach to fusing stacked layers. In particular, we design three fusion functions to learn a better representation from the stack. Experimental results show that our approach yields improvements of 0.92 and 0.56 BLEU points over the strong Transformer baseline on the IWSLT German-English and NIST Chinese-English MT tasks, respectively. The result is a new state-of-the-art in German-English translation.
[ { "created": "Sun, 16 Feb 2020 23:53:07 GMT", "version": "v1" } ]
2020-02-18
[ [ "Wang", "Qiang", "" ], [ "Li", "Fuxue", "" ], [ "Xiao", "Tong", "" ], [ "Li", "Yanyang", "" ], [ "Li", "Yinqiao", "" ], [ "Zhu", "Jingbo", "" ] ]
Neural machine translation systems require a number of stacked layers for deep models. But the prediction depends on the sentence representation of the top-most layer, with no access to low-level representations. This makes the model more difficult to train and poses a risk of information loss for prediction. In this paper, we propose a multi-layer representation fusion (MLRF) approach to fusing stacked layers. In particular, we design three fusion functions to learn a better representation from the stack. Experimental results show that our approach yields improvements of 0.92 and 0.56 BLEU points over the strong Transformer baseline on the IWSLT German-English and NIST Chinese-English MT tasks, respectively. The result is a new state-of-the-art in German-English translation.
2206.00253
Ning Luo
Ning Luo and Linlin Zhang
Intelligent UNIT LEVEL TEST Generator for Enhanced Software Quality
10 pages, 6 figures
8th International Conference on Software Engineering (SEC 2022)
null
null
cs.SE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Unit level test has been widely recognized as an important approach to improve software quality, as it can expose bugs earlier during the development phase. However, manual unit level test development is often tedious and insufficient. Also, it is hard for developers to precisely identify the most error-prone code blocks deserving the best test coverage by themselves. In this paper, we present the automatic unit level test framework we used for Intel media driver development. It can help us identify the most critical code blocks, provide test coverage recommendations, and automatically generate >80% of the ULT code (~400K lines of test code) as well as ~35% of the test cases (~7K test cases) for the Intel media driver. It helps us to greatly shrink the average ULT development effort from ~24 man-hours to ~3 man-hours per 1000 lines of driver source code.
[ { "created": "Wed, 1 Jun 2022 06:33:48 GMT", "version": "v1" } ]
2022-06-02
[ [ "Luo", "Ning", "" ], [ "Zhang", "Linlin", "" ] ]
Unit level test has been widely recognized as an important approach to improve software quality, as it can expose bugs earlier during the development phase. However, manual unit level test development is often tedious and insufficient. Also, it is hard for developers to precisely identify the most error-prone code blocks deserving the best test coverage by themselves. In this paper, we present the automatic unit level test framework we used for Intel media driver development. It can help us identify the most critical code blocks, provide test coverage recommendations, and automatically generate >80% of the ULT code (~400K lines of test code) as well as ~35% of the test cases (~7K test cases) for the Intel media driver. It helps us to greatly shrink the average ULT development effort from ~24 man-hours to ~3 man-hours per 1000 lines of driver source code.
1907.09695
Shivangi Srivastava
Shivangi Srivastava, Maxim Berman, Matthew B. Blaschko, Devis Tuia
Adaptive Compression-based Lifelong Learning
Accepted at BMVC 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of a deep learning model losing performance on a previously learned task when fine-tuned to a new one is a phenomenon known as catastrophic forgetting. There are two major ways to mitigate this problem: either preserving the activations of the initial network during training on a new task, or restricting the new network activations to remain close to the initial ones. The latter approach falls under the denomination of lifelong learning, where the model is updated in a way that it performs well on both old and new tasks, without having access to the old task's training samples anymore. Recently, approaches such as pruning networks to free network capacity during sequential learning of tasks have been gaining in popularity. Such approaches allow learning small networks while making redundant parameters available for the next tasks. The common problem encountered with these approaches is that the pruning percentage is hard-coded, irrespective of the number of samples, the complexity of the learning task, and the number of classes in the dataset. We propose a method based on Bayesian optimization to perform adaptive compression/pruning of the network and show its effectiveness in lifelong learning. Our method learns to perform heavy pruning for small and/or simple datasets while using milder compression rates for large and/or complex data. Experiments on classification and semantic segmentation demonstrate the applicability of learning network compression, where we are able to effectively preserve performance along sequences of tasks of varying complexity.
[ { "created": "Tue, 23 Jul 2019 04:58:52 GMT", "version": "v1" } ]
2019-07-24
[ [ "Srivastava", "Shivangi", "" ], [ "Berman", "Maxim", "" ], [ "Blaschko", "Matthew B.", "" ], [ "Tuia", "Devis", "" ] ]
The problem of a deep learning model losing performance on a previously learned task when fine-tuned to a new one is a phenomenon known as catastrophic forgetting. There are two major ways to mitigate this problem: either preserving the activations of the initial network during training on a new task, or restricting the new network activations to remain close to the initial ones. The latter approach falls under the denomination of lifelong learning, where the model is updated in a way that it performs well on both old and new tasks, without having access to the old task's training samples anymore. Recently, approaches such as pruning networks to free network capacity during sequential learning of tasks have been gaining in popularity. Such approaches allow learning small networks while making redundant parameters available for the next tasks. The common problem encountered with these approaches is that the pruning percentage is hard-coded, irrespective of the number of samples, the complexity of the learning task, and the number of classes in the dataset. We propose a method based on Bayesian optimization to perform adaptive compression/pruning of the network and show its effectiveness in lifelong learning. Our method learns to perform heavy pruning for small and/or simple datasets while using milder compression rates for large and/or complex data. Experiments on classification and semantic segmentation demonstrate the applicability of learning network compression, where we are able to effectively preserve performance along sequences of tasks of varying complexity.
2103.03011
Chen-Huan Pi
Chen-Huan Pi, Kai-Chun Hu, Yu-Ting Huang, Stone Cheng
Reinforcement Learning Trajectory Generation and Control for Aggressive Perching on Vertical Walls with Quadrotors
null
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Micro aerial vehicles are widely researched and employed due to their relatively low operating costs and high flexibility in various applications. We study the under-actuated quadrotor perching problem, designing a trajectory planner and controller that generate feasible trajectories and drive the quadrotor to a desired state in state space. This paper proposes a trajectory generation and tracking method for quadrotor perching that combines the advantages of a reinforcement learning controller and a traditional controller. The trained low-level reinforcement learning controller steers the quadrotor toward the perching point in a simulation environment. Once the simulated quadrotor has successfully perched, the relative trajectory information from simulation is sent to the tracking controller on the real quadrotor to start the actual perching task. Generating feasible trajectories via the trained reinforcement learning controller requires less time, and the traditional trajectory tracking controller can easily be modified to control the quadrotor while allowing mathematical analysis of its stability and robustness. We show that this structure of trajectories and controllers enables aggressive maneuvers such as perching on vertical surfaces with high precision.
[ { "created": "Thu, 4 Mar 2021 13:20:05 GMT", "version": "v1" } ]
2021-03-05
[ [ "Pi", "Chen-Huan", "" ], [ "Hu", "Kai-Chun", "" ], [ "Huang", "Yu-Ting", "" ], [ "Cheng", "Stone", "" ] ]
Micro aerial vehicles are widely researched and employed due to their relatively low operating costs and high flexibility in various applications. We study the under-actuated quadrotor perching problem, designing a trajectory planner and controller that generate feasible trajectories and drive the quadrotor to a desired state in state space. This paper proposes a trajectory generation and tracking method for quadrotor perching that combines the advantages of a reinforcement learning controller and a traditional controller. The trained low-level reinforcement learning controller steers the quadrotor toward the perching point in a simulation environment. Once the simulated quadrotor has successfully perched, the relative trajectory information from simulation is sent to the tracking controller on the real quadrotor to start the actual perching task. Generating feasible trajectories via the trained reinforcement learning controller requires less time, and the traditional trajectory tracking controller can easily be modified to control the quadrotor while allowing mathematical analysis of its stability and robustness. We show that this structure of trajectories and controllers enables aggressive maneuvers such as perching on vertical surfaces with high precision.
2401.16937
Magnus Andersson
Saqib Qamar, Abu Imran Baba, St\'ephane Verger, Magnus Andersson
Segmentation and Characterization of Macerated Fibers and Vessels Using Deep Learning
7 figures
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Wood comprises different cell types, such as fibers, tracheids and vessels, which define its properties. Studying the shape, size, and arrangement of cells in microscopy images is crucial for understanding wood characteristics. Typically, this involves macerating (soaking) samples in a solution to separate the cells, then spreading them on slides for imaging with a microscope that covers a wide area, capturing thousands of cells. However, these cells often cluster and overlap in images, making segmentation difficult and time-consuming using standard image-processing methods. In this work, we developed an automatic deep learning segmentation approach that utilizes the one-stage YOLOv8 model for fast and accurate segmentation and characterization of macerated fibers and vessels from aspen trees in microscopy images. The model can analyze 32,640 x 25,920 pixel images and demonstrates effective cell detection and segmentation, achieving a mAP_{0.5-0.95} of 78%. To assess the model's robustness, we examined fibers from a genetically modified tree line known for longer fibers. The outcomes were comparable to previous manual measurements. Additionally, we created a user-friendly web application for image analysis and provided the code for use on Google Colab. By leveraging YOLOv8's advances, this work provides a deep learning solution that enables efficient quantification and analysis of wood cells suitable for practical applications.
[ { "created": "Tue, 30 Jan 2024 12:04:56 GMT", "version": "v1" }, { "created": "Tue, 18 Jun 2024 11:02:49 GMT", "version": "v2" } ]
2024-06-19
[ [ "Qamar", "Saqib", "" ], [ "Baba", "Abu Imran", "" ], [ "Verger", "Stéphane", "" ], [ "Andersson", "Magnus", "" ] ]
Wood comprises different cell types, such as fibers, tracheids and vessels, which define its properties. Studying the shape, size, and arrangement of cells in microscopy images is crucial for understanding wood characteristics. Typically, this involves macerating (soaking) samples in a solution to separate the cells, then spreading them on slides for imaging with a microscope that covers a wide area, capturing thousands of cells. However, these cells often cluster and overlap in images, making segmentation difficult and time-consuming using standard image-processing methods. In this work, we developed an automatic deep learning segmentation approach that utilizes the one-stage YOLOv8 model for fast and accurate segmentation and characterization of macerated fibers and vessels from aspen trees in microscopy images. The model can analyze 32,640 x 25,920 pixel images and demonstrates effective cell detection and segmentation, achieving a mAP_{0.5-0.95} of 78%. To assess the model's robustness, we examined fibers from a genetically modified tree line known for longer fibers. The outcomes were comparable to previous manual measurements. Additionally, we created a user-friendly web application for image analysis and provided the code for use on Google Colab. By leveraging YOLOv8's advances, this work provides a deep learning solution that enables efficient quantification and analysis of wood cells suitable for practical applications.
2212.02941
Shamil Mamedov
Shamil Mamedov, Rudolf Reiter, Seyed Mahdi Basiri Azad, Ruan Viljoen, Joschka Boedecker, Moritz Diehl, Jan Swevers
Safe Imitation Learning of Nonlinear Model Predictive Control for Flexible Robots
Accepted to IROS 2024
null
null
null
cs.RO cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Flexible robots may overcome some of the industry's major challenges, such as enabling intrinsically safe human-robot collaboration and achieving a higher payload-to-mass ratio. However, controlling flexible robots is complicated due to their complex dynamics, which include oscillatory behavior and a high-dimensional state space. Nonlinear model predictive control (NMPC) offers an effective means to control such robots, but its significant computational demand often limits its application in real-time scenarios. To enable fast control of flexible robots, we propose a framework for a safe approximation of NMPC using imitation learning and a predictive safety filter. Our framework significantly reduces computation time while incurring a slight loss in performance. Compared to NMPC, our framework shows more than an eightfold improvement in computation time when controlling a three-dimensional flexible robot arm in simulation, all while guaranteeing safety constraints. Notably, our approach outperforms state-of-the-art reinforcement learning methods. The development of fast and safe approximate NMPC holds the potential to accelerate the adoption of flexible robots in industry. The project code is available at: tinyurl.com/anmpc4fr
[ { "created": "Tue, 6 Dec 2022 12:54:08 GMT", "version": "v1" }, { "created": "Thu, 28 Sep 2023 07:34:32 GMT", "version": "v2" }, { "created": "Wed, 14 Aug 2024 20:40:17 GMT", "version": "v3" } ]
2024-08-16
[ [ "Mamedov", "Shamil", "" ], [ "Reiter", "Rudolf", "" ], [ "Azad", "Seyed Mahdi Basiri", "" ], [ "Viljoen", "Ruan", "" ], [ "Boedecker", "Joschka", "" ], [ "Diehl", "Moritz", "" ], [ "Swevers", "Jan", "" ] ]
Flexible robots may overcome some of the industry's major challenges, such as enabling intrinsically safe human-robot collaboration and achieving a higher payload-to-mass ratio. However, controlling flexible robots is complicated due to their complex dynamics, which include oscillatory behavior and a high-dimensional state space. Nonlinear model predictive control (NMPC) offers an effective means to control such robots, but its significant computational demand often limits its application in real-time scenarios. To enable fast control of flexible robots, we propose a framework for a safe approximation of NMPC using imitation learning and a predictive safety filter. Our framework significantly reduces computation time while incurring a slight loss in performance. Compared to NMPC, our framework shows more than an eightfold improvement in computation time when controlling a three-dimensional flexible robot arm in simulation, all while guaranteeing safety constraints. Notably, our approach outperforms state-of-the-art reinforcement learning methods. The development of fast and safe approximate NMPC holds the potential to accelerate the adoption of flexible robots in industry. The project code is available at: tinyurl.com/anmpc4fr
2404.08008
Kehua Feng
Kehua Feng, Keyan Ding, Kede Ma, Zhihua Wang, Qiang Zhang, Huajun Chen
Sample-Efficient Human Evaluation of Large Language Models via Maximum Discrepancy Competition
32 pages, 6 figures
null
null
null
cs.LG cs.CL cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The past years have witnessed a proliferation of large language models (LLMs). Yet, automated and unbiased evaluation of LLMs is challenging due to the inaccuracy of standard metrics in reflecting human preferences and the inefficiency in sampling informative and diverse test examples. While human evaluation remains the gold standard, it is expensive and time-consuming, especially when dealing with a large number of testing samples. To address this problem, we propose a sample-efficient human evaluation method based on MAximum Discrepancy (MAD) competition. MAD automatically selects a small set of informative and diverse instructions, each adapted to two LLMs, whose responses are subject to three-alternative forced choice by human subjects. The pairwise comparison results are then aggregated into a global ranking using the Elo rating system. We select eight representative LLMs and compare them in terms of four skills: knowledge understanding, mathematical reasoning, writing, and coding. Experimental results show that the proposed method achieves a reliable and sensible ranking of LLMs' capabilities, identifies their relative strengths and weaknesses, and offers valuable insights for further LLM advancement.
[ { "created": "Wed, 10 Apr 2024 01:26:24 GMT", "version": "v1" } ]
2024-04-15
[ [ "Feng", "Kehua", "" ], [ "Ding", "Keyan", "" ], [ "Ma", "Kede", "" ], [ "Wang", "Zhihua", "" ], [ "Zhang", "Qiang", "" ], [ "Chen", "Huajun", "" ] ]
The past years have witnessed a proliferation of large language models (LLMs). Yet, automated and unbiased evaluation of LLMs is challenging due to the inaccuracy of standard metrics in reflecting human preferences and the inefficiency in sampling informative and diverse test examples. While human evaluation remains the gold standard, it is expensive and time-consuming, especially when dealing with a large number of testing samples. To address this problem, we propose a sample-efficient human evaluation method based on MAximum Discrepancy (MAD) competition. MAD automatically selects a small set of informative and diverse instructions, each adapted to two LLMs, whose responses are subject to three-alternative forced choice by human subjects. The pairwise comparison results are then aggregated into a global ranking using the Elo rating system. We select eight representative LLMs and compare them in terms of four skills: knowledge understanding, mathematical reasoning, writing, and coding. Experimental results show that the proposed method achieves a reliable and sensible ranking of LLMs' capabilities, identifies their relative strengths and weaknesses, and offers valuable insights for further LLM advancement.
2109.10047
Guosheng Feng
Guosheng Feng, Chunnan Wang, Hongzhi Wang
Search For Deep Graph Neural Networks
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current GNN-oriented NAS methods focus on searching for different layer-aggregation components with shallow and simple architectures, which are limited by the 'over-smoothing' problem. To further explore the benefits of structural diversity and depth in GNN architectures, we propose a GNN generation pipeline with a novel two-stage search space, which aims at automatically generating high-performance and transferable deep GNN models in a block-wise manner. Meanwhile, to alleviate the 'over-smoothing' problem, we incorporate multiple flexible residual connections in our search space and apply identity mapping in the basic GNN layers. For the search algorithm, we use deep Q-learning with an epsilon-greedy exploration strategy and reward reshaping. Extensive experiments on real-world datasets show that our generated GNN models outperform existing manually designed and NAS-based ones.
[ { "created": "Tue, 21 Sep 2021 09:24:59 GMT", "version": "v1" } ]
2021-09-22
[ [ "Feng", "Guosheng", "" ], [ "Wang", "Chunnan", "" ], [ "Wang", "Hongzhi", "" ] ]
Current GNN-oriented NAS methods focus on searching for different layer-aggregation components with shallow and simple architectures, which are limited by the 'over-smoothing' problem. To further explore the benefits of structural diversity and depth in GNN architectures, we propose a GNN generation pipeline with a novel two-stage search space, which aims at automatically generating high-performance and transferable deep GNN models in a block-wise manner. Meanwhile, to alleviate the 'over-smoothing' problem, we incorporate multiple flexible residual connections in our search space and apply identity mapping in the basic GNN layers. For the search algorithm, we use deep Q-learning with an epsilon-greedy exploration strategy and reward reshaping. Extensive experiments on real-world datasets show that our generated GNN models outperform existing manually designed and NAS-based ones.
1508.02479
Heejin Choi
Heejin Choi, Yutaka Sasaki, Nathan Srebro
Normalized Hierarchical SVM
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present improved methods of using structured SVMs in a large-scale hierarchical classification problem, that is, when labels are leaves, or sets of leaves, in a tree or a DAG. We examine the need to normalize both the regularization and the margin and show how doing so significantly improves performance, including achieving state-of-the-art results where unnormalized structured SVMs do not perform better than flat models. We also describe a further extension of hierarchical SVMs that highlights the connection between hierarchical SVMs and matrix factorization models.
[ { "created": "Tue, 11 Aug 2015 03:34:33 GMT", "version": "v1" }, { "created": "Fri, 4 Mar 2016 18:53:19 GMT", "version": "v2" } ]
2016-03-07
[ [ "Choi", "Heejin", "" ], [ "Sasaki", "Yutaka", "" ], [ "Srebro", "Nathan", "" ] ]
We present improved methods of using structured SVMs in a large-scale hierarchical classification problem, that is, when labels are leaves, or sets of leaves, in a tree or a DAG. We examine the need to normalize both the regularization and the margin and show how doing so significantly improves performance, including achieving state-of-the-art results where unnormalized structured SVMs do not perform better than flat models. We also describe a further extension of hierarchical SVMs that highlights the connection between hierarchical SVMs and matrix factorization models.
1408.1482
Joseph Y. Halpern
Joseph Y. Halpern
Axiomatizing Causal Reasoning
Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998)
null
null
UAI-P-1998-PG-202-210
cs.AI cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Causal models defined in terms of a collection of equations, as defined by Pearl, are axiomatized here. Axiomatizations are provided for three successively more general classes of causal models: (1) the class of recursive theories (those without feedback), (2) the class of theories where the solutions to the equations are unique, (3) arbitrary theories (where the equations may not have solutions and, if they do, they are not necessarily unique). It is shown that to reason about causality in the most general third class, we must extend the language used by Galles and Pearl. In addition, the complexity of the decision procedures is examined for all the languages and classes of models considered.
[ { "created": "Thu, 7 Aug 2014 06:24:41 GMT", "version": "v1" } ]
2014-08-08
[ [ "Halpern", "Joseph Y.", "" ] ]
Causal models defined in terms of a collection of equations, as defined by Pearl, are axiomatized here. Axiomatizations are provided for three successively more general classes of causal models: (1) the class of recursive theories (those without feedback), (2) the class of theories where the solutions to the equations are unique, (3) arbitrary theories (where the equations may not have solutions and, if they do, they are not necessarily unique). It is shown that to reason about causality in the most general third class, we must extend the language used by Galles and Pearl. In addition, the complexity of the decision procedures is examined for all the languages and classes of models considered.
2104.12868
Ali Akbar Sadat Asl
Ali Akbar Sadat Asl, Mohammad Mahdi Ershadi, Shahabeddin Sotudian, Xingyu Li, Scott Dick
Fuzzy Expert Systems for Prediction of ICU Admission in Patients with COVID-19
null
null
10.3233/IDT-200220
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The COVID-19 pandemic has had a dramatic impact on almost all countries around the world, and many hospitals have been overwhelmed with COVID-19 cases. As medical resources are limited, deciding on the proper allocation of these resources is a crucial issue. Besides, uncertainty is a major factor that can affect decisions, especially in medical fields. To cope with this issue, we use fuzzy logic (FL) as one of the most suitable methods for modeling systems with high uncertainty and complexity. We intend to make use of the advantages of FL in decisions on cases that need to be treated in the ICU. In this study, an interval type-2 fuzzy expert system is proposed for the prediction of ICU admission in COVID-19 patients. For this prediction task, we also developed an adaptive neuro-fuzzy inference system (ANFIS). Finally, the results of these fuzzy systems are compared to some well-known classification methods such as Naive Bayes (NB), Case-Based Reasoning (CBR), Decision Tree (DT), and K-Nearest Neighbor (KNN). The results show that the type-2 fuzzy expert system and ANFIS models perform competitively in terms of accuracy and F-measure compared to the other system modeling techniques.
[ { "created": "Thu, 22 Apr 2021 05:12:49 GMT", "version": "v1" }, { "created": "Tue, 7 Feb 2023 03:24:44 GMT", "version": "v2" }, { "created": "Wed, 8 Feb 2023 11:25:27 GMT", "version": "v3" } ]
2023-02-09
[ [ "Asl", "Ali Akbar Sadat", "" ], [ "Ershadi", "Mohammad Mahdi", "" ], [ "Sotudian", "Shahabeddin", "" ], [ "Li", "Xingyu", "" ], [ "Dick", "Scott", "" ] ]
The COVID-19 pandemic has had a dramatic impact on almost all countries around the world, and many hospitals have been overwhelmed with COVID-19 cases. As medical resources are limited, deciding on the proper allocation of these resources is a crucial issue. Besides, uncertainty is a major factor that can affect decisions, especially in medical fields. To cope with this issue, we use fuzzy logic (FL) as one of the most suitable methods for modeling systems with high uncertainty and complexity. We intend to make use of the advantages of FL in decisions on cases that need to be treated in the ICU. In this study, an interval type-2 fuzzy expert system is proposed for the prediction of ICU admission in COVID-19 patients. For this prediction task, we also developed an adaptive neuro-fuzzy inference system (ANFIS). Finally, the results of these fuzzy systems are compared to some well-known classification methods such as Naive Bayes (NB), Case-Based Reasoning (CBR), Decision Tree (DT), and K-Nearest Neighbor (KNN). The results show that the type-2 fuzzy expert system and ANFIS models perform competitively in terms of accuracy and F-measure compared to the other system modeling techniques.
2111.06750
Guannan Lou
Guannan Lou, Yuze Liu, Tiehua Zhang, Xi Zheng
STFL: A Temporal-Spatial Federated Learning Framework for Graph Neural Networks
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a spatial-temporal federated learning framework for graph neural networks, namely STFL. The framework explores the underlying correlations of the input spatial-temporal data and transforms them into both node features and an adjacency matrix. The federated learning setting in the framework ensures data privacy while achieving good model generalization. Experimental results on the sleep stage dataset, ISRUC_S3, illustrate the effectiveness of STFL on graph prediction tasks.
[ { "created": "Fri, 12 Nov 2021 14:55:57 GMT", "version": "v1" }, { "created": "Tue, 11 Jan 2022 08:38:21 GMT", "version": "v2" } ]
2022-01-12
[ [ "Lou", "Guannan", "" ], [ "Liu", "Yuze", "" ], [ "Zhang", "Tiehua", "" ], [ "Zheng", "Xi", "" ] ]
We present a spatial-temporal federated learning framework for graph neural networks, namely STFL. The framework explores the underlying correlations of the input spatial-temporal data and transforms them into both node features and an adjacency matrix. The federated learning setting in the framework ensures data privacy while achieving good model generalization. Experimental results on the sleep stage dataset, ISRUC_S3, illustrate the effectiveness of STFL on graph prediction tasks.
1002.2294
Jean-Marc Seigneur
Jean-Marc Seigneur, Xavier Titi
Reputation-based Telecommunication Network Selection
Published in the Proceedings of the 2009 IADIS e-Society International Conference
null
null
null
cs.NI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, mobile users can switch between different available networks, for example, nearby WiFi networks or their standard mobile operator network. Soon it will be extended to other operators. However, unless telecommunication operators can directly benefit from allowing a user to switch to another operator, operators have an incentive to keep their network quality of service confidential to avoid that their users decide to switch to another network. In contrast, in a user-centric way, the users should be allowed to share their observations regarding the networks that they have used. In this paper, we present our work in progress towards attack-resistant sharing of quality of service information and network provider reputation among mobile users.
[ { "created": "Thu, 11 Feb 2010 08:26:47 GMT", "version": "v1" } ]
2010-02-12
[ [ "Seigneur", "Jean-Marc", "" ], [ "Titi", "Xavier", "" ] ]
Nowadays, mobile users can switch between different available networks, for example, nearby WiFi networks or their standard mobile operator network. Soon it will be extended to other operators. However, unless telecommunication operators can directly benefit from allowing a user to switch to another operator, operators have an incentive to keep their network quality of service confidential to avoid that their users decide to switch to another network. In contrast, in a user-centric way, the users should be allowed to share their observations regarding the networks that they have used. In this paper, we present our work in progress towards attack-resistant sharing of quality of service information and network provider reputation among mobile users.
2104.14098
S. Akshay
Preey Shah, Aman Bansal, S. Akshay and Supratik Chakraborty
A Normal Form Characterization for Efficient Boolean Skolem Function Synthesis
Full version of conference paper accepted at LICS'2021
null
null
null
cs.LO cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Boolean Skolem function synthesis concerns synthesizing outputs as Boolean functions of inputs such that a relational specification between inputs and outputs is satisfied. This problem, also known as Boolean functional synthesis, has several applications, including design of safe controllers for autonomous systems, certified QBF solving, cryptanalysis etc. Recently, complexity theoretic hardness results have been shown for the problem, although several algorithms proposed in the literature are known to work well in practice. This dichotomy between theoretical hardness and practical efficacy has motivated the research into normal forms or representations of input specifications that permit efficient synthesis, thus explaining perhaps the efficacy of these algorithms. In this paper we go one step beyond this and ask if there exists a normal form representation that can in fact precisely characterize "efficient" synthesis. We present a normal form called SAUNF that precisely characterizes tractable synthesis in the following sense: a specification is polynomial time synthesizable iff it can be compiled to SAUNF in polynomial time. Additionally, a specification admits a polynomial-sized functional solution iff there exists a semantically equivalent polynomial-sized SAUNF representation. SAUNF is exponentially more succinct than well-established normal forms like BDDs and DNNFs, used in the context of AI problems, and strictly subsumes other more recently proposed forms like SynNNF. It enjoys compositional properties that are similar to those of DNNF. Thus, SAUNF provides the right trade-off in knowledge representation for Boolean functional synthesis.
[ { "created": "Thu, 29 Apr 2021 04:16:41 GMT", "version": "v1" }, { "created": "Mon, 28 Jun 2021 12:52:38 GMT", "version": "v2" } ]
2021-06-29
[ [ "Shah", "Preey", "" ], [ "Bansal", "Aman", "" ], [ "Akshay", "S.", "" ], [ "Chakraborty", "Supratik", "" ] ]
Boolean Skolem function synthesis concerns synthesizing outputs as Boolean functions of inputs such that a relational specification between inputs and outputs is satisfied. This problem, also known as Boolean functional synthesis, has several applications, including design of safe controllers for autonomous systems, certified QBF solving, cryptanalysis etc. Recently, complexity theoretic hardness results have been shown for the problem, although several algorithms proposed in the literature are known to work well in practice. This dichotomy between theoretical hardness and practical efficacy has motivated the research into normal forms or representations of input specifications that permit efficient synthesis, thus explaining perhaps the efficacy of these algorithms. In this paper we go one step beyond this and ask if there exists a normal form representation that can in fact precisely characterize "efficient" synthesis. We present a normal form called SAUNF that precisely characterizes tractable synthesis in the following sense: a specification is polynomial time synthesizable iff it can be compiled to SAUNF in polynomial time. Additionally, a specification admits a polynomial-sized functional solution iff there exists a semantically equivalent polynomial-sized SAUNF representation. SAUNF is exponentially more succinct than well-established normal forms like BDDs and DNNFs, used in the context of AI problems, and strictly subsumes other more recently proposed forms like SynNNF. It enjoys compositional properties that are similar to those of DNNF. Thus, SAUNF provides the right trade-off in knowledge representation for Boolean functional synthesis.
2309.13908
Jie Luo
Jie Luo, Jakub Tomczak, Karine Miras, Agoston E. Eiben
A comparison of controller architectures and learning mechanisms for arbitrary robot morphologies
null
null
null
null
cs.RO cs.AI cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
The main question this paper addresses is: What combination of a robot controller and a learning method should be used, if the morphology of the learning robot is not known in advance? Our interest is rooted in the context of morphologically evolving modular robots, but the question is also relevant in general, for system designers interested in widely applicable solutions. We perform an experimental comparison of three controller-and-learner combinations: one approach where controllers are based on modelling animal locomotion (Central Pattern Generators, CPG) and the learner is an evolutionary algorithm, a completely different method using Reinforcement Learning (RL) with a neural network controller architecture, and a combination `in-between' where controllers are neural networks and the learner is an evolutionary algorithm. We apply these three combinations to a test suite of modular robots and compare their efficacy, efficiency, and robustness. Surprisingly, the usual CPG-based and RL-based options are outperformed by the in-between combination that is more robust and efficient than the other two setups.
[ { "created": "Mon, 25 Sep 2023 07:11:43 GMT", "version": "v1" } ]
2023-09-26
[ [ "Luo", "Jie", "" ], [ "Tomczak", "Jakub", "" ], [ "Miras", "Karine", "" ], [ "Eiben", "Agoston E.", "" ] ]
The main question this paper addresses is: What combination of a robot controller and a learning method should be used, if the morphology of the learning robot is not known in advance? Our interest is rooted in the context of morphologically evolving modular robots, but the question is also relevant in general, for system designers interested in widely applicable solutions. We perform an experimental comparison of three controller-and-learner combinations: one approach where controllers are based on modelling animal locomotion (Central Pattern Generators, CPG) and the learner is an evolutionary algorithm, a completely different method using Reinforcement Learning (RL) with a neural network controller architecture, and a combination `in-between' where controllers are neural networks and the learner is an evolutionary algorithm. We apply these three combinations to a test suite of modular robots and compare their efficacy, efficiency, and robustness. Surprisingly, the usual CPG-based and RL-based options are outperformed by the in-between combination that is more robust and efficient than the other two setups.
2101.07725
Muhammad AL-Qurishi Dr
Majed Alrubaian, Muhammad Al-Qurishi, Sherif Omar and Mohamed A. Mostafa
DeepTrust: A Deep Learning Approach for Measuring Social Media Users Trustworthiness
18 pages,6 figures
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Veracity of data posted on microblog platforms has in recent years been a subject of intensive study by professionals specializing in various fields of informatics as well as sociology, particularly in light of the increasing importance of online tools for news spreading. On Twitter and similar sites, it is possible to report on ongoing situations globally with minimal delay, while the cost of such reporting remains negligible. One of the most important features of this social network is that content delivery can be customized to allow users to focus only on news items covering subject matters they find interesting. With this in mind, it becomes necessary to create verification mechanisms that can ascertain whether the claims made on Twitter can be taken seriously and prevent false content from spreading too far. This study demonstrates an innovative system for verification of information that can fulfill the role described above. The system is comprised of four mutually connected modules: a legacy module, a trustworthiness classifier, a module managing user authority, and a ranking procedure. All of the modules function within an integrated framework and jointly contribute to an accurate classification of messages and authors. The effectiveness of the solution was evaluated empirically on a sample of Twitter users, with a strict 10-fold evaluation procedure applied for each module. The findings indicate that the solution successfully meets the primary objectives of the study and performs its function as expected.
[ { "created": "Tue, 19 Jan 2021 16:55:32 GMT", "version": "v1" } ]
2021-01-20
[ [ "Alrubaian", "Majed", "" ], [ "Al-Qurishi", "Muhammad", "" ], [ "Omar", "Sherif", "" ], [ "Mostafa", "Mohamed A.", "" ] ]
Veracity of data posted on the microblog platforms has in recent years been a subject of intensive study by professionals specializing in various fields of informatics as well as sociology, particularly in the light of the increasing importance of online tools for news spreading. On Twitter and similar sites, it is possible to report on ongoing situations globally with minimal delay, while the cost of such reporting remains negligible. One of the most important features of this social network is that content delivery can be customized to allow users to focus only on news items covering subject matters they find interesting. With this in mind, it becomes necessary to create verification mechanisms that can ascertain whether the claims made on Twitter can be taken seriously and prevent false content from spreading too far. This study demonstrates an innovative system for verification of information that can fulfill the role described above. The system comprises four mutually connected modules: a legacy module, a trustworthiness classifier, a module managing user authority, and a ranking procedure. All of the modules function within an integrated framework and jointly contribute to an accurate classification of messages and authors. Effectiveness of the solution was evaluated empirically on a sample of Twitter users, with a strict 10-fold evaluation procedure applied for each module. The findings indicate that the solution successfully meets the primary objectives of the study and performs its function as expected.
1607.05812
Esmitt Ram\'irez
Juan Perozo, Mimia Lo Leung, Esmitt Ram\'irez
HoloMed: A Low-Cost Gesture-Based Holographic
English version of an accepted paper in the Proceedings of the 4th Simposio Cient\'ifico y Tecnol\'ogico en Computaci\'on, 160-168. May 2016. Original version in spanish http://ccg.ciens.ucv.ve/~esmitt/publications/2016/SCTC2016.pdf
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During medical studies, visualization of certain elements is common and indispensable in order to get more information about the way they work. Currently, we resort to the use of photographs, which are insufficient due to being static, or tests on patients, which can be invasive or even risky. Therefore, a low-cost approach using 3D visualization is proposed. This paper presents a holographic system built with low-cost materials for teaching obstetrics, where student interaction is performed by using voice and gestures. Our solution, which we called HoloMed, is focused on the projection of a eutocic normal delivery under a web-based infrastructure which also employs a Kinect. HoloMed is divided into three (3) essential modules: a gesture analyzer, a data server, and a holographic projection architecture, which can be executed on several interconnected computers using different network protocols. Tests used for determining the user's position, illumination factors, and response times demonstrate HoloMed's effectiveness as a low-cost system for teaching, using a natural user interface and 3D images.
[ { "created": "Wed, 20 Jul 2016 04:00:44 GMT", "version": "v1" } ]
2016-07-21
[ [ "Perozo", "Juan", "" ], [ "Leung", "Mimia Lo", "" ], [ "Ramírez", "Esmitt", "" ] ]
During medical studies, visualization of certain elements is common and indispensable in order to get more information about the way they work. Currently, we resort to the use of photographs, which are insufficient due to being static, or tests on patients, which can be invasive or even risky. Therefore, a low-cost approach using 3D visualization is proposed. This paper presents a holographic system built with low-cost materials for teaching obstetrics, where student interaction is performed by using voice and gestures. Our solution, which we called HoloMed, is focused on the projection of a eutocic normal delivery under a web-based infrastructure which also employs a Kinect. HoloMed is divided into three (3) essential modules: a gesture analyzer, a data server, and a holographic projection architecture, which can be executed on several interconnected computers using different network protocols. Tests used for determining the user's position, illumination factors, and response times demonstrate HoloMed's effectiveness as a low-cost system for teaching, using a natural user interface and 3D images.
1704.01175
Jens Braband
Jens Braband
Towards an IT Security Risk Assessment Framework for Railway Automation
14 pages, 3 figures
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Some recent incidents have shown that possibly the vulnerability of IT systems in railway automation has been underestimated. Fortunately, so far, almost only denial-of-service attacks were successful, but due to several trends, such as the use of commercial IT and communication systems or privatization, the threat potential could increase in the near future. However, up to now, no harmonized IT security risk assessment framework for railway automation exists. This paper defines an IT security risk assessment framework which aims to separate IT security and safety requirements as well as certification processes as far as possible. It builds on the well-known safety and approval processes from IEC 62425 and integrates IT security requirements based on the ISA99/IEC62443 standard series. While the detailed results are related to railway automation the general concepts are also applicable to other safety-critical application areas.
[ { "created": "Tue, 4 Apr 2017 20:33:04 GMT", "version": "v1" } ]
2017-04-06
[ [ "Braband", "Jens", "" ] ]
Some recent incidents have shown that possibly the vulnerability of IT systems in railway automation has been underestimated. Fortunately, so far, almost only denial-of-service attacks were successful, but due to several trends, such as the use of commercial IT and communication systems or privatization, the threat potential could increase in the near future. However, up to now, no harmonized IT security risk assessment framework for railway automation exists. This paper defines an IT security risk assessment framework which aims to separate IT security and safety requirements as well as certification processes as far as possible. It builds on the well-known safety and approval processes from IEC 62425 and integrates IT security requirements based on the ISA99/IEC62443 standard series. While the detailed results are related to railway automation the general concepts are also applicable to other safety-critical application areas.
2401.16832
Kai Hartung
Panagiotis Pagonis and Kai Hartung and Di Wu and Munir Georges and S\"oren Gr\"ottrup
Analysis of Knowledge Tracing performance on synthesised student data
Accepted at AI4AI Education workshop 2023 ( https://sme.uni-bamberg.de/ai4ai/ )
null
null
null
cs.CY cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Knowledge Tracing (KT) aims to predict the future performance of students by tracking the development of their knowledge states. Despite all the recent progress made in this field, the application of KT models in education systems is still restricted from the data perspective: 1) limited access to real-life data due to data protection concerns, 2) lack of diversity in public datasets, 3) noise in benchmark datasets, such as duplicate records. To resolve these problems, we simulated student data with three statistical strategies based on public datasets and tested their performance on two KT baselines. While we observe only minor performance improvement with additional synthetic data, our work shows that using only synthetic data for training can lead to similar performance as real data.
[ { "created": "Tue, 30 Jan 2024 09:19:50 GMT", "version": "v1" } ]
2024-01-31
[ [ "Pagonis", "Panagiotis", "" ], [ "Hartung", "Kai", "" ], [ "Wu", "Di", "" ], [ "Georges", "Munir", "" ], [ "Gröttrup", "Sören", "" ] ]
Knowledge Tracing (KT) aims to predict the future performance of students by tracking the development of their knowledge states. Despite all the recent progress made in this field, the application of KT models in education systems is still restricted from the data perspective: 1) limited access to real-life data due to data protection concerns, 2) lack of diversity in public datasets, 3) noise in benchmark datasets, such as duplicate records. To resolve these problems, we simulated student data with three statistical strategies based on public datasets and tested their performance on two KT baselines. While we observe only minor performance improvement with additional synthetic data, our work shows that using only synthetic data for training can lead to similar performance as real data.
2110.04126
Hannes St\"ark
Hannes St\"ark, Dominique Beaini, Gabriele Corso, Prudencio Tossou, Christian Dallago, Stephan G\"unnemann, Pietro Li\`o
3D Infomax improves GNNs for Molecular Property Prediction
39th International Conference on Machine Learning (ICML 2022). Also accepted at NeurIPS 2021 ML4PH, AI4S, and SSL workshops and as oral at ELLIS ML4Molecules. 24 pages, 7 figures, 18 tables
39th International Conference on Machine Learning (ICML 2022)
null
null
cs.LG cs.AI q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular property prediction is one of the fastest-growing applications of deep learning with critical real-world impacts. Including 3D molecular structure as input to learned models improves their performance for many molecular tasks. However, this information is infeasible to compute at the scale required by several real-world applications. We propose pre-training a model to reason about the geometry of molecules given only their 2D molecular graphs. Using methods from self-supervised learning, we maximize the mutual information between 3D summary vectors and the representations of a Graph Neural Network (GNN) such that they contain latent 3D information. During fine-tuning on molecules with unknown geometry, the GNN still generates implicit 3D information and can use it to improve downstream tasks. We show that 3D pre-training provides significant improvements for a wide range of properties, such as a 22% average MAE reduction on eight quantum mechanical properties. Moreover, the learned representations can be effectively transferred between datasets in different molecular spaces.
[ { "created": "Fri, 8 Oct 2021 13:30:49 GMT", "version": "v1" }, { "created": "Sat, 27 Nov 2021 06:54:40 GMT", "version": "v2" }, { "created": "Mon, 23 May 2022 21:48:48 GMT", "version": "v3" }, { "created": "Sat, 4 Jun 2022 22:57:54 GMT", "version": "v4" } ]
2022-06-07
[ [ "Stärk", "Hannes", "" ], [ "Beaini", "Dominique", "" ], [ "Corso", "Gabriele", "" ], [ "Tossou", "Prudencio", "" ], [ "Dallago", "Christian", "" ], [ "Günnemann", "Stephan", "" ], [ "Liò", "Pietro", "" ] ]
Molecular property prediction is one of the fastest-growing applications of deep learning with critical real-world impacts. Including 3D molecular structure as input to learned models improves their performance for many molecular tasks. However, this information is infeasible to compute at the scale required by several real-world applications. We propose pre-training a model to reason about the geometry of molecules given only their 2D molecular graphs. Using methods from self-supervised learning, we maximize the mutual information between 3D summary vectors and the representations of a Graph Neural Network (GNN) such that they contain latent 3D information. During fine-tuning on molecules with unknown geometry, the GNN still generates implicit 3D information and can use it to improve downstream tasks. We show that 3D pre-training provides significant improvements for a wide range of properties, such as a 22% average MAE reduction on eight quantum mechanical properties. Moreover, the learned representations can be effectively transferred between datasets in different molecular spaces.
1811.00692
Yuanpeng Li
Yuanpeng Li, Yi Yang, Jianyu Wang, Wei Xu
Zero-Shot Transfer VQA Dataset
null
null
null
null
cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Acquiring a large vocabulary is an important aspect of human intelligence. One common approach for humans to populate vocabulary is to learn words during reading or listening, and then use them in writing or speaking. This ability to transfer from input to output is natural for humans, but it is difficult for machines. Humans spontaneously perform this knowledge transfer in complicated multimodal tasks, such as Visual Question Answering (VQA). In order to approach human-level Artificial Intelligence, we hope to equip machines with such ability. Therefore, to accelerate this research, we propose a new zero-shot transfer VQA (ZST-VQA) dataset by reorganizing the existing VQA v1.0 dataset in the way that during training, some words appear only in one module (i.e. questions) but not in the other (i.e. answers). In this setting, an intelligent model should understand and learn the concepts from one module (i.e. questions), and at test time, transfer them to the other (i.e. predict the concepts as answers). We conduct evaluation on this new dataset using three existing state-of-the-art VQA neural models. Experimental results show a significant drop in performance on this dataset, indicating existing methods do not address the zero-shot transfer problem. Besides, our analysis finds that this may be caused by the implicit bias learned during training.
[ { "created": "Fri, 2 Nov 2018 01:02:49 GMT", "version": "v1" } ]
2018-11-05
[ [ "Li", "Yuanpeng", "" ], [ "Yang", "Yi", "" ], [ "Wang", "Jianyu", "" ], [ "Xu", "Wei", "" ] ]
Acquiring a large vocabulary is an important aspect of human intelligence. One common approach for humans to populate vocabulary is to learn words during reading or listening, and then use them in writing or speaking. This ability to transfer from input to output is natural for humans, but it is difficult for machines. Humans spontaneously perform this knowledge transfer in complicated multimodal tasks, such as Visual Question Answering (VQA). In order to approach human-level Artificial Intelligence, we hope to equip machines with such ability. Therefore, to accelerate this research, we propose a new zero-shot transfer VQA (ZST-VQA) dataset by reorganizing the existing VQA v1.0 dataset in the way that during training, some words appear only in one module (i.e. questions) but not in the other (i.e. answers). In this setting, an intelligent model should understand and learn the concepts from one module (i.e. questions), and at test time, transfer them to the other (i.e. predict the concepts as answers). We conduct evaluation on this new dataset using three existing state-of-the-art VQA neural models. Experimental results show a significant drop in performance on this dataset, indicating existing methods do not address the zero-shot transfer problem. Besides, our analysis finds that this may be caused by the implicit bias learned during training.
2109.06810
Akash Patel
Akash Patel, Avijit Banerjee, Bjorn Lindqvist, Christoforos Kanellakis, George Nikolakopoulos
Design and Model Predictive Control of Mars Coaxial Quadrotor
null
null
10.1109/AERO53065.2022.9843799
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mars has been a prime candidate for planetary exploration of the solar system because of the science discoveries that support chances of future habitation on this planet. Martian caves and lava-tube-like terrains, which consist of uneven ground, poor visibility and confined space, make it impossible for wheel-based rovers to navigate through these areas. In order to address these limitations and advance the exploration capability in a Martian terrain, this article presents the design and control of a novel coaxial quadrotor Micro Aerial Vehicle (MAV). As will be presented, the key contribution of the design and control architecture of the proposed Mars coaxial quadrotor is the introduction of an alternative concept that is, from a control point of view, more enhanced than Ingenuity in terms of autonomy. Based on the presented design, the article introduces the mathematical modelling and automatic control framework of the vehicle, consisting of a linearised model of a coaxial quadrotor and a corresponding Model Predictive Controller (MPC) for trajectory tracking. Among the many models proposed for aerial flight on Mars, a reliable control architecture is lacking in the related state of the art. The MPC-based closed-loop responses of the proposed MAV are verified under different flight conditions, with additional disturbances induced to replicate a real flight scenario. In order to further validate the proposed control architecture and prove the efficacy of the suggested design, the introduced Mars coaxial quadrotor and the MPC scheme are compared to a PID-type controller, similar to the Ingenuity helicopter's control architecture for position and heading.
[ { "created": "Tue, 14 Sep 2021 16:45:10 GMT", "version": "v1" }, { "created": "Fri, 1 Oct 2021 11:01:58 GMT", "version": "v2" } ]
2022-08-16
[ [ "Patel", "Akash", "" ], [ "Banerjee", "Avijit", "" ], [ "Lindqvist", "Bjorn", "" ], [ "Kanellakis", "Christoforos", "" ], [ "Nikolakopoulos", "George", "" ] ]
Mars has been a prime candidate for planetary exploration of the solar system because of the science discoveries that support chances of future habitation on this planet. Martian caves and lava-tube-like terrains, which consist of uneven ground, poor visibility and confined space, make it impossible for wheel-based rovers to navigate through these areas. In order to address these limitations and advance the exploration capability in a Martian terrain, this article presents the design and control of a novel coaxial quadrotor Micro Aerial Vehicle (MAV). As will be presented, the key contribution of the design and control architecture of the proposed Mars coaxial quadrotor is the introduction of an alternative concept that is, from a control point of view, more enhanced than Ingenuity in terms of autonomy. Based on the presented design, the article introduces the mathematical modelling and automatic control framework of the vehicle, consisting of a linearised model of a coaxial quadrotor and a corresponding Model Predictive Controller (MPC) for trajectory tracking. Among the many models proposed for aerial flight on Mars, a reliable control architecture is lacking in the related state of the art. The MPC-based closed-loop responses of the proposed MAV are verified under different flight conditions, with additional disturbances induced to replicate a real flight scenario. In order to further validate the proposed control architecture and prove the efficacy of the suggested design, the introduced Mars coaxial quadrotor and the MPC scheme are compared to a PID-type controller, similar to the Ingenuity helicopter's control architecture for position and heading.