Schema (field name: dtype, observed min–max length; for `stringclasses`, the number of distinct values):

id: string, length 9–10
submitter: string, length 1–64
authors: string, length 4–20.7k
title: string, length 4–246
comments: string, length 1–523
journal-ref: string, length 4–404
doi: string, length 11–153
report-no: string, length 2–254
categories: string, length 5–98
license: string, 9 classes
orig_abstract: string, length 14–3.35k
versions: list, length 1–60
update_date: string, length 10–10
authors_parsed: list, length 1–1.35k
abstract: string, length 11–3.34k
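The records below can be checked against this schema mechanically. The following is a minimal sketch, not part of the dataset's own tooling: the `SCHEMA` dict covers only a few representative fields, and the sample record is abridged from the first row shown below.

```python
# Sketch: validate one record against the schema above.
# Only a handful of fields are covered; bounds come from the schema.

SCHEMA = {
    "id": (str, 9, 10),
    "submitter": (str, 1, 64),
    "title": (str, 4, 246),
    "categories": (str, 5, 98),
    "update_date": (str, 10, 10),
}

record = {
    "id": "1404.0783",
    "submitter": "Ismail Toroslu",
    "title": "Task Assignment in Tree-Like Hierarchical Structures",
    "categories": "cs.DS",
    "update_date": "2014-04-04",
}

def validate(rec, schema):
    """Return a list of (field, problem) pairs; empty means the record conforms."""
    problems = []
    for field, (typ, lo, hi) in schema.items():
        value = rec.get(field)
        if value is None:
            problems.append((field, "missing"))
        elif not isinstance(value, typ):
            problems.append((field, f"expected {typ.__name__}"))
        elif not (lo <= len(value) <= hi):
            problems.append((field, f"length {len(value)} outside [{lo}, {hi}]"))
    return problems

print(validate(record, SCHEMA))  # → []
```

A record violating a bound, e.g. a 4-character `update_date`, would yield `[("update_date", "length 4 outside [10, 10]")]`.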
1404.0783
Ismail Toroslu
Cem Evrendilek, Ismail Hakki Toroslu, Sasan Hashemi
Task Assignment in Tree-Like Hierarchical Structures
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most large organizations, such as corporations, are hierarchical organizations. In hierarchical organizations each entity in the organization, except the root entity, is a sub-part of another entity. In this paper we study the task assignment problem to the entities of tree-like hierarchical organizations. The inherent tree structure introduces an interesting and challenging constraint to the standard assignment problem. When a task is assigned to an entity in a hierarchical organization, the whole entity, including its sub-entities, is responsible from the execution of that particular task. In other words, if an entity has been assigned to a task, neither its descendants nor its ancestors can be assigned to a task. Sub-entities cannot be assigned as they have an ancestor already occupied. Ancestor entities cannot be assigned since one of their sub-entities has already been employed in an assignment. In the paper, we formally introduce this new version of the assignment problem called Maximum Weight Tree Matching ($MWTM$), and show its NP-hardness. We also propose an effective heuristic solution based on an iterative LP-relaxation to it.
[ { "created": "Thu, 3 Apr 2014 07:16:49 GMT", "version": "v1" } ]
2014-04-04
[ [ "Evrendilek", "Cem", "" ], [ "Toroslu", "Ismail Hakki", "" ], [ "Hashemi", "Sasan", "" ] ]
Most large organizations, such as corporations, are hierarchical. In a hierarchical organization, each entity except the root is a sub-part of another entity. In this paper we study the assignment of tasks to the entities of tree-like hierarchical organizations. The inherent tree structure adds an interesting and challenging constraint to the standard assignment problem. When a task is assigned to an entity in a hierarchical organization, the whole entity, including its sub-entities, is responsible for the execution of that particular task. In other words, once an entity has been assigned a task, neither its descendants nor its ancestors can be assigned any task: sub-entities cannot be assigned because an ancestor is already occupied, and ancestor entities cannot be assigned because one of their sub-entities is already employed in an assignment. In this paper, we formally introduce this new version of the assignment problem, called Maximum Weight Tree Matching ($MWTM$), show its NP-hardness, and propose an effective heuristic based on iterative LP-relaxation.
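The `versions` and `authors_parsed` fields in the record above are JSON-style lists serialized as strings. A minimal parsing sketch, using the literals from this record (the timestamp format is RFC 1123-style, as in all `versions` entries below):

```python
import json
from datetime import datetime

# Sketch: parse the `versions` and `authors_parsed` fields of a record.
# The literals below are copied from the record above.

versions = json.loads(
    '[ { "created": "Thu, 3 Apr 2014 07:16:49 GMT", "version": "v1" } ]'
)
authors_parsed = json.loads(
    '[ [ "Evrendilek", "Cem", "" ], [ "Toroslu", "Ismail Hakki", "" ], '
    '[ "Hashemi", "Sasan", "" ] ]'
)

# RFC 1123-style timestamps parse with this strftime pattern.
created = datetime.strptime(versions[0]["created"], "%a, %d %b %Y %H:%M:%S %Z")
print(created.date())  # → 2014-04-03

# Each authors_parsed row is a [last, first, affiliation] triple.
names = [f"{first} {last}".strip() for last, first, _ in authors_parsed]
print(names[0])  # → Cem Evrendilek
```

Note that `update_date` (the next field, `2014-04-04`) is the dataset's own last-modified date and need not equal the `v1` creation date.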
2212.12363
Zhitong Yang
Zhitong Yang, Xing Ma, Anqi Liu, Zheyu Zhang
Discovering Customer-Service Dialog System with Semi-Supervised Learning and Coarse-to-Fine Intent Detection
Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems Co-located with EMNLP 2022, System Description Paper, 5 pages
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Task-oriented dialog(TOD) aims to assist users in achieving specific goals through multi-turn conversation. Recently, good results have been obtained based on large pre-trained models. However, the labeled-data scarcity hinders the efficient development of TOD systems at scale. In this work, we constructed a weakly supervised dataset based on a teacher/student paradigm that leverages a large collection of unlabelled dialogues. Furthermore, we built a modular dialogue system and integrated coarse-to-fine grained classification for user intent detection. Experiments show that our method can reach the dialog goal with a higher success rate and generate more coherent responses.
[ { "created": "Fri, 23 Dec 2022 14:36:43 GMT", "version": "v1" } ]
2022-12-26
[ [ "Yang", "Zhitong", "" ], [ "Ma", "Xing", "" ], [ "Liu", "Anqi", "" ], [ "Zhang", "Zheyu", "" ] ]
Task-oriented dialog (TOD) aims to assist users in achieving specific goals through multi-turn conversation. Recently, good results have been obtained with large pre-trained models. However, the scarcity of labeled data hinders the efficient development of TOD systems at scale. In this work, we constructed a weakly supervised dataset based on a teacher/student paradigm that leverages a large collection of unlabelled dialogues. Furthermore, we built a modular dialogue system and integrated coarse-to-fine-grained classification for user intent detection. Experiments show that our method can reach the dialog goal with a higher success rate and generate more coherent responses.
1701.00165
Amit Shaked
Amit Shaked and Lior Wolf
Improved Stereo Matching with Constant Highway Networks and Reflective Confidence Learning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an improved three-step pipeline for the stereo matching problem and introduce multiple novelties at each stage. We propose a new highway network architecture for computing the matching cost at each possible disparity, based on multilevel weighted residual shortcuts, trained with a hybrid loss that supports multilevel comparison of image patches. A novel post-processing step is then introduced, which employs a second deep convolutional neural network for pooling global information from multiple disparities. This network outputs both the image disparity map, which replaces the conventional "winner takes all" strategy, and a confidence in the prediction. The confidence score is achieved by training the network with a new technique that we call the reflective loss. Lastly, the learned confidence is employed in order to better detect outliers in the refinement step. The proposed pipeline achieves state of the art accuracy on the largest and most competitive stereo benchmarks, and the learned confidence is shown to outperform all existing alternatives.
[ { "created": "Sat, 31 Dec 2016 20:24:16 GMT", "version": "v1" } ]
2017-01-03
[ [ "Shaked", "Amit", "" ], [ "Wolf", "Lior", "" ] ]
We present an improved three-step pipeline for the stereo matching problem and introduce multiple novelties at each stage. We propose a new highway network architecture for computing the matching cost at each possible disparity, based on multilevel weighted residual shortcuts, trained with a hybrid loss that supports multilevel comparison of image patches. A novel post-processing step is then introduced, which employs a second deep convolutional neural network for pooling global information from multiple disparities. This network outputs both the image disparity map, which replaces the conventional "winner takes all" strategy, and a confidence in the prediction. The confidence score is achieved by training the network with a new technique that we call the reflective loss. Lastly, the learned confidence is employed in order to better detect outliers in the refinement step. The proposed pipeline achieves state of the art accuracy on the largest and most competitive stereo benchmarks, and the learned confidence is shown to outperform all existing alternatives.
2403.01081
Akash Srivastava
Shivchander Sudalairaj, Abhishek Bhandwaldar, Aldo Pareja, Kai Xu, David D. Cox, Akash Srivastava
LAB: Large-Scale Alignment for ChatBots
Corresponding Author: Akash Srivastava. Equal Contribution: Shivchander Sudalairaj, Abhishek Bhandwaldar, Aldo Pareja, Akash Srivastava, Code: https://github.com/instructlab
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
This work introduces LAB (Large-scale Alignment for chatBots), a novel methodology designed to overcome the scalability challenges in the instruction-tuning phase of large language model (LLM) training. Leveraging a taxonomy-guided synthetic data generation process and a multi-phase tuning framework, LAB significantly reduces reliance on expensive human annotations and proprietary models like GPT-4. We demonstrate that LAB-trained models can achieve competitive performance across several benchmarks compared to models trained with traditional human-annotated or GPT-4 generated synthetic data. Thus offering a scalable, cost-effective solution for enhancing LLM capabilities and instruction-following behaviors without the drawbacks of catastrophic forgetting, marking a step forward in the efficient training of LLMs for a wide range of applications.
[ { "created": "Sat, 2 Mar 2024 03:48:37 GMT", "version": "v1" }, { "created": "Wed, 6 Mar 2024 22:25:44 GMT", "version": "v2" }, { "created": "Mon, 29 Apr 2024 18:55:34 GMT", "version": "v3" } ]
2024-05-01
[ [ "Sudalairaj", "Shivchander", "" ], [ "Bhandwaldar", "Abhishek", "" ], [ "Pareja", "Aldo", "" ], [ "Xu", "Kai", "" ], [ "Cox", "David D.", "" ], [ "Srivastava", "Akash", "" ] ]
This work introduces LAB (Large-scale Alignment for chatBots), a novel methodology designed to overcome the scalability challenges in the instruction-tuning phase of large language model (LLM) training. Leveraging a taxonomy-guided synthetic data generation process and a multi-phase tuning framework, LAB significantly reduces reliance on expensive human annotations and proprietary models like GPT-4. We demonstrate that LAB-trained models can achieve competitive performance across several benchmarks compared to models trained with traditional human-annotated or GPT-4 generated synthetic data. LAB thus offers a scalable, cost-effective solution for enhancing LLM capabilities and instruction-following behaviors without the drawbacks of catastrophic forgetting, marking a step forward in the efficient training of LLMs for a wide range of applications.
2109.07672
Malek Mouhoub
Munira Al-Ageili and Malek Mouhoub
An Ontology-Based Information Extraction System for Residential Land Use Suitability Analysis
17 pages, 18 figures
null
null
null
cs.AI cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
We propose an Ontology-Based Information Extraction (OBIE) system to automate the extraction of the criteria and values applied in Land Use Suitability Analysis (LUSA) from bylaw and regulation documents related to the geographic area of interest. The results obtained by our proposed LUSA OBIE system (land use suitability criteria and their values) are presented as an ontology populated with instances of the extracted criteria and property values. This latter output ontology is incorporated into a Multi-Criteria Decision Making (MCDM) model applied for constructing suitability maps for different kinds of land uses. The resulting maps may be the final desired product or can be incorporated into the cellular automata urban modeling and simulation for predicting future urban growth. A case study has been conducted where the output from LUSA OBIE is applied to help produce a suitability map for the City of Regina, Saskatchewan, to assist in the identification of suitable areas for residential development. A set of Saskatchewan bylaw and regulation documents were downloaded and input to the LUSA OBIE system. We accessed the extracted information using both the populated LUSA ontology and the set of annotated documents. In this regard, the LUSA OBIE system was effective in producing a final suitability map.
[ { "created": "Thu, 16 Sep 2021 02:18:30 GMT", "version": "v1" } ]
2021-09-17
[ [ "Al-Ageili", "Munira", "" ], [ "Mouhoub", "Malek", "" ] ]
We propose an Ontology-Based Information Extraction (OBIE) system to automate the extraction of the criteria and values applied in Land Use Suitability Analysis (LUSA) from bylaw and regulation documents related to the geographic area of interest. The results obtained by our proposed LUSA OBIE system (land use suitability criteria and their values) are presented as an ontology populated with instances of the extracted criteria and property values. This latter output ontology is incorporated into a Multi-Criteria Decision Making (MCDM) model applied for constructing suitability maps for different kinds of land uses. The resulting maps may be the final desired product or can be incorporated into the cellular automata urban modeling and simulation for predicting future urban growth. A case study has been conducted where the output from LUSA OBIE is applied to help produce a suitability map for the City of Regina, Saskatchewan, to assist in the identification of suitable areas for residential development. A set of Saskatchewan bylaw and regulation documents were downloaded and input to the LUSA OBIE system. We accessed the extracted information using both the populated LUSA ontology and the set of annotated documents. In this regard, the LUSA OBIE system was effective in producing a final suitability map.
1308.4978
Daniel Graziotin
Xiaofeng Wang, Daniel Graziotin, Juha Rikkil\"a, and Pekka Abrahamsson (Free University of Bozen-Bolzano)
Traverse the landscape of the mind by walking: an exploration of a new brainstorming practice
12 pages, 2 figures. Pilot study conducted to better understand a new brainstorming technique. Full study will follow
null
10.7287/peerj.preprints.51v1
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Group brainstorming is a well-known idea generation technique, which plays a key role in software development processes. Despite this, the relevant literature has had little to offer in advancing our understanding of the effectiveness of group brainstorming sessions. In this paper we present a research-in-progress on brainstorming while walking, which is a practice built upon the relationship between thinking and walking. The objective is to better understand how to conduct group brainstorming effectively. We compared two brainstorming sessions, one performed during a mountain walk, the other traditionally in a room. Three preliminary findings are obtained: walking can lead to an effective idea generation session; brainstorming while walking can encourage team members to participate in and contribute to the session in an equal manner; and it can help a team to maintain sustainable mental energy. Our study opens up an avenue for future exploration of effective group brainstorming practices.
[ { "created": "Thu, 22 Aug 2013 20:00:21 GMT", "version": "v1" } ]
2013-08-26
[ [ "Wang", "Xiaofeng", "", "Free University of Bozen-Bolzano" ], [ "Graziotin", "Daniel", "", "Free University of Bozen-Bolzano" ], [ "Rikkilä", "Juha", "", "Free University of Bozen-Bolzano" ], [ "Abrahamsson", "Pekka", "", "Free University of Bozen-Bolzano" ] ]
Group brainstorming is a well-known idea generation technique, which plays a key role in software development processes. Despite this, the relevant literature has had little to offer in advancing our understanding of the effectiveness of group brainstorming sessions. In this paper we present a research-in-progress on brainstorming while walking, which is a practice built upon the relationship between thinking and walking. The objective is to better understand how to conduct group brainstorming effectively. We compared two brainstorming sessions, one performed during a mountain walk, the other traditionally in a room. Three preliminary findings are obtained: walking can lead to an effective idea generation session; brainstorming while walking can encourage team members to participate in and contribute to the session in an equal manner; and it can help a team to maintain sustainable mental energy. Our study opens up an avenue for future exploration of effective group brainstorming practices.
1801.09036
Wlodek Zadrozny
Wlodek Zadrozny and Luciana Garbayo
A Sheaf Model of Contradictions and Disagreements. Preliminary Report and Discussion
This paper was presented at ISAIM 2018, International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL, January 3-5, 2018. Minor typographical errors have been corrected
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new formal model -- based on the mathematical construct of sheaves -- for representing contradictory information in textual sources. This model has the advantage of letting us (a) identify the causes of the inconsistency; (b) measure how strong it is; (c) and do something about it, e.g. suggest ways to reconcile inconsistent advice. This model naturally represents the distinction between contradictions and disagreements. It is based on the idea of representing natural language sentences as formulas with parameters sitting on lattices, creating partial orders based on predicates shared by theories, and building sheaves on these partial orders with products of lattices as stalks. Degrees of disagreement are measured by the existence of global and local sections. Limitations of the sheaf approach and connections to recent work in natural language processing, as well as the topics of contextuality in physics, data fusion, topological data analysis and epistemology are also discussed.
[ { "created": "Sat, 27 Jan 2018 05:13:55 GMT", "version": "v1" } ]
2018-01-30
[ [ "Zadrozny", "Wlodek", "" ], [ "Garbayo", "Luciana", "" ] ]
We introduce a new formal model -- based on the mathematical construct of sheaves -- for representing contradictory information in textual sources. This model has the advantage of letting us (a) identify the causes of the inconsistency; (b) measure how strong it is; (c) and do something about it, e.g. suggest ways to reconcile inconsistent advice. This model naturally represents the distinction between contradictions and disagreements. It is based on the idea of representing natural language sentences as formulas with parameters sitting on lattices, creating partial orders based on predicates shared by theories, and building sheaves on these partial orders with products of lattices as stalks. Degrees of disagreement are measured by the existence of global and local sections. Limitations of the sheaf approach and connections to recent work in natural language processing, as well as the topics of contextuality in physics, data fusion, topological data analysis and epistemology are also discussed.
2310.07765
Yonatan Kahn
Hannah Day, Yonatan Kahn, Daniel A. Roberts
Feature Learning and Generalization in Deep Networks with Orthogonal Weights
v2: numerical experiments updated with more data, plots updated to match, conclusions unchanged. 30+12 pages, 20 figures
null
null
MIT-CTP/5625
cs.LG hep-ph hep-th stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fully-connected deep neural networks with weights initialized from independent Gaussian distributions can be tuned to criticality, which prevents the exponential growth or decay of signals propagating through the network. However, such networks still exhibit fluctuations that grow linearly with the depth of the network, which may impair the training of networks with width comparable to depth. We show analytically that rectangular networks with tanh activations and weights initialized from the ensemble of orthogonal matrices have corresponding preactivation fluctuations which are independent of depth, to leading order in inverse width. Moreover, we demonstrate numerically that, at initialization, all correlators involving the neural tangent kernel (NTK) and its descendants at leading order in inverse width -- which govern the evolution of observables during training -- saturate at a depth of $\sim 20$, rather than growing without bound as in the case of Gaussian initializations. We speculate that this structure preserves finite-width feature learning while reducing overall noise, thus improving both generalization and training speed in deep networks with depth comparable to width. We provide some experimental justification by relating empirical measurements of the NTK to the superior performance of deep nonlinear orthogonal networks trained under full-batch gradient descent on the MNIST and CIFAR-10 classification tasks.
[ { "created": "Wed, 11 Oct 2023 18:00:02 GMT", "version": "v1" }, { "created": "Wed, 12 Jun 2024 14:57:56 GMT", "version": "v2" } ]
2024-06-13
[ [ "Day", "Hannah", "" ], [ "Kahn", "Yonatan", "" ], [ "Roberts", "Daniel A.", "" ] ]
Fully-connected deep neural networks with weights initialized from independent Gaussian distributions can be tuned to criticality, which prevents the exponential growth or decay of signals propagating through the network. However, such networks still exhibit fluctuations that grow linearly with the depth of the network, which may impair the training of networks with width comparable to depth. We show analytically that rectangular networks with tanh activations and weights initialized from the ensemble of orthogonal matrices have corresponding preactivation fluctuations which are independent of depth, to leading order in inverse width. Moreover, we demonstrate numerically that, at initialization, all correlators involving the neural tangent kernel (NTK) and its descendants at leading order in inverse width -- which govern the evolution of observables during training -- saturate at a depth of $\sim 20$, rather than growing without bound as in the case of Gaussian initializations. We speculate that this structure preserves finite-width feature learning while reducing overall noise, thus improving both generalization and training speed in deep networks with depth comparable to width. We provide some experimental justification by relating empirical measurements of the NTK to the superior performance of deep nonlinear orthogonal networks trained under full-batch gradient descent on the MNIST and CIFAR-10 classification tasks.
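The `categories` field in records like the one above (`cs.LG hep-ph hep-th stat.ML`) is a single space-separated string, with the primary category listed first. A hedged sketch of filtering on it, using plain dicts rather than any particular dataset loader:

```python
# Sketch: split the space-separated `categories` field and filter records.
# The two sample records are abridged from rows shown in this file.

records = [
    {"id": "2310.07765", "categories": "cs.LG hep-ph hep-th stat.ML"},
    {"id": "1106.6242", "categories": "cs.CR cs.CV"},
]

def primary_category(rec):
    """arXiv lists the primary category first in the space-separated string."""
    return rec["categories"].split()[0]

cs_lg = [r["id"] for r in records if primary_category(r) == "cs.LG"]
print(cs_lg)  # → ['2310.07765']
```

Filtering on membership rather than primary category (`"cs.CV" in rec["categories"].split()`) would instead match both cross-listed records above.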
1106.6242
Sandeep Katta
Sandeep Katta
Visual Secret Sharing Scheme using Grayscale Images
6 pages, 2 figures
null
null
null
cs.CR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pixel expansion and the quality of the reconstructed secret image has been a major issue of visual secret sharing (VSS) schemes. A number of probabilistic VSS schemes with minimum pixel expansion have been proposed for black and white (binary) secret images. This paper presents a probabilistic (2, 3)-VSS scheme for gray scale images. Its pixel expansion is larger in size but the quality of the image is perfect when it's reconstructed. The construction of the shadow images (transparent shares) is based on the binary OR operation.
[ { "created": "Thu, 30 Jun 2011 14:25:46 GMT", "version": "v1" } ]
2011-07-01
[ [ "Katta", "Sandeep", "" ] ]
Pixel expansion and the quality of the reconstructed secret image have been major issues in visual secret sharing (VSS) schemes. A number of probabilistic VSS schemes with minimum pixel expansion have been proposed for black-and-white (binary) secret images. This paper presents a probabilistic (2, 3)-VSS scheme for grayscale images. Its pixel expansion is larger, but the quality of the reconstructed image is perfect. The construction of the shadow images (transparent shares) is based on the binary OR operation.
2106.04546
Yuan Yin
Yuan Yin, Ibrahim Ayed, Emmanuel de B\'ezenac, Nicolas Baskiotis, Patrick Gallinari
LEADS: Learning Dynamical Systems that Generalize Across Environments
Published at NeurIPS 2021
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When modeling dynamical systems from real-world data samples, the distribution of data often changes according to the environment in which they are captured, and the dynamics of the system itself vary from one environment to another. Generalizing across environments thus challenges the conventional frameworks. The classical settings suggest either considering data as i.i.d. and learning a single model to cover all situations or learning environment-specific models. Both are sub-optimal: the former disregards the discrepancies between environments leading to biased solutions, while the latter does not exploit their potential commonalities and is prone to scarcity problems. We propose LEADS, a novel framework that leverages the commonalities and discrepancies among known environments to improve model generalization. This is achieved with a tailored training formulation aiming at capturing common dynamics within a shared model while additional terms capture environment-specific dynamics. We ground our approach in theory, exhibiting a decrease in sample complexity with our approach and corroborate these results empirically, instantiating it for linear dynamics. Moreover, we concretize this framework for neural networks and evaluate it experimentally on representative families of nonlinear dynamics. We show that this new setting can exploit knowledge extracted from environment-dependent data and improves generalization for both known and novel environments. Code is available at https://github.com/yuan-yin/LEADS.
[ { "created": "Tue, 8 Jun 2021 17:28:19 GMT", "version": "v1" }, { "created": "Mon, 14 Feb 2022 13:46:57 GMT", "version": "v2" } ]
2022-02-15
[ [ "Yin", "Yuan", "" ], [ "Ayed", "Ibrahim", "" ], [ "de Bézenac", "Emmanuel", "" ], [ "Baskiotis", "Nicolas", "" ], [ "Gallinari", "Patrick", "" ] ]
When modeling dynamical systems from real-world data samples, the distribution of data often changes according to the environment in which they are captured, and the dynamics of the system itself vary from one environment to another. Generalizing across environments thus challenges the conventional frameworks. The classical settings suggest either considering data as i.i.d. and learning a single model to cover all situations or learning environment-specific models. Both are sub-optimal: the former disregards the discrepancies between environments leading to biased solutions, while the latter does not exploit their potential commonalities and is prone to scarcity problems. We propose LEADS, a novel framework that leverages the commonalities and discrepancies among known environments to improve model generalization. This is achieved with a tailored training formulation aiming at capturing common dynamics within a shared model while additional terms capture environment-specific dynamics. We ground our approach in theory, exhibiting a decrease in sample complexity with our approach and corroborate these results empirically, instantiating it for linear dynamics. Moreover, we concretize this framework for neural networks and evaluate it experimentally on representative families of nonlinear dynamics. We show that this new setting can exploit knowledge extracted from environment-dependent data and improves generalization for both known and novel environments. Code is available at https://github.com/yuan-yin/LEADS.
2209.09652
Chengyin Hu
Chengyin Hu, Weiwen Shi, Ling Tian
Adversarial Color Projection: A Projector-based Physical Attack to DNNs
arXiv admin note: substantial text overlap with arXiv:2209.02430
null
null
null
cs.CR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent research has demonstrated that deep neural networks (DNNs) are vulnerable to adversarial perturbations. Therefore, it is imperative to evaluate the resilience of advanced DNNs to adversarial attacks. However, traditional methods that use stickers as physical perturbations to deceive classifiers face challenges in achieving stealthiness and are susceptible to printing loss. Recently, advancements in physical attacks have utilized light beams, such as lasers, to perform attacks, where the optical patterns generated are artificial rather than natural. In this work, we propose a black-box projector-based physical attack, referred to as adversarial color projection (AdvCP), which manipulates the physical parameters of color projection to perform an adversarial attack. We evaluate our approach on three crucial criteria: effectiveness, stealthiness, and robustness. In the digital environment, we achieve an attack success rate of 97.60% on a subset of ImageNet, while in the physical environment, we attain an attack success rate of 100% in the indoor test and 82.14% in the outdoor test. The adversarial samples generated by AdvCP are compared with baseline samples to demonstrate the stealthiness of our approach. When attacking advanced DNNs, experimental results show that our method can achieve more than 85% attack success rate in all cases, which verifies the robustness of AdvCP. Finally, we consider the potential threats posed by AdvCP to future vision-based systems and applications and suggest some ideas for light-based physical attacks.
[ { "created": "Mon, 19 Sep 2022 12:27:32 GMT", "version": "v1" }, { "created": "Tue, 23 May 2023 11:56:41 GMT", "version": "v2" } ]
2023-05-24
[ [ "Hu", "Chengyin", "" ], [ "Shi", "Weiwen", "" ], [ "Tian", "Ling", "" ] ]
Recent research has demonstrated that deep neural networks (DNNs) are vulnerable to adversarial perturbations. Therefore, it is imperative to evaluate the resilience of advanced DNNs to adversarial attacks. However, traditional methods that use stickers as physical perturbations to deceive classifiers face challenges in achieving stealthiness and are susceptible to printing loss. Recently, advancements in physical attacks have utilized light beams, such as lasers, to perform attacks, where the optical patterns generated are artificial rather than natural. In this work, we propose a black-box projector-based physical attack, referred to as adversarial color projection (AdvCP), which manipulates the physical parameters of color projection to perform an adversarial attack. We evaluate our approach on three crucial criteria: effectiveness, stealthiness, and robustness. In the digital environment, we achieve an attack success rate of 97.60% on a subset of ImageNet, while in the physical environment, we attain an attack success rate of 100% in the indoor test and 82.14% in the outdoor test. The adversarial samples generated by AdvCP are compared with baseline samples to demonstrate the stealthiness of our approach. When attacking advanced DNNs, experimental results show that our method can achieve more than 85% attack success rate in all cases, which verifies the robustness of AdvCP. Finally, we consider the potential threats posed by AdvCP to future vision-based systems and applications and suggest some ideas for light-based physical attacks.
2003.13045
Liang Liu
Liang Liu, Jiangning Zhang, Ruifei He, Yong Liu, Yabiao Wang, Ying Tai, Donghao Luo, Chengjie Wang, Jilin Li, Feiyue Huang
Learning by Analogy: Reliable Supervision from Transformations for Unsupervised Optical Flow Estimation
Accepted to CVPR 2020, https://github.com/lliuz/ARFlow
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised learning of optical flow, which leverages the supervision from view synthesis, has emerged as a promising alternative to supervised methods. However, the objective of unsupervised learning is likely to be unreliable in challenging scenes. In this work, we present a framework to use more reliable supervision from transformations. It simply twists the general unsupervised learning pipeline by running another forward pass with transformed data from augmentation, along with using transformed predictions of original data as the self-supervision signal. Besides, we further introduce a lightweight network with multiple frames by a highly-shared flow decoder. Our method consistently gets a leap of performance on several benchmarks with the best accuracy among deep unsupervised methods. Also, our method achieves competitive results to recent fully supervised methods while with much fewer parameters.
[ { "created": "Sun, 29 Mar 2020 14:55:24 GMT", "version": "v1" }, { "created": "Sun, 29 Nov 2020 12:26:25 GMT", "version": "v2" } ]
2020-12-01
[ [ "Liu", "Liang", "" ], [ "Zhang", "Jiangning", "" ], [ "He", "Ruifei", "" ], [ "Liu", "Yong", "" ], [ "Wang", "Yabiao", "" ], [ "Tai", "Ying", "" ], [ "Luo", "Donghao", "" ], [ "Wang", "Chengjie", "" ], [ "Li", "Jilin", "" ], [ "Huang", "Feiyue", "" ] ]
Unsupervised learning of optical flow, which leverages the supervision from view synthesis, has emerged as a promising alternative to supervised methods. However, the objective of unsupervised learning is likely to be unreliable in challenging scenes. In this work, we present a framework to use more reliable supervision from transformations. It simply twists the general unsupervised learning pipeline by running another forward pass with transformed data from augmentation, along with using transformed predictions of original data as the self-supervision signal. Besides, we further introduce a lightweight network with multiple frames by a highly-shared flow decoder. Our method consistently achieves a leap in performance on several benchmarks with the best accuracy among deep unsupervised methods. Also, our method achieves competitive results to recent fully supervised methods while using far fewer parameters.
2404.01203
Siddhant Jain
Siddhant Jain, Daniel Watson, Eric Tabellion, Aleksander Ho{\l}y\'nski, Ben Poole, Janne Kontkanen
Video Interpolation with Diffusion Models
CVPR 2024, Project page at https://vidim-interpolation.github.io/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present VIDIM, a generative model for video interpolation, which creates short videos given a start and end frame. In order to achieve high fidelity and generate motions unseen in the input data, VIDIM uses cascaded diffusion models to first generate the target video at low resolution, and then generate the high-resolution video conditioned on the low-resolution generated video. We compare VIDIM to previous state-of-the-art methods on video interpolation, and demonstrate how such works fail in most settings where the underlying motion is complex, nonlinear, or ambiguous while VIDIM can easily handle such cases. We additionally demonstrate how classifier-free guidance on the start and end frame and conditioning the super-resolution model on the original high-resolution frames without additional parameters unlocks high-fidelity results. VIDIM is fast to sample from as it jointly denoises all the frames to be generated, requires less than a billion parameters per diffusion model to produce compelling results, and still enjoys scalability and improved quality at larger parameter counts.
[ { "created": "Mon, 1 Apr 2024 15:59:32 GMT", "version": "v1" } ]
2024-04-02
[ [ "Jain", "Siddhant", "" ], [ "Watson", "Daniel", "" ], [ "Tabellion", "Eric", "" ], [ "Hołyński", "Aleksander", "" ], [ "Poole", "Ben", "" ], [ "Kontkanen", "Janne", "" ] ]
We present VIDIM, a generative model for video interpolation, which creates short videos given a start and end frame. In order to achieve high fidelity and generate motions unseen in the input data, VIDIM uses cascaded diffusion models to first generate the target video at low resolution, and then generate the high-resolution video conditioned on the low-resolution generated video. We compare VIDIM to previous state-of-the-art methods on video interpolation, and demonstrate how such works fail in most settings where the underlying motion is complex, nonlinear, or ambiguous while VIDIM can easily handle such cases. We additionally demonstrate how classifier-free guidance on the start and end frame and conditioning the super-resolution model on the original high-resolution frames without additional parameters unlocks high-fidelity results. VIDIM is fast to sample from as it jointly denoises all the frames to be generated, requires less than a billion parameters per diffusion model to produce compelling results, and still enjoys scalability and improved quality at larger parameter counts.
1112.1828
L\'aszl\'o Kozma
Laszlo Kozma
Minimum Average Distance Triangulations
ESA 2012
null
null
null
cs.CG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of finding a triangulation T of a planar point set S so as to minimize the expected distance between two points x and y chosen uniformly at random from S. By distance we mean the length of the shortest path between x and y along edges of T. The length of a path is the sum of the weights of its edges. Edge weights are assumed to be given as part of the problem for every pair of distinct points (x,y) in S^2. In a different variant of the problem, the points are vertices of a simple polygon and we look for a triangulation of the interior of the polygon that is optimal in the same sense. We prove that a general formulation of the problem in which the weights are arbitrary positive numbers is strongly NP-complete. For the case when all the weights are equal we give polynomial-time algorithms. In the end we mention several open problems.
[ { "created": "Thu, 8 Dec 2011 13:19:24 GMT", "version": "v1" }, { "created": "Mon, 20 Feb 2012 16:35:46 GMT", "version": "v2" }, { "created": "Wed, 20 Jun 2012 13:18:20 GMT", "version": "v3" } ]
2012-06-21
[ [ "Kozma", "Laszlo", "" ] ]
We study the problem of finding a triangulation T of a planar point set S so as to minimize the expected distance between two points x and y chosen uniformly at random from S. By distance we mean the length of the shortest path between x and y along edges of T. The length of a path is the sum of the weights of its edges. Edge weights are assumed to be given as part of the problem for every pair of distinct points (x,y) in S^2. In a different variant of the problem, the points are vertices of a simple polygon and we look for a triangulation of the interior of the polygon that is optimal in the same sense. We prove that a general formulation of the problem in which the weights are arbitrary positive numbers is strongly NP-complete. For the case when all the weights are equal we give polynomial-time algorithms. In the end we mention several open problems.
2312.12466
Santhosh Pogaku
Santhosh Pogaku
Users Approach on Providing Feedback for Smart Home Devices
arXiv admin note: text overlap with arXiv:2312.11817
null
null
null
cs.HC cs.CL
http://creativecommons.org/licenses/by/4.0/
Smart home technology has attracted extraordinary interest in recent years for making individuals' lives simpler and more relaxing. Recent technological advances have produced many smart, refined systems that promote intelligent living. In this paper, we investigate users' behavioural intentions when providing feedback for smart home devices. We conduct an online survey of a sample of three to five students, selected by simple random sampling, to study users' motives for giving feedback on smart home devices and their expectations. We observed that most users are willing to actively share feedback on smart home devices to improve the service and quality of the product, fulfil user needs, and make their lives easier.
[ { "created": "Tue, 19 Dec 2023 03:18:12 GMT", "version": "v1" } ]
2023-12-21
[ [ "Pogaku", "Santhosh", "" ] ]
Smart home technology has attracted extraordinary interest in recent years for making individuals' lives simpler and more relaxing. Recent technological advances have produced many smart, refined systems that promote intelligent living. In this paper, we investigate users' behavioural intentions when providing feedback for smart home devices. We conduct an online survey of a sample of three to five students, selected by simple random sampling, to study users' motives for giving feedback on smart home devices and their expectations. We observed that most users are willing to actively share feedback on smart home devices to improve the service and quality of the product, fulfil user needs, and make their lives easier.
2404.07336
Prashant Mathur
Lucas Goncalves, Prashant Mathur, Chandrashekhar Lavania, Metehan Cekic, Marcello Federico, Kyu J. Han
PEAVS: Perceptual Evaluation of Audio-Visual Synchrony Grounded in Viewers' Opinion Scores
24 pages
null
null
null
cs.CV cs.MM eess.AS
http://creativecommons.org/licenses/by/4.0/
Recent advancements in audio-visual generative modeling have been propelled by progress in deep learning and the availability of data-rich benchmarks. However, the growth is not attributed solely to models and benchmarks. Universally accepted evaluation metrics also play an important role in advancing the field. While there are many metrics available to evaluate audio and visual content separately, there is a lack of metrics that offer a quantitative and interpretable measure of audio-visual synchronization for videos "in the wild". To address this gap, we first created a large-scale human-annotated dataset (100+ hrs) representing nine types of synchronization errors in audio-visual content and how humans perceive them. We then developed a PEAVS (Perceptual Evaluation of Audio-Visual Synchrony) score, a novel automatic metric with a 5-point scale that evaluates the quality of audio-visual synchronization. We validate PEAVS using a newly generated dataset, achieving a Pearson correlation of 0.79 at the set level and 0.54 at the clip level when compared to human labels. In our experiments, we observe a relative gain of 50% over a natural extension of Fr\'echet based metrics for audio-visual synchrony, confirming PEAVS's efficacy in objectively modeling subjective perceptions of audio-visual synchronization for videos "in the wild".
[ { "created": "Wed, 10 Apr 2024 20:32:24 GMT", "version": "v1" } ]
2024-04-12
[ [ "Goncalves", "Lucas", "" ], [ "Mathur", "Prashant", "" ], [ "Lavania", "Chandrashekhar", "" ], [ "Cekic", "Metehan", "" ], [ "Federico", "Marcello", "" ], [ "Han", "Kyu J.", "" ] ]
Recent advancements in audio-visual generative modeling have been propelled by progress in deep learning and the availability of data-rich benchmarks. However, the growth is not attributed solely to models and benchmarks. Universally accepted evaluation metrics also play an important role in advancing the field. While there are many metrics available to evaluate audio and visual content separately, there is a lack of metrics that offer a quantitative and interpretable measure of audio-visual synchronization for videos "in the wild". To address this gap, we first created a large-scale human-annotated dataset (100+ hrs) representing nine types of synchronization errors in audio-visual content and how humans perceive them. We then developed a PEAVS (Perceptual Evaluation of Audio-Visual Synchrony) score, a novel automatic metric with a 5-point scale that evaluates the quality of audio-visual synchronization. We validate PEAVS using a newly generated dataset, achieving a Pearson correlation of 0.79 at the set level and 0.54 at the clip level when compared to human labels. In our experiments, we observe a relative gain of 50% over a natural extension of Fr\'echet based metrics for audio-visual synchrony, confirming PEAVS's efficacy in objectively modeling subjective perceptions of audio-visual synchronization for videos "in the wild".
2309.08794
Deval Mehta
Deval Mehta, Shobi Sivathamboo, Hugh Simpson, Patrick Kwan, Terence O`Brien, Zongyuan Ge
Privacy-preserving Early Detection of Epileptic Seizures in Videos
Accepted to MICCAI 2023
null
null
null
cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
In this work, we contribute towards the development of video-based epileptic seizure classification by introducing a novel framework (SETR-PKD), which could achieve privacy-preserved early detection of seizures in videos. Specifically, our framework has two significant components - (1) It is built upon optical flow features extracted from the video of a seizure, which encodes the seizure motion semiotics while preserving the privacy of the patient; (2) It utilizes a transformer based progressive knowledge distillation, where the knowledge is gradually distilled from networks trained on a longer portion of video samples to the ones which will operate on shorter portions. Thus, our proposed framework addresses the limitations of the current approaches which compromise the privacy of the patients by directly operating on the RGB video of a seizure as well as impede real-time detection of a seizure by utilizing the full video sample to make a prediction. Our SETR-PKD framework could detect tonic-clonic seizures (TCSs) in a privacy-preserving manner with an accuracy of 83.9% while they are only half-way into their progression. Our data and code is available at https://github.com/DevD1092/seizure-detection
[ { "created": "Fri, 15 Sep 2023 22:29:07 GMT", "version": "v1" } ]
2023-09-19
[ [ "Mehta", "Deval", "" ], [ "Sivathamboo", "Shobi", "" ], [ "Simpson", "Hugh", "" ], [ "Kwan", "Patrick", "" ], [ "O`Brien", "Terence", "" ], [ "Ge", "Zongyuan", "" ] ]
In this work, we contribute towards the development of video-based epileptic seizure classification by introducing a novel framework (SETR-PKD), which could achieve privacy-preserved early detection of seizures in videos. Specifically, our framework has two significant components - (1) It is built upon optical flow features extracted from the video of a seizure, which encodes the seizure motion semiotics while preserving the privacy of the patient; (2) It utilizes a transformer based progressive knowledge distillation, where the knowledge is gradually distilled from networks trained on a longer portion of video samples to the ones which will operate on shorter portions. Thus, our proposed framework addresses the limitations of the current approaches which compromise the privacy of the patients by directly operating on the RGB video of a seizure as well as impede real-time detection of a seizure by utilizing the full video sample to make a prediction. Our SETR-PKD framework could detect tonic-clonic seizures (TCSs) in a privacy-preserving manner with an accuracy of 83.9% while they are only half-way into their progression. Our data and code is available at https://github.com/DevD1092/seizure-detection
2403.12706
Shanchuan Lin
Shanchuan Lin, Xiao Yang
AnimateDiff-Lightning: Cross-Model Diffusion Distillation
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
We present AnimateDiff-Lightning for lightning-fast video generation. Our model uses progressive adversarial diffusion distillation to achieve new state-of-the-art in few-step video generation. We discuss our modifications to adapt it for the video modality. Furthermore, we propose to simultaneously distill the probability flow of multiple base diffusion models, resulting in a single distilled motion module with broader style compatibility. We are pleased to release our distilled AnimateDiff-Lightning model for the community's use.
[ { "created": "Tue, 19 Mar 2024 13:08:54 GMT", "version": "v1" } ]
2024-03-20
[ [ "Lin", "Shanchuan", "" ], [ "Yang", "Xiao", "" ] ]
We present AnimateDiff-Lightning for lightning-fast video generation. Our model uses progressive adversarial diffusion distillation to achieve new state-of-the-art in few-step video generation. We discuss our modifications to adapt it for the video modality. Furthermore, we propose to simultaneously distill the probability flow of multiple base diffusion models, resulting in a single distilled motion module with broader style compatibility. We are pleased to release our distilled AnimateDiff-Lightning model for the community's use.
cs/0306063
Dantong Yu
Richard Baker, Dantong Yu, Tomasz Wlodek
A Model for Grid User Management
Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, Ca, USA, March 2003, 6 pages, 2 figures and 1 style file, PSN TUBT002
null
null
null
cs.DC
null
Registration and management of users in a large scale Grid computing environment presents new challenges that are not well addressed by existing protocols. Within a single Virtual Organization (VO), thousands of users will potentially need access to hundreds of computing sites, and the traditional model where users register for local accounts at each site will present significant scaling problems. However, computing sites must maintain control over access to the site and site policies generally require individual local accounts for every user. We present here a model that allows users to register once with a VO and yet still provides all of the computing sites the information they require with the required level of trust. We have developed tools to allow sites to automate the management of local accounts and the mappings between Grid identities and local accounts.
[ { "created": "Fri, 13 Jun 2003 17:01:45 GMT", "version": "v1" } ]
2007-05-23
[ [ "Baker", "Richard", "" ], [ "Yu", "Dantong", "" ], [ "Wlodek", "Tomasz", "" ] ]
Registration and management of users in a large scale Grid computing environment presents new challenges that are not well addressed by existing protocols. Within a single Virtual Organization (VO), thousands of users will potentially need access to hundreds of computing sites, and the traditional model where users register for local accounts at each site will present significant scaling problems. However, computing sites must maintain control over access to the site and site policies generally require individual local accounts for every user. We present here a model that allows users to register once with a VO and yet still provides all of the computing sites the information they require with the required level of trust. We have developed tools to allow sites to automate the management of local accounts and the mappings between Grid identities and local accounts.
1903.03036
David McDonald
David McDonald and Shan He
HEAT: Hyperbolic Embedding of Attributed Networks
15 pages, 4 figures
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding a low dimensional representation of hierarchical, structured data described by a network remains a challenging problem in the machine learning community. An emerging approach is embedding these networks into hyperbolic space because it can naturally represent a network's hierarchical structure. However, existing hyperbolic embedding approaches cannot deal with attributed networks, in which nodes are annotated with additional attributes. These attributes might provide additional proximity information to constrain the representations of the nodes, which is important to learn high-quality hyperbolic embeddings. To fill this gap, we introduce HEAT (Hyperbolic Embedding of ATtributed networks), the first method for embedding attributed networks to a hyperbolic space. HEAT consists of 1) a modified random walk algorithm to obtain training samples that capture both topological and attribute similarity; and 2) a learning algorithm for learning hyperboloid embeddings from the obtained training samples. We show that by leveraging node attributes, HEAT can outperform a state-of-the-art hyperbolic embedding algorithm on several downstream tasks. As a general embedding method, HEAT opens the door to hyperbolic manifold learning on a wide range of attributed and unattributed networks.
[ { "created": "Thu, 7 Mar 2019 16:50:26 GMT", "version": "v1" }, { "created": "Thu, 2 May 2019 11:17:22 GMT", "version": "v2" } ]
2019-05-03
[ [ "McDonald", "David", "" ], [ "He", "Shan", "" ] ]
Finding a low dimensional representation of hierarchical, structured data described by a network remains a challenging problem in the machine learning community. An emerging approach is embedding these networks into hyperbolic space because it can naturally represent a network's hierarchical structure. However, existing hyperbolic embedding approaches cannot deal with attributed networks, in which nodes are annotated with additional attributes. These attributes might provide additional proximity information to constrain the representations of the nodes, which is important to learn high-quality hyperbolic embeddings. To fill this gap, we introduce HEAT (Hyperbolic Embedding of ATtributed networks), the first method for embedding attributed networks to a hyperbolic space. HEAT consists of 1) a modified random walk algorithm to obtain training samples that capture both topological and attribute similarity; and 2) a learning algorithm for learning hyperboloid embeddings from the obtained training samples. We show that by leveraging node attributes, HEAT can outperform a state-of-the-art hyperbolic embedding algorithm on several downstream tasks. As a general embedding method, HEAT opens the door to hyperbolic manifold learning on a wide range of attributed and unattributed networks.
2206.12946
Nimet Kaygusuz
Nimet Kaygusuz, Oscar Mendez, Richard Bowden
AFT-VO: Asynchronous Fusion Transformers for Multi-View Visual Odometry Estimation
null
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motion estimation approaches typically employ sensor fusion techniques, such as the Kalman Filter, to handle individual sensor failures. More recently, deep learning-based fusion approaches have been proposed, increasing the performance and requiring less model-specific implementations. However, current deep fusion approaches often assume that sensors are synchronised, which is not always practical, especially for low-cost hardware. To address this limitation, in this work, we propose AFT-VO, a novel transformer-based sensor fusion architecture to estimate VO from multiple sensors. Our framework combines predictions from asynchronous multi-view cameras and accounts for the time discrepancies of measurements coming from different sources. Our approach first employs a Mixture Density Network (MDN) to estimate the probability distributions of the 6-DoF poses for every camera in the system. Then a novel transformer-based fusion module, AFT-VO, is introduced, which combines these asynchronous pose estimations, along with their confidences. More specifically, we introduce Discretiser and Source Encoding techniques which enable the fusion of multi-source asynchronous signals. We evaluate our approach on the popular nuScenes and KITTI datasets. Our experiments demonstrate that multi-view fusion for VO estimation provides robust and accurate trajectories, outperforming the state of the art in both challenging weather and lighting conditions.
[ { "created": "Sun, 26 Jun 2022 19:29:08 GMT", "version": "v1" }, { "created": "Fri, 16 Sep 2022 13:47:18 GMT", "version": "v2" } ]
2022-09-19
[ [ "Kaygusuz", "Nimet", "" ], [ "Mendez", "Oscar", "" ], [ "Bowden", "Richard", "" ] ]
Motion estimation approaches typically employ sensor fusion techniques, such as the Kalman Filter, to handle individual sensor failures. More recently, deep learning-based fusion approaches have been proposed, increasing the performance and requiring less model-specific implementations. However, current deep fusion approaches often assume that sensors are synchronised, which is not always practical, especially for low-cost hardware. To address this limitation, in this work, we propose AFT-VO, a novel transformer-based sensor fusion architecture to estimate VO from multiple sensors. Our framework combines predictions from asynchronous multi-view cameras and accounts for the time discrepancies of measurements coming from different sources. Our approach first employs a Mixture Density Network (MDN) to estimate the probability distributions of the 6-DoF poses for every camera in the system. Then a novel transformer-based fusion module, AFT-VO, is introduced, which combines these asynchronous pose estimations, along with their confidences. More specifically, we introduce Discretiser and Source Encoding techniques which enable the fusion of multi-source asynchronous signals. We evaluate our approach on the popular nuScenes and KITTI datasets. Our experiments demonstrate that multi-view fusion for VO estimation provides robust and accurate trajectories, outperforming the state of the art in both challenging weather and lighting conditions.
2111.07271
Auriol Degbelo
Lucas Braun, Auriol Degbelo, Christian Kray
Geofreebie: A Location-Based Freecycling App to Support Forced Migrant Resettlement
Article accepted for publication in the Journal of Location-based Services
null
10.1080/17489725.2021.1874553
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Germany has witnessed an influx of forced migrants in recent years. Promoting social interaction with the local community is key to supporting the resettlement of these newcomers. Location-based freecycling services present important benefits due to freecycling's potential to bolster social engagement and location-based services' ability to adapt to the user's context. Yet, their potential to support forced migrants' resettlement is yet to be examined. We conducted needs assessment interviews with 11 participants in Muenster, Germany. We analyzed the interview results to develop user requirements for location-based freecycling services. We then implemented a subset of the user requirements as a prototype mobile app called Geofreebie. The evaluation of the app with 22 participants showed that Geofreebie offered two key advantages for forced migrants' resettlement: it increased the size of their social network, and created a sense of community on their side. These findings can benefit researchers and developers of location-based services to support forced migrant resettlement.
[ { "created": "Sun, 14 Nov 2021 08:12:46 GMT", "version": "v1" } ]
2021-11-16
[ [ "Braun", "Lucas", "" ], [ "Degbelo", "Auriol", "" ], [ "Kray", "Christian", "" ] ]
Germany has witnessed an influx of forced migrants in recent years. Promoting social interaction with the local community is key to supporting the resettlement of these newcomers. Location-based freecycling services present important benefits due to freecycling's potential to bolster social engagement and location-based services' ability to adapt to the user's context. Yet, their potential to support forced migrants' resettlement is yet to be examined. We conducted needs assessment interviews with 11 participants in Muenster, Germany. We analyzed the interview results to develop user requirements for location-based freecycling services. We then implemented a subset of the user requirements as a prototype mobile app called Geofreebie. The evaluation of the app with 22 participants showed that Geofreebie offered two key advantages for forced migrants' resettlement: it increased the size of their social network, and created a sense of community on their side. These findings can benefit researchers and developers of location-based services to support forced migrant resettlement.
cs/0406033
Manor Mendel
Manor Mendel
Randomized k-server algorithms for growth-rate bounded graphs
The paper is withdrawn
J. Algorithms, 55(2): 192-202, 2005
10.1016/j.jalgor.2004.06.002
null
cs.DS
null
The paper referred to in the title is withdrawn.
[ { "created": "Thu, 17 Jun 2004 15:11:54 GMT", "version": "v1" }, { "created": "Fri, 28 Sep 2007 22:31:51 GMT", "version": "v2" } ]
2007-10-01
[ [ "Mendel", "Manor", "" ] ]
The paper referred to in the title is withdrawn.
1901.01651
Gary Pui-Tung Choi
Gary P. T. Choi, Hei Long Chan, Robin Yong, Sarbin Ranjitkar, Alan Brook, Grant Townsend, Ke Chen, Lok Ming Lui
Tooth morphometry using quasi-conformal theory
null
Pattern Recognition 99, 107064 (2020)
10.1016/j.patcog.2019.107064
null
cs.CV cs.CG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shape analysis is important in anthropology, bioarchaeology and forensic science for interpreting useful information from human remains. In particular, teeth are morphologically stable and hence well-suited for shape analysis. In this work, we propose a framework for tooth morphometry using quasi-conformal theory. Landmark-matching Teichm\"uller maps are used for establishing a 1-1 correspondence between tooth surfaces with prescribed anatomical landmarks. Then, a quasi-conformal statistical shape analysis model based on the Teichm\"uller mapping results is proposed for building a tooth classification scheme. We deploy our framework on a dataset of human premolars to analyze the tooth shape variation among genders and ancestries. Experimental results show that our method achieves much higher classification accuracy with respect to both gender and ancestry when compared to the existing methods. Furthermore, our model reveals the underlying tooth shape difference between different genders and ancestries in terms of the local geometric distortion and curvatures.
[ { "created": "Mon, 7 Jan 2019 03:00:12 GMT", "version": "v1" } ]
2020-02-10
[ [ "Choi", "Gary P. T.", "" ], [ "Chan", "Hei Long", "" ], [ "Yong", "Robin", "" ], [ "Ranjitkar", "Sarbin", "" ], [ "Brook", "Alan", "" ], [ "Townsend", "Grant", "" ], [ "Chen", "Ke", "" ], [ "Lui", "Lok Ming", "" ] ]
Shape analysis is important in anthropology, bioarchaeology and forensic science for interpreting useful information from human remains. In particular, teeth are morphologically stable and hence well-suited for shape analysis. In this work, we propose a framework for tooth morphometry using quasi-conformal theory. Landmark-matching Teichm\"uller maps are used for establishing a 1-1 correspondence between tooth surfaces with prescribed anatomical landmarks. Then, a quasi-conformal statistical shape analysis model based on the Teichm\"uller mapping results is proposed for building a tooth classification scheme. We deploy our framework on a dataset of human premolars to analyze the tooth shape variation among genders and ancestries. Experimental results show that our method achieves much higher classification accuracy with respect to both gender and ancestry when compared to the existing methods. Furthermore, our model reveals the underlying tooth shape difference between different genders and ancestries in terms of the local geometric distortion and curvatures.
2308.09866
Junyan Su
Junyan Su, Qiulin Lin, Minghua Chen, Haibo Zeng
Minimizing Carbon Footprint for Timely E-Truck Transportation: Hardness and Approximation Algorithm
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Carbon footprint optimization (CFO) is important for sustainable heavy-duty e-truck transportation. We consider the CFO problem for timely transportation of e-trucks, where the truck travels from an origin to a destination across a national highway network subject to a deadline. The goal is to minimize the carbon footprint by orchestrating path planning, speed planning, and intermediary charging planning. We first show that it is NP-hard even just to find a feasible CFO solution. We then develop a $(1+\epsilon_F, 1+\epsilon_\beta)$ bi-criteria approximation algorithm that achieves a carbon footprint within a ratio of $(1+\epsilon_F)$ to the minimum with no deadline violation and at most a ratio of $(1+\epsilon_\beta)$ battery capacity violation (for any positive $\epsilon_F$ and $\epsilon_\beta$). Its time complexity is polynomial in the size of the highway network, $1/\epsilon_F$, and $1/\epsilon_\beta$. Such algorithmic results are among the best possible unless P=NP. Simulation results based on real-world traces show that our scheme reduces up to 11\% carbon footprint as compared to baseline alternatives considering only energy consumption but not carbon footprint.
[ { "created": "Sat, 19 Aug 2023 00:59:17 GMT", "version": "v1" } ]
2023-08-22
[ [ "Su", "Junyan", "" ], [ "Lin", "Qiulin", "" ], [ "Chen", "Minghua", "" ], [ "Zeng", "Haibo", "" ] ]
Carbon footprint optimization (CFO) is important for sustainable heavy-duty e-truck transportation. We consider the CFO problem for timely transportation of e-trucks, where the truck travels from an origin to a destination across a national highway network subject to a deadline. The goal is to minimize the carbon footprint by orchestrating path planning, speed planning, and intermediary charging planning. We first show that it is NP-hard even just to find a feasible CFO solution. We then develop a $(1+\epsilon_F, 1+\epsilon_\beta)$ bi-criteria approximation algorithm that achieves a carbon footprint within a ratio of $(1+\epsilon_F)$ to the minimum with no deadline violation and at most a ratio of $(1+\epsilon_\beta)$ battery capacity violation (for any positive $\epsilon_F$ and $\epsilon_\beta$). Its time complexity is polynomial in the size of the highway network, $1/\epsilon_F$, and $1/\epsilon_\beta$. Such algorithmic results are among the best possible unless P=NP. Simulation results based on real-world traces show that our scheme reduces up to 11\% carbon footprint as compared to baseline alternatives considering only energy consumption but not carbon footprint.
2310.01292
Luyi Qiu
Luyi Qiu and Dayu Yu and Xiaofeng Zhang and Chenxiao Zhang
Efficient Remote Sensing Segmentation With Generative Adversarial Transformer
null
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most deep learning methods that achieve high segmentation accuracy require deep network architectures that are too heavy and complex to run on embedded devices with limited storage and memory space. To address this issue, this paper proposes an efficient Generative Adversarial Transformer (GATrans) for achieving high-precision semantic segmentation while maintaining an extremely compact size. The framework utilizes a Global Transformer Network (GTNet) as the generator, efficiently extracting multi-level features through residual connections. GTNet employs global transformer blocks with progressively linear computational complexity to reassign global features based on a learnable similarity function. To focus on object-level and pixel-level information, the GATrans optimizes the objective function by combining structural similarity losses. We validate the effectiveness of our approach through extensive experiments on the Vaihingen dataset, achieving an average F1 score of 90.17% and an overall accuracy of 91.92%.
[ { "created": "Mon, 2 Oct 2023 15:46:59 GMT", "version": "v1" } ]
2023-10-03
[ [ "Qiu", "Luyi", "" ], [ "Yu", "Dayu", "" ], [ "Zhang", "Xiaofeng", "" ], [ "Zhang", "Chenxiao", "" ] ]
Most deep learning methods that achieve high segmentation accuracy require deep network architectures that are too heavy and complex to run on embedded devices with limited storage and memory space. To address this issue, this paper proposes an efficient Generative Adversarial Transformer (GATrans) for achieving high-precision semantic segmentation while maintaining an extremely compact size. The framework utilizes a Global Transformer Network (GTNet) as the generator, efficiently extracting multi-level features through residual connections. GTNet employs global transformer blocks with progressively linear computational complexity to reassign global features based on a learnable similarity function. To focus on object-level and pixel-level information, the GATrans optimizes the objective function by combining structural similarity losses. We validate the effectiveness of our approach through extensive experiments on the Vaihingen dataset, achieving an average F1 score of 90.17% and an overall accuracy of 91.92%.
2103.13188
Alexander Venus MSc
Alexander Venus, Erik Leitinger, Stefan Tertinek and Klaus Witrisal
A Message Passing based Adaptive PDA Algorithm for Robust Radio-based Localization and Tracking
6 pages (two column), 6 figures, IEEE RadarConf 2021: Synergistic Radar Signal Processing and Tracking
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
We present a message passing algorithm for localization and tracking in multipath-prone environments that implicitly considers obstructed line-of-sight situations. The proposed adaptive probabilistic data association algorithm infers the position of a mobile agent using multiple anchors by utilizing delay and amplitude of the multipath components (MPCs) as well as their respective uncertainties. By employing a nonuniform clutter model, we enable the algorithm to exploit the position information contained in the MPCs to support the estimation of the agent position without exact knowledge about the environment geometry. Our algorithm adapts in an online manner to both the time-varying signal-to-noise ratio and the line-of-sight (LOS) existence probability of each anchor. In a numerical analysis we show that the algorithm is able to operate reliably in environments characterized by strong multipath propagation, even if a temporary obstruction of all anchors occurs simultaneously.
[ { "created": "Wed, 24 Mar 2021 13:43:34 GMT", "version": "v1" }, { "created": "Thu, 25 Mar 2021 12:06:28 GMT", "version": "v2" } ]
2021-03-26
[ [ "Venus", "Alexander", "" ], [ "Leitinger", "Erik", "" ], [ "Tertinek", "Stefan", "" ], [ "Witrisal", "Klaus", "" ] ]
We present a message passing algorithm for localization and tracking in multipath-prone environments that implicitly considers obstructed line-of-sight situations. The proposed adaptive probabilistic data association algorithm infers the position of a mobile agent using multiple anchors by utilizing delay and amplitude of the multipath components (MPCs) as well as their respective uncertainties. By employing a nonuniform clutter model, we enable the algorithm to exploit the position information contained in the MPCs to support the estimation of the agent position without exact knowledge about the environment geometry. Our algorithm adapts in an online manner to both the time-varying signal-to-noise ratio and the line-of-sight (LOS) existence probability of each anchor. In a numerical analysis we show that the algorithm is able to operate reliably in environments characterized by strong multipath propagation, even if a temporary obstruction of all anchors occurs simultaneously.
2104.02995
Qingqing Long
Qingqing Long, Yilun Jin, Yi Wu, Guojie Song
Theoretically Improving Graph Neural Networks via Anonymous Walk Graph Kernels
11 pages
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph neural networks (GNNs) have achieved tremendous success in graph mining. However, the inability of GNNs to model substructures in graphs remains a significant drawback. Specifically, message-passing GNNs (MPGNNs), as the prevailing type of GNNs, have been theoretically shown unable to distinguish, detect or count many graph substructures. While efforts have been made to remedy this inability, existing works either rely on pre-defined substructure sets, thus being less flexible, or are lacking in theoretical insights. In this paper, we propose GSKN, a GNN model with a theoretically stronger ability to distinguish graph structures. Specifically, we design GSKN based on anonymous walks (AWs), flexible substructure units, and derive it upon feature mappings of graph kernels (GKs). We theoretically show that GSKN provably extends the 1-WL test, and hence the maximally powerful MPGNNs from both graph-level and node-level viewpoints. Correspondingly, various experiments are leveraged to evaluate GSKN, where GSKN outperforms a wide range of baselines, endorsing the analysis.
[ { "created": "Wed, 7 Apr 2021 08:50:34 GMT", "version": "v1" } ]
2021-04-08
[ [ "Long", "Qingqing", "" ], [ "Jin", "Yilun", "" ], [ "Wu", "Yi", "" ], [ "Song", "Guojie", "" ] ]
Graph neural networks (GNNs) have achieved tremendous success in graph mining. However, the inability of GNNs to model substructures in graphs remains a significant drawback. Specifically, message-passing GNNs (MPGNNs), as the prevailing type of GNNs, have been theoretically shown unable to distinguish, detect or count many graph substructures. While efforts have been made to remedy this inability, existing works either rely on pre-defined substructure sets, thus being less flexible, or are lacking in theoretical insights. In this paper, we propose GSKN, a GNN model with a theoretically stronger ability to distinguish graph structures. Specifically, we design GSKN based on anonymous walks (AWs), flexible substructure units, and derive it upon feature mappings of graph kernels (GKs). We theoretically show that GSKN provably extends the 1-WL test, and hence the maximally powerful MPGNNs from both graph-level and node-level viewpoints. Correspondingly, various experiments are leveraged to evaluate GSKN, where GSKN outperforms a wide range of baselines, endorsing the analysis.
2311.10785
Federico Albanese
Federico Albanese and Daniel Ciolek and Nicolas D'Ippolito
Text Sanitization Beyond Specific Domains: Zero-Shot Redaction & Substitution with Large Language Models
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the context of information systems, text sanitization techniques are used to identify and remove sensitive data to comply with security and regulatory requirements. Even though many methods for privacy preservation have been proposed, most of them are focused on the detection of entities from specific domains (e.g., credit card numbers, social security numbers), lacking generality and requiring customization for each desired domain. Moreover, removing words is, in general, a drastic measure, as it can degrade text coherence and contextual information. Less severe measures include substituting a word for a safe alternative, yet it can be challenging to automatically find meaningful substitutions. We present a zero-shot text sanitization technique that detects and substitutes potentially sensitive information using Large Language Models. Our evaluation shows that our method excels at protecting privacy while maintaining text coherence and contextual information, preserving data utility for downstream tasks.
[ { "created": "Thu, 16 Nov 2023 18:42:37 GMT", "version": "v1" } ]
2023-11-21
[ [ "Albanese", "Federico", "" ], [ "Ciolek", "Daniel", "" ], [ "D'Ippolito", "Nicolas", "" ] ]
In the context of information systems, text sanitization techniques are used to identify and remove sensitive data to comply with security and regulatory requirements. Even though many methods for privacy preservation have been proposed, most of them are focused on the detection of entities from specific domains (e.g., credit card numbers, social security numbers), lacking generality and requiring customization for each desired domain. Moreover, removing words is, in general, a drastic measure, as it can degrade text coherence and contextual information. Less severe measures include substituting a word for a safe alternative, yet it can be challenging to automatically find meaningful substitutions. We present a zero-shot text sanitization technique that detects and substitutes potentially sensitive information using Large Language Models. Our evaluation shows that our method excels at protecting privacy while maintaining text coherence and contextual information, preserving data utility for downstream tasks.
2401.03552
Vinod Puthuvath
Sameera K. M., Serena Nicolazzo, Marco Arazzi, Antonino Nocera, Rafidha Rehiman K. A., Vinod P and Mauro Conti
Privacy-Preserving in Blockchain-based Federated Learning Systems
44 pages, 11 figures
null
null
null
cs.CR cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated Learning (FL) has recently arisen as a revolutionary approach to collaboratively training Machine Learning models. According to this novel framework, multiple participants train a global model collaboratively, coordinating with a central aggregator without sharing their local data. As FL gains popularity in diverse domains, security and privacy concerns arise due to the distributed nature of this solution. Therefore, integrating this strategy with Blockchain technology has been consolidated as a preferred choice to ensure the privacy and security of participants. This paper explores the research efforts carried out by the scientific community to define privacy solutions in scenarios adopting Blockchain-Enabled FL. It comprehensively summarizes the background related to FL and Blockchain, evaluates existing architectures for their integration, and describes the primary attacks and possible countermeasures to guarantee privacy in this setting. Finally, it reviews the main application scenarios where Blockchain-Enabled FL approaches have been proficiently applied. This survey can help academia and industry practitioners understand which theories and techniques exist to improve the performance of FL through Blockchain to preserve privacy and which are the main challenges and future directions in this novel and still under-explored context. We believe this work provides a novel contribution with respect to previous surveys and is a valuable tool to explore the current landscape, understand perspectives, and pave the way for advancements or improvements in this amalgamation of Blockchain and Federated Learning.
[ { "created": "Sun, 7 Jan 2024 17:23:55 GMT", "version": "v1" } ]
2024-01-09
[ [ "M.", "Sameera K.", "" ], [ "Nicolazzo", "Serena", "" ], [ "Arazzi", "Marco", "" ], [ "Nocera", "Antonino", "" ], [ "A.", "Rafidha Rehiman K.", "" ], [ "P", "Vinod", "" ], [ "Conti", "Mauro", "" ] ]
Federated Learning (FL) has recently arisen as a revolutionary approach to collaboratively training Machine Learning models. According to this novel framework, multiple participants train a global model collaboratively, coordinating with a central aggregator without sharing their local data. As FL gains popularity in diverse domains, security and privacy concerns arise due to the distributed nature of this solution. Therefore, integrating this strategy with Blockchain technology has been consolidated as a preferred choice to ensure the privacy and security of participants. This paper explores the research efforts carried out by the scientific community to define privacy solutions in scenarios adopting Blockchain-Enabled FL. It comprehensively summarizes the background related to FL and Blockchain, evaluates existing architectures for their integration, and describes the primary attacks and possible countermeasures to guarantee privacy in this setting. Finally, it reviews the main application scenarios where Blockchain-Enabled FL approaches have been proficiently applied. This survey can help academia and industry practitioners understand which theories and techniques exist to improve the performance of FL through Blockchain to preserve privacy and which are the main challenges and future directions in this novel and still under-explored context. We believe this work provides a novel contribution with respect to previous surveys and is a valuable tool to explore the current landscape, understand perspectives, and pave the way for advancements or improvements in this amalgamation of Blockchain and Federated Learning.
1905.11471
Jasdeep Singh
Jasdeep Singh, Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher
XLDA: Cross-Lingual Data Augmentation for Natural Language Inference and Question Answering
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While natural language processing systems often focus on a single language, multilingual transfer learning has the potential to improve performance, especially for low-resource languages. We introduce XLDA, cross-lingual data augmentation, a method that replaces a segment of the input text with its translation in another language. XLDA enhances performance of all 14 tested languages of the cross-lingual natural language inference (XNLI) benchmark. With improvements of up to $4.8\%$, training with XLDA achieves state-of-the-art performance for Greek, Turkish, and Urdu. XLDA is in contrast to, and performs markedly better than, a more naive approach that aggregates examples in various languages in a way that each example is solely in one language. On the SQuAD question answering task, we see that XLDA provides a $1.0\%$ performance increase on the English evaluation set. Comprehensive experiments suggest that most languages are effective as cross-lingual augmentors, that XLDA is robust to a wide range of translation quality, and that XLDA is even more effective for randomly initialized models than for pretrained models.
[ { "created": "Mon, 27 May 2019 19:44:33 GMT", "version": "v1" } ]
2019-05-29
[ [ "Singh", "Jasdeep", "" ], [ "McCann", "Bryan", "" ], [ "Keskar", "Nitish Shirish", "" ], [ "Xiong", "Caiming", "" ], [ "Socher", "Richard", "" ] ]
While natural language processing systems often focus on a single language, multilingual transfer learning has the potential to improve performance, especially for low-resource languages. We introduce XLDA, cross-lingual data augmentation, a method that replaces a segment of the input text with its translation in another language. XLDA enhances performance of all 14 tested languages of the cross-lingual natural language inference (XNLI) benchmark. With improvements of up to $4.8\%$, training with XLDA achieves state-of-the-art performance for Greek, Turkish, and Urdu. XLDA is in contrast to, and performs markedly better than, a more naive approach that aggregates examples in various languages in a way that each example is solely in one language. On the SQuAD question answering task, we see that XLDA provides a $1.0\%$ performance increase on the English evaluation set. Comprehensive experiments suggest that most languages are effective as cross-lingual augmentors, that XLDA is robust to a wide range of translation quality, and that XLDA is even more effective for randomly initialized models than for pretrained models.
2210.07071
Yunhua Zhou
Yunhua Zhou, Pengyu Wang, Peiju Liu, Yuxin Wang, Xipeng Qiu
The Open-World Lottery Ticket Hypothesis for OOD Intent Classification
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing methods of Out-of-Domain (OOD) intent classification rely on extensive auxiliary OOD corpora or specific training paradigms. However, they are underdeveloped in the underlying principle that the models should have differentiated confidence in In- and Out-of-domain intent. In this work, we shed light on the fundamental cause of model overconfidence on OOD and demonstrate that calibrated subnetworks can be uncovered by pruning the overparameterized model. Calibrated confidence provided by the subnetwork can better distinguish In- and Out-of-domain, which can be a benefit for almost all post hoc methods. In addition to bringing fundamental insights, we also extend the Lottery Ticket Hypothesis to open-world scenarios. We conduct extensive experiments on four real-world datasets to demonstrate our approach can establish consistent improvements compared with a suite of competitive baselines.
[ { "created": "Thu, 13 Oct 2022 14:58:35 GMT", "version": "v1" }, { "created": "Mon, 15 Apr 2024 13:42:13 GMT", "version": "v2" }, { "created": "Wed, 24 Apr 2024 02:37:55 GMT", "version": "v3" } ]
2024-04-25
[ [ "Zhou", "Yunhua", "" ], [ "Wang", "Pengyu", "" ], [ "Liu", "Peiju", "" ], [ "Wang", "Yuxin", "" ], [ "Qiu", "Xipeng", "" ] ]
Most existing methods of Out-of-Domain (OOD) intent classification rely on extensive auxiliary OOD corpora or specific training paradigms. However, they are underdeveloped in the underlying principle that the models should have differentiated confidence in In- and Out-of-domain intent. In this work, we shed light on the fundamental cause of model overconfidence on OOD and demonstrate that calibrated subnetworks can be uncovered by pruning the overparameterized model. Calibrated confidence provided by the subnetwork can better distinguish In- and Out-of-domain, which can be a benefit for almost all post hoc methods. In addition to bringing fundamental insights, we also extend the Lottery Ticket Hypothesis to open-world scenarios. We conduct extensive experiments on four real-world datasets to demonstrate our approach can establish consistent improvements compared with a suite of competitive baselines.
2006.03659
John Giorgi
John Giorgi, Osvald Nitski, Bo Wang, Gary Bader
DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations
ACL2021 Camera Ready V2
null
null
null
cs.CL cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
Sentence embeddings are an important component of many natural language processing (NLP) systems. Like word embeddings, sentence embeddings are typically learned on large text corpora and then transferred to various downstream tasks, such as clustering and retrieval. Unlike word embeddings, the highest performing solutions for learning sentence embeddings require labelled data, limiting their usefulness to languages and domains where labelled data is abundant. In this paper, we present DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations. Inspired by recent advances in deep metric learning (DML), we carefully design a self-supervised objective for learning universal sentence embeddings that does not require labelled training data. When used to extend the pretraining of transformer-based language models, our approach closes the performance gap between unsupervised and supervised pretraining for universal sentence encoders. Importantly, our experiments suggest that the quality of the learned embeddings scales with both the number of trainable parameters and the amount of unlabelled training data. Our code and pretrained models are publicly available and can be easily adapted to new domains or used to embed unseen text.
[ { "created": "Fri, 5 Jun 2020 20:00:28 GMT", "version": "v1" }, { "created": "Thu, 11 Jun 2020 20:24:17 GMT", "version": "v2" }, { "created": "Thu, 20 May 2021 19:47:53 GMT", "version": "v3" }, { "created": "Thu, 27 May 2021 14:57:02 GMT", "version": "v4" } ]
2021-05-28
[ [ "Giorgi", "John", "" ], [ "Nitski", "Osvald", "" ], [ "Wang", "Bo", "" ], [ "Bader", "Gary", "" ] ]
Sentence embeddings are an important component of many natural language processing (NLP) systems. Like word embeddings, sentence embeddings are typically learned on large text corpora and then transferred to various downstream tasks, such as clustering and retrieval. Unlike word embeddings, the highest performing solutions for learning sentence embeddings require labelled data, limiting their usefulness to languages and domains where labelled data is abundant. In this paper, we present DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations. Inspired by recent advances in deep metric learning (DML), we carefully design a self-supervised objective for learning universal sentence embeddings that does not require labelled training data. When used to extend the pretraining of transformer-based language models, our approach closes the performance gap between unsupervised and supervised pretraining for universal sentence encoders. Importantly, our experiments suggest that the quality of the learned embeddings scales with both the number of trainable parameters and the amount of unlabelled training data. Our code and pretrained models are publicly available and can be easily adapted to new domains or used to embed unseen text.
1910.13181
Talip Ucar
Talip Ucar
Bridging the ELBO and MMD
14 pages, 11 figures
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the challenges in training generative models such as the variational autoencoder (VAE) is avoiding posterior collapse. When the generator has too much capacity, it is prone to ignoring latent code. This problem is exacerbated when the dataset is small, and the latent dimension is high. The root of the problem is the ELBO objective, specifically the Kullback-Leibler (KL) divergence term in the objective function \citep{zhao2019infovae}. This paper proposes a new objective function to replace the KL term with one that emulates the maximum mean discrepancy (MMD) objective. It also introduces a new technique, named latent clipping, that is used to control distance between samples in latent space. A probabilistic autoencoder model, named $\mu$-VAE, is designed and trained on MNIST and MNIST Fashion datasets, using the new objective function, and is shown to outperform models trained with the ELBO and $\beta$-VAE objectives. The $\mu$-VAE is less prone to posterior collapse, and can generate reconstructions and new samples of good quality. Latent representations learned by $\mu$-VAE are shown to be good and can be used for downstream tasks such as classification.
[ { "created": "Tue, 29 Oct 2019 10:32:40 GMT", "version": "v1" } ]
2019-10-30
[ [ "Ucar", "Talip", "" ] ]
One of the challenges in training generative models such as the variational autoencoder (VAE) is avoiding posterior collapse. When the generator has too much capacity, it is prone to ignoring latent code. This problem is exacerbated when the dataset is small, and the latent dimension is high. The root of the problem is the ELBO objective, specifically the Kullback-Leibler (KL) divergence term in the objective function \citep{zhao2019infovae}. This paper proposes a new objective function to replace the KL term with one that emulates the maximum mean discrepancy (MMD) objective. It also introduces a new technique, named latent clipping, that is used to control distance between samples in latent space. A probabilistic autoencoder model, named $\mu$-VAE, is designed and trained on MNIST and MNIST Fashion datasets, using the new objective function, and is shown to outperform models trained with the ELBO and $\beta$-VAE objectives. The $\mu$-VAE is less prone to posterior collapse, and can generate reconstructions and new samples of good quality. Latent representations learned by $\mu$-VAE are shown to be good and can be used for downstream tasks such as classification.
1707.06070
Nicolas Robinson-Garcia
Nicolas Robinson-Garcia, Philippe Mongeon, Wei Jeng and Rodrigo Costas
DataCite as a novel bibliometric source: Coverage, strengths and limitations
Paper accepted for publication in Journal of Informetrics
Journal of Informetrics, 11(3), 841-854 (2017)
10.1016/j.joi.2017.07.003
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper explores the characteristics of DataCite to determine its possibilities and potential as a new bibliometric data source to analyze the scholarly production of open data. Open science and the increasing data sharing requirements from governments, funding bodies, institutions and scientific journals have led to a pressing demand for the development of data metrics. As a very first step towards reliable data metrics, we need to better comprehend the limitations and caveats of the information provided by sources of open data. In this paper, we critically examine records downloaded from the DataCite's OAI API and elaborate a series of recommendations regarding the use of this source for bibliometric analyses of open data. We highlight issues related to metadata incompleteness, lack of standardization, and ambiguous definitions of several fields. Despite these limitations, we emphasize DataCite's value and potential to become one of the main sources for data metrics development.
[ { "created": "Wed, 19 Jul 2017 13:14:44 GMT", "version": "v1" } ]
2017-10-13
[ [ "Robinson-Garcia", "Nicolas", "" ], [ "Mongeon", "Philippe", "" ], [ "Jeng", "Wei", "" ], [ "Costas", "Rodrigo", "" ] ]
This paper explores the characteristics of DataCite to determine its possibilities and potential as a new bibliometric data source to analyze the scholarly production of open data. Open science and the increasing data sharing requirements from governments, funding bodies, institutions and scientific journals have led to a pressing demand for the development of data metrics. As a very first step towards reliable data metrics, we need to better comprehend the limitations and caveats of the information provided by sources of open data. In this paper, we critically examine records downloaded from the DataCite's OAI API and elaborate a series of recommendations regarding the use of this source for bibliometric analyses of open data. We highlight issues related to metadata incompleteness, lack of standardization, and ambiguous definitions of several fields. Despite these limitations, we emphasize DataCite's value and potential to become one of the main sources for data metrics development.
1605.00031
Thomas Wiatowski
Philipp Grohs, Thomas Wiatowski, Helmut B\"olcskei
Deep Convolutional Neural Networks on Cartoon Functions
This is a slightly updated version of the paper published in the ISIT proceedings. Specifically, we corrected errors in the arguments on the volume of tubes. Note that this correction does not affect the main statements of the paper
Proc. of IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, pp. 1163-1167, July 2016
10.1109/ISIT.2016.7541482
null
cs.LG cs.CV math.NA stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wiatowski and B\"olcskei, 2015, proved that deformation stability and vertical translation invariance of deep convolutional neural network-based feature extractors are guaranteed by the network structure per se rather than the specific convolution kernels and non-linearities. While the translation invariance result applies to square-integrable functions, the deformation stability bound holds for band-limited functions only. Many signals of practical relevance (such as natural images) exhibit, however, sharp and curved discontinuities and are, hence, not band-limited. The main contribution of this paper is a deformation stability result that takes these structural properties into account. Specifically, we establish deformation stability bounds for the class of cartoon functions introduced by Donoho, 2001.
[ { "created": "Fri, 29 Apr 2016 21:40:16 GMT", "version": "v1" }, { "created": "Mon, 12 Feb 2018 13:47:49 GMT", "version": "v2" } ]
2018-02-13
[ [ "Grohs", "Philipp", "" ], [ "Wiatowski", "Thomas", "" ], [ "Bölcskei", "Helmut", "" ] ]
Wiatowski and B\"olcskei, 2015, proved that deformation stability and vertical translation invariance of deep convolutional neural network-based feature extractors are guaranteed by the network structure per se rather than the specific convolution kernels and non-linearities. While the translation invariance result applies to square-integrable functions, the deformation stability bound holds for band-limited functions only. Many signals of practical relevance (such as natural images) exhibit, however, sharp and curved discontinuities and are, hence, not band-limited. The main contribution of this paper is a deformation stability result that takes these structural properties into account. Specifically, we establish deformation stability bounds for the class of cartoon functions introduced by Donoho, 2001.
cs/0605041
Jian Cao
Jian Cao, Edmund M. Yeh
Asymptotically Optimal Multiple-access Communication via Distributed Rate Splitting
Submitted to the IEEE Transactions on Information Theory. 15 Pages
null
10.1109/TIT.2006.887497
null
cs.IT math.IT
null
We consider the multiple-access communication problem in a distributed setting for both the additive white Gaussian noise channel and the discrete memoryless channel. We propose a scheme called Distributed Rate Splitting to achieve the optimal rates allowed by information theory in a distributed manner. In this scheme, each real user creates a number of virtual users via a power/rate splitting mechanism in the M-user Gaussian channel or via a random switching mechanism in the M-user discrete memoryless channel. At the receiver, all virtual users are successively decoded. Compared with other multiple-access techniques, Distributed Rate Splitting can be implemented with lower complexity and less coordination. Furthermore, in a symmetric setting, we show that the rate tuple achieved by this scheme converges to the maximum equal rate point allowed by the information-theoretic bound as the number of virtual users per real user tends to infinity. When the capacity regions are asymmetric, we show that a point on the dominant face can be achieved asymptotically. Finally, when there is an unequal number of virtual users per real user, we show that differential user rate requirements can be accommodated in a distributed fashion.
[ { "created": "Tue, 9 May 2006 14:32:51 GMT", "version": "v1" }, { "created": "Tue, 3 Oct 2006 19:48:15 GMT", "version": "v2" } ]
2016-11-18
[ [ "Cao", "Jian", "" ], [ "Yeh", "Edmund M.", "" ] ]
We consider the multiple-access communication problem in a distributed setting for both the additive white Gaussian noise channel and the discrete memoryless channel. We propose a scheme called Distributed Rate Splitting to achieve the optimal rates allowed by information theory in a distributed manner. In this scheme, each real user creates a number of virtual users via a power/rate splitting mechanism in the M-user Gaussian channel or via a random switching mechanism in the M-user discrete memoryless channel. At the receiver, all virtual users are successively decoded. Compared with other multiple-access techniques, Distributed Rate Splitting can be implemented with lower complexity and less coordination. Furthermore, in a symmetric setting, we show that the rate tuple achieved by this scheme converges to the maximum equal rate point allowed by the information-theoretic bound as the number of virtual users per real user tends to infinity. When the capacity regions are asymmetric, we show that a point on the dominant face can be achieved asymptotically. Finally, when there is an unequal number of virtual users per real user, we show that differential user rate requirements can be accommodated in a distributed fashion.
2206.03657
Zhuoling Li
Zhuoling Li, Chuanrui Zhang, En Yu, Haoqian Wang
Delving into the Pre-training Paradigm of Monocular 3D Object Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The labels of monocular 3D object detection (M3OD) are expensive to obtain. Meanwhile, abundant unlabeled data is usually available in practical applications, and pre-training is an efficient way of exploiting the knowledge in unlabeled data. However, the pre-training paradigm for M3OD is hardly studied. We aim to bridge this gap in this work. To this end, we first draw two observations: (1) The guideline of devising pre-training tasks is imitating the representation of the target task. (2) Combining depth estimation and 2D object detection is a promising M3OD pre-training baseline. Afterwards, following the guideline, we propose several strategies to further improve this baseline, which mainly include target guided semi-dense depth estimation, keypoint-aware 2D object detection, and class-level loss adjustment. Combining all the developed techniques, the obtained pre-training framework produces pre-trained backbones that improve M3OD performance significantly on both the KITTI-3D and nuScenes benchmarks. For example, by applying a DLA34 backbone to a naive center-based M3OD detector, the moderate ${\rm AP}_{3D}70$ score of Car on the KITTI-3D testing set is boosted by 18.71\% and the NDS score on the nuScenes validation set is improved by 40.41\% relatively.
[ { "created": "Wed, 8 Jun 2022 03:01:13 GMT", "version": "v1" }, { "created": "Wed, 15 Jun 2022 02:50:31 GMT", "version": "v2" } ]
2022-06-16
[ [ "Li", "Zhuoling", "" ], [ "Zhang", "Chuanrui", "" ], [ "Yu", "En", "" ], [ "Wang", "Haoqian", "" ] ]
The labels of monocular 3D object detection (M3OD) are expensive to obtain. Meanwhile, abundant unlabeled data is usually available in practical applications, and pre-training is an efficient way of exploiting the knowledge in unlabeled data. However, the pre-training paradigm for M3OD is hardly studied. We aim to bridge this gap in this work. To this end, we first draw two observations: (1) The guideline of devising pre-training tasks is imitating the representation of the target task. (2) Combining depth estimation and 2D object detection is a promising M3OD pre-training baseline. Afterwards, following the guideline, we propose several strategies to further improve this baseline, which mainly include target guided semi-dense depth estimation, keypoint-aware 2D object detection, and class-level loss adjustment. Combining all the developed techniques, the obtained pre-training framework produces pre-trained backbones that improve M3OD performance significantly on both the KITTI-3D and nuScenes benchmarks. For example, by applying a DLA34 backbone to a naive center-based M3OD detector, the moderate ${\rm AP}_{3D}70$ score of Car on the KITTI-3D testing set is boosted by 18.71\% and the NDS score on the nuScenes validation set is improved by 40.41\% relatively.
2101.02028
Ye Tian
Ye Tian
A Multilayer Correlated Topic Model
11 pages, 4 figures
null
null
null
cs.IR cs.LG stat.CO stat.ME stat.ML
http://creativecommons.org/licenses/by/4.0/
We propose a novel multilayer correlated topic model (MCTM) to analyze how the main ideas are inherited and vary between a document and its different segments, which helps understand an article's structure. A variational expectation-maximization (EM) algorithm is derived to estimate the posterior and parameters in MCTM. We introduce two potential applications of MCTM, including paragraph-level document analysis and market basket data analysis. The effectiveness of MCTM in understanding document structure is verified by its strong predictive performance on held-out documents and intuitive visualization. We also show that MCTM can successfully capture customers' popular shopping patterns in the market basket analysis.
[ { "created": "Sat, 2 Jan 2021 21:50:36 GMT", "version": "v1" } ]
2021-01-07
[ [ "Tian", "Ye", "" ] ]
We propose a novel multilayer correlated topic model (MCTM) to analyze how the main ideas are inherited and vary between a document and its different segments, which helps understand an article's structure. A variational expectation-maximization (EM) algorithm is derived to estimate the posterior and parameters in MCTM. We introduce two potential applications of MCTM, including paragraph-level document analysis and market basket data analysis. The effectiveness of MCTM in understanding document structure is verified by its strong predictive performance on held-out documents and intuitive visualization. We also show that MCTM can successfully capture customers' popular shopping patterns in the market basket analysis.
2002.10451
Mayank Mittal
Mayank Mittal, Marco Gallieri, Alessio Quaglino, Seyed Sina Mirrazavi Salehian, Jan Koutn\'ik
Neural Lyapunov Model Predictive Control: Learning Safe Global Controllers from Sub-optimal Examples
null
null
null
null
cs.AI cs.NE cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With a growing interest in data-driven control techniques, Model Predictive Control (MPC) provides an opportunity to exploit the surplus of data reliably, particularly while taking safety and stability into account. In many real-world and industrial applications, it is typical to have an existing control strategy, for instance, execution by a human operator. The objective of this work is to improve upon this unknown, safe but suboptimal policy by learning a new controller that retains safety and stability. Learning how to be safe is achieved directly from data and from a knowledge of the system constraints. The proposed algorithm alternately learns the terminal cost and updates the MPC parameters according to a stability metric. The terminal cost is constructed as a Lyapunov function neural network with the aim of recovering or extending the stable region of the initial demonstrator using a short prediction horizon. Theorems that characterize the stability and performance of the learned MPC in the presence of model uncertainties and sub-optimality due to function approximation are presented. The efficacy of the proposed algorithm is demonstrated on non-linear continuous control tasks with soft constraints. The proposed approach can improve upon the initial demonstrator also in practice and achieve better stability than popular reinforcement learning baselines.
[ { "created": "Fri, 21 Feb 2020 16:57:38 GMT", "version": "v1" }, { "created": "Thu, 3 Jun 2021 14:37:05 GMT", "version": "v2" } ]
2021-06-04
[ [ "Mittal", "Mayank", "" ], [ "Gallieri", "Marco", "" ], [ "Quaglino", "Alessio", "" ], [ "Salehian", "Seyed Sina Mirrazavi", "" ], [ "Koutník", "Jan", "" ] ]
With a growing interest in data-driven control techniques, Model Predictive Control (MPC) provides an opportunity to exploit the surplus of data reliably, particularly while taking safety and stability into account. In many real-world and industrial applications, it is typical to have an existing control strategy, for instance, execution by a human operator. The objective of this work is to improve upon this unknown, safe but suboptimal policy by learning a new controller that retains safety and stability. Learning how to be safe is achieved directly from data and from a knowledge of the system constraints. The proposed algorithm alternately learns the terminal cost and updates the MPC parameters according to a stability metric. The terminal cost is constructed as a Lyapunov function neural network with the aim of recovering or extending the stable region of the initial demonstrator using a short prediction horizon. Theorems that characterize the stability and performance of the learned MPC in the presence of model uncertainties and sub-optimality due to function approximation are presented. The efficacy of the proposed algorithm is demonstrated on non-linear continuous control tasks with soft constraints. The proposed approach can improve upon the initial demonstrator also in practice and achieve better stability than popular reinforcement learning baselines.
1401.7828
Cristina Flaut
Cristina Flaut
Codes over a subset of Octonion Integers
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we define codes over some Octonion integers. We prove that, under some conditions, these codes can correct up to two errors for a transmitted vector, and that the code rate of these codes is greater than that of codes defined on some subset of Quaternion integers.
[ { "created": "Thu, 30 Jan 2014 12:38:13 GMT", "version": "v1" } ]
2014-01-31
[ [ "Flaut", "Cristina", "" ] ]
In this paper we define codes over some Octonion integers. We prove that, under some conditions, these codes can correct up to two errors for a transmitted vector, and that the code rate of these codes is greater than that of codes defined on some subset of Quaternion integers.
2005.00247
Jonas Pfeiffer
Jonas Pfeiffer, Aishwarya Kamath, Andreas R\"uckl\'e, Kyunghyun Cho, Iryna Gurevych
AdapterFusion: Non-Destructive Task Composition for Transfer Learning
null
Proceedings of EACL 2021
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequential fine-tuning and multi-task learning are methods aiming to incorporate knowledge from multiple tasks; however, they suffer from catastrophic forgetting and difficulties in dataset balancing. To address these shortcomings, we propose AdapterFusion, a new two-stage learning algorithm that leverages knowledge from multiple tasks. First, in the knowledge extraction stage we learn task-specific parameters called adapters, which encapsulate the task-specific information. We then combine the adapters in a separate knowledge composition step. We show that by separating the two stages, i.e., knowledge extraction and knowledge composition, the classifier can effectively exploit the representations learned from multiple tasks in a non-destructive manner. We empirically evaluate AdapterFusion on 16 diverse NLU tasks, and find that it effectively combines various types of knowledge at different layers of the model. We show that our approach outperforms traditional strategies such as full fine-tuning as well as multi-task learning. Our code and adapters are available at AdapterHub.ml.
[ { "created": "Fri, 1 May 2020 07:03:42 GMT", "version": "v1" }, { "created": "Mon, 25 Jan 2021 14:34:32 GMT", "version": "v2" }, { "created": "Tue, 26 Jan 2021 12:54:33 GMT", "version": "v3" } ]
2021-01-27
[ [ "Pfeiffer", "Jonas", "" ], [ "Kamath", "Aishwarya", "" ], [ "Rücklé", "Andreas", "" ], [ "Cho", "Kyunghyun", "" ], [ "Gurevych", "Iryna", "" ] ]
Sequential fine-tuning and multi-task learning are methods aiming to incorporate knowledge from multiple tasks; however, they suffer from catastrophic forgetting and difficulties in dataset balancing. To address these shortcomings, we propose AdapterFusion, a new two-stage learning algorithm that leverages knowledge from multiple tasks. First, in the knowledge extraction stage we learn task-specific parameters called adapters, which encapsulate the task-specific information. We then combine the adapters in a separate knowledge composition step. We show that by separating the two stages, i.e., knowledge extraction and knowledge composition, the classifier can effectively exploit the representations learned from multiple tasks in a non-destructive manner. We empirically evaluate AdapterFusion on 16 diverse NLU tasks, and find that it effectively combines various types of knowledge at different layers of the model. We show that our approach outperforms traditional strategies such as full fine-tuning as well as multi-task learning. Our code and adapters are available at AdapterHub.ml.
2212.04634
Yuxin Wang
Yuxin Wang, Jieru Lin, Zhiwei Yu, Wei Hu, B\"orje F. Karlsson
Open-world Story Generation with Structured Knowledge Enhancement: A Comprehensive Survey
Accepted in Neurocomputing
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Storytelling and narrative are fundamental to human experience, intertwined with our social and cultural engagement. As such, researchers have long attempted to create systems that can generate stories automatically. In recent years, powered by deep learning and massive data resources, automatic story generation has shown significant advances. However, considerable challenges, like the need for global coherence in generated stories, still hamper generative models from reaching the same storytelling ability as human narrators. To tackle these challenges, many studies seek to inject structured knowledge into the generation process, which is referred to as structured knowledge-enhanced story generation. Incorporating external knowledge can enhance the logical coherence among story events, achieve better knowledge grounding, and alleviate over-generalization and repetition problems in stories. This survey provides an up-to-date and comprehensive review of this research field: (i) we present a systematic taxonomy regarding how existing methods integrate structured knowledge into story generation; (ii) we summarize involved story corpora, structured knowledge datasets, and evaluation metrics; (iii) we give multidimensional insights into the challenges of knowledge-enhanced story generation and cast light on promising directions for future study.
[ { "created": "Fri, 9 Dec 2022 02:19:07 GMT", "version": "v1" }, { "created": "Fri, 24 Mar 2023 13:20:05 GMT", "version": "v2" }, { "created": "Tue, 12 Sep 2023 17:38:30 GMT", "version": "v3" } ]
2023-09-13
[ [ "Wang", "Yuxin", "" ], [ "Lin", "Jieru", "" ], [ "Yu", "Zhiwei", "" ], [ "Hu", "Wei", "" ], [ "Karlsson", "Börje F.", "" ] ]
Storytelling and narrative are fundamental to human experience, intertwined with our social and cultural engagement. As such, researchers have long attempted to create systems that can generate stories automatically. In recent years, powered by deep learning and massive data resources, automatic story generation has shown significant advances. However, considerable challenges, like the need for global coherence in generated stories, still hamper generative models from reaching the same storytelling ability as human narrators. To tackle these challenges, many studies seek to inject structured knowledge into the generation process, which is referred to as structured knowledge-enhanced story generation. Incorporating external knowledge can enhance the logical coherence among story events, achieve better knowledge grounding, and alleviate over-generalization and repetition problems in stories. This survey provides an up-to-date and comprehensive review of this research field: (i) we present a systematic taxonomy regarding how existing methods integrate structured knowledge into story generation; (ii) we summarize involved story corpora, structured knowledge datasets, and evaluation metrics; (iii) we give multidimensional insights into the challenges of knowledge-enhanced story generation and cast light on promising directions for future study.
cs/0606100
Marco Cuturi
Marco Cuturi
The generating function of the polytope of transport matrices $U(r,c)$ as a positive semidefinite kernel of the marginals $r$ and $c$
This paper has been withdrawn
null
null
null
cs.LG cs.DM
null
This paper has been withdrawn by the author due to a crucial error in the proof of Lemma 5.
[ { "created": "Fri, 23 Jun 2006 10:19:40 GMT", "version": "v1" }, { "created": "Mon, 26 Jun 2006 05:46:00 GMT", "version": "v2" }, { "created": "Tue, 4 Jan 2011 08:26:13 GMT", "version": "v3" }, { "created": "Tue, 11 Oct 2011 10:21:45 GMT", "version": "v4" } ]
2011-10-12
[ [ "Cuturi", "Marco", "" ] ]
This paper has been withdrawn by the author due to a crucial error in the proof of Lemma 5.
2311.06362
Yunting Yin
Yunting Yin and Steven Skiena
Word Definitions from Large Language Models
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dictionary definitions are historically the arbiter of what words mean, but this primacy has come under threat from recent progress in NLP, including word embeddings and generative models like ChatGPT. We present an exploratory study of the degree of alignment between word definitions from classical dictionaries and these newer computational artifacts. Specifically, we compare definitions from three published dictionaries to those generated from variants of ChatGPT. We show that (i) definitions from different traditional dictionaries exhibit more surface-form similarity than do model-generated definitions, (ii) the ChatGPT definitions are highly accurate, comparable to traditional dictionaries, and (iii) ChatGPT-based embedding definitions retain their accuracy even on low-frequency words, much better than GloVe and FastText word embeddings.
[ { "created": "Fri, 10 Nov 2023 19:27:20 GMT", "version": "v1" } ]
2023-11-14
[ [ "Yin", "Yunting", "" ], [ "Skiena", "Steven", "" ] ]
Dictionary definitions are historically the arbiter of what words mean, but this primacy has come under threat from recent progress in NLP, including word embeddings and generative models like ChatGPT. We present an exploratory study of the degree of alignment between word definitions from classical dictionaries and these newer computational artifacts. Specifically, we compare definitions from three published dictionaries to those generated from variants of ChatGPT. We show that (i) definitions from different traditional dictionaries exhibit more surface-form similarity than do model-generated definitions, (ii) the ChatGPT definitions are highly accurate, comparable to traditional dictionaries, and (iii) ChatGPT-based embedding definitions retain their accuracy even on low-frequency words, much better than GloVe and FastText word embeddings.
2012.11339
Anh Tong
Anh Tong, Toan Tran, Hung Bui, Jaesik Choi
Learning Compositional Sparse Gaussian Processes with a Shrinkage Prior
AAAI 2021
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Choosing a proper set of kernel functions is an important problem in learning Gaussian Process (GP) models since each kernel structure has different model complexity and data fitness. Recently, automatic kernel composition methods provide not only accurate prediction but also attractive interpretability through search-based methods. However, existing methods suffer from slow kernel composition learning. To tackle large-scale data, we propose a new sparse approximate posterior for GPs, MultiSVGP, constructed from groups of inducing points associated with individual additive kernels in compositional kernels. We demonstrate that this approximation provides a better fit for learning compositional kernels given empirical observations. We also provide theoretical justification for the error bound compared to the traditional sparse GP. In contrast to the search-based approach, we present a novel probabilistic algorithm to learn a kernel composition by handling the sparsity in the kernel selection with a Horseshoe prior. We demonstrate that our model can capture characteristics of time series with significant reductions in computational time and has competitive regression performance on real-world data sets.
[ { "created": "Mon, 21 Dec 2020 13:41:15 GMT", "version": "v1" }, { "created": "Wed, 24 Feb 2021 07:11:56 GMT", "version": "v2" } ]
2021-02-25
[ [ "Tong", "Anh", "" ], [ "Tran", "Toan", "" ], [ "Bui", "Hung", "" ], [ "Choi", "Jaesik", "" ] ]
Choosing a proper set of kernel functions is an important problem in learning Gaussian Process (GP) models since each kernel structure has different model complexity and data fitness. Recently, automatic kernel composition methods provide not only accurate prediction but also attractive interpretability through search-based methods. However, existing methods suffer from slow kernel composition learning. To tackle large-scale data, we propose a new sparse approximate posterior for GPs, MultiSVGP, constructed from groups of inducing points associated with individual additive kernels in compositional kernels. We demonstrate that this approximation provides a better fit for learning compositional kernels given empirical observations. We also provide theoretical justification for the error bound compared to the traditional sparse GP. In contrast to the search-based approach, we present a novel probabilistic algorithm to learn a kernel composition by handling the sparsity in the kernel selection with a Horseshoe prior. We demonstrate that our model can capture characteristics of time series with significant reductions in computational time and has competitive regression performance on real-world data sets.
0710.4318
Evelyne Hubert
Evelyne Hubert
Differential invariants of a Lie group action: syzygies on a generating set
Journal of Symbolic Computation (2008)
null
10.1016/j.jsc.2008.08.003
null
cs.SC math.DG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a group action, known by its infinitesimal generators, we exhibit a complete set of syzygies on a generating set of differential invariants. For that we elaborate on the reinterpretation of Cartan's moving frame by Fels and Olver (1999). This provides constructive tools for exploring algebras of differential invariants.
[ { "created": "Tue, 23 Oct 2007 19:20:10 GMT", "version": "v1" }, { "created": "Thu, 6 Dec 2007 15:49:21 GMT", "version": "v2" }, { "created": "Thu, 4 Sep 2008 15:06:25 GMT", "version": "v3" }, { "created": "Mon, 3 Nov 2008 10:42:30 GMT", "version": "v4" } ]
2008-11-03
[ [ "Hubert", "Evelyne", "" ] ]
Given a group action, known by its infinitesimal generators, we exhibit a complete set of syzygies on a generating set of differential invariants. For that we elaborate on the reinterpretation of Cartan's moving frame by Fels and Olver (1999). This provides constructive tools for exploring algebras of differential invariants.
2406.14150
Guillaume Richard
Juan Jose Garau-Luis, Patrick Bordes, Liam Gonzalez, Masa Roller, Bernardo P. de Almeida, Lorenz Hexemer, Christopher Blum, Stefan Laurent, Jan Grzegorzewski, Maren Lang, Thomas Pierrot, Guillaume Richard
Multi-modal Transfer Learning between Biological Foundation Models
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Biological sequences encode fundamental instructions for the building blocks of life, in the form of DNA, RNA, and proteins. Modeling these sequences is key to understanding disease mechanisms and is an active research area in computational biology. Recently, Large Language Models have shown great promise in solving certain biological tasks, but current approaches are limited to a single sequence modality (DNA, RNA, or protein). Key problems in genomics intrinsically involve multiple modalities, but it remains unclear how to adapt general-purpose sequence models to those cases. In this work we propose a multi-modal model that connects DNA, RNA, and proteins by leveraging information from different pre-trained modality-specific encoders. We demonstrate its capabilities by applying it to the largely unsolved problem of predicting how multiple RNA transcript isoforms originate from the same gene (i.e. same DNA sequence) and map to different transcription expression levels across various human tissues. We show that our model, dubbed IsoFormer, is able to accurately predict differential transcript expression, outperforming existing methods and leveraging the use of multiple modalities. Our framework also achieves efficient knowledge transfer from the encoders' pre-training as well as between modalities. We open-source our model, paving the way for new multi-modal gene expression approaches.
[ { "created": "Thu, 20 Jun 2024 09:44:53 GMT", "version": "v1" } ]
2024-06-21
[ [ "Garau-Luis", "Juan Jose", "" ], [ "Bordes", "Patrick", "" ], [ "Gonzalez", "Liam", "" ], [ "Roller", "Masa", "" ], [ "de Almeida", "Bernardo P.", "" ], [ "Hexemer", "Lorenz", "" ], [ "Blum", "Christopher", "" ], [ "Laurent", "Stefan", "" ], [ "Grzegorzewski", "Jan", "" ], [ "Lang", "Maren", "" ], [ "Pierrot", "Thomas", "" ], [ "Richard", "Guillaume", "" ] ]
Biological sequences encode fundamental instructions for the building blocks of life, in the form of DNA, RNA, and proteins. Modeling these sequences is key to understanding disease mechanisms and is an active research area in computational biology. Recently, Large Language Models have shown great promise in solving certain biological tasks, but current approaches are limited to a single sequence modality (DNA, RNA, or protein). Key problems in genomics intrinsically involve multiple modalities, but it remains unclear how to adapt general-purpose sequence models to those cases. In this work we propose a multi-modal model that connects DNA, RNA, and proteins by leveraging information from different pre-trained modality-specific encoders. We demonstrate its capabilities by applying it to the largely unsolved problem of predicting how multiple RNA transcript isoforms originate from the same gene (i.e. same DNA sequence) and map to different transcription expression levels across various human tissues. We show that our model, dubbed IsoFormer, is able to accurately predict differential transcript expression, outperforming existing methods and leveraging the use of multiple modalities. Our framework also achieves efficient knowledge transfer from the encoders' pre-training as well as between modalities. We open-source our model, paving the way for new multi-modal gene expression approaches.
2406.07840
Shubham Dokania
Abhay Rawat, Shubham Dokania, Astitva Srivastava, Shuaib Ahmed, Haiwen Feng, Rahul Tallamraju
SynthForge: Synthesizing High-Quality Face Dataset with Controllable 3D Generative Models
11 pages, 4 figures, 3 tables. Under Review
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent advancements in generative models have unlocked the capabilities to render photo-realistic data in a controllable fashion. Trained on real data, these generative models are capable of producing realistic samples with minimal to no domain gap, as compared to traditional graphics rendering. However, using the data generated by such models for training downstream tasks remains under-explored, mainly due to the lack of 3D-consistent annotations. Moreover, controllable generative models are learned from massive data and their latent space is often too vast to obtain meaningful sample distributions for downstream tasks with limited generation. To overcome these challenges, we extract 3D-consistent annotations from an existing controllable generative model, making the data useful for downstream tasks. Our experiments show competitive performance against state-of-the-art models using only generated synthetic data, demonstrating potential for solving downstream tasks. Project page: https://synth-forge.github.io
[ { "created": "Wed, 12 Jun 2024 03:15:15 GMT", "version": "v1" } ]
2024-06-13
[ [ "Rawat", "Abhay", "" ], [ "Dokania", "Shubham", "" ], [ "Srivastava", "Astitva", "" ], [ "Ahmed", "Shuaib", "" ], [ "Feng", "Haiwen", "" ], [ "Tallamraju", "Rahul", "" ] ]
Recent advancements in generative models have unlocked the capabilities to render photo-realistic data in a controllable fashion. Trained on real data, these generative models are capable of producing realistic samples with minimal to no domain gap, as compared to traditional graphics rendering. However, using the data generated by such models for training downstream tasks remains under-explored, mainly due to the lack of 3D-consistent annotations. Moreover, controllable generative models are learned from massive data and their latent space is often too vast to obtain meaningful sample distributions for downstream tasks with limited generation. To overcome these challenges, we extract 3D-consistent annotations from an existing controllable generative model, making the data useful for downstream tasks. Our experiments show competitive performance against state-of-the-art models using only generated synthetic data, demonstrating potential for solving downstream tasks. Project page: https://synth-forge.github.io
1712.07752
Rajesh Chidambaram
Rajesh Chidambaram
Towards an unanimous international regulatory body for responsible use of Artificial Intelligence [UIRB-AI]
The paper covers a diverse range of topics but doesn't get into the details of any and hence the proposals remain pragmatically irrelevant
null
null
null
cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial Intelligence (AI) is once again in a phase of drastic advancements. Unarguably, the technology itself can revolutionize the way we live our everyday life. But the exponential growth of technology poses a daunting task for policy researchers and lawmakers in making amendments to the existing norms. In addition, not everyone in society is studying the potential socio-economic intricacies and cultural drifts that AI can bring about. It is prudent to reflect on our historical past to propel the development of technology in the right direction. To benefit the society of the present and future, I scientifically explore the societal impact of AI. While there are many public and private partnerships working on similar aspects, here I describe the necessity for a Unanimous International Regulatory Body for all applications of AI (UIRB-AI). I also discuss the benefits and drawbacks of such an organization. To combat any drawbacks in the formation of a UIRB-AI, both idealistic and pragmatic perspectives are discussed alternately. The paper further advances the discussion by proposing novel policies on how such an organization should be structured and how it can bring about a win-win situation for everyone in society.
[ { "created": "Thu, 21 Dec 2017 00:29:48 GMT", "version": "v1" }, { "created": "Fri, 29 Dec 2017 16:39:50 GMT", "version": "v2" }, { "created": "Thu, 28 Jun 2018 22:24:09 GMT", "version": "v3" } ]
2018-07-02
[ [ "Chidambaram", "Rajesh", "" ] ]
Artificial Intelligence (AI) is once again in a phase of drastic advancements. Unarguably, the technology itself can revolutionize the way we live our everyday life. But the exponential growth of technology poses a daunting task for policy researchers and lawmakers in making amendments to the existing norms. In addition, not everyone in society is studying the potential socio-economic intricacies and cultural drifts that AI can bring about. It is prudent to reflect on our historical past to propel the development of technology in the right direction. To benefit the society of the present and future, I scientifically explore the societal impact of AI. While there are many public and private partnerships working on similar aspects, here I describe the necessity for a Unanimous International Regulatory Body for all applications of AI (UIRB-AI). I also discuss the benefits and drawbacks of such an organization. To combat any drawbacks in the formation of a UIRB-AI, both idealistic and pragmatic perspectives are discussed alternately. The paper further advances the discussion by proposing novel policies on how such an organization should be structured and how it can bring about a win-win situation for everyone in society.
2402.18511
Thiago Eustaquio Alves De Oliveira Dr.
Laurent Yves Emile Ramos Cheret, Vinicius Prado da Fonseca, Thiago Eustaquio Alves de Oliveira
Leveraging Compliant Tactile Perception for Haptic Blind Surface Reconstruction
7 pages, 9 figures, 2024 IEEE International Conference on Robotics and Automation (ICRA 2024)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-flat surfaces pose difficulties for robots operating in unstructured environments. Reconstructions of uneven surfaces may only be partially possible due to non-compliant end-effectors and limitations on vision systems such as transparency, reflections, and occlusions. This study achieves blind surface reconstruction by harnessing the robotic manipulator's kinematic data and a compliant tactile sensing module, which incorporates inertial, magnetic, and pressure sensors. The module's flexibility enables us to estimate contact positions and surface normals by analyzing its deformation during interactions with unknown objects. While previous works collect only positional information, we include the local normals in a geometrical approach to estimate curvatures between adjacent contact points. These parameters then guide a spline-based patch generation, which allows us to recreate larger surfaces without an increase in complexity while reducing the time-consuming step of probing the surface. Experimental validation demonstrates that this approach outperforms an off-the-shelf vision system in estimation accuracy. Moreover, this compliant haptic method works effectively even when the manipulator's approach angle is not aligned with the surface normals, which is ideal for unknown non-flat surfaces.
[ { "created": "Wed, 28 Feb 2024 17:40:01 GMT", "version": "v1" } ]
2024-02-29
[ [ "Cheret", "Laurent Yves Emile Ramos", "" ], [ "da Fonseca", "Vinicius Prado", "" ], [ "de Oliveira", "Thiago Eustaquio Alves", "" ] ]
Non-flat surfaces pose difficulties for robots operating in unstructured environments. Reconstructions of uneven surfaces may only be partially possible due to non-compliant end-effectors and limitations on vision systems such as transparency, reflections, and occlusions. This study achieves blind surface reconstruction by harnessing the robotic manipulator's kinematic data and a compliant tactile sensing module, which incorporates inertial, magnetic, and pressure sensors. The module's flexibility enables us to estimate contact positions and surface normals by analyzing its deformation during interactions with unknown objects. While previous works collect only positional information, we include the local normals in a geometrical approach to estimate curvatures between adjacent contact points. These parameters then guide a spline-based patch generation, which allows us to recreate larger surfaces without an increase in complexity while reducing the time-consuming step of probing the surface. Experimental validation demonstrates that this approach outperforms an off-the-shelf vision system in estimation accuracy. Moreover, this compliant haptic method works effectively even when the manipulator's approach angle is not aligned with the surface normals, which is ideal for unknown non-flat surfaces.
1808.06303
Ian Schmutte
John M. Abowd and Ian M. Schmutte
An Economic Analysis of Privacy Protection and Statistical Accuracy as Social Choices
Forthcoming in American Economic Review
null
10.1257/aer.20170627
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Statistical agencies face a dual mandate to publish accurate statistics while protecting respondent privacy. Increasing privacy protection requires decreased accuracy. Recognizing this as a resource allocation problem, we propose an economic solution: operate where the marginal cost of increasing privacy equals the marginal benefit. Our model of production, from computer science, assumes data are published using an efficient differentially private algorithm. Optimal choice weighs the demand for accurate statistics against the demand for privacy. Examples from U.S. statistical programs show how our framework can guide decision-making. Further progress requires a better understanding of willingness-to-pay for privacy and statistical accuracy.
[ { "created": "Mon, 20 Aug 2018 04:34:43 GMT", "version": "v1" } ]
2019-03-12
[ [ "Abowd", "John M.", "" ], [ "Schmutte", "Ian M.", "" ] ]
Statistical agencies face a dual mandate to publish accurate statistics while protecting respondent privacy. Increasing privacy protection requires decreased accuracy. Recognizing this as a resource allocation problem, we propose an economic solution: operate where the marginal cost of increasing privacy equals the marginal benefit. Our model of production, from computer science, assumes data are published using an efficient differentially private algorithm. Optimal choice weighs the demand for accurate statistics against the demand for privacy. Examples from U.S. statistical programs show how our framework can guide decision-making. Further progress requires a better understanding of willingness-to-pay for privacy and statistical accuracy.
1512.08292
Saeed Mehrabi
Saeed Mehrabi
Guarding the Vertices of an Orthogonal Terrain using Vertex Guards
null
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A terrain T is an x-monotone polygonal chain in the plane; T is orthogonal if each edge of T is either horizontal or vertical. In this paper, we give an exact algorithm for the problem of guarding the convex vertices of an orthogonal terrain with the minimum number of reflex vertices.
[ { "created": "Mon, 28 Dec 2015 00:01:52 GMT", "version": "v1" } ]
2015-12-29
[ [ "Mehrabi", "Saeed", "" ] ]
A terrain T is an x-monotone polygonal chain in the plane; T is orthogonal if each edge of T is either horizontal or vertical. In this paper, we give an exact algorithm for the problem of guarding the convex vertices of an orthogonal terrain with the minimum number of reflex vertices.
2306.11879
Sidi Lu
Sidi Lu and Hongyi Liu and Asli Celikyilmaz and Tianlu Wang and Nanyun Peng
Open-Domain Text Evaluation via Contrastive Distribution Methods
Accepted to ICML 2024
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent advancements in open-domain text generation, driven by the power of large pre-trained language models (LLMs), have demonstrated remarkable performance. However, assessing these models' generation quality remains a challenge. In this paper, we introduce a novel method for evaluating open-domain text generation called Contrastive Distribution Methods (CDM). Leveraging the connection between increasing model parameters and enhanced LLM performance, CDM creates a mapping from the _contrast_ of two probabilistic distributions -- one known to be superior to the other -- to quality measures. We investigate CDM for open-domain text generation evaluation under two paradigms: 1) _Generative_ CDM, which harnesses the contrast of two language models' distributions to generate synthetic examples for training discriminator-based metrics; 2) _Discriminative_ CDM, which directly uses distribution disparities between two language models for evaluation. Our experiments on coherence evaluation for multi-turn dialogue and commonsense evaluation for controllable generation demonstrate CDM's superior correlation with human judgment compared to existing automatic evaluation metrics, highlighting the strong performance and generalizability of our approach.
[ { "created": "Tue, 20 Jun 2023 20:37:54 GMT", "version": "v1" }, { "created": "Fri, 3 May 2024 23:21:45 GMT", "version": "v2" }, { "created": "Thu, 6 Jun 2024 21:24:17 GMT", "version": "v3" }, { "created": "Mon, 10 Jun 2024 00:44:32 GMT", "version": "v4" } ]
2024-06-11
[ [ "Lu", "Sidi", "" ], [ "Liu", "Hongyi", "" ], [ "Celikyilmaz", "Asli", "" ], [ "Wang", "Tianlu", "" ], [ "Peng", "Nanyun", "" ] ]
Recent advancements in open-domain text generation, driven by the power of large pre-trained language models (LLMs), have demonstrated remarkable performance. However, assessing these models' generation quality remains a challenge. In this paper, we introduce a novel method for evaluating open-domain text generation called Contrastive Distribution Methods (CDM). Leveraging the connection between increasing model parameters and enhanced LLM performance, CDM creates a mapping from the _contrast_ of two probabilistic distributions -- one known to be superior to the other -- to quality measures. We investigate CDM for open-domain text generation evaluation under two paradigms: 1) _Generative_ CDM, which harnesses the contrast of two language models' distributions to generate synthetic examples for training discriminator-based metrics; 2) _Discriminative_ CDM, which directly uses distribution disparities between two language models for evaluation. Our experiments on coherence evaluation for multi-turn dialogue and commonsense evaluation for controllable generation demonstrate CDM's superior correlation with human judgment compared to existing automatic evaluation metrics, highlighting the strong performance and generalizability of our approach.
2401.08073
Alagappan Ramanathan
Alagappan Ramanathan, Rishika Sankaran, Sangeetha Abdu Jyothi
Xaminer: An Internet Cross-Layer Resilience Analysis Tool
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A resilient Internet infrastructure is critical in our highly interconnected society. However, the Internet faces several vulnerabilities, ranging from natural disasters to human activities, that can impact the physical layer and, in turn, the higher network layers, such as IP links. In this paper, we introduce Xaminer, the first Internet cross-layer resilience analysis tool, to evaluate the interplay between physical- and network-layer failures. Using a cross-layer Internet map and a failure event model, Xaminer generates a risk profile encompassing a cross-layer impact report, critical infrastructure identification at each layer, and the discovery of trends and patterns under different failure event settings. Xaminer's key strengths lie in its adaptability to diverse disaster scenarios, the ability to assess risks at various granularities, and the capability to generate joint risk profiles for multiple events. We demonstrate Xaminer's capabilities in cross-layer analysis across a spectrum of disaster event models and regions, showcasing its potential role in facilitating well-informed decision-making for resilience planning and deployments.
[ { "created": "Tue, 16 Jan 2024 02:58:27 GMT", "version": "v1" } ]
2024-01-17
[ [ "Ramanathan", "Alagappan", "" ], [ "Sankaran", "Rishika", "" ], [ "Jyothi", "Sangeetha Abdu", "" ] ]
A resilient Internet infrastructure is critical in our highly interconnected society. However, the Internet faces several vulnerabilities, ranging from natural disasters to human activities, that can impact the physical layer and, in turn, the higher network layers, such as IP links. In this paper, we introduce Xaminer, the first Internet cross-layer resilience analysis tool, to evaluate the interplay between physical- and network-layer failures. Using a cross-layer Internet map and a failure event model, Xaminer generates a risk profile encompassing a cross-layer impact report, critical infrastructure identification at each layer, and the discovery of trends and patterns under different failure event settings. Xaminer's key strengths lie in its adaptability to diverse disaster scenarios, the ability to assess risks at various granularities, and the capability to generate joint risk profiles for multiple events. We demonstrate Xaminer's capabilities in cross-layer analysis across a spectrum of disaster event models and regions, showcasing its potential role in facilitating well-informed decision-making for resilience planning and deployments.
1902.00172
Shikhar Vashishth
Shikhar Vashishth, Prince Jain, Partha Talukdar
CESI: Canonicalizing Open Knowledge Bases using Embeddings and Side Information
Accepted at WWW 2018
International World Wide Web Conferences Steering Committee 2018
10.1145/3178876.3186030
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open Information Extraction (OpenIE) methods extract (noun phrase, relation phrase, noun phrase) triples from text, resulting in the construction of large Open Knowledge Bases (Open KBs). The noun phrases (NPs) and relation phrases in such Open KBs are not canonicalized, leading to the storage of redundant and ambiguous facts. Recent research has posed canonicalization of Open KBs as clustering over manually-defined feature spaces. Manual feature engineering is expensive and often sub-optimal. In order to overcome this challenge, we propose Canonicalization using Embeddings and Side Information (CESI) - a novel approach which performs canonicalization over learned embeddings of Open KBs. CESI extends recent advances in KB embedding by incorporating relevant NP and relation phrase side information in a principled manner. Through extensive experiments on multiple real-world datasets, we demonstrate CESI's effectiveness.
[ { "created": "Fri, 1 Feb 2019 04:18:49 GMT", "version": "v1" } ]
2019-02-04
[ [ "Vashishth", "Shikhar", "" ], [ "Jain", "Prince", "" ], [ "Talukdar", "Partha", "" ] ]
Open Information Extraction (OpenIE) methods extract (noun phrase, relation phrase, noun phrase) triples from text, resulting in the construction of large Open Knowledge Bases (Open KBs). The noun phrases (NPs) and relation phrases in such Open KBs are not canonicalized, leading to the storage of redundant and ambiguous facts. Recent research has posed canonicalization of Open KBs as clustering over manually-defined feature spaces. Manual feature engineering is expensive and often sub-optimal. In order to overcome this challenge, we propose Canonicalization using Embeddings and Side Information (CESI) - a novel approach which performs canonicalization over learned embeddings of Open KBs. CESI extends recent advances in KB embedding by incorporating relevant NP and relation phrase side information in a principled manner. Through extensive experiments on multiple real-world datasets, we demonstrate CESI's effectiveness.
2006.04611
Kaustubh Yadav
Kaustubh Yadav
A Comprehensive Survey on Aspect Based Sentiment Analysis
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aspect Based Sentiment Analysis (ABSA) is the sub-field of Natural Language Processing that deals with essentially splitting our data into aspects and finally extracting the sentiment information. ABSA is known to provide more information about the context than general sentiment analysis. In this study, our aim is to explore the various methodologies practiced while performing ABSA and to provide a comparative study. This survey paper discusses various solutions in depth and gives a comparison between them, and is conveniently divided into sections to give a holistic view of the process.
[ { "created": "Mon, 8 Jun 2020 14:07:58 GMT", "version": "v1" } ]
2020-06-09
[ [ "Yadav", "Kaustubh", "" ] ]
Aspect Based Sentiment Analysis (ABSA) is the sub-field of Natural Language Processing that deals with essentially splitting our data into aspects and finally extracting the sentiment information. ABSA is known to provide more information about the context than general sentiment analysis. In this study, our aim is to explore the various methodologies practiced while performing ABSA and to provide a comparative study. This survey paper discusses various solutions in depth and gives a comparison between them, and is conveniently divided into sections to give a holistic view of the process.
2406.09095
Yuhao Dan
Yuhao Dan, Junfeng Tian, Jie Zhou, Ming Yan, Ji Zhang, Qin Chen, Liang He
Modeling Comparative Logical Relation with Contrastive Learning for Text Generation
NLPCC 2024
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Data-to-Text Generation (D2T), a classic natural language generation problem, aims at producing fluent descriptions for structured input data, such as a table. Existing D2T works mainly focus on describing the superficial associative relations among entities, while ignoring the deep comparative logical relations, such as A is better than B in a certain aspect with a corresponding opinion, which is quite common in our daily life. In this paper, we introduce a new D2T task named comparative logical relation generation (CLRG). Additionally, we propose a Comparative Logic (CoLo) based text generation method, which generates texts following specific comparative logical relations with contrastive learning. Specifically, we first construct various positive and negative samples by fine-grained perturbations in entities, aspects and opinions. Then, we perform contrastive learning in the encoder layer to have a better understanding of the comparative logical relations, and integrate it in the decoder layer to guide the model to correctly generate the relations. Noting the data scarcity problem, we construct a Chinese Comparative Logical Relation Dataset (CLRD), which is a high-quality human-annotated dataset and challenging for text generation with descriptions of multiple entities and annotations on their comparative logical relations. Extensive experiments show that our method achieves impressive performance in both automatic and human evaluations.
[ { "created": "Thu, 13 Jun 2024 13:25:50 GMT", "version": "v1" }, { "created": "Thu, 15 Aug 2024 04:47:29 GMT", "version": "v2" } ]
2024-08-16
[ [ "Dan", "Yuhao", "" ], [ "Tian", "Junfeng", "" ], [ "Zhou", "Jie", "" ], [ "Yan", "Ming", "" ], [ "Zhang", "Ji", "" ], [ "Chen", "Qin", "" ], [ "He", "Liang", "" ] ]
Data-to-Text Generation (D2T), a classic natural language generation problem, aims at producing fluent descriptions for structured input data, such as a table. Existing D2T works mainly focus on describing the superficial associative relations among entities, while ignoring the deep comparative logical relations, such as A is better than B in a certain aspect with a corresponding opinion, which is quite common in our daily life. In this paper, we introduce a new D2T task named comparative logical relation generation (CLRG). Additionally, we propose a Comparative Logic (CoLo) based text generation method, which generates texts following specific comparative logical relations with contrastive learning. Specifically, we first construct various positive and negative samples by fine-grained perturbations in entities, aspects and opinions. Then, we perform contrastive learning in the encoder layer to have a better understanding of the comparative logical relations, and integrate it in the decoder layer to guide the model to correctly generate the relations. Noting the data scarcity problem, we construct a Chinese Comparative Logical Relation Dataset (CLRD), which is a high-quality human-annotated dataset and challenging for text generation with descriptions of multiple entities and annotations on their comparative logical relations. Extensive experiments show that our method achieves impressive performance in both automatic and human evaluations.
1012.5506
Adrian Paschke
Alejandra Gonzalez-Beltran, Ben Tagger, and Anthony Finkelstein
Ontology-based Queries over Cancer Data
in Adrian Paschke, Albert Burger, Andrea Splendiani, M. Scott Marshall, Paolo Romano: Proceedings of the 3rd International Workshop on Semantic Web Applications and Tools for the Life Sciences, Berlin,Germany, December 8-10, 2010
null
null
SWAT4LS 2010
cs.AI cs.DB cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ever-increasing amount of data in biomedical research, and in cancer research in particular, needs to be managed to support efficient data access, exchange and integration. Existing software infrastructures, such as caGrid, support access to distributed information annotated with a domain ontology. However, caGrid's current querying functionality depends on the structure of individual data resources without exploiting the semantic annotations. In this paper, we present the design and development of an ontology-based querying functionality that consists of: the generation of OWL2 ontologies from the underlying data resources metadata and a query rewriting and translation process based on reasoning, which converts a query at the domain ontology level into queries at the software infrastructure level. We present a detailed analysis of our approach as well as an extensive performance evaluation. While the implementation and evaluation was performed for the caGrid infrastructure, the approach could be applicable to other model and metadata-driven environments for data sharing.
[ { "created": "Sun, 26 Dec 2010 10:49:52 GMT", "version": "v1" } ]
2010-12-30
[ [ "Gonzalez-Beltran", "Alejandra", "" ], [ "Tagger", "Ben", "" ], [ "Finkelstein", "Anthony", "" ] ]
The ever-increasing amount of data in biomedical research, and in cancer research in particular, needs to be managed to support efficient data access, exchange and integration. Existing software infrastructures, such as caGrid, support access to distributed information annotated with a domain ontology. However, caGrid's current querying functionality depends on the structure of individual data resources without exploiting the semantic annotations. In this paper, we present the design and development of an ontology-based querying functionality that consists of: the generation of OWL2 ontologies from the underlying data resources metadata and a query rewriting and translation process based on reasoning, which converts a query at the domain ontology level into queries at the software infrastructure level. We present a detailed analysis of our approach as well as an extensive performance evaluation. While the implementation and evaluation was performed for the caGrid infrastructure, the approach could be applicable to other model and metadata-driven environments for data sharing.
1707.05468
Elena Mikhalkova
Elena Mikhalkova and Yuri Karyakin
Detecting Intentional Lexical Ambiguity in English Puns
In Proceedings of the International Conference "Dialogue 2017" Moscow, May 31-June 3, 2017
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The article describes a model of automatic analysis of puns, where a word is intentionally used in two meanings at the same time (the target word). We employ Roget's Thesaurus to discover two groups of words which, in a pun, form around two abstract bits of meaning (semes). They become a semantic vector, based on which an SVM classifier learns to recognize puns, reaching a score of 0.73 for F-measure. We apply several rule-based methods to locate intentionally ambiguous (target) words, based on structural and semantic criteria. It appears that the structural criterion is more effective, although it possibly characterizes only the tested dataset. The results we get correlate with the results of other teams at the SemEval-2017 competition (Task 7: Detection and Interpretation of English Puns), considering the effects of using supervised learning models and word statistics.
[ { "created": "Tue, 18 Jul 2017 05:04:03 GMT", "version": "v1" } ]
2017-07-19
[ [ "Mikhalkova", "Elena", "" ], [ "Karyakin", "Yuri", "" ] ]
The article describes a model of automatic analysis of puns, where a word is intentionally used in two meanings at the same time (the target word). We employ Roget's Thesaurus to discover two groups of words which, in a pun, form around two abstract bits of meaning (semes). They become a semantic vector, based on which an SVM classifier learns to recognize puns, reaching a score of 0.73 for F-measure. We apply several rule-based methods to locate intentionally ambiguous (target) words, based on structural and semantic criteria. It appears that the structural criterion is more effective, although it possibly characterizes only the tested dataset. The results we get correlate with the results of other teams at the SemEval-2017 competition (Task 7: Detection and Interpretation of English Puns), considering the effects of using supervised learning models and word statistics.
2407.02775
Ying Zhang
Ying Zhang and Ziheng Yang and Shufan Ji
MLKD-BERT: Multi-level Knowledge Distillation for Pre-trained Language Models
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Knowledge distillation is an effective technique for pre-trained language model compression. Although existing knowledge distillation methods perform well for the most typical model, BERT, they could be further improved in two aspects: the relation-level knowledge could be further explored to improve model performance, and the setting of the student attention head number could be more flexible to decrease inference time. Therefore, we are motivated to propose a novel knowledge distillation method, MLKD-BERT, to distill multi-level knowledge in a teacher-student framework. Extensive experiments on the GLUE benchmark and extractive question answering tasks demonstrate that our method outperforms state-of-the-art knowledge distillation methods on BERT. In addition, MLKD-BERT can flexibly set the student attention head number, allowing for a substantial inference time decrease with little performance drop.
[ { "created": "Wed, 3 Jul 2024 03:03:30 GMT", "version": "v1" } ]
2024-07-04
[ [ "Zhang", "Ying", "" ], [ "Yang", "Ziheng", "" ], [ "Ji", "Shufan", "" ] ]
Knowledge distillation is an effective technique for pre-trained language model compression. Although existing knowledge distillation methods perform well for the most typical model, BERT, they could be further improved in two aspects: the relation-level knowledge could be further explored to improve model performance, and the setting of the student attention head number could be more flexible to decrease inference time. Therefore, we are motivated to propose a novel knowledge distillation method, MLKD-BERT, to distill multi-level knowledge in a teacher-student framework. Extensive experiments on the GLUE benchmark and extractive question answering tasks demonstrate that our method outperforms state-of-the-art knowledge distillation methods on BERT. In addition, MLKD-BERT can flexibly set the student attention head number, allowing for a substantial inference time decrease with little performance drop.
2102.06774
Hanieh Rafiee
Haniyeh Rafiee and Mohammad Fakhredanesh
Presenting a Method for Improving Echo Hiding
14 page, This paper is printed in Journal of Computer and Knowledge Engineering, Vol. 2, No. 1
null
10.22067/CKE.V2I1.74388
null
cs.CR cs.MM
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this article, one of the most important methods of steganography over VoIP, called echo hiding, is improved. This method has advantages in maintaining the statistical and perceptual characteristics of audio signals as well as security against the sensitivity of the human auditory system (HAS). However, it makes many errors in detecting the coded, hidden messages, and it is detectable using existing steganalysis methods. The message extraction rate of previously improved echo hiding methods is high, but they lower the security of the method. In this article, a method is presented to improve echo hiding extraction and to enhance its security through a combined method based on spread spectrum. To improve extraction, a wrong hypothesis is corrected and substituted. To improve security, spread spectrum and echo hiding methods are applied randomly using a pseudo-random key generation algorithm. To evaluate the proposed extraction, numerous extraction tests are carried out in the normal state and in the presence of attacks. A steganalyser has also been used to assess the security improvements. The results gained through different experiments on the security of the steganography indicate a 3-percent increase in steganalysis errors. The proposed extraction method was modified based on the main method and resulted in an improvement of more than 10%.
[ { "created": "Fri, 12 Feb 2021 21:09:36 GMT", "version": "v1" } ]
2021-02-16
[ [ "Rafiee", "Haniyeh", "" ], [ "Fakhredanesh", "Mohammad", "" ] ]
In this article, one of the most important methods of steganography over VoIP, called echo hiding, is improved. This method has advantages in maintaining the statistical and perceptual characteristics of audio signals as well as security against the sensitivity of the human auditory system (HAS). However, it makes many errors in detecting the coded, hidden messages, and it is detectable using existing steganalysis methods. The message extraction rate of previously improved echo hiding methods is high, but they lower the security of the method. In this article, a method is presented to improve echo hiding extraction and to enhance its security through a combined method based on spread spectrum. To improve extraction, a wrong hypothesis is corrected and substituted. To improve security, spread spectrum and echo hiding methods are applied randomly using a pseudo-random key generation algorithm. To evaluate the proposed extraction, numerous extraction tests are carried out in the normal state and in the presence of attacks. A steganalyser has also been used to assess the security improvements. The results gained through different experiments on the security of the steganography indicate a 3-percent increase in steganalysis errors. The proposed extraction method was modified based on the main method and resulted in an improvement of more than 10%.
1005.5489
Constantin Jucovschi
Constantin Jucovschi, Michael Kohlhase
sTeXIDE: An Integrated Development Environment for sTeX Collections
To appear in The 9th International Conference on Mathematical Knowledge Management: MKM 2010
null
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Authoring documents in MKM formats like OMDoc is a very tedious task. After years of working on a semantically annotated corpus of sTeX documents (GenCS), we identified a set of common, time-consuming subtasks, which can be supported in an integrated authoring environment. We have adapted the modular Eclipse IDE into sTeXIDE, an authoring solution for enhancing productivity in contributing to sTeX based corpora. sTeXIDE supports context-aware command completion, module management, semantic macro retrieval, and theory graph navigation.
[ { "created": "Sat, 29 May 2010 22:31:05 GMT", "version": "v1" } ]
2010-06-01
[ [ "Jucovschi", "Constantin", "" ], [ "Kohlhase", "Michael", "" ] ]
Authoring documents in MKM formats like OMDoc is a very tedious task. After years of working on a semantically annotated corpus of sTeX documents (GenCS), we identified a set of common, time-consuming subtasks, which can be supported in an integrated authoring environment. We have adapted the modular Eclipse IDE into sTeXIDE, an authoring solution for enhancing productivity in contributing to sTeX based corpora. sTeXIDE supports context-aware command completion, module management, semantic macro retrieval, and theory graph navigation.
2206.07089
Boyang Li
Boyang Li, Qing Lu, Weiwen Jiang, Taeho Jung, Yiyu Shi
A Collaboration Strategy in the Mining Pool for Proof-of-Neural-Architecture Consensus
null
null
null
null
cs.DC cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
In the most popular publicly accessible cryptocurrency systems, the mining pool plays a key role because mining cryptocurrency with a mining pool turns a non-profitable situation into a profitable one for individual miners. In many recent novel blockchain consensuses, the deep learning training procedure becomes the task for miners to prove their workload; thus the computation power of miners will not be spent purely on the hash puzzle. In this way, the hardware and energy support the blockchain service and deep learning training simultaneously. While the incentive of miners is to earn tokens, individual miners are motivated to join mining pools to become more competitive. In this paper, we are the first to demonstrate a mining pool solution for novel consensuses based on deep learning. The mining pool manager partitions the full search space into subspaces, and all miners are scheduled to collaborate on the Neural Architecture Search (NAS) tasks in their assigned subspaces. Experiments demonstrate that the performance of this type of mining pool is more competitive than that of an individual miner. Due to the uncertainty of miners' behaviors, the mining pool manager checks the standard deviation of the performance of high-reward miners and prepares backup miners to ensure the completion of the tasks of high-reward miners.
[ { "created": "Thu, 5 May 2022 17:08:02 GMT", "version": "v1" } ]
2022-06-16
[ [ "Li", "Boyang", "" ], [ "Lu", "Qing", "" ], [ "Jiang", "Weiwen", "" ], [ "Jung", "Taeho", "" ], [ "Shi", "Yiyu", "" ] ]
In the most popular publicly accessible cryptocurrency systems, the mining pool plays a key role because mining cryptocurrency with a mining pool turns a non-profitable situation into a profitable one for individual miners. In many recent novel blockchain consensuses, the deep learning training procedure becomes the task for miners to prove their workload; thus the computation power of miners will not be spent purely on the hash puzzle. In this way, the hardware and energy support the blockchain service and deep learning training simultaneously. While the incentive of miners is to earn tokens, individual miners are motivated to join mining pools to become more competitive. In this paper, we are the first to demonstrate a mining pool solution for novel consensuses based on deep learning. The mining pool manager partitions the full search space into subspaces, and all miners are scheduled to collaborate on the Neural Architecture Search (NAS) tasks in their assigned subspaces. Experiments demonstrate that the performance of this type of mining pool is more competitive than that of an individual miner. Due to the uncertainty of miners' behaviors, the mining pool manager checks the standard deviation of the performance of high-reward miners and prepares backup miners to ensure the completion of the tasks of high-reward miners.
2209.07367
Turgay Pamuklu
Anne Catherine Nguyen, Turgay Pamuklu, Aisha Syed, W. Sean Kennedy, Melike Erol-Kantarci
Deep Reinforcement Learning for Task Offloading in UAV-Aided Smart Farm Networks
Accepted Paper
null
null
null
cs.NI cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The fifth and sixth generations of wireless communication networks are enabling tools such as internet of things devices, unmanned aerial vehicles (UAVs), and artificial intelligence, to improve the agricultural landscape using a network of devices to automatically monitor farmlands. Surveying a large area requires performing a lot of image classification tasks within a specific period of time in order to prevent damage to the farm in case of an incident, such as fire or flood. UAVs have limited energy and computing power, and may not be able to perform all of the intense image classification tasks locally and within an appropriate amount of time. Hence, it is assumed that the UAVs are able to partially offload their workload to nearby multi-access edge computing devices. The UAVs need a decision-making algorithm that will decide where the tasks will be performed, while also considering the time constraints and energy level of the other UAVs in the network. In this paper, we introduce a Deep Q-Learning (DQL) approach to solve this multi-objective problem. The proposed method is compared with Q-Learning and three heuristic baselines, and the simulation results show that our proposed DQL-based method achieves comparable results when it comes to the UAVs' remaining battery levels and percentage of deadline violations. In addition, our method is able to reach convergence 13 times faster than Q-Learning.
[ { "created": "Thu, 15 Sep 2022 15:29:57 GMT", "version": "v1" } ]
2022-09-16
[ [ "Nguyen", "Anne Catherine", "" ], [ "Pamuklu", "Turgay", "" ], [ "Syed", "Aisha", "" ], [ "Kennedy", "W. Sean", "" ], [ "Erol-Kantarci", "Melike", "" ] ]
The fifth and sixth generations of wireless communication networks are enabling tools such as internet of things devices, unmanned aerial vehicles (UAVs), and artificial intelligence, to improve the agricultural landscape using a network of devices to automatically monitor farmlands. Surveying a large area requires performing a lot of image classification tasks within a specific period of time in order to prevent damage to the farm in case of an incident, such as fire or flood. UAVs have limited energy and computing power, and may not be able to perform all of the intense image classification tasks locally and within an appropriate amount of time. Hence, it is assumed that the UAVs are able to partially offload their workload to nearby multi-access edge computing devices. The UAVs need a decision-making algorithm that will decide where the tasks will be performed, while also considering the time constraints and energy level of the other UAVs in the network. In this paper, we introduce a Deep Q-Learning (DQL) approach to solve this multi-objective problem. The proposed method is compared with Q-Learning and three heuristic baselines, and the simulation results show that our proposed DQL-based method achieves comparable results when it comes to the UAVs' remaining battery levels and percentage of deadline violations. In addition, our method is able to reach convergence 13 times faster than Q-Learning.
2105.06714
Peijia Chen
Peijia Chen, Jianhuang Lai, Guangcong Wang, Huajun Zhou
Confidence-guided Adaptive Gate and Dual Differential Enhancement for Video Salient Object Detection
Accepted by ICME2021 as oral
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video salient object detection (VSOD) aims to locate and segment the most attractive object by exploiting both spatial cues and temporal cues hidden in video sequences. However, spatial and temporal cues are often unreliable in real-world scenarios, such as low-contrast foreground, fast motion, and multiple moving objects. To address these problems, we propose a new framework to adaptively capture available information from spatial and temporal cues, which contains Confidence-guided Adaptive Gate (CAG) modules and Dual Differential Enhancement (DDE) modules. For both RGB features and optical flow features, CAG estimates confidence scores supervised by the IoU between predictions and the ground truths to re-calibrate the information with a gate mechanism. DDE captures the differential feature representation to enrich the spatial and temporal information and generate the fused features. Experimental results on four widely used datasets demonstrate the effectiveness of the proposed method against thirteen state-of-the-art methods.
[ { "created": "Fri, 14 May 2021 08:49:37 GMT", "version": "v1" } ]
2021-05-17
[ [ "Chen", "Peijia", "" ], [ "Lai", "Jianhuang", "" ], [ "Wang", "Guangcong", "" ], [ "Zhou", "Huajun", "" ] ]
Video salient object detection (VSOD) aims to locate and segment the most attractive object by exploiting both spatial cues and temporal cues hidden in video sequences. However, spatial and temporal cues are often unreliable in real-world scenarios, such as low-contrast foreground, fast motion, and multiple moving objects. To address these problems, we propose a new framework to adaptively capture available information from spatial and temporal cues, which contains Confidence-guided Adaptive Gate (CAG) modules and Dual Differential Enhancement (DDE) modules. For both RGB features and optical flow features, CAG estimates confidence scores supervised by the IoU between predictions and the ground truths to re-calibrate the information with a gate mechanism. DDE captures the differential feature representation to enrich the spatial and temporal information and generate the fused features. Experimental results on four widely used datasets demonstrate the effectiveness of the proposed method against thirteen state-of-the-art methods.
2403.10746
Matthijs Douze
Gergely Szilvasy and Pierre-Emmanuel Mazar\'e and Matthijs Douze
Vector search with small radiuses
null
null
null
null
cs.CV cs.DB
http://creativecommons.org/licenses/by/4.0/
In recent years, the dominant accuracy metric for vector search is the recall of a result list of fixed size (top-k retrieval), considering as ground truth the exact vector retrieval results. Although convenient to compute, this metric is distantly related to the end-to-end accuracy of a full system that integrates vector search. In this paper we focus on the common case where a hard decision needs to be taken depending on the vector retrieval results, for example, deciding whether a query image matches a database image or not. We solve this as a range search task, where all vectors within a certain radius from the query are returned. We show that the value of a range search result can be modeled rigorously based on the query-to-vector distance. This yields a metric for range search, RSM, that is both principled and easy to compute without running an end-to-end evaluation. We apply this metric to the case of image retrieval. We show that indexing methods that are adapted for top-k retrieval do not necessarily maximize the RSM. In particular, for inverted file based indexes, we show that visiting a limited set of clusters and encoding vectors compactly yields near optimal results.
[ { "created": "Sat, 16 Mar 2024 00:34:25 GMT", "version": "v1" } ]
2024-03-19
[ [ "Szilvasy", "Gergely", "" ], [ "Mazaré", "Pierre-Emmanuel", "" ], [ "Douze", "Matthijs", "" ] ]
In recent years, the dominant accuracy metric for vector search is the recall of a result list of fixed size (top-k retrieval), considering as ground truth the exact vector retrieval results. Although convenient to compute, this metric is distantly related to the end-to-end accuracy of a full system that integrates vector search. In this paper we focus on the common case where a hard decision needs to be taken depending on the vector retrieval results, for example, deciding whether a query image matches a database image or not. We solve this as a range search task, where all vectors within a certain radius from the query are returned. We show that the value of a range search result can be modeled rigorously based on the query-to-vector distance. This yields a metric for range search, RSM, that is both principled and easy to compute without running an end-to-end evaluation. We apply this metric to the case of image retrieval. We show that indexing methods that are adapted for top-k retrieval do not necessarily maximize the RSM. In particular, for inverted file based indexes, we show that visiting a limited set of clusters and encoding vectors compactly yields near optimal results.
2312.06795
MohammadReza Davari
MohammadReza Davari and Eugene Belilovsky
Model Breadcrumbs: Scaling Multi-Task Model Merging with Sparse Masks
Published in ECCV 2024
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
The rapid development of AI systems has been greatly influenced by the emergence of foundation models. A common approach for targeted problems involves fine-tuning these pre-trained foundation models for specific target tasks, resulting in a rapid spread of models fine-tuned across a diverse array of tasks. This work focuses on the problem of merging multiple fine-tunings of the same foundation model derived from a spectrum of auxiliary tasks. We introduce a new simple method, Model Breadcrumbs, which consists of a sparsely defined weight set that guides model adaptation within the weight space of a pre-trained model. These breadcrumbs are constructed by subtracting the weights from a pre-trained model before and after fine-tuning, followed by a sparsification process that eliminates weight outliers and negligible perturbations. Our experiments demonstrate the effectiveness of Model Breadcrumbs to simultaneously improve performance across multiple tasks. This contribution aligns with the evolving paradigm of updatable machine learning, reminiscent of the collaborative principles underlying open-source software development, fostering a community-driven effort to reliably update machine learning models. Our method is shown to be more efficient and unlike previous proposals does not require hyperparameter tuning for each new task added. Through extensive experimentation involving various models, tasks, and modalities we establish that integrating Model Breadcrumbs offers a simple, efficient, and highly effective approach for constructing multi-task models and facilitating updates to foundation models.
[ { "created": "Mon, 11 Dec 2023 19:10:55 GMT", "version": "v1" }, { "created": "Sat, 10 Aug 2024 00:02:00 GMT", "version": "v2" } ]
2024-08-13
[ [ "Davari", "MohammadReza", "" ], [ "Belilovsky", "Eugene", "" ] ]
The rapid development of AI systems has been greatly influenced by the emergence of foundation models. A common approach for targeted problems involves fine-tuning these pre-trained foundation models for specific target tasks, resulting in a rapid spread of models fine-tuned across a diverse array of tasks. This work focuses on the problem of merging multiple fine-tunings of the same foundation model derived from a spectrum of auxiliary tasks. We introduce a new simple method, Model Breadcrumbs, which consists of a sparsely defined weight set that guides model adaptation within the weight space of a pre-trained model. These breadcrumbs are constructed by subtracting the weights from a pre-trained model before and after fine-tuning, followed by a sparsification process that eliminates weight outliers and negligible perturbations. Our experiments demonstrate the effectiveness of Model Breadcrumbs to simultaneously improve performance across multiple tasks. This contribution aligns with the evolving paradigm of updatable machine learning, reminiscent of the collaborative principles underlying open-source software development, fostering a community-driven effort to reliably update machine learning models. Our method is shown to be more efficient and unlike previous proposals does not require hyperparameter tuning for each new task added. Through extensive experimentation involving various models, tasks, and modalities we establish that integrating Model Breadcrumbs offers a simple, efficient, and highly effective approach for constructing multi-task models and facilitating updates to foundation models.
2208.08612
Chen Yang
Chen Yang and Yupeng Hou and Yang Song and Tao Zhang and Ji-Rong Wen and Wayne Xin Zhao
Modeling Two-Way Selection Preference for Person-Job Fit
10 pages, Accepted by RecSys 2022
null
10.1145/3523227.3546752
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Person-job fit is the core technique of online recruitment platforms, which can improve the efficiency of recruitment by accurately matching the job positions with the job seekers. Existing works mainly focus on modeling the unidirectional process or overall matching. However, recruitment is a two-way selection process, which means that both candidate and employer involved in the interaction should meet the expectation of each other, instead of unilateral satisfaction. In this paper, we propose a dual-perspective graph representation learning approach to model directed interactions between candidates and jobs. To model the two-way selection preference from the dual-perspective of job seekers and employers, we incorporate two different nodes for each candidate (or job) and characterize both successful matching and failed matching via a unified dual-perspective interaction graph. To learn dual-perspective node representations effectively, we design an effective optimization algorithm, which involves a quadruple-based loss and a dual-perspective contrastive learning loss. Extensive experiments on three large real-world recruitment datasets have shown the effectiveness of our approach.
[ { "created": "Thu, 18 Aug 2022 03:16:11 GMT", "version": "v1" }, { "created": "Fri, 19 Aug 2022 15:10:46 GMT", "version": "v2" } ]
2022-08-22
[ [ "Yang", "Chen", "" ], [ "Hou", "Yupeng", "" ], [ "Song", "Yang", "" ], [ "Zhang", "Tao", "" ], [ "Wen", "Ji-Rong", "" ], [ "Zhao", "Wayne Xin", "" ] ]
Person-job fit is the core technique of online recruitment platforms, which can improve the efficiency of recruitment by accurately matching job positions with job seekers. Existing works mainly focus on modeling the unidirectional process or overall matching. However, recruitment is a two-way selection process, which means that both the candidate and the employer involved in the interaction should meet each other's expectations, rather than achieving only unilateral satisfaction. In this paper, we propose a dual-perspective graph representation learning approach to model directed interactions between candidates and jobs. To model the two-way selection preference from the dual perspectives of job seekers and employers, we incorporate two different nodes for each candidate (or job) and characterize both successful matching and failed matching via a unified dual-perspective interaction graph. To learn dual-perspective node representations effectively, we design an effective optimization algorithm, which involves a quadruple-based loss and a dual-perspective contrastive learning loss. Extensive experiments on three large real-world recruitment datasets have shown the effectiveness of our approach.
2212.00501
Lb Luo
Linbo Luo, Yuanjing Li, Haiyan Yin, Shangwei Xie, Ruimin Hu, Wentong Cai
Crowd-level Abnormal Behavior Detection via Multi-scale Motion Consistency Learning
Version with appendix for the AAAI-23 publication
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Detecting abnormal crowd motion emerging from complex interactions of individuals is paramount to ensure the safety of crowds. Crowd-level abnormal behaviors (CABs), e.g., counter flow and crowd turbulence, are proven to be the crucial causes of many crowd disasters. In the recent decade, video anomaly detection (VAD) techniques have achieved remarkable success in detecting individual-level abnormal behaviors (e.g., sudden running, fighting and stealing), but research on VAD for CABs is rather limited. Unlike individual-level anomaly, CABs usually do not exhibit salient difference from the normal behaviors when observed locally, and the scale of CABs could vary from one scenario to another. In this paper, we present a systematic study to tackle the important problem of VAD for CABs with a novel crowd motion learning framework, multi-scale motion consistency network (MSMC-Net). MSMC-Net first captures the spatial and temporal crowd motion consistency information in a graph representation. Then, it simultaneously trains multiple feature graphs constructed at different scales to capture rich crowd patterns. An attention network is used to adaptively fuse the multi-scale features for better CAB detection. For the empirical study, we consider three large-scale crowd event datasets, UMN, Hajj and Love Parade. Experimental results show that MSMC-Net could substantially improve the state-of-the-art performance on all the datasets.
[ { "created": "Thu, 1 Dec 2022 13:52:32 GMT", "version": "v1" } ]
2022-12-02
[ [ "Luo", "Linbo", "" ], [ "Li", "Yuanjing", "" ], [ "Yin", "Haiyan", "" ], [ "Xie", "Shangwei", "" ], [ "Hu", "Ruimin", "" ], [ "Cai", "Wentong", "" ] ]
Detecting abnormal crowd motion emerging from complex interactions of individuals is paramount to ensuring the safety of crowds. Crowd-level abnormal behaviors (CABs), e.g., counter flow and crowd turbulence, are proven to be the crucial causes of many crowd disasters. In the past decade, video anomaly detection (VAD) techniques have achieved remarkable success in detecting individual-level abnormal behaviors (e.g., sudden running, fighting and stealing), but research on VAD for CABs is rather limited. Unlike individual-level anomalies, CABs usually do not exhibit salient differences from normal behaviors when observed locally, and the scale of CABs can vary from one scenario to another. In this paper, we present a systematic study to tackle the important problem of VAD for CABs with a novel crowd motion learning framework, the multi-scale motion consistency network (MSMC-Net). MSMC-Net first captures the spatial and temporal crowd motion consistency information in a graph representation. Then, it simultaneously trains multiple feature graphs constructed at different scales to capture rich crowd patterns. An attention network is used to adaptively fuse the multi-scale features for better CAB detection. For the empirical study, we consider three large-scale crowd event datasets: UMN, Hajj and Love Parade. Experimental results show that MSMC-Net substantially improves upon the state-of-the-art performance on all the datasets.
2405.19743
Hyemin Ahn
Hyemin Ahn
May the Dance be with You: Dance Generation Framework for Non-Humanoids
13 pages, 6 Figures, Rejected at Neurips 2023
null
null
null
cs.CV cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
We hypothesize dance as a motion that forms a visual rhythm from music, where the visual rhythm can be perceived from an optical flow. If an agent can recognize the relationship between visual rhythm and music, it will be able to dance by generating a motion to create a visual rhythm that matches the music. Based on this, we propose a framework for any kind of non-humanoid agents to learn how to dance from human videos. Our framework works in two processes: (1) training a reward model which perceives the relationship between optical flow (visual rhythm) and music from human dance videos, (2) training the non-humanoid dancer based on that reward model, and reinforcement learning. Our reward model consists of two feature encoders for optical flow and music. They are trained based on contrastive learning which makes the higher similarity between concurrent optical flow and music features. With this reward model, the agent learns dancing by getting a higher reward when its action creates an optical flow whose feature has a higher similarity with the given music feature. Experiment results show that generated dance motion can align with the music beat properly, and user study result indicates that our framework is more preferred by humans compared to the baselines. To the best of our knowledge, our work of non-humanoid agents which learn dance from human videos is unprecedented. An example video can be found at https://youtu.be/dOUPvo-O3QY.
[ { "created": "Thu, 30 May 2024 06:43:55 GMT", "version": "v1" } ]
2024-05-31
[ [ "Ahn", "Hyemin", "" ] ]
We hypothesize dance as a motion that forms a visual rhythm from music, where the visual rhythm can be perceived from an optical flow. If an agent can recognize the relationship between visual rhythm and music, it will be able to dance by generating a motion that creates a visual rhythm matching the music. Based on this, we propose a framework for any kind of non-humanoid agent to learn how to dance from human videos. Our framework works in two processes: (1) training a reward model which perceives the relationship between optical flow (visual rhythm) and music from human dance videos, and (2) training the non-humanoid dancer based on that reward model and reinforcement learning. Our reward model consists of two feature encoders for optical flow and music. They are trained based on contrastive learning, which enforces higher similarity between concurrent optical flow and music features. With this reward model, the agent learns dancing by getting a higher reward when its action creates an optical flow whose feature has a higher similarity with the given music feature. Experiment results show that the generated dance motion can align with the music beat properly, and user study results indicate that our framework is preferred by humans over the baselines. To the best of our knowledge, our work on non-humanoid agents which learn dance from human videos is unprecedented. An example video can be found at https://youtu.be/dOUPvo-O3QY.
2212.11756
Yuanbo Li
Yuanbo Li, Chong Han, Yi Chen, Ziming Yu, and Xuefeng Yin
DSS-o-SAGE: Direction-Scan Sounding-Oriented SAGE Algorithm for Channel Parameter Estimation in mmWave and THz Bands
15 pages, 10 figures, 3 tables
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Investigation of millimeter (mmWave) and Terahertz (THz) channels relies on channel measurements and estimation of multi-path component (MPC) parameters. As a common measurement technique in the mmWave and THz bands, direction-scan sounding (DSS) resolves angular information and increases the measurable distance. Through mechanical rotation, the DSS creates a virtual multi-antenna sounding system, which however incurs signal phase instability and large data sizes, which are not fully considered in existing estimation algorithms and thus make them ineffective. To tackle this research gap, in this paper, a DSS-oriented space-alternating generalized expectation-maximization (DSS-o-SAGE) algorithm is proposed for channel parameter estimation in mmWave and THz bands. To appropriately capture the measured data in mmWave and THz DSS, the phase instability is modeled by the scanning-direction-dependent signal phases. Furthermore, based on the signal model, the DSS-o-SAGE algorithm is developed, which not only addresses the problems brought by phase instability, but also achieves ultra-low computational complexity by exploiting the narrow antenna beam property of DSS. Simulations in synthetic channels are conducted to demonstrate the efficacy of the proposed algorithm and explore the applicable region of the far-field approximation in DSS-o-SAGE. Last but not least, the proposed DSS-o-SAGE algorithm is applied in real measurements in an indoor corridor scenario at 300~GHz. Compared with results using the baseline noise-elimination method, the channel is characterized more correctly and reasonably based on the DSS-o-SAGE.
[ { "created": "Mon, 28 Nov 2022 16:11:52 GMT", "version": "v1" }, { "created": "Mon, 4 Mar 2024 06:41:17 GMT", "version": "v2" } ]
2024-03-05
[ [ "Li", "Yuanbo", "" ], [ "Han", "Chong", "" ], [ "Chen", "Yi", "" ], [ "Yu", "Ziming", "" ], [ "Yin", "Xuefeng", "" ] ]
Investigation of millimeter (mmWave) and Terahertz (THz) channels relies on channel measurements and estimation of multi-path component (MPC) parameters. As a common measurement technique in the mmWave and THz bands, direction-scan sounding (DSS) resolves angular information and increases the measurable distance. Through mechanical rotation, the DSS creates a virtual multi-antenna sounding system, which however incurs signal phase instability and large data sizes, which are not fully considered in existing estimation algorithms and thus make them ineffective. To tackle this research gap, in this paper, a DSS-oriented space-alternating generalized expectation-maximization (DSS-o-SAGE) algorithm is proposed for channel parameter estimation in mmWave and THz bands. To appropriately capture the measured data in mmWave and THz DSS, the phase instability is modeled by the scanning-direction-dependent signal phases. Furthermore, based on the signal model, the DSS-o-SAGE algorithm is developed, which not only addresses the problems brought by phase instability, but also achieves ultra-low computational complexity by exploiting the narrow antenna beam property of DSS. Simulations in synthetic channels are conducted to demonstrate the efficacy of the proposed algorithm and explore the applicable region of the far-field approximation in DSS-o-SAGE. Last but not least, the proposed DSS-o-SAGE algorithm is applied in real measurements in an indoor corridor scenario at 300~GHz. Compared with results using the baseline noise-elimination method, the channel is characterized more correctly and reasonably based on the DSS-o-SAGE.
1807.05153
Hongwei Li
Hongwei Li, Jianguo Zhang, Mark Muehlau, Jan Kirschke and Bjoern Menze
Multi-Scale Convolutional-Stack Aggregation for Robust White Matter Hyperintensities Segmentation
accepted by MICCAI brain lesion workshop
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Segmentation of both large and small white matter hyperintensities/lesions in brain MR images is a challenging task which has drawn much attention in recent years. We propose a multi-scale aggregation model framework to deal with volume-varied lesions. Firstly, we present a specifically-designed network for small lesion segmentation called Stack-Net, in which multiple convolutional layers are connected, aiming to preserve rich local spatial information of small lesions before the sub-sampling layer. Secondly, we aggregate multi-scale Stack-Nets with different receptive fields to learn multi-scale contextual information of both large and small lesions. Our model is evaluated on recent MICCAI WMH Challenge Dataset and outperforms the state-of-the-art on lesion recall and lesion F1-score under 5-fold cross validation. In addition, we further test our pre-trained models on a Multiple Sclerosis lesion dataset with 30 subjects under cross-center evaluation. Results show that the aggregation model is effective in learning multi-scale spatial information.It claimed the first place on the hidden test set after independent evaluation by the challenge organizer. In addition, we further test our pre-trained models on a Multiple Sclerosis lesion dataset with 30 subjects under cross-center evaluation. Results show that the aggregation model is effective in learning multi-scale spatial information.
[ { "created": "Fri, 13 Jul 2018 15:56:20 GMT", "version": "v1" }, { "created": "Wed, 29 Aug 2018 21:55:37 GMT", "version": "v2" }, { "created": "Wed, 27 Feb 2019 14:57:19 GMT", "version": "v3" } ]
2019-02-28
[ [ "Li", "Hongwei", "" ], [ "Zhang", "Jianguo", "" ], [ "Muehlau", "Mark", "" ], [ "Kirschke", "Jan", "" ], [ "Menze", "Bjoern", "" ] ]
Segmentation of both large and small white matter hyperintensities/lesions in brain MR images is a challenging task which has drawn much attention in recent years. We propose a multi-scale aggregation model framework to deal with volume-varied lesions. Firstly, we present a specifically-designed network for small lesion segmentation called Stack-Net, in which multiple convolutional layers are connected, aiming to preserve rich local spatial information of small lesions before the sub-sampling layer. Secondly, we aggregate multi-scale Stack-Nets with different receptive fields to learn multi-scale contextual information of both large and small lesions. Our model is evaluated on the recent MICCAI WMH Challenge Dataset and outperforms the state-of-the-art on lesion recall and lesion F1-score under 5-fold cross validation. It claimed the first place on the hidden test set after independent evaluation by the challenge organizer. In addition, we further test our pre-trained models on a Multiple Sclerosis lesion dataset with 30 subjects under cross-center evaluation. Results show that the aggregation model is effective in learning multi-scale spatial information.
1910.08248
Duong Nguyen
Duong Nguyen, Sandeep S. Kulkarni
Benefits of Stabilization versus Rollback in Eventually Consistent Key-Value Stores
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we evaluate and compare the performance of two approaches, namely self-stabilization and rollback, to handling consistency violation faults (cvf) that occurred when a distributed program is executed on eventually consistent key-value store. We observe that self-stabilization is usually better than rollbacks in our experiments. Moreover, when we aggressively allow more cvf in exchange of eliminating mechanisms for guaranteeing atomicity requirements of actions, we observe the programs in our case studies achieve a speedup between 2--15 times compared with the standard implementation. We also analyze different factors that contribute to the results. Our results and analysis are useful in helping a system designer choose proper design options for their program.
[ { "created": "Fri, 18 Oct 2019 03:53:11 GMT", "version": "v1" } ]
2019-10-21
[ [ "Nguyen", "Duong", "" ], [ "Kulkarni", "Sandeep S.", "" ] ]
In this paper, we evaluate and compare the performance of two approaches, namely self-stabilization and rollback, to handling consistency violation faults (cvf) that occur when a distributed program is executed on an eventually consistent key-value store. We observe that self-stabilization is usually better than rollback in our experiments. Moreover, when we aggressively allow more cvf in exchange for eliminating mechanisms that guarantee the atomicity requirements of actions, we observe that the programs in our case studies achieve a speedup of 2--15 times compared with the standard implementation. We also analyze the different factors that contribute to these results. Our results and analysis are useful in helping a system designer choose proper design options for their program.
1702.00855
Enno Shioji
Enno Shioji, Masayuki Arai
Neural Feature Embedding for User Response Prediction in Real-Time Bidding (RTB)
null
Proc. of the Workshop on Social Media for Personalization and Search (2017) 8-13
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the area of ad-targeting, predicting user responses is essential for many applications such as Real-Time Bidding (RTB). Many of the features available in this domain are sparse categorical features. This presents a challenge especially when the user responses to be predicted are rare, because each feature will only have very few positive examples. Recently, neural embedding techniques such as word2vec which learn distributed representations of words using occurrence statistics in the corpus have been shown to be effective in many Natural Language Processing tasks. In this paper, we use real-world data set to show that a similar technique can be used to learn distributed representations of features from users' web history, and that such representations can be used to improve the accuracy of commonly used models for predicting rare user responses.
[ { "created": "Thu, 2 Feb 2017 22:32:29 GMT", "version": "v1" }, { "created": "Thu, 23 Feb 2017 10:37:24 GMT", "version": "v2" }, { "created": "Wed, 26 Apr 2017 17:01:40 GMT", "version": "v3" }, { "created": "Tue, 9 May 2017 11:21:42 GMT", "version": "v4" }, { "created": "Wed, 17 May 2017 07:05:42 GMT", "version": "v5" }, { "created": "Thu, 18 May 2017 17:35:36 GMT", "version": "v6" } ]
2017-05-19
[ [ "Shioji", "Enno", "" ], [ "Arai", "Masayuki", "" ] ]
In the area of ad-targeting, predicting user responses is essential for many applications such as Real-Time Bidding (RTB). Many of the features available in this domain are sparse categorical features. This presents a challenge especially when the user responses to be predicted are rare, because each feature will only have very few positive examples. Recently, neural embedding techniques such as word2vec, which learn distributed representations of words using occurrence statistics in the corpus, have been shown to be effective in many Natural Language Processing tasks. In this paper, we use a real-world data set to show that a similar technique can be used to learn distributed representations of features from users' web history, and that such representations can be used to improve the accuracy of commonly used models for predicting rare user responses.
1809.07948
Michael Fulton
Michael Fulton, Chelsey Edge, Junaed Sattar
Robot Communication Via Motion: Closing the Underwater Human-Robot Interaction Loop
Under review for ICRA 2019
null
null
null
cs.RO cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel method for underwater robot-to-human communication using the motion of the robot as "body language". To evaluate this system, we develop simulated examples of the system's body language gestures, called kinemes, and compare them to a baseline system using flashing colored lights through a user study. Our work shows evidence that motion can be used as a successful communication vector which is accurate, easy to learn, and quick enough to be used, all without requiring any additional hardware to be added to our platform. We thus contribute to "closing the loop" for human-robot interaction underwater by proposing and testing this system, suggesting a library of possible body language gestures for underwater robots, and offering insight on the design of nonverbal robot-to-human communication methods.
[ { "created": "Fri, 21 Sep 2018 05:22:58 GMT", "version": "v1" } ]
2018-09-24
[ [ "Fulton", "Michael", "" ], [ "Edge", "Chelsey", "" ], [ "Sattar", "Junaed", "" ] ]
In this paper, we propose a novel method for underwater robot-to-human communication using the motion of the robot as "body language". To evaluate this system, we develop simulated examples of the system's body language gestures, called kinemes, and compare them to a baseline system using flashing colored lights through a user study. Our work shows evidence that motion can be used as a successful communication vector which is accurate, easy to learn, and quick enough to be used, all without requiring any additional hardware to be added to our platform. We thus contribute to "closing the loop" for human-robot interaction underwater by proposing and testing this system, suggesting a library of possible body language gestures for underwater robots, and offering insight on the design of nonverbal robot-to-human communication methods.
1711.01306
Aidin Ferdowsi
Aidin Ferdowsi and Walid Saad
Deep Learning-Based Dynamic Watermarking for Secure Signal Authentication in the Internet of Things
6 pages, 9 figures
null
null
null
cs.IT cs.CR cs.MM math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Securing the Internet of Things (IoT) is a necessary milestone toward expediting the deployment of its applications and services. In particular, the functionality of the IoT devices is extremely dependent on the reliability of their message transmission. Cyber attacks such as data injection, eavesdropping, and man-in-the-middle threats can lead to security challenges. Securing IoT devices against such attacks requires accounting for their stringent computational power and need for low-latency operations. In this paper, a novel deep learning method is proposed for dynamic watermarking of IoT signals to detect cyber attacks. The proposed learning framework, based on a long short-term memory (LSTM) structure, enables the IoT devices to extract a set of stochastic features from their generated signal and dynamically watermark these features into the signal. This method enables the IoT's cloud center, which collects signals from the IoT devices, to effectively authenticate the reliability of the signals. Furthermore, the proposed method prevents complicated attack scenarios such as eavesdropping, in which the cyber attacker collects the data from the IoT devices and aims to break the watermarking algorithm. Simulation results show that, with an attack detection delay of under 1 second, the messages can be transmitted from IoT devices with almost 100% reliability.
[ { "created": "Fri, 3 Nov 2017 19:12:23 GMT", "version": "v1" } ]
2017-11-07
[ [ "Ferdowsi", "Aidin", "" ], [ "Saad", "Walid", "" ] ]
Securing the Internet of Things (IoT) is a necessary milestone toward expediting the deployment of its applications and services. In particular, the functionality of the IoT devices is extremely dependent on the reliability of their message transmission. Cyber attacks such as data injection, eavesdropping, and man-in-the-middle threats can lead to security challenges. Securing IoT devices against such attacks requires accounting for their stringent computational power and need for low-latency operations. In this paper, a novel deep learning method is proposed for dynamic watermarking of IoT signals to detect cyber attacks. The proposed learning framework, based on a long short-term memory (LSTM) structure, enables the IoT devices to extract a set of stochastic features from their generated signal and dynamically watermark these features into the signal. This method enables the IoT's cloud center, which collects signals from the IoT devices, to effectively authenticate the reliability of the signals. Furthermore, the proposed method prevents complicated attack scenarios such as eavesdropping, in which the cyber attacker collects the data from the IoT devices and aims to break the watermarking algorithm. Simulation results show that, with an attack detection delay of under 1 second, the messages can be transmitted from IoT devices with almost 100% reliability.
2005.05487
Takashi Morita
Takashi Morita and Hiroki Koda
Exploring TTS without T Using Biologically/Psychologically Motivated Neural Network Modules (ZeroSpeech 2020)
Accepted in INTERSPEECH 2020
null
10.21437/Interspeech.2020-3127
null
cs.CL cs.LG cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, we reported our exploration of Text-To-Speech without Text (TTS without T) in the Zero Resource Speech Challenge 2020, in which participants proposed an end-to-end, unsupervised system that learned speech recognition and TTS together. We addressed the challenge using biologically/psychologically motivated modules of Artificial Neural Networks (ANN), with a particular interest in unsupervised learning of human language as a biological/psychological problem. The system first processes Mel Frequency Cepstral Coefficient (MFCC) frames with an Echo-State Network (ESN), and simulates computations in cortical microcircuits. The outcome is discretized by our original Variational Autoencoder (VAE) that implements the Dirichlet-based Bayesian clustering widely accepted in computational linguistics and cognitive science. The discretized signal is then reverted into sound waveform via a neural-network implementation of the source-filter model for speech production.
[ { "created": "Mon, 11 May 2020 23:44:37 GMT", "version": "v1" }, { "created": "Fri, 15 May 2020 09:18:57 GMT", "version": "v2" }, { "created": "Mon, 10 Aug 2020 09:13:40 GMT", "version": "v3" } ]
2020-11-03
[ [ "Morita", "Takashi", "" ], [ "Koda", "Hiroki", "" ] ]
In this study, we reported our exploration of Text-To-Speech without Text (TTS without T) in the Zero Resource Speech Challenge 2020, in which participants proposed an end-to-end, unsupervised system that learned speech recognition and TTS together. We addressed the challenge using biologically/psychologically motivated modules of Artificial Neural Networks (ANN), with a particular interest in unsupervised learning of human language as a biological/psychological problem. The system first processes Mel Frequency Cepstral Coefficient (MFCC) frames with an Echo-State Network (ESN), and simulates computations in cortical microcircuits. The outcome is discretized by our original Variational Autoencoder (VAE) that implements the Dirichlet-based Bayesian clustering widely accepted in computational linguistics and cognitive science. The discretized signal is then reverted into sound waveform via a neural-network implementation of the source-filter model for speech production.
2003.05171
Peter beim Graben
Peter beim Graben, Markus Huber, Werner Meyer, Ronald R\"omer and Matthias Wolff
Vector symbolic architectures for context-free grammars
36 pages, 3 figures
null
null
null
cs.CL q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background / introduction. Vector symbolic architectures (VSA) are a viable approach for the hyperdimensional representation of symbolic data, such as documents, syntactic structures, or semantic frames. Methods. We present a rigorous mathematical framework for the representation of phrase structure trees and parse trees of context-free grammars (CFG) in Fock space, i.e. infinite-dimensional Hilbert space as being used in quantum field theory. We define a novel normal form for CFG by means of term algebras. Using a recently developed software toolbox, called FockBox, we construct Fock space representations for the trees built up by a CFG left-corner (LC) parser. Results. We prove a universal representation theorem for CFG term algebras in Fock space and illustrate our findings through a low-dimensional principal component projection of the LC parser states. Conclusions. Our approach could leverage the development of VSA for explainable artificial intelligence (XAI) by means of hyperdimensional deep neural computation. It could be of significance for the improvement of cognitive user interfaces and other applications of VSA in machine learning.
[ { "created": "Wed, 11 Mar 2020 09:07:02 GMT", "version": "v1" }, { "created": "Fri, 25 Sep 2020 08:34:46 GMT", "version": "v2" } ]
2020-09-28
[ [ "Graben", "Peter beim", "" ], [ "Huber", "Markus", "" ], [ "Meyer", "Werner", "" ], [ "Römer", "Ronald", "" ], [ "Wolff", "Matthias", "" ] ]
Background / introduction. Vector symbolic architectures (VSA) are a viable approach for the hyperdimensional representation of symbolic data, such as documents, syntactic structures, or semantic frames. Methods. We present a rigorous mathematical framework for the representation of phrase structure trees and parse trees of context-free grammars (CFG) in Fock space, i.e. infinite-dimensional Hilbert space as being used in quantum field theory. We define a novel normal form for CFG by means of term algebras. Using a recently developed software toolbox, called FockBox, we construct Fock space representations for the trees built up by a CFG left-corner (LC) parser. Results. We prove a universal representation theorem for CFG term algebras in Fock space and illustrate our findings through a low-dimensional principal component projection of the LC parser states. Conclusions. Our approach could leverage the development of VSA for explainable artificial intelligence (XAI) by means of hyperdimensional deep neural computation. It could be of significance for the improvement of cognitive user interfaces and other applications of VSA in machine learning.
2405.16103
Andrzej Lingas
Andrzej Lingas
Boolean Matrix Multiplication for Highly Clustered Data on the Congested Clique
To appear in Euro-Par 2024 proceedings, 14 pages
null
null
null
cs.DS cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a protocol for the Boolean matrix product of two $n\times b$ Boolean matrices on the congested clique designed for the situation when the rows of the first matrix or the columns of the second matrix are highly clustered in the space $\{0,1\}^n.$ With high probability (w.h.p), it uses $\tilde{O}\left(\sqrt {\frac M n+1}\right)$ rounds on the congested clique with $n$ nodes, where $M$ is the minimum of the cost of a minimum spanning tree (MST) of the rows of the first input matrix and the cost of an MST of the columns of the second input matrix in the Hamming space $\{0,1\}^n.$ A key step in our protocol is the computation of an approximate minimum spanning tree of a set of $n$ points in the space $\{0,1\}^n$. We provide a protocol for this problem (of interest in its own right) based on a known randomized technique of dimension reduction in Hamming spaces. W.h.p., it constructs an $O(1)$-factor approximation of an MST of $n$ points in the Hamming space $\{ 0,\ 1\}^n$ using $O(\log^3 n)$ rounds on the congested clique with $n$ nodes.
[ { "created": "Sat, 25 May 2024 07:31:05 GMT", "version": "v1" } ]
2024-05-28
[ [ "Lingas", "Andrzej", "" ] ]
We present a protocol for the Boolean matrix product of two $n\times b$ Boolean matrices on the congested clique designed for the situation when the rows of the first matrix or the columns of the second matrix are highly clustered in the space $\{0,1\}^n.$ With high probability (w.h.p), it uses $\tilde{O}\left(\sqrt {\frac M n+1}\right)$ rounds on the congested clique with $n$ nodes, where $M$ is the minimum of the cost of a minimum spanning tree (MST) of the rows of the first input matrix and the cost of an MST of the columns of the second input matrix in the Hamming space $\{0,1\}^n.$ A key step in our protocol is the computation of an approximate minimum spanning tree of a set of $n$ points in the space $\{0,1\}^n$. We provide a protocol for this problem (of interest in its own right) based on a known randomized technique of dimension reduction in Hamming spaces. W.h.p., it constructs an $O(1)$-factor approximation of an MST of $n$ points in the Hamming space $\{ 0,\ 1\}^n$ using $O(\log^3 n)$ rounds on the congested clique with $n$ nodes.
1904.00510
Gustavo Gil
Gustavo D. Gil, Julie M. Walker, Nabil Zemiti, Allison M. Okamura, Philippe Poignet
How to enhance learning of robotic surgery gestures? A tactile cue saliency investigation for 3D hand guidance
HSMR: 12th Hamlyn Symposium on Medical Robotics (London, 24th-26th June 2019)
null
10.31256/HSMR2019.9
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The current generation of surgeons requires extensive training in teleoperation to develop specific dexterous skills, which are independent of medical knowledge. Training curricula progress from manipulation tasks to simulated surgical tasks but are limited in time. To tackle this, we propose to integrate surgical robotic training with Haptic Feedback (HF) to improve skill acquisition. This paper presents the initial but promising results of our haptic device designed to support the training of surgical gestures. Our ongoing work focuses on integrating the HF into the RAVEN II platform.
[ { "created": "Sun, 31 Mar 2019 23:36:31 GMT", "version": "v1" }, { "created": "Sun, 7 Apr 2019 16:15:42 GMT", "version": "v2" }, { "created": "Fri, 10 May 2019 09:18:31 GMT", "version": "v3" }, { "created": "Fri, 19 Jul 2019 10:15:43 GMT", "version": "v4" } ]
2019-07-23
[ [ "Gil", "Gustavo D.", "" ], [ "Walker", "Julie M.", "" ], [ "Zemiti", "Nabil", "" ], [ "Okamura", "Allison M.", "" ], [ "Poignet", "Philippe", "" ] ]
The current generation of surgeons requires extensive training in teleoperation to develop specific dexterous skills, which are independent of medical knowledge. Training curricula progress from manipulation tasks to simulated surgical tasks but are limited in time. To tackle this, we propose to integrate surgical robotic training with Haptic Feedback (HF) to improve skill acquisition. This paper presents the initial but promising results of our haptic device designed to support the training of surgical gestures. Our ongoing work focuses on integrating the HF into the RAVEN II platform.
2305.09442
Jake Welde
Jake Welde and Vijay Kumar
Towards Automatic Identification of Globally Valid Geometric Flat Outputs via Numerical Optimization
To appear as a contributed paper in the "Geometric Representations" workshop at the 2023 International Conference on Robotics and Automation (ICRA)
null
null
null
cs.RO math.DG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differential flatness enables efficient planning and control for underactuated robotic systems, but we lack a systematic and practical means of identifying a flat output (or determining whether one exists) for an arbitrary robotic system. In this work, we leverage recent results elucidating the role of symmetry in constructing flat outputs for free-flying robotic systems. Using the tools of Riemannian geometry, Lie group theory, and differential forms, we cast the search for a globally valid, equivariant flat output as an optimization problem. An approximate transcription of this continuum formulation to a quadratic program is performed, and its solutions for two example systems achieve precise agreement with the known closed-form flat outputs. Our results point towards a systematic, automated approach to numerically identify geometric flat outputs directly from the system model, particularly useful when complexity renders pen and paper analysis intractable.
[ { "created": "Tue, 16 May 2023 13:58:40 GMT", "version": "v1" } ]
2023-05-17
[ [ "Welde", "Jake", "" ], [ "Kumar", "Vijay", "" ] ]
Differential flatness enables efficient planning and control for underactuated robotic systems, but we lack a systematic and practical means of identifying a flat output (or determining whether one exists) for an arbitrary robotic system. In this work, we leverage recent results elucidating the role of symmetry in constructing flat outputs for free-flying robotic systems. Using the tools of Riemannian geometry, Lie group theory, and differential forms, we cast the search for a globally valid, equivariant flat output as an optimization problem. An approximate transcription of this continuum formulation to a quadratic program is performed, and its solutions for two example systems achieve precise agreement with the known closed-form flat outputs. Our results point towards a systematic, automated approach to numerically identify geometric flat outputs directly from the system model, particularly useful when complexity renders pen and paper analysis intractable.
1107.3759
Dejan Munjin
Dejan Munjin, Jean-Henry Morin
User Empowerment in the Internet of Things
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper focuses on the characteristics of two big triggers that facilitated wide user adoption of the Internet: Web 2.0 and online social networks. We identify obstacles to the reproduction of these events in the Internet of Things. To support our hypothesis, we first compare the ways the Internet is used today with future scenarios for the Internet of Things. We identify barriers that could slow down the emergence of such social events during user adoption of the Internet of Things, and we propose a conceptual framework to solve these problems.
[ { "created": "Tue, 19 Jul 2011 16:09:07 GMT", "version": "v1" } ]
2011-07-20
[ [ "Munjin", "Dejan", "" ], [ "Morin", "Jean-Henry", "" ] ]
This paper focuses on the characteristics of two big triggers that facilitated wide user adoption of the Internet: Web 2.0 and online social networks. We identify obstacles to the reproduction of these events in the Internet of Things. To support our hypothesis, we first compare the ways the Internet is used today with future scenarios for the Internet of Things. We identify barriers that could slow down the emergence of such social events during user adoption of the Internet of Things, and we propose a conceptual framework to solve these problems.
1510.00132
MIkhail Hushchyn
Mikhail Hushchyn, Philippe Charpentier, Andrey Ustyuzhanin
Disk storage management for LHCb based on Data Popularity estimator
null
null
10.1088/1742-6596/664/4/042026
null
cs.DC cs.LG physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration, etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times for jobs using this data.
[ { "created": "Thu, 1 Oct 2015 07:40:37 GMT", "version": "v1" } ]
2016-01-20
[ [ "Hushchyn", "Mikhail", "" ], [ "Charpentier", "Philippe", "" ], [ "Ustyuzhanin", "Andrey", "" ] ]
This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration, etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times for jobs using this data.
1412.2662
Dmitry Zakablukov
Dmitry V. Zakablukov
On Gate Complexity of Reversible Circuits Consisting of NOT, CNOT and 2-CNOT Gates
In Russian, 18 pages, 1 figure
null
10.4213/dm1365
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper discusses the gate complexity of reversible circuits consisting of NOT, CNOT and 2-CNOT gates. The Shannon gate complexity function $L(n, q)$ for a reversible circuit, implementing a Boolean transformation $f\colon \mathbb Z_2^n \to \mathbb Z_2^n$, is defined as a function of $n$ and the number of additional inputs $q$. The general lower bound $L(n,q) \geq \frac{2^n(n-2)}{3\log_2(n+q)} - \frac{n}{3}$ for the gate complexity of a reversible circuit is proved. An upper bound $L(n,0) \leqslant 3n2^{n+4}(1+o(1)) \mathop / \log_2n$ for the gate complexity of a reversible circuit without additional inputs is proved. An upper bound $L(n,q_0) \lesssim 2^n$ for the gate complexity of a reversible circuit with $q_0 \sim n2^{n-o(n)}$ additional inputs is proved.
[ { "created": "Mon, 8 Dec 2014 16:58:12 GMT", "version": "v1" }, { "created": "Sat, 13 Feb 2016 12:15:30 GMT", "version": "v2" } ]
2016-07-08
[ [ "Zakablukov", "Dmitry V.", "" ] ]
The paper discusses the gate complexity of reversible circuits consisting of NOT, CNOT and 2-CNOT gates. The Shannon gate complexity function $L(n, q)$ for a reversible circuit, implementing a Boolean transformation $f\colon \mathbb Z_2^n \to \mathbb Z_2^n$, is defined as a function of $n$ and the number of additional inputs $q$. The general lower bound $L(n,q) \geq \frac{2^n(n-2)}{3\log_2(n+q)} - \frac{n}{3}$ for the gate complexity of a reversible circuit is proved. An upper bound $L(n,0) \leqslant 3n2^{n+4}(1+o(1)) \mathop / \log_2n$ for the gate complexity of a reversible circuit without additional inputs is proved. An upper bound $L(n,q_0) \lesssim 2^n$ for the gate complexity of a reversible circuit with $q_0 \sim n2^{n-o(n)}$ additional inputs is proved.
1305.2704
Ahmad Alamgir Khan Mr
Ahmad Alamgir Khan
Preventing Phishing Attacks using One Time Password and User Machine Identification
5 Pages, 8 Figures, Published with International Journal of Computer Applications 0975 8887 Volume 68 No.3, April 2013
International Journal of Computer Applications 68(3):7-11, April 2013
10.5120/11557-6839
null
cs.CR
http://creativecommons.org/licenses/by/3.0/
Phishing is a type of attack in which cyber criminals trick victims in order to steal their personal and financial data. It has become an organized criminal activity. Spoofed emails claiming to be from a legitimate source are crafted in a way that leads victims to reveal their personal and financial data by misdirecting them to a counterfeit website. This research paper presents a novel approach to combat Phishing attacks. An approach is proposed where the user retrieves a one time password by SMS or by an alternate email address. After receiving the one time password, the web server creates an encrypted token for the user's computer or device for authentication. The encrypted token is then used for identification; any time the user wishes to access the website, he or she must request a new password. The one time password, as the name implies, expires after a single use. The one time password and encrypted token are a smart way to tackle this problem.
[ { "created": "Mon, 13 May 2013 08:41:13 GMT", "version": "v1" } ]
2013-05-14
[ [ "Khan", "Ahmad Alamgir", "" ] ]
Phishing is a type of attack in which cyber criminals trick victims in order to steal their personal and financial data. It has become an organized criminal activity. Spoofed emails claiming to be from a legitimate source are crafted in a way that leads victims to reveal their personal and financial data by misdirecting them to a counterfeit website. This research paper presents a novel approach to combat Phishing attacks. An approach is proposed where the user retrieves a one time password by SMS or by an alternate email address. After receiving the one time password, the web server creates an encrypted token for the user's computer or device for authentication. The encrypted token is then used for identification; any time the user wishes to access the website, he or she must request a new password. The one time password, as the name implies, expires after a single use. The one time password and encrypted token are a smart way to tackle this problem.
1406.3969
Siddhartha Ghosh
Siddhartha Ghosh, Sujata Thamke and Kalyani U.R.S
Translation Of Telugu-Marathi and Vice-Versa using Rule Based Machine Translation
13 pages, Fourth International Conference on Advances in Computing and Information Technology (ACITY 2014) Delhi, India - May 2014
null
10.5121/csit.2014.4501
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In today's digital world, automated Machine Translation of one language to another has come a long way, with many different success stories. Whereas Babel Fish supports a good number of foreign languages but only Hindi among Indian languages, the Google Translator takes care of about 10 Indian languages. Though most Automated Machine Translation Systems are doing well, handling Indian languages needs major care, especially when handling local proverbs/idioms. Most Machine Translation systems follow the direct translation approach while translating one Indian language to another. Our research at KMIT R&D Lab found that handling local proverbs/idioms has not been given enough attention by earlier research work. This paper focuses on two of the most widely spoken Indian languages, Marathi and Telugu, and translation between them. Handling proverbs and idioms of both languages has been given special care, and the research outcome shows a significant achievement in this direction.
[ { "created": "Mon, 16 Jun 2014 10:59:03 GMT", "version": "v1" } ]
2014-06-17
[ [ "Ghosh", "Siddhartha", "" ], [ "Thamke", "Sujata", "" ], [ "S", "Kalyani U. R.", "" ] ]
In today's digital world, automated Machine Translation of one language to another has come a long way, with many different success stories. Whereas Babel Fish supports a good number of foreign languages but only Hindi among Indian languages, the Google Translator takes care of about 10 Indian languages. Though most Automated Machine Translation Systems are doing well, handling Indian languages needs major care, especially when handling local proverbs/idioms. Most Machine Translation systems follow the direct translation approach while translating one Indian language to another. Our research at KMIT R&D Lab found that handling local proverbs/idioms has not been given enough attention by earlier research work. This paper focuses on two of the most widely spoken Indian languages, Marathi and Telugu, and translation between them. Handling proverbs and idioms of both languages has been given special care, and the research outcome shows a significant achievement in this direction.
1911.01763
Rahat Yeasin Emon
Rahat Yeasin Emon, Sharmistha Chanda Tista
An Efficient Word Lookup System by using Improved Trie Algorithm
6 pages
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Efficient word storing and searching is an important task in computer science. An application's space complexity, time complexity, and overall performance depend on this string data. Many word-searching data structures and algorithms exist today, but few of them offer space compression. The trie is a popular data structure for word searching because of its linear searching capability. It is a basic and important part of various computer applications such as information retrieval, natural language processing, database systems, compilers, and computer networks. However, currently available versions of the trie cannot be used widely because of their high memory requirements. This paper proposes a new Radix-trie-based data structure for word storing and searching which can share not only prefixes but also infixes and suffixes, and thus reduces the memory requirement. We propose a new emptiness property for the Radix trie. The proposed trie can reduce the number of character cells and can thus dramatically reduce an application's runtime memory size. Using it as a data store for an operating system, the overall main memory requirement of a device can be reduced to a large extent.
[ { "created": "Tue, 5 Nov 2019 13:36:15 GMT", "version": "v1" } ]
2019-11-06
[ [ "Emon", "Rahat Yeasin", "" ], [ "Tista", "Sharmistha Chanda", "" ] ]
Efficient word storing and searching is an important task in computer science. An application's space complexity, time complexity, and overall performance depend on this string data. Many word-searching data structures and algorithms exist today, but few of them offer space compression. The trie is a popular data structure for word searching because of its linear searching capability. It is a basic and important part of various computer applications such as information retrieval, natural language processing, database systems, compilers, and computer networks. However, currently available versions of the trie cannot be used widely because of their high memory requirements. This paper proposes a new Radix-trie-based data structure for word storing and searching which can share not only prefixes but also infixes and suffixes, and thus reduces the memory requirement. We propose a new emptiness property for the Radix trie. The proposed trie can reduce the number of character cells and can thus dramatically reduce an application's runtime memory size. Using it as a data store for an operating system, the overall main memory requirement of a device can be reduced to a large extent.
1910.14673
Tiantian Fang
Tiantian Fang and Alexander G. Schwing
Co-Generation with GANs using AIS based HMC
Accepted to NeurIPS 2019
null
null
null
cs.CV cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Inferring the most likely configuration for a subset of variables of a joint distribution given the remaining ones - which we refer to as co-generation - is an important challenge that is computationally demanding for all but the simplest settings. This task has received a considerable amount of attention, particularly for classical ways of modeling distributions like structured prediction. In contrast, almost nothing is known about this task when considering recently proposed techniques for modeling high-dimensional distributions, particularly generative adversarial nets (GANs). Therefore, in this paper, we study the challenges that arise in co-generation with GANs. To address those challenges we develop an annealed-importance-sampling-based Hamiltonian Monte Carlo co-generation algorithm. The presented approach significantly outperforms classical gradient-based methods on a synthetic dataset and on the CelebA and LSUN datasets.
[ { "created": "Thu, 31 Oct 2019 17:59:59 GMT", "version": "v1" } ]
2019-11-01
[ [ "Fang", "Tiantian", "" ], [ "Schwing", "Alexander G.", "" ] ]
Inferring the most likely configuration for a subset of variables of a joint distribution given the remaining ones - which we refer to as co-generation - is an important challenge that is computationally demanding for all but the simplest settings. This task has received a considerable amount of attention, particularly for classical ways of modeling distributions like structured prediction. In contrast, almost nothing is known about this task when considering recently proposed techniques for modeling high-dimensional distributions, particularly generative adversarial nets (GANs). Therefore, in this paper, we study the challenges that arise in co-generation with GANs. To address those challenges we develop an annealed-importance-sampling-based Hamiltonian Monte Carlo co-generation algorithm. The presented approach significantly outperforms classical gradient-based methods on a synthetic dataset and on the CelebA and LSUN datasets.
1811.10376
Yi-Te Hsu
Yi-Te Hsu, Zining Zhu, Chi-Te Wang, Shih-Hau Fang, Frank Rudzicz, Yu Tsao
Robustness against the channel effect in pathological voice detection
Machine Learning for Health (ML4H) Workshop at NeurIPS 2018 arXiv:1811.07216
null
null
ML4H/2018/200
cs.LG cs.SD eess.AS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many people suffer from voice disorders, which can adversely affect their quality of life. In response, some researchers have proposed algorithms for automatic assessment of these disorders, based on voice signals. However, these signals can be sensitive to the recording device. Indeed, the channel effect is a pervasive problem in machine learning for healthcare. In this study, we propose a detection system for pathological voice, which is robust against the channel effect. This system is based on a bidirectional LSTM network. To increase the performance robustness against channel mismatch, we integrate domain adversarial training (DAT) to eliminate the differences between the devices. When we train on data recorded on a high-quality microphone and evaluate on smartphone data without labels, our robust detection system increases the PR-AUC from 0.8448 to 0.9455 (and 0.9522 with target sample labels). To the best of our knowledge, this is the first study applying unsupervised domain adaptation to pathological voice detection. Notably, our system does not need target device sample labels, which allows for generalization to many new devices.
[ { "created": "Mon, 26 Nov 2018 14:11:12 GMT", "version": "v1" }, { "created": "Sun, 2 Dec 2018 14:52:39 GMT", "version": "v2" } ]
2018-12-04
[ [ "Hsu", "Yi-Te", "" ], [ "Zhu", "Zining", "" ], [ "Wang", "Chi-Te", "" ], [ "Fang", "Shih-Hau", "" ], [ "Rudzicz", "Frank", "" ], [ "Tsao", "Yu", "" ] ]
Many people suffer from voice disorders, which can adversely affect their quality of life. In response, some researchers have proposed algorithms for automatic assessment of these disorders, based on voice signals. However, these signals can be sensitive to the recording device. Indeed, the channel effect is a pervasive problem in machine learning for healthcare. In this study, we propose a detection system for pathological voice, which is robust against the channel effect. This system is based on a bidirectional LSTM network. To increase the performance robustness against channel mismatch, we integrate domain adversarial training (DAT) to eliminate the differences between the devices. When we train on data recorded on a high-quality microphone and evaluate on smartphone data without labels, our robust detection system increases the PR-AUC from 0.8448 to 0.9455 (and 0.9522 with target sample labels). To the best of our knowledge, this is the first study applying unsupervised domain adaptation to pathological voice detection. Notably, our system does not need target device sample labels, which allows for generalization to many new devices.
0905.0079
Thorsten Hehn
Thorsten Hehn, Johannes B. Huber, Olgica Milenkovic, Stefan Laendner
Multiple-Bases Belief-Propagation Decoding of High-Density Cyclic Codes
This full paper accompanies a letter submitted to "IEEE Transactions on Communications". It is intended to provide detailed information for interested readers of the letter. 24 pages, 6 figures
null
10.1109/TCOMM.2010.01.070468
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new method for decoding short and moderate length linear block codes with dense parity-check matrix representations of cyclic form, termed multiple-bases belief-propagation (MBBP). The proposed iterative scheme makes use of the fact that a code has many structurally diverse parity-check matrices, capable of detecting different error patterns. We show that this inherent code property leads to decoding algorithms with significantly better performance when compared to standard BP decoding. Furthermore, we describe how to choose sets of parity-check matrices of cyclic form amenable for multiple-bases decoding, based on analytical studies performed for the binary erasure channel. For several cyclic and extended cyclic codes, the MBBP decoding performance can be shown to closely follow that of maximum-likelihood decoders.
[ { "created": "Fri, 1 May 2009 11:15:25 GMT", "version": "v1" } ]
2016-11-15
[ [ "Hehn", "Thorsten", "" ], [ "Huber", "Johannes B.", "" ], [ "Milenkovic", "Olgica", "" ], [ "Laendner", "Stefan", "" ] ]
We introduce a new method for decoding short and moderate length linear block codes with dense parity-check matrix representations of cyclic form, termed multiple-bases belief-propagation (MBBP). The proposed iterative scheme makes use of the fact that a code has many structurally diverse parity-check matrices, capable of detecting different error patterns. We show that this inherent code property leads to decoding algorithms with significantly better performance when compared to standard BP decoding. Furthermore, we describe how to choose sets of parity-check matrices of cyclic form amenable for multiple-bases decoding, based on analytical studies performed for the binary erasure channel. For several cyclic and extended cyclic codes, the MBBP decoding performance can be shown to closely follow that of maximum-likelihood decoders.
1306.6657
Markus Rabe
Bernd Finkbeiner, Markus N. Rabe, and C\'esar S\'anchez
A Temporal Logic for Hyperproperties
null
null
null
null
cs.LO cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hyperproperties, as introduced by Clarkson and Schneider, characterize the correctness of a computer program as a condition on its set of computation paths. Standard temporal logics can only refer to a single path at a time, and therefore cannot express many hyperproperties of interest, including noninterference and other important properties in security and coding theory. In this paper, we investigate an extension of temporal logic with explicit path variables. We show that the quantification over paths naturally subsumes other extensions of temporal logic with operators for information flow and knowledge. The model checking problem for temporal logic with path quantification is decidable. For alternation depth 1, the complexity is PSPACE in the length of the formula and NLOGSPACE in the size of the system, as for linear-time temporal logic.
[ { "created": "Thu, 27 Jun 2013 20:39:03 GMT", "version": "v1" } ]
2013-07-01
[ [ "Finkbeiner", "Bernd", "" ], [ "Rabe", "Markus N.", "" ], [ "Sánchez", "César", "" ] ]
Hyperproperties, as introduced by Clarkson and Schneider, characterize the correctness of a computer program as a condition on its set of computation paths. Standard temporal logics can only refer to a single path at a time, and therefore cannot express many hyperproperties of interest, including noninterference and other important properties in security and coding theory. In this paper, we investigate an extension of temporal logic with explicit path variables. We show that the quantification over paths naturally subsumes other extensions of temporal logic with operators for information flow and knowledge. The model checking problem for temporal logic with path quantification is decidable. For alternation depth 1, the complexity is PSPACE in the length of the formula and NLOGSPACE in the size of the system, as for linear-time temporal logic.
2308.00931
Risheng Liu
Zengxi Zhang, Zhiying Jiang, Jinyuan Liu, Xin Fan, Risheng Liu
WaterFlow: Heuristic Normalizing Flow for Underwater Image Enhancement and Beyond
10 pages, 13 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Underwater images suffer from light refraction and absorption, which impair visibility and interfere with subsequent applications. Existing underwater image enhancement methods mainly focus on image quality improvement, ignoring their effect on practical applications. To balance visual quality and applicability, we propose a heuristic normalizing flow for detection-driven underwater image enhancement, dubbed WaterFlow. Specifically, we first develop an invertible mapping to achieve the translation between a degraded image and its clear counterpart. Considering differentiability and interpretability, we incorporate a heuristic prior into the data-driven mapping procedure, where the ambient light and the medium transmission coefficient benefit credible generation. Furthermore, we introduce a detection perception module to transmit implicit semantic guidance into the enhancement procedure, so that the enhanced images hold more detection-favorable features and are able to promote detection performance. Extensive experiments demonstrate the superiority of WaterFlow over state-of-the-art methods, both quantitatively and qualitatively.
[ { "created": "Wed, 2 Aug 2023 04:17:35 GMT", "version": "v1" } ]
2023-08-03
[ [ "Zhang", "Zengxi", "" ], [ "Jiang", "Zhiying", "" ], [ "Liu", "Jinyuan", "" ], [ "Fan", "Xin", "" ], [ "Liu", "Risheng", "" ] ]
Underwater images suffer from light refraction and absorption, which impair visibility and interfere with subsequent applications. Existing underwater image enhancement methods mainly focus on image quality improvement, ignoring their effect on practical applications. To balance visual quality and applicability, we propose a heuristic normalizing flow for detection-driven underwater image enhancement, dubbed WaterFlow. Specifically, we first develop an invertible mapping to achieve the translation between a degraded image and its clear counterpart. Considering differentiability and interpretability, we incorporate a heuristic prior into the data-driven mapping procedure, where the ambient light and the medium transmission coefficient benefit credible generation. Furthermore, we introduce a detection perception module to transmit implicit semantic guidance into the enhancement procedure, so that the enhanced images hold more detection-favorable features and are able to promote detection performance. Extensive experiments demonstrate the superiority of WaterFlow over state-of-the-art methods, both quantitatively and qualitatively.
2312.10099
Shun Liu
Shun Liu, Jianan Zhang, Ruocheng Song, Teik Toe Teoh
ADA-YOLO: Dynamic Fusion of YOLOv8 and Adaptive Heads for Precise Image Detection and Diagnosis
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object detection and localization are crucial tasks for biomedical image analysis, particularly in the field of hematology, where the detection and recognition of blood cells are essential for diagnosis and treatment decisions. While attention-based methods have shown significant progress in object detection in various domains, their application in medical object detection has been limited due to the unique challenges posed by medical imaging datasets. To address this issue, we propose ADA-YOLO, a light-weight yet effective method for medical object detection that integrates attention-based mechanisms with the YOLOv8 architecture. Our proposed method leverages dynamic feature localisation and parallel regression for computer vision tasks through an \textit{adaptive head} module. Empirical experiments were conducted on the Blood Cell Count and Detection (BCCD) dataset to evaluate the effectiveness of ADA-YOLO. The results showed that ADA-YOLO outperforms the YOLOv8 model in mAP (mean average precision) on the BCCD dataset while using less than a third of the space of YOLOv8. This indicates that our proposed method is effective. Moreover, the light-weight nature of our proposed method makes it suitable for deployment in resource-constrained environments such as mobile devices or edge computing systems, which could ultimately lead to improved diagnosis and treatment outcomes in the field of hematology.
[ { "created": "Thu, 14 Dec 2023 18:27:13 GMT", "version": "v1" } ]
2023-12-19
[ [ "Liu", "Shun", "" ], [ "Zhang", "Jianan", "" ], [ "Song", "Ruocheng", "" ], [ "Teoh", "Teik Toe", "" ] ]
Object detection and localization are crucial tasks for biomedical image analysis, particularly in the field of hematology, where the detection and recognition of blood cells are essential for diagnosis and treatment decisions. While attention-based methods have shown significant progress in object detection in various domains, their application in medical object detection has been limited due to the unique challenges posed by medical imaging datasets. To address this issue, we propose ADA-YOLO, a light-weight yet effective method for medical object detection that integrates attention-based mechanisms with the YOLOv8 architecture. Our proposed method leverages dynamic feature localisation and parallel regression for computer vision tasks through an \textit{adaptive head} module. Empirical experiments were conducted on the Blood Cell Count and Detection (BCCD) dataset to evaluate the effectiveness of ADA-YOLO. The results showed that ADA-YOLO outperforms the YOLOv8 model in mAP (mean average precision) on the BCCD dataset while using less than a third of the space of YOLOv8. This indicates that our proposed method is effective. Moreover, the light-weight nature of our proposed method makes it suitable for deployment in resource-constrained environments such as mobile devices or edge computing systems, which could ultimately lead to improved diagnosis and treatment outcomes in the field of hematology.
2311.17795
Guy Hay
Guy Hay and Ohad Volk
Marginal Laplacian Score
10 pages
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-dimensional imbalanced data poses a machine learning challenge. In the absence of sufficient or high-quality labels, unsupervised feature selection methods are crucial for the success of subsequent algorithms. Therefore, we introduce the Marginal Laplacian Score (MLS), a modification of the well-known Laplacian Score (LS) tailored to better address imbalanced data. We introduce the assumption that the minority class or anomalous samples appear more frequently in the margins of the features. Consequently, MLS aims to preserve the local structure of the dataset's margin. We propose its integration into modern feature selection methods that utilize the Laplacian score. We integrate the MLS algorithm into Differentiable Unsupervised Feature Selection (DUFS), resulting in DUFS-MLS. The proposed methods demonstrate robust and improved performance on synthetic and public datasets.
[ { "created": "Wed, 29 Nov 2023 16:45:43 GMT", "version": "v1" }, { "created": "Fri, 2 Feb 2024 08:06:51 GMT", "version": "v2" } ]
2024-02-05
[ [ "Hay", "Guy", "" ], [ "Volk", "Ohad", "" ] ]
High-dimensional imbalanced data poses a machine learning challenge. In the absence of sufficient or high-quality labels, unsupervised feature selection methods are crucial for the success of subsequent algorithms. Therefore, we introduce the Marginal Laplacian Score (MLS), a modification of the well-known Laplacian Score (LS) tailored to better address imbalanced data. We introduce the assumption that the minority class or anomalous samples appear more frequently in the margins of the features. Consequently, MLS aims to preserve the local structure of the dataset's margin. We propose its integration into modern feature selection methods that utilize the Laplacian score. We integrate the MLS algorithm into Differentiable Unsupervised Feature Selection (DUFS), resulting in DUFS-MLS. The proposed methods demonstrate robust and improved performance on synthetic and public datasets.
1905.10902
Darren Strash
Damir Ferizovic and Demian Hespe and Sebastian Lamm and Matthias Mnich and Christian Schulz and Darren Strash
Engineering Kernelization for Maximum Cut
16 pages, 4 tables, 2 figures
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Kernelization is a general theoretical framework for preprocessing instances of NP-hard problems into (generally smaller) instances with bounded size, via the repeated application of data reduction rules. For the fundamental Max Cut problem, kernelization algorithms are theoretically highly efficient for various parameterizations. However, the efficacy of these reduction rules in practice---to aid solving highly challenging benchmark instances to optimality---remains entirely unexplored. We engineer a new suite of efficient data reduction rules that subsume most of the previously published rules, and demonstrate their significant impact on benchmark data sets, including synthetic instances, and data sets from the VLSI and image segmentation application domains. Our experiments reveal that current state-of-the-art solvers can be sped up by up to multiple orders of magnitude when combined with our data reduction rules. On social and biological networks in particular, kernelization enables us to solve four instances that were previously unsolved in a ten-hour time limit with state-of-the-art solvers; three of these instances are now solved in less than two seconds.
[ { "created": "Sun, 26 May 2019 23:12:33 GMT", "version": "v1" } ]
2019-05-28
[ [ "Ferizovic", "Damir", "" ], [ "Hespe", "Demian", "" ], [ "Lamm", "Sebastian", "" ], [ "Mnich", "Matthias", "" ], [ "Schulz", "Christian", "" ], [ "Strash", "Darren", "" ] ]
Kernelization is a general theoretical framework for preprocessing instances of NP-hard problems into (generally smaller) instances with bounded size, via the repeated application of data reduction rules. For the fundamental Max Cut problem, kernelization algorithms are theoretically highly efficient for various parameterizations. However, the efficacy of these reduction rules in practice---to aid solving highly challenging benchmark instances to optimality---remains entirely unexplored. We engineer a new suite of efficient data reduction rules that subsume most of the previously published rules, and demonstrate their significant impact on benchmark data sets, including synthetic instances, and data sets from the VLSI and image segmentation application domains. Our experiments reveal that current state-of-the-art solvers can be sped up by up to multiple orders of magnitude when combined with our data reduction rules. On social and biological networks in particular, kernelization enables us to solve four instances that were previously unsolved in a ten-hour time limit with state-of-the-art solvers; three of these instances are now solved in less than two seconds.
2404.19048
Ximing Dong
Ximing Dong, Dayi Lin, Shaowei Wang, Ahmed E. Hassan
A Framework for Real-time Safeguarding the Text Generation of Large Language Model
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) have significantly advanced natural language processing (NLP) tasks but also pose ethical and societal risks due to their propensity to generate harmful content. To address this, various approaches have been developed to safeguard LLMs from producing unsafe content. However, existing methods have limitations, including the need for training specific control models and proactive intervention during text generation, that lead to quality degradation and increased computational overhead. To mitigate those limitations, we propose LLMSafeGuard, a lightweight framework to safeguard LLM text generation in real-time. LLMSafeGuard integrates an external validator into the beam search algorithm during decoding, rejecting candidates that violate safety constraints while allowing valid ones to proceed. We introduce a similarity-based validation approach, simplifying constraint introduction and eliminating the need for control model training. Additionally, LLMSafeGuard employs a context-wise timing selection strategy, intervening in LLMs only when necessary. We evaluate LLMSafeGuard on two tasks, detoxification and copyright safeguarding, and demonstrate its superior performance over SOTA baselines. For instance, LLMSafeGuard reduces the average toxic score of LLM output by 29.7% compared to the best baseline, while preserving linguistic quality similar to that of natural output in the detoxification task. Similarly, in the copyright task, LLMSafeGuard decreases the Longest Common Subsequence (LCS) by 56.2% compared to baselines. Moreover, our context-wise timing selection strategy reduces inference time by at least 24% while maintaining effectiveness comparable to validating at each time step. LLMSafeGuard also offers tunable parameters to balance its effectiveness and efficiency.
[ { "created": "Mon, 29 Apr 2024 18:40:01 GMT", "version": "v1" }, { "created": "Wed, 1 May 2024 19:53:12 GMT", "version": "v2" } ]
2024-05-03
[ [ "Dong", "Ximing", "" ], [ "Lin", "Dayi", "" ], [ "Wang", "Shaowei", "" ], [ "Hassan", "Ahmed E.", "" ] ]
Large Language Models (LLMs) have significantly advanced natural language processing (NLP) tasks but also pose ethical and societal risks due to their propensity to generate harmful content. To address this, various approaches have been developed to safeguard LLMs from producing unsafe content. However, existing methods have limitations, including the need for training specific control models and proactive intervention during text generation, that lead to quality degradation and increased computational overhead. To mitigate those limitations, we propose LLMSafeGuard, a lightweight framework to safeguard LLM text generation in real-time. LLMSafeGuard integrates an external validator into the beam search algorithm during decoding, rejecting candidates that violate safety constraints while allowing valid ones to proceed. We introduce a similarity-based validation approach, simplifying constraint introduction and eliminating the need for control model training. Additionally, LLMSafeGuard employs a context-wise timing selection strategy, intervening in LLMs only when necessary. We evaluate LLMSafeGuard on two tasks, detoxification and copyright safeguarding, and demonstrate its superior performance over SOTA baselines. For instance, LLMSafeGuard reduces the average toxic score of LLM output by 29.7% compared to the best baseline, while preserving linguistic quality similar to that of natural output in the detoxification task. Similarly, in the copyright task, LLMSafeGuard decreases the Longest Common Subsequence (LCS) by 56.2% compared to baselines. Moreover, our context-wise timing selection strategy reduces inference time by at least 24% while maintaining effectiveness comparable to validating at each time step. LLMSafeGuard also offers tunable parameters to balance its effectiveness and efficiency.
2104.03879
Dat Quoc Nguyen
Thinh Hung Truong, Mai Hoang Dao, Dat Quoc Nguyen
COVID-19 Named Entity Recognition for Vietnamese
To appear in Proceedings of NAACL 2021
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The current COVID-19 pandemic has led to the creation of many corpora that facilitate NLP research and downstream applications to help fight the pandemic. However, most of these corpora are exclusively for English. As the pandemic is a global problem, it is worth creating COVID-19 related datasets for languages other than English. In this paper, we present the first manually-annotated COVID-19 domain-specific dataset for Vietnamese. Particularly, our dataset is annotated for the named entity recognition (NER) task with newly-defined entity types that can be used in other future epidemics. Our dataset also contains the largest number of entities compared to existing Vietnamese NER datasets. We empirically conduct experiments using strong baselines on our dataset, and find that: automatic Vietnamese word segmentation helps improve the NER results and the highest performances are obtained by fine-tuning pre-trained language models, where the monolingual model PhoBERT for Vietnamese (Nguyen and Nguyen, 2020) produces higher results than the multilingual model XLM-R (Conneau et al., 2020). We publicly release our dataset at: https://github.com/VinAIResearch/PhoNER_COVID19
[ { "created": "Thu, 8 Apr 2021 16:35:34 GMT", "version": "v1" } ]
2021-04-09
[ [ "Truong", "Thinh Hung", "" ], [ "Dao", "Mai Hoang", "" ], [ "Nguyen", "Dat Quoc", "" ] ]
The current COVID-19 pandemic has led to the creation of many corpora that facilitate NLP research and downstream applications to help fight the pandemic. However, most of these corpora are exclusively for English. As the pandemic is a global problem, it is worth creating COVID-19 related datasets for languages other than English. In this paper, we present the first manually-annotated COVID-19 domain-specific dataset for Vietnamese. Particularly, our dataset is annotated for the named entity recognition (NER) task with newly-defined entity types that can be used in other future epidemics. Our dataset also contains the largest number of entities compared to existing Vietnamese NER datasets. We empirically conduct experiments using strong baselines on our dataset, and find that: automatic Vietnamese word segmentation helps improve the NER results and the highest performances are obtained by fine-tuning pre-trained language models, where the monolingual model PhoBERT for Vietnamese (Nguyen and Nguyen, 2020) produces higher results than the multilingual model XLM-R (Conneau et al., 2020). We publicly release our dataset at: https://github.com/VinAIResearch/PhoNER_COVID19
1811.11262
Pieter Stroobant
Pieter Stroobant, Sergi Abadal, Wouter Tavernier, Eduard Alarc\'on, Didier Colle, and Mario Pickavet
A General, Fault tolerant, Adaptive, Deadlock-free Routing Protocol for Network-on-chip
Presented at 11th International Workshop on Network on Chip Architectures (NoCArc 2018)
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper presents a topology-agnostic greedy protocol for network-on-chip routing. The proposed routing algorithm can tolerate any number of permanent faults, and is proven to be deadlock-free. We introduce a specialized variant of the algorithm, which is optimized for 2D mesh networks, both flat and wireless. The adaptiveness and minimality of several variants of this algorithm are analyzed through graph-based simulations.
[ { "created": "Thu, 25 Oct 2018 04:59:37 GMT", "version": "v1" } ]
2018-11-29
[ [ "Stroobant", "Pieter", "" ], [ "Abadal", "Sergi", "" ], [ "Tavernier", "Wouter", "" ], [ "Alarcón", "Eduard", "" ], [ "Colle", "Didier", "" ], [ "Pickavet", "Mario", "" ] ]
The paper presents a topology-agnostic greedy protocol for network-on-chip routing. The proposed routing algorithm can tolerate any number of permanent faults, and is proven to be deadlock-free. We introduce a specialized variant of the algorithm, which is optimized for 2D mesh networks, both flat and wireless. The adaptiveness and minimality of several variants of this algorithm are analyzed through graph-based simulations.
1708.08551
Mohammad Amin Nabian
Mohammad Amin Nabian, Hadi Meidani
Deep Learning for Accelerated Reliability Analysis of Infrastructure Networks
null
Nabian, M. A. and Meidani, H. (2018), Deep Learning for Accelerated Seismic Reliability Analysis of Transportation Networks. Computer Aided Civil and Infrastructure Engineering, 33: 443-458
10.1111/mice.12359
null
cs.CE cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Natural disasters can have catastrophic impacts on the functionality of infrastructure systems and cause severe physical and socio-economic losses. Given budget constraints, it is crucial to optimize decisions regarding mitigation, preparedness, response, and recovery practices for these systems. This requires accurate and efficient means to evaluate the infrastructure system reliability. While numerous research efforts have addressed and quantified the impact of natural disasters on infrastructure systems, typically using the Monte Carlo approach, they still suffer from high computational cost and, thus, are of limited applicability to large systems. This paper presents a deep learning framework for accelerating infrastructure system reliability analysis. In particular, two distinct deep neural network surrogates are constructed and studied: (1) A classifier surrogate which speeds up the connectivity determination of networks, and (2) An end-to-end surrogate that replaces a number of components such as roadway status realization, connectivity determination, and connectivity averaging. The proposed approach is applied to a simulation-based study of the two-terminal connectivity of a California transportation network subject to extreme probabilistic earthquake events. Numerical results highlight the effectiveness of the proposed approach in accelerating the transportation system two-terminal reliability analysis with extremely high prediction accuracy.
[ { "created": "Mon, 28 Aug 2017 22:41:11 GMT", "version": "v1" } ]
2018-06-11
[ [ "Nabian", "Mohammad Amin", "" ], [ "Meidani", "Hadi", "" ] ]
Natural disasters can have catastrophic impacts on the functionality of infrastructure systems and cause severe physical and socio-economic losses. Given budget constraints, it is crucial to optimize decisions regarding mitigation, preparedness, response, and recovery practices for these systems. This requires accurate and efficient means to evaluate the infrastructure system reliability. While numerous research efforts have addressed and quantified the impact of natural disasters on infrastructure systems, typically using the Monte Carlo approach, they still suffer from high computational cost and, thus, are of limited applicability to large systems. This paper presents a deep learning framework for accelerating infrastructure system reliability analysis. In particular, two distinct deep neural network surrogates are constructed and studied: (1) A classifier surrogate which speeds up the connectivity determination of networks, and (2) An end-to-end surrogate that replaces a number of components such as roadway status realization, connectivity determination, and connectivity averaging. The proposed approach is applied to a simulation-based study of the two-terminal connectivity of a California transportation network subject to extreme probabilistic earthquake events. Numerical results highlight the effectiveness of the proposed approach in accelerating the transportation system two-terminal reliability analysis with extremely high prediction accuracy.