Dataset columns (name, type, min-max length / class count):

  id              string, length 9-10
  submitter       string, length 1-64
  authors         string, length 4-20.7k
  title           string, length 4-246
  comments        string, length 1-523
  journal-ref     string, length 4-404
  doi             string, length 11-153
  report-no       string, length 2-254
  categories      string, length 5-98
  license         string, 9 classes
  orig_abstract   string, length 14-3.35k
  versions        list, length 1-60
  update_date     string, length 10-10
  authors_parsed  list, length 1-1.35k
  abstract        string, length 11-3.34k

Records follow in this column order, one field per line; "null" marks an empty field.
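Each record in this dump lists the fields above in order, one per line. As a hedged sketch of how such a record might be validated programmatically (field names come from the schema above; the example values are taken from the first record below, with the long abstract fields elided):

```python
# Hypothetical validation sketch for one record of this arXiv-metadata
# dataset. The field lists mirror the schema; string fields may be None
# (rendered as "null" in the dump), list fields must be lists.

STRING_FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "update_date", "abstract",
]
LIST_FIELDS = ["versions", "authors_parsed"]

def validate_record(record: dict) -> bool:
    """Return True if every schema field is present with the expected type."""
    for name in STRING_FIELDS:
        if name not in record:
            return False
        if record[name] is not None and not isinstance(record[name], str):
            return False
    for name in LIST_FIELDS:
        if not isinstance(record.get(name), list):
            return False
    return True

# Illustrative record built from the first entry below (abstracts elided).
example = {
    "id": "1805.04604",
    "submitter": "Li Dong",
    "authors": "Li Dong, Chris Quirk, Mirella Lapata",
    "title": "Confidence Modeling for Neural Semantic Parsing",
    "comments": "Accepted by ACL-18",
    "journal-ref": None,
    "doi": None,
    "report-no": None,
    "categories": "cs.CL",
    "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "orig_abstract": "...",
    "versions": [{"created": "Fri, 11 May 2018 22:09:37 GMT", "version": "v1"}],
    "update_date": "2018-05-15",
    "authors_parsed": [["Dong", "Li", ""]],
    "abstract": "...",
}

print(validate_record(example))  # True
```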
1805.04604
Li Dong
Li Dong, Chris Quirk, Mirella Lapata
Confidence Modeling for Neural Semantic Parsing
Accepted by ACL-18
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions, allowing users to interpret their model and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
[ { "created": "Fri, 11 May 2018 22:09:37 GMT", "version": "v1" } ]
2018-05-15
[ [ "Dong", "Li", "" ], [ "Quirk", "Chris", "" ], [ "Lapata", "Mirella", "" ] ]
In this work we focus on confidence modeling for neural semantic parsers which are built upon sequence-to-sequence models. We outline three major causes of uncertainty, and design various metrics to quantify these factors. These metrics are then used to estimate confidence scores that indicate whether model predictions are likely to be correct. Beyond confidence estimation, we identify which parts of the input contribute to uncertain predictions, allowing users to interpret their model and verify or refine its input. Experimental results show that our confidence model significantly outperforms a widely used method that relies on posterior probability, and improves the quality of interpretation compared to simply relying on attention scores.
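The posterior-probability baseline this abstract compares against scores a prediction by the product of the decoder's per-token probabilities. A minimal illustrative sketch (the function name and the probabilities are ours, not from the paper):

```python
import math

def sequence_confidence(token_probs):
    # Posterior-probability baseline: the model's confidence in a decoded
    # sequence is the product of its per-token probabilities.
    return math.prod(token_probs)

print(sequence_confidence([0.9, 0.8, 0.95]))  # ~0.684
```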
1903.03495
Mohamed Akrout
Mohamed Akrout, Amir-massoud Farahmand, Tory Jarmain, Latif Abid
Improving Skin Condition Classification with a Visual Symptom Checker Trained using Reinforcement Learning
Accepted for the Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2019
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a visual symptom checker that combines a pre-trained Convolutional Neural Network (CNN) with a Reinforcement Learning (RL) agent as a Question Answering (QA) model. This method increases the classification confidence and accuracy of the visual symptom checker, and decreases the average number of questions asked to narrow down the differential diagnosis. A Deep Q-Network (DQN)-based RL agent learns how to ask the patient about the presence of symptoms in order to maximize the probability of correctly identifying the underlying condition. The RL agent uses the visual information provided by the CNN in addition to the answers to the asked questions to guide the QA system. We demonstrate that the RL-based approach increases accuracy by more than 20% compared to the CNN-only approach, which uses only the visual information to predict the condition. Moreover, accuracy increases by up to 10% compared to the approach that uses the visual information provided by the CNN along with a conventional decision-tree-based QA system. We finally show that the RL-based approach not only outperforms the decision-tree-based approach, but also narrows down the diagnosis faster in terms of the average number of questions asked.
[ { "created": "Fri, 8 Mar 2019 15:24:31 GMT", "version": "v1" }, { "created": "Sat, 30 Mar 2019 16:09:27 GMT", "version": "v2" }, { "created": "Fri, 26 Jul 2019 22:45:22 GMT", "version": "v3" }, { "created": "Wed, 7 Aug 2019 23:32:01 GMT", "version": "v4" } ]
2019-08-09
[ [ "Akrout", "Mohamed", "" ], [ "Farahmand", "Amir-massoud", "" ], [ "Jarmain", "Tory", "" ], [ "Abid", "Latif", "" ] ]
We present a visual symptom checker that combines a pre-trained Convolutional Neural Network (CNN) with a Reinforcement Learning (RL) agent as a Question Answering (QA) model. This method increases the classification confidence and accuracy of the visual symptom checker, and decreases the average number of questions asked to narrow down the differential diagnosis. A Deep Q-Network (DQN)-based RL agent learns how to ask the patient about the presence of symptoms in order to maximize the probability of correctly identifying the underlying condition. The RL agent uses the visual information provided by the CNN in addition to the answers to the asked questions to guide the QA system. We demonstrate that the RL-based approach increases accuracy by more than 20% compared to the CNN-only approach, which uses only the visual information to predict the condition. Moreover, accuracy increases by up to 10% compared to the approach that uses the visual information provided by the CNN along with a conventional decision-tree-based QA system. We finally show that the RL-based approach not only outperforms the decision-tree-based approach, but also narrows down the diagnosis faster in terms of the average number of questions asked.
1807.11634
Yuhao Wen
Yuhao Wen, Xiaodan Zhu, Sudeepa Roy, Jun Yang
Interactive Summarization and Exploration of Top Aggregate Query Answers
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a system for summarization and interactive exploration of high-valued aggregate query answers, making a large set of possible answers more informative to the user. Our system outputs a set of clusters over the high-valued query answers, showing their common properties, such that the clusters are as diverse as possible to avoid repeating information, and cover a certain number of top original answers as indicated by the user. Further, the system facilitates interactive exploration of the query answers by helping the user (i) choose combinations of parameters for clustering, (ii) inspect the clusters as well as the elements they contain, and (iii) visualize how changes in parameters affect clustering. We define optimization problems, study their complexity, explore properties of the solutions by investigating the semi-lattice structure on the clusters, and propose efficient algorithms and optimizations to achieve these goals. We evaluate our techniques experimentally and discuss our prototype with a graphical user interface that facilitates this interactive exploration. A user study is conducted to evaluate the usability of our approach.
[ { "created": "Tue, 31 Jul 2018 02:31:39 GMT", "version": "v1" } ]
2018-08-01
[ [ "Wen", "Yuhao", "" ], [ "Zhu", "Xiaodan", "" ], [ "Roy", "Sudeepa", "" ], [ "Yang", "Jun", "" ] ]
We present a system for summarization and interactive exploration of high-valued aggregate query answers, making a large set of possible answers more informative to the user. Our system outputs a set of clusters over the high-valued query answers, showing their common properties, such that the clusters are as diverse as possible to avoid repeating information, and cover a certain number of top original answers as indicated by the user. Further, the system facilitates interactive exploration of the query answers by helping the user (i) choose combinations of parameters for clustering, (ii) inspect the clusters as well as the elements they contain, and (iii) visualize how changes in parameters affect clustering. We define optimization problems, study their complexity, explore properties of the solutions by investigating the semi-lattice structure on the clusters, and propose efficient algorithms and optimizations to achieve these goals. We evaluate our techniques experimentally and discuss our prototype with a graphical user interface that facilitates this interactive exploration. A user study is conducted to evaluate the usability of our approach.
2206.00322
Markus Dahlmanns
Markus Dahlmanns, Johannes Lohmöller, Jan Pennekamp, Jörn Bodenhausen, Klaus Wehrle, Martin Henze
Missed Opportunities: Measuring the Untapped TLS Support in the Industrial Internet of Things
15 pages, 6 figures
In Proceedings of the 2022 ACM Asia Conference on Computer and Communications Security (ASIA CCS '22), Association for Computing Machinery, New York, NY, USA, pages 252-266
10.1145/3488932.3497762
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ongoing trend to move industrial appliances from previously isolated networks to the Internet requires fundamental changes in security to uphold secure and safe operation. Consequently, to ensure end-to-end secure communication and authentication, (i) traditional industrial protocols, e.g., Modbus, are retrofitted with TLS support, and (ii) modern protocols, e.g., MQTT, are directly designed to use TLS. To understand whether these changes indeed lead to secure Industrial Internet of Things deployments, i.e., using TLS-based protocols, which are configured according to security best practices, we perform an Internet-wide security assessment of ten industrial protocols covering the complete IPv4 address space. Our results show that both retrofitted existing protocols and newly developed secure alternatives are barely noticeable in the wild. While we find that new protocols have a higher TLS adoption rate than traditional protocols (7.2% vs. 0.4%), the overall adoption of TLS is comparably low (6.5% of hosts). Thus, most industrial deployments (934,736 hosts) are insecurely connected to the Internet. Furthermore, we identify that 42% of hosts with TLS support (26,665 hosts) show security deficits, e.g., missing access control. Finally, we show that support in configuring systems securely, e.g., via configuration templates, is promising to strengthen security.
[ { "created": "Wed, 1 Jun 2022 08:38:28 GMT", "version": "v1" } ]
2022-06-02
[ [ "Dahlmanns", "Markus", "" ], [ "Lohmöller", "Johannes", "" ], [ "Pennekamp", "Jan", "" ], [ "Bodenhausen", "Jörn", "" ], [ "Wehrle", "Klaus", "" ], [ "Henze", "Martin", "" ] ]
The ongoing trend to move industrial appliances from previously isolated networks to the Internet requires fundamental changes in security to uphold secure and safe operation. Consequently, to ensure end-to-end secure communication and authentication, (i) traditional industrial protocols, e.g., Modbus, are retrofitted with TLS support, and (ii) modern protocols, e.g., MQTT, are directly designed to use TLS. To understand whether these changes indeed lead to secure Industrial Internet of Things deployments, i.e., using TLS-based protocols, which are configured according to security best practices, we perform an Internet-wide security assessment of ten industrial protocols covering the complete IPv4 address space. Our results show that both retrofitted existing protocols and newly developed secure alternatives are barely noticeable in the wild. While we find that new protocols have a higher TLS adoption rate than traditional protocols (7.2% vs. 0.4%), the overall adoption of TLS is comparably low (6.5% of hosts). Thus, most industrial deployments (934,736 hosts) are insecurely connected to the Internet. Furthermore, we identify that 42% of hosts with TLS support (26,665 hosts) show security deficits, e.g., missing access control. Finally, we show that support in configuring systems securely, e.g., via configuration templates, is promising to strengthen security.
2309.10987
Xingting Yao
Xingting Yao, Qinghao Hu, Tielong Liu, Zitao Mo, Zeyu Zhu, Zhengyang Zhuge, Jian Cheng
SpikingNeRF: Making Bio-inspired Neural Networks See through the Real World
null
null
null
null
cs.NE cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiking neural networks (SNNs) have thrived on numerous tasks, leveraging their promising energy efficiency and their potential as biologically plausible intelligence. Meanwhile, Neural Radiance Fields (NeRF) render high-quality 3D scenes at massive energy cost, and few works have explored energy-saving solutions with a bio-inspired approach. In this paper, we propose SpikingNeRF, which aligns the radiance ray with the temporal dimension of the SNN to naturally accommodate the SNN to the reconstruction of radiance fields. The computation thus becomes spike-based and multiplication-free, reducing energy consumption. In SpikingNeRF, each sampled point on the ray is matched to a particular time step and represented in a hybrid manner in which the voxel grids are also maintained. Based on the voxel grids, each sampled point is either masked or kept for better training and inference; however, this operation also incurs irregular temporal lengths. We propose a temporal padding strategy to handle the masked samples and maintain regular temporal lengths, i.e., regular tensors, and a temporal condensing strategy to form a denser data structure for hardware-friendly computation. Extensive experiments on various datasets demonstrate that our method reduces energy consumption by 70.79% on average and obtains synthesis quality comparable to the ANN baseline.
[ { "created": "Wed, 20 Sep 2023 01:04:57 GMT", "version": "v1" }, { "created": "Sat, 28 Oct 2023 15:51:44 GMT", "version": "v2" }, { "created": "Mon, 13 Nov 2023 09:35:24 GMT", "version": "v3" } ]
2023-11-14
[ [ "Yao", "Xingting", "" ], [ "Hu", "Qinghao", "" ], [ "Liu", "Tielong", "" ], [ "Mo", "Zitao", "" ], [ "Zhu", "Zeyu", "" ], [ "Zhuge", "Zhengyang", "" ], [ "Cheng", "Jian", "" ] ]
Spiking neural networks (SNNs) have thrived on numerous tasks, leveraging their promising energy efficiency and their potential as biologically plausible intelligence. Meanwhile, Neural Radiance Fields (NeRF) render high-quality 3D scenes at massive energy cost, and few works have explored energy-saving solutions with a bio-inspired approach. In this paper, we propose SpikingNeRF, which aligns the radiance ray with the temporal dimension of the SNN to naturally accommodate the SNN to the reconstruction of radiance fields. The computation thus becomes spike-based and multiplication-free, reducing energy consumption. In SpikingNeRF, each sampled point on the ray is matched to a particular time step and represented in a hybrid manner in which the voxel grids are also maintained. Based on the voxel grids, each sampled point is either masked or kept for better training and inference; however, this operation also incurs irregular temporal lengths. We propose a temporal padding strategy to handle the masked samples and maintain regular temporal lengths, i.e., regular tensors, and a temporal condensing strategy to form a denser data structure for hardware-friendly computation. Extensive experiments on various datasets demonstrate that our method reduces energy consumption by 70.79% on average and obtains synthesis quality comparable to the ANN baseline.
2408.03302
Siyuan Fan
Siyuan Fan, Bo Du, Xiantao Cai, Bo Peng, Longling Sun
TextIM: Part-aware Interactive Motion Synthesis from Text
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we propose TextIM, a novel framework for synthesizing TEXT-driven human Interactive Motions, with a focus on the precise alignment of part-level semantics. Existing methods often overlook the critical roles of interactive body parts and fail to adequately capture and align part-level semantics, resulting in inaccurate and even erroneous movement outcomes. To address these issues, TextIM utilizes a decoupled conditional diffusion framework to enhance the detailed alignment between interactive movements and the corresponding semantic intents from textual descriptions. Our approach leverages large language models, functioning as a human brain, to identify interacting human body parts and to comprehend interaction semantics in order to generate complicated and subtle interactive motions. Guided by the refined movements of the interacting parts, TextIM further extends these movements into a coherent whole-body motion. We design a spatial coherence module to complement the movements of the entire body while maintaining consistency and harmony across body parts, using a part graph convolutional network. For training and evaluation, we carefully selected and re-labeled interactive motions from HUMANML3D to develop a specialized dataset. Experimental results demonstrate that TextIM produces semantically accurate human interactive motions, significantly enhancing the realism and applicability of synthesized interactive motions in diverse scenarios, even including interactions with deformable and dynamically changing objects.
[ { "created": "Tue, 6 Aug 2024 17:08:05 GMT", "version": "v1" } ]
2024-08-07
[ [ "Fan", "Siyuan", "" ], [ "Du", "Bo", "" ], [ "Cai", "Xiantao", "" ], [ "Peng", "Bo", "" ], [ "Sun", "Longling", "" ] ]
In this work, we propose TextIM, a novel framework for synthesizing TEXT-driven human Interactive Motions, with a focus on the precise alignment of part-level semantics. Existing methods often overlook the critical roles of interactive body parts and fail to adequately capture and align part-level semantics, resulting in inaccurate and even erroneous movement outcomes. To address these issues, TextIM utilizes a decoupled conditional diffusion framework to enhance the detailed alignment between interactive movements and the corresponding semantic intents from textual descriptions. Our approach leverages large language models, functioning as a human brain, to identify interacting human body parts and to comprehend interaction semantics in order to generate complicated and subtle interactive motions. Guided by the refined movements of the interacting parts, TextIM further extends these movements into a coherent whole-body motion. We design a spatial coherence module to complement the movements of the entire body while maintaining consistency and harmony across body parts, using a part graph convolutional network. For training and evaluation, we carefully selected and re-labeled interactive motions from HUMANML3D to develop a specialized dataset. Experimental results demonstrate that TextIM produces semantically accurate human interactive motions, significantly enhancing the realism and applicability of synthesized interactive motions in diverse scenarios, even including interactions with deformable and dynamically changing objects.
2005.05228
Natig Tofigzade
Jochen Koenemann, Kanstantsin Pashkovich, Natig Tofigzade
Approximating Stable Matchings with Ties of Bounded Size
null
null
null
null
cs.GT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding a stable matching is one of the central problems in algorithmic game theory. If participants are allowed to have ties and incomplete preferences, computing a stable matching of maximum cardinality is known to be NP-hard. In this paper we present a $(3L-2)/(2L-1)$-approximation algorithm for the stable matching problem with ties of size at most $L$ and incomplete lists. Our result matches the known lower bound on the integrality gap for the associated LP formulation.
[ { "created": "Mon, 11 May 2020 16:19:42 GMT", "version": "v1" }, { "created": "Tue, 14 Jul 2020 15:07:26 GMT", "version": "v2" } ]
2020-07-15
[ [ "Koenemann", "Jochen", "" ], [ "Pashkovich", "Kanstantsin", "" ], [ "Tofigzade", "Natig", "" ] ]
Finding a stable matching is one of the central problems in algorithmic game theory. If participants are allowed to have ties and incomplete preferences, computing a stable matching of maximum cardinality is known to be NP-hard. In this paper we present a $(3L-2)/(2L-1)$-approximation algorithm for the stable matching problem with ties of size at most $L$ and incomplete lists. Our result matches the known lower bound on the integrality gap for the associated LP formulation.
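As a quick sanity check on the guarantee in the abstract above, the ratio $(3L-2)/(2L-1)$ can be tabulated (a small illustrative script of ours, not from the paper): it is 1 for strict preferences (L = 1), 4/3 for ties of size at most 2, and approaches 3/2 as L grows.

```python
from fractions import Fraction

def approx_ratio(L: int) -> Fraction:
    # Approximation ratio (3L-2)/(2L-1) for ties of size at most L.
    return Fraction(3 * L - 2, 2 * L - 1)

print(approx_ratio(1))  # 1
print(approx_ratio(2))  # 4/3
print(float(approx_ratio(1000)))  # close to 3/2
```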
1310.5111
Shibamouli Lahiri
Shibamouli Lahiri
Complexity of Word Collocation Networks: A Preliminary Structural Analysis
10 pages
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we explore complex network properties of word collocation networks (Ferret, 2002) from four different genres. Each document of a particular genre was converted into a network of words with word collocations as edges. We analyzed graphically and statistically how the global properties of these networks varied across different genres, and among different network types within the same genre. Our results indicate that the distributions of network properties are visually similar but statistically apart across different genres, and interesting variations emerge when we consider different network types within a single genre. We further investigate how the global properties change as we add more and more collocation edges to the graph of one particular genre, and observe that except for the number of vertices and the size of the largest connected component, network properties change in phases, via jumps and drops.
[ { "created": "Fri, 18 Oct 2013 17:56:28 GMT", "version": "v1" }, { "created": "Thu, 24 Oct 2013 17:22:56 GMT", "version": "v2" }, { "created": "Sat, 26 Oct 2013 17:38:22 GMT", "version": "v3" }, { "created": "Thu, 14 Nov 2013 01:47:25 GMT", "version": "v4" }, { "created": "Thu, 6 Mar 2014 18:47:40 GMT", "version": "v5" } ]
2014-03-07
[ [ "Lahiri", "Shibamouli", "" ] ]
In this paper, we explore complex network properties of word collocation networks (Ferret, 2002) from four different genres. Each document of a particular genre was converted into a network of words with word collocations as edges. We analyzed graphically and statistically how the global properties of these networks varied across different genres, and among different network types within the same genre. Our results indicate that the distributions of network properties are visually similar but statistically apart across different genres, and interesting variations emerge when we consider different network types within a single genre. We further investigate how the global properties change as we add more and more collocation edges to the graph of one particular genre, and observe that except for the number of vertices and the size of the largest connected component, network properties change in phases, via jumps and drops.
2203.17167
Jayson Lynch
Jeffrey Bosboom, Josh Brunner, Michael Coulombe, Erik D. Demaine, Dylan H. Hendrickson, Jayson Lynch, Elle Najt
The Legend of Zelda: The Complexity of Mechanics
Full version of the paper appearing at TJCDCGGG 2021. 27 pages, 14 figures
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze some of the many game mechanics available to Link in the classic Legend of Zelda series of video games. In each case, we prove that the generalized game with that mechanic is polynomial, NP-complete, NP-hard and in PSPACE, or PSPACE-complete. In the process we give an overview of many of the hardness proof techniques developed for video games over the past decade: the motion-planning-through-gadgets framework, the planar doors framework, the doors-and-buttons framework, the "Nintendo" platform game / SAT framework, and the collectible tokens and toll roads / Hamiltonicity framework.
[ { "created": "Thu, 31 Mar 2022 16:42:38 GMT", "version": "v1" } ]
2022-04-01
[ [ "Bosboom", "Jeffrey", "" ], [ "Brunner", "Josh", "" ], [ "Coulombe", "Michael", "" ], [ "Demaine", "Erik D.", "" ], [ "Hendrickson", "Dylan H.", "" ], [ "Lynch", "Jayson", "" ], [ "Najt", "Elle", "" ] ]
We analyze some of the many game mechanics available to Link in the classic Legend of Zelda series of video games. In each case, we prove that the generalized game with that mechanic is polynomial, NP-complete, NP-hard and in PSPACE, or PSPACE-complete. In the process we give an overview of many of the hardness proof techniques developed for video games over the past decade: the motion-planning-through-gadgets framework, the planar doors framework, the doors-and-buttons framework, the "Nintendo" platform game / SAT framework, and the collectible tokens and toll roads / Hamiltonicity framework.
cs/9301114
Maggie McLoughlin
Donald E. Knuth
Theory and practice
Abstract added by Greg Kuperberg
Theoretical Comp. Sci. 90 (1991), 1--15
null
Knuth migration 11/2004
cs.GL
null
The author argues to Silicon Valley that the most important and powerful part of computer science is work that is simultaneously theoretical and practical. He particularly considers the intersection of the theory of algorithms and practical software development. He combines examples from the development of the TeX typesetting system with clever jokes, criticisms, and encouragements.
[ { "created": "Fri, 1 Nov 1991 00:00:00 GMT", "version": "v1" } ]
2008-02-03
[ [ "Knuth", "Donald E.", "" ] ]
The author argues to Silicon Valley that the most important and powerful part of computer science is work that is simultaneously theoretical and practical. He particularly considers the intersection of the theory of algorithms and practical software development. He combines examples from the development of the TeX typesetting system with clever jokes, criticisms, and encouragements.
2104.04515
Xi Ye
Xi Ye, Rohan Nair, Greg Durrett
Connecting Attributions and QA Model Behavior on Realistic Counterfactuals
EMNLP 2021
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When a model attribution technique highlights a particular part of the input, a user might understand this highlight as making a statement about counterfactuals (Miller, 2019): if that part of the input were to change, the model's prediction might change as well. This paper investigates how well different attribution techniques align with this assumption on realistic counterfactuals in the case of reading comprehension (RC). RC is a particularly challenging test case, as token-level attributions that have been extensively studied in other NLP tasks such as sentiment analysis are less suitable to represent the reasoning that RC models perform. We construct counterfactual sets for three different RC settings, and through heuristics that can connect attribution methods' outputs to high-level model behavior, we can evaluate how useful different attribution methods and even different formats are for understanding counterfactuals. We find that pairwise attributions are better suited to RC than token-level attributions across these different RC settings, with our best performance coming from a modification that we propose to an existing pairwise attribution method.
[ { "created": "Fri, 9 Apr 2021 17:55:21 GMT", "version": "v1" }, { "created": "Tue, 14 Sep 2021 17:59:55 GMT", "version": "v2" } ]
2021-09-15
[ [ "Ye", "Xi", "" ], [ "Nair", "Rohan", "" ], [ "Durrett", "Greg", "" ] ]
When a model attribution technique highlights a particular part of the input, a user might understand this highlight as making a statement about counterfactuals (Miller, 2019): if that part of the input were to change, the model's prediction might change as well. This paper investigates how well different attribution techniques align with this assumption on realistic counterfactuals in the case of reading comprehension (RC). RC is a particularly challenging test case, as token-level attributions that have been extensively studied in other NLP tasks such as sentiment analysis are less suitable to represent the reasoning that RC models perform. We construct counterfactual sets for three different RC settings, and through heuristics that can connect attribution methods' outputs to high-level model behavior, we can evaluate how useful different attribution methods and even different formats are for understanding counterfactuals. We find that pairwise attributions are better suited to RC than token-level attributions across these different RC settings, with our best performance coming from a modification that we propose to an existing pairwise attribution method.
1404.3002
B. G. Kodge
B. G. Kodge, P. S. Hiremath
Elevation Contour Analysis and Water body Extraction for Finding Water Scarcity Locations using DEM
Due to some unprojected spatial data and wrong contour outputs, the paper is withdrawn
World Journal of Science and Technology 2011, 1(12): 29-34
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The present study aimed to create new methods for the extraction and analysis of land elevation contour lines and the automatic extraction of water bodies (river basins and lakes) from digital elevation models (DEM) of a test area, and to extract villages that fall within critical water-scarcity regions for agriculture and drinking water with respect to their elevation data and available natural water resources.
[ { "created": "Fri, 11 Apr 2014 04:59:58 GMT", "version": "v1" }, { "created": "Wed, 6 Dec 2017 04:46:51 GMT", "version": "v2" } ]
2017-12-07
[ [ "Kodge", "B. G.", "" ], [ "Hiremath", "P. S.", "" ] ]
The present study aimed to create new methods for the extraction and analysis of land elevation contour lines and the automatic extraction of water bodies (river basins and lakes) from digital elevation models (DEM) of a test area, and to extract villages that fall within critical water-scarcity regions for agriculture and drinking water with respect to their elevation data and available natural water resources.
2306.06061
Phomolo Teffo
Teffo Phomolo Nicrocia, Owolawi Pius Adewale, Pholo Moanda Diana
Clustering an African Hairstyle Dataset using PCA and K-means
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
The adoption of digital transformation has not been expressed in building an African face-shape classifier. African women rely on beauty-standard recommendations, personal preference, or the newest trends in hairstyles to decide on the appropriate hairstyle for them. In this paper, an approach is presented that uses K-means clustering to classify images of African women. In order to identify potential facial clusters, Haarcascade is used for feature-based training, and K-means clustering is applied for image classification.
[ { "created": "Thu, 25 May 2023 14:13:29 GMT", "version": "v1" } ]
2023-06-12
[ [ "Nicrocia", "Teffo Phomolo", "" ], [ "Adewale", "Owolawi Pius", "" ], [ "Diana", "Pholo Moanda", "" ] ]
The adoption of digital transformation has not been expressed in building an African face-shape classifier. African women rely on beauty-standard recommendations, personal preference, or the newest trends in hairstyles to decide on the appropriate hairstyle for them. In this paper, an approach is presented that uses K-means clustering to classify images of African women. In order to identify potential facial clusters, Haarcascade is used for feature-based training, and K-means clustering is applied for image classification.
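The PCA-plus-K-means pipeline named in this record's title can be sketched on synthetic data (the hairstyle images and the Haar-cascade detection step are unavailable here, so the vectors below are stand-ins, and the pure-NumPy Lloyd's iteration is our minimal substitute for a library k-means):

```python
import numpy as np

# Stand-in data: two well-separated groups of 50-dim "image" vectors.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 0.1, size=(40, 50)),
    rng.normal(1.0, 0.1, size=(40, 50)),
])

# PCA via SVD of the centered data: project onto the top-2 components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T  # shape (80, 2)

def kmeans2(Z, iters=20):
    # Plain Lloyd's k-means with k=2; deterministic init from the two
    # ends of the array for reproducibility in this sketch.
    centers = np.stack([Z[0], Z[-1]])
    for _ in range(iters):
        labels = np.argmin(((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        centers = np.array([Z[labels == j].mean(axis=0) for j in range(2)])
    return labels

labels = kmeans2(Z)
# The two synthetic groups should land in two distinct clusters.
separated = bool((labels[:40] == labels[0]).all()
                 and (labels[40:] == labels[-1]).all()
                 and labels[0] != labels[-1])
print(separated)  # True
```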
2302.12407
Ruishi Yu
Chao Hu, Ruishi Yu, Binqi Zeng, Yu Zhan, Ying Fu, Quan Zhang, Rongkai Liu and Heyuan Shi
HyperAttack: Multi-Gradient-Guided White-box Adversarial Structure Attack of Hypergraph Neural Networks
10+2 pages, 9 figures
null
null
null
cs.LG cs.AI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hypergraph neural networks (HGNNs) have shown superior performance in various deep learning tasks, leveraging their high-order representation ability to formulate complex correlations among data by connecting two or more nodes through hyperedge modeling. Despite the well-studied adversarial attacks on Graph Neural Networks (GNNs), there are few studies of adversarial attacks against HGNNs, which poses a threat to the safety of HGNN applications. In this paper, we introduce HyperAttack, the first white-box adversarial attack framework against hypergraph neural networks. HyperAttack conducts a white-box structure attack by perturbing hyperedge link status towards the target node with the guidance of both gradients and integrated gradients. We evaluate HyperAttack on the widely used Cora and PubMed datasets and on three hypergraph neural networks with typical hypergraph modeling techniques. Compared to state-of-the-art white-box structural attack methods for GNNs, HyperAttack achieves a 10-20x improvement in time efficiency while also increasing attack success rates by 1.3%-3.7%. The results show that HyperAttack can achieve efficient adversarial attacks that balance effectiveness and time costs.
[ { "created": "Fri, 24 Feb 2023 02:15:42 GMT", "version": "v1" } ]
2023-02-27
[ [ "Hu", "Chao", "" ], [ "Yu", "Ruishi", "" ], [ "Zeng", "Binqi", "" ], [ "Zhan", "Yu", "" ], [ "Fu", "Ying", "" ], [ "Zhang", "Quan", "" ], [ "Liu", "Rongkai", "" ], [ "Shi", "Heyuan", "" ] ]
Hypergraph neural networks (HGNN) have shown superior performance in various deep learning tasks, leveraging the high-order representation ability to formulate complex correlations among data by connecting two or more nodes through hyperedge modeling. Despite the well-studied adversarial attacks on Graph Neural Networks (GNN), there are few studies on adversarial attacks against HGNN, which poses a threat to the safety of HGNN applications. In this paper, we introduce HyperAttack, the first white-box adversarial attack framework against hypergraph neural networks. HyperAttack conducts a white-box structure attack by perturbing hyperedge link status towards the target node with the guidance of both gradients and integrated gradients. We evaluate HyperAttack on the widely-used Cora and PubMed datasets and three hypergraph neural networks with typical hypergraph modeling techniques. Compared to state-of-the-art white-box structural attack methods for GNN, HyperAttack achieves a 10-20X improvement in time efficiency while also increasing attack success rates by 1.3%-3.7%. The results show that HyperAttack can achieve efficient adversarial attacks that balance effectiveness and time costs.
2406.04963
Qitian Wu
Qitian Wu, Fan Nie, Chenxiao Yang, Junchi Yan
Learning Divergence Fields for Shift-Robust Graph Representations
Accepted to ICML 2024. Source codes at https://github.com/fannie1208/GLIND
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Real-world data generation often involves certain geometries (e.g., graphs) that induce instance-level interdependence. This characteristic makes the generalization of learning models more difficult due to the intricate interdependent patterns that impact data-generative distributions and can vary from training to testing. In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging generalization problem with interdependent data. We generalize the diffusion equation with stochastic diffusivity at each time step, which aims to capture the multi-faceted information flows among interdependent data. Furthermore, we derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive across domains. Regarding practical implementation, we introduce three model instantiations that can be considered as the generalized versions of GCN, GAT, and Transformers, respectively, which possess advanced robustness against distribution shifts. We demonstrate their promising efficacy for out-of-distribution generalization on diverse real-world datasets.
[ { "created": "Fri, 7 Jun 2024 14:29:21 GMT", "version": "v1" } ]
2024-06-10
[ [ "Wu", "Qitian", "" ], [ "Nie", "Fan", "" ], [ "Yang", "Chenxiao", "" ], [ "Yan", "Junchi", "" ] ]
Real-world data generation often involves certain geometries (e.g., graphs) that induce instance-level interdependence. This characteristic makes the generalization of learning models more difficult due to the intricate interdependent patterns that impact data-generative distributions and can vary from training to testing. In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging generalization problem with interdependent data. We generalize the diffusion equation with stochastic diffusivity at each time step, which aims to capture the multi-faceted information flows among interdependent data. Furthermore, we derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive across domains. Regarding practical implementation, we introduce three model instantiations that can be considered as the generalized versions of GCN, GAT, and Transformers, respectively, which possess advanced robustness against distribution shifts. We demonstrate their promising efficacy for out-of-distribution generalization on diverse real-world datasets.
0910.4084
Andrew Gillette
Chandrajit Bajaj, Andrew Gillette, Samrat Goswami, Bong June Kwon, and Jose Rivera
Complementary Space for Enhanced Uncertainty and Dynamics Visualization
12 pages. To appear as a chapter in "Topological Data Analysis and Visualization: Theory, Algorithms and Applications", Pascucci, Tricoche, Hagen, Tierny, Eds., Springer-Verlag, in publication, 2009
null
null
null
cs.CG cs.GR
http://creativecommons.org/licenses/by-nc-sa/3.0/
Given a computer model of a physical object, it is often quite difficult to visualize and quantify any global effects on the shape representation caused by local uncertainty and local errors in the data. This problem is further amplified when dealing with hierarchical representations containing varying levels of detail and/or shapes undergoing dynamic deformations. In this paper, we compute, quantify and visualize the complementary topological and geometrical features of 3D shape models, namely, the tunnels, pockets and internal voids of the object. We find that this approach sheds a unique light on how a model is affected by local uncertainty, errors or modifications and show how the presence or absence of complementary shape features can be essential to an object's structural form and function.
[ { "created": "Tue, 20 Oct 2009 22:00:54 GMT", "version": "v1" } ]
2009-10-22
[ [ "Bajaj", "Chandrajit", "" ], [ "Gillette", "Andrew", "" ], [ "Goswami", "Samrat", "" ], [ "Kwon", "Bong June", "" ], [ "Rivera", "Jose", "" ] ]
Given a computer model of a physical object, it is often quite difficult to visualize and quantify any global effects on the shape representation caused by local uncertainty and local errors in the data. This problem is further amplified when dealing with hierarchical representations containing varying levels of detail and/or shapes undergoing dynamic deformations. In this paper, we compute, quantify and visualize the complementary topological and geometrical features of 3D shape models, namely, the tunnels, pockets and internal voids of the object. We find that this approach sheds a unique light on how a model is affected by local uncertainty, errors or modifications and show how the presence or absence of complementary shape features can be essential to an object's structural form and function.
2306.03832
Jiarui Gan
Jiarui Gan, Rupak Majumdar, Debmalya Mandal, Goran Radanovic
Sequential Principal-Agent Problems with Communication: Efficient Computation and Learning
null
null
null
null
cs.GT cs.LG cs.MA
http://creativecommons.org/licenses/by/4.0/
We study a sequential decision making problem between a principal and an agent with incomplete information on both sides. In this model, the principal and the agent interact in a stochastic environment, and each is privy to observations about the state not available to the other. The principal has the power of commitment, both to elicit information from the agent and to provide signals about her own information. The principal and the agent communicate their signals to each other, and select their actions independently based on this communication. Each player receives a payoff based on the state and their joint actions, and the environment moves to a new state. The interaction continues over a finite time horizon, and both players act to optimize their own total payoffs over the horizon. Our model encompasses as special cases stochastic games of incomplete information and POMDPs, as well as sequential Bayesian persuasion and mechanism design problems. We study both computation of optimal policies and learning in our setting. While the general problems are computationally intractable, we study algorithmic solutions under a conditional independence assumption on the underlying state-observation distributions. We present a polynomial-time algorithm to compute the principal's optimal policy up to an additive approximation. Additionally, we show an efficient learning algorithm in the case where the transition probabilities are not known beforehand. The algorithm guarantees sublinear regret for both players.
[ { "created": "Tue, 6 Jun 2023 16:20:44 GMT", "version": "v1" }, { "created": "Sun, 17 Dec 2023 13:34:46 GMT", "version": "v2" } ]
2023-12-19
[ [ "Gan", "Jiarui", "" ], [ "Majumdar", "Rupak", "" ], [ "Mandal", "Debmalya", "" ], [ "Radanovic", "Goran", "" ] ]
We study a sequential decision making problem between a principal and an agent with incomplete information on both sides. In this model, the principal and the agent interact in a stochastic environment, and each is privy to observations about the state not available to the other. The principal has the power of commitment, both to elicit information from the agent and to provide signals about her own information. The principal and the agent communicate their signals to each other, and select their actions independently based on this communication. Each player receives a payoff based on the state and their joint actions, and the environment moves to a new state. The interaction continues over a finite time horizon, and both players act to optimize their own total payoffs over the horizon. Our model encompasses as special cases stochastic games of incomplete information and POMDPs, as well as sequential Bayesian persuasion and mechanism design problems. We study both computation of optimal policies and learning in our setting. While the general problems are computationally intractable, we study algorithmic solutions under a conditional independence assumption on the underlying state-observation distributions. We present a polynomial-time algorithm to compute the principal's optimal policy up to an additive approximation. Additionally, we show an efficient learning algorithm in the case where the transition probabilities are not known beforehand. The algorithm guarantees sublinear regret for both players.
2405.16335
Tom Jurgenson
Tom Jurgenson, Matan Sudry, Gal Avineri, Aviv Tamar
RoboArm-NMP: a Learning Environment for Neural Motion Planning
null
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present RoboArm-NMP, a learning and evaluation environment that allows simple and thorough evaluations of Neural Motion Planning (NMP) algorithms, focused on robotic manipulators. Our Python-based environment provides baseline implementations for learning control policies (either supervised or reinforcement learning based), a simulator based on PyBullet, data of solved instances using a classical motion planning solver, various representation learning methods for encoding the obstacles, and a clean interface between the learning and planning frameworks. Using RoboArm-NMP, we compare several prominent NMP design points, and demonstrate that the best methods mostly succeed in generalizing to unseen goals in a scene with fixed obstacles, but have difficulty in generalizing to unseen obstacle configurations, suggesting focus points for future research.
[ { "created": "Sat, 25 May 2024 19:28:11 GMT", "version": "v1" } ]
2024-05-28
[ [ "Jurgenson", "Tom", "" ], [ "Sudry", "Matan", "" ], [ "Avineri", "Gal", "" ], [ "Tamar", "Aviv", "" ] ]
We present RoboArm-NMP, a learning and evaluation environment that allows simple and thorough evaluations of Neural Motion Planning (NMP) algorithms, focused on robotic manipulators. Our Python-based environment provides baseline implementations for learning control policies (either supervised or reinforcement learning based), a simulator based on PyBullet, data of solved instances using a classical motion planning solver, various representation learning methods for encoding the obstacles, and a clean interface between the learning and planning frameworks. Using RoboArm-NMP, we compare several prominent NMP design points, and demonstrate that the best methods mostly succeed in generalizing to unseen goals in a scene with fixed obstacles, but have difficulty in generalizing to unseen obstacle configurations, suggesting focus points for future research.
cs/9605101
null
G. I. Webb
Further Experimental Evidence against the Utility of Occam's Razor
See http://www.jair.org/ for an online appendix and other files accompanying this article
Journal of Artificial Intelligence Research, Vol 4, (1996), 397-417
null
null
cs.AI
null
This paper presents new experimental evidence against the utility of Occam's razor. A systematic procedure is presented for post-processing decision trees produced by C4.5. This procedure was derived by rejecting Occam's razor and instead attending to the assumption that similar objects are likely to belong to the same class. It increases a decision tree's complexity without altering the performance of that tree on the training data from which it is inferred. The resulting more complex decision trees are demonstrated to have, on average, for a variety of common learning tasks, higher predictive accuracy than the less complex original decision trees. This result raises considerable doubt about the utility of Occam's razor as it is commonly applied in modern machine learning.
[ { "created": "Wed, 1 May 1996 00:00:00 GMT", "version": "v1" } ]
2008-02-03
[ [ "Webb", "G. I.", "" ] ]
This paper presents new experimental evidence against the utility of Occam's razor. A systematic procedure is presented for post-processing decision trees produced by C4.5. This procedure was derived by rejecting Occam's razor and instead attending to the assumption that similar objects are likely to belong to the same class. It increases a decision tree's complexity without altering the performance of that tree on the training data from which it is inferred. The resulting more complex decision trees are demonstrated to have, on average, for a variety of common learning tasks, higher predictive accuracy than the less complex original decision trees. This result raises considerable doubt about the utility of Occam's razor as it is commonly applied in modern machine learning.
1407.2077
Kleanthis Thramboulidis
Kleanthis Thramboulidis
A Cyber-Physical System-based Approach for Industrial Automation Systems
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Industrial automation systems (IASs) are commonly developed using the languages defined by the IEC 61131 standard and are executed on PLCs. In this paper, a system-based approach for the development of IASs is adopted. A framework is described to refine the UML model of the software part, which is extracted from the SysML system model, and get the implementation code. Two implementation alternatives are considered to exploit PLCs but also the recent deluge of embedded boards in the market. For PLC targets, the new version of IEC 61131 that supports Object-Orientation is adopted, while Java is used for embedded boards. The case study was developed as a lab exercise for teaching the various technologies that address challenges in the domain of cyber-physical systems where Internet of Things (IoT) would be the glue regarding their cyber interfaces.
[ { "created": "Tue, 8 Jul 2014 13:34:27 GMT", "version": "v1" } ]
2014-07-09
[ [ "Thramboulidis", "Kleanthis", "" ] ]
Industrial automation systems (IASs) are commonly developed using the languages defined by the IEC 61131 standard and are executed on PLCs. In this paper, a system-based approach for the development of IASs is adopted. A framework is described to refine the UML model of the software part, which is extracted from the SysML system model, and get the implementation code. Two implementation alternatives are considered to exploit PLCs but also the recent deluge of embedded boards in the market. For PLC targets, the new version of IEC 61131 that supports Object-Orientation is adopted, while Java is used for embedded boards. The case study was developed as a lab exercise for teaching the various technologies that address challenges in the domain of cyber-physical systems where Internet of Things (IoT) would be the glue regarding their cyber interfaces.
2401.16791
Chen Liang
Dachi Chen, Weitian Ding, Chen Liang, Chang Xu, Junwei Zhang, Majd Sakr
Accelerated Cloud for Artificial Intelligence (ACAI)
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Training an effective machine learning (ML) model is an iterative process that requires effort in multiple dimensions. Vertically, a single pipeline typically includes an initial ETL (Extract, Transform, Load) of raw datasets, a model training stage, and an evaluation stage where the practitioners obtain statistics of the model performance. Horizontally, many such pipelines may be required to find the best model within a search space of model configurations. Many practitioners resort to maintaining logs manually and writing simple glue code to automate the workflow. However, carrying out this process on the cloud is not a trivial task in terms of resource provisioning, data management, and bookkeeping of job histories to make sure the results are reproducible. We propose an end-to-end cloud-based machine learning platform, Accelerated Cloud for AI (ACAI), to help improve the productivity of ML practitioners. ACAI achieves this goal by enabling cloud-based storage of indexed, labeled, and searchable data, as well as automatic resource provisioning, job scheduling, and experiment tracking. Specifically, ACAI provides practitioners (1) a data lake for storing versioned datasets and their corresponding metadata, and (2) an execution engine for executing ML jobs on the cloud with automatic resource provisioning (auto-provision), logging and provenance tracking. To evaluate ACAI, we test the efficacy of our auto-provisioner on the MNIST handwritten digit classification task, and we study the usability of our system using experiments and interviews. We show that our auto-provisioner produces a 1.7x speed-up and 39% cost reduction, and our system reduces experiment time for ML scientists by 20% on typical ML use cases.
[ { "created": "Tue, 30 Jan 2024 07:09:48 GMT", "version": "v1" } ]
2024-01-31
[ [ "Chen", "Dachi", "" ], [ "Ding", "Weitian", "" ], [ "Liang", "Chen", "" ], [ "Xu", "Chang", "" ], [ "Zhang", "Junwei", "" ], [ "Sakr", "Majd", "" ] ]
Training an effective machine learning (ML) model is an iterative process that requires effort in multiple dimensions. Vertically, a single pipeline typically includes an initial ETL (Extract, Transform, Load) of raw datasets, a model training stage, and an evaluation stage where the practitioners obtain statistics of the model performance. Horizontally, many such pipelines may be required to find the best model within a search space of model configurations. Many practitioners resort to maintaining logs manually and writing simple glue code to automate the workflow. However, carrying out this process on the cloud is not a trivial task in terms of resource provisioning, data management, and bookkeeping of job histories to make sure the results are reproducible. We propose an end-to-end cloud-based machine learning platform, Accelerated Cloud for AI (ACAI), to help improve the productivity of ML practitioners. ACAI achieves this goal by enabling cloud-based storage of indexed, labeled, and searchable data, as well as automatic resource provisioning, job scheduling, and experiment tracking. Specifically, ACAI provides practitioners (1) a data lake for storing versioned datasets and their corresponding metadata, and (2) an execution engine for executing ML jobs on the cloud with automatic resource provisioning (auto-provision), logging and provenance tracking. To evaluate ACAI, we test the efficacy of our auto-provisioner on the MNIST handwritten digit classification task, and we study the usability of our system using experiments and interviews. We show that our auto-provisioner produces a 1.7x speed-up and 39% cost reduction, and our system reduces experiment time for ML scientists by 20% on typical ML use cases.
2103.03571
Hong Liu
Hong Liu and Jianmin Wang and Mingsheng Long
Cycle Self-Training for Domain Adaptation
null
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Mainstream approaches for unsupervised domain adaptation (UDA) learn domain-invariant representations to narrow the domain shift. Recently, self-training has been gaining momentum in UDA, which exploits unlabeled target data by training with target pseudo-labels. However, as corroborated in this work, under distributional shift in UDA, the pseudo-labels can be unreliable in terms of their large discrepancy from target ground truth. Thereby, we propose Cycle Self-Training (CST), a principled self-training algorithm that explicitly enforces pseudo-labels to generalize across domains. CST cycles between a forward step and a reverse step until convergence. In the forward step, CST generates target pseudo-labels with a source-trained classifier. In the reverse step, CST trains a target classifier using target pseudo-labels, and then updates the shared representations to make the target classifier perform well on the source data. We introduce the Tsallis entropy as a confidence-friendly regularization to improve the quality of target pseudo-labels. We analyze CST theoretically under realistic assumptions, and provide hard cases where CST recovers target ground truth, while both invariant feature learning and vanilla self-training fail. Empirical results indicate that CST significantly improves over the state of the art on visual recognition and sentiment analysis benchmarks.
[ { "created": "Fri, 5 Mar 2021 10:04:25 GMT", "version": "v1" }, { "created": "Wed, 13 Oct 2021 05:17:28 GMT", "version": "v2" }, { "created": "Thu, 28 Oct 2021 20:41:03 GMT", "version": "v3" } ]
2021-11-01
[ [ "Liu", "Hong", "" ], [ "Wang", "Jianmin", "" ], [ "Long", "Mingsheng", "" ] ]
Mainstream approaches for unsupervised domain adaptation (UDA) learn domain-invariant representations to narrow the domain shift. Recently, self-training has been gaining momentum in UDA, which exploits unlabeled target data by training with target pseudo-labels. However, as corroborated in this work, under distributional shift in UDA, the pseudo-labels can be unreliable in terms of their large discrepancy from target ground truth. Thereby, we propose Cycle Self-Training (CST), a principled self-training algorithm that explicitly enforces pseudo-labels to generalize across domains. CST cycles between a forward step and a reverse step until convergence. In the forward step, CST generates target pseudo-labels with a source-trained classifier. In the reverse step, CST trains a target classifier using target pseudo-labels, and then updates the shared representations to make the target classifier perform well on the source data. We introduce the Tsallis entropy as a confidence-friendly regularization to improve the quality of target pseudo-labels. We analyze CST theoretically under realistic assumptions, and provide hard cases where CST recovers target ground truth, while both invariant feature learning and vanilla self-training fail. Empirical results indicate that CST significantly improves over the state of the art on visual recognition and sentiment analysis benchmarks.
2103.07356
Akira Taniguchi
Akira Taniguchi, Ayako Fukawa, Hiroshi Yamakawa
Hippocampal formation-inspired probabilistic generative model
Submitted to Neural Networks
null
null
null
cs.AI cs.NE q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In building artificial intelligence (AI) agents, referring to how brains function in real environments can accelerate development by reducing the design space. In this study, we propose a probabilistic generative model (PGM) for navigation in uncertain environments by integrating the neuroscientific knowledge of hippocampal formation (HF) and the engineering knowledge in robotics and AI, namely, simultaneous localization and mapping (SLAM). We follow the approach of brain reference architecture (BRA) (Yamakawa, 2021) to compose the PGM and outline how to verify the model. To this end, we survey and discuss the relationship between the HF findings and SLAM models. The proposed hippocampal formation-inspired probabilistic generative model (HF-PGM) is designed to be highly consistent with the anatomical structure and functions of the HF. By referencing the brain, we elaborate on the importance of integration of egocentric/allocentric information from the entorhinal cortex to the hippocampus and the use of discrete-event queues.
[ { "created": "Fri, 12 Mar 2021 15:46:52 GMT", "version": "v1" }, { "created": "Wed, 10 Nov 2021 08:19:20 GMT", "version": "v2" }, { "created": "Mon, 21 Mar 2022 08:15:09 GMT", "version": "v3" } ]
2022-03-22
[ [ "Taniguchi", "Akira", "" ], [ "Fukawa", "Ayako", "" ], [ "Yamakawa", "Hiroshi", "" ] ]
In building artificial intelligence (AI) agents, referring to how brains function in real environments can accelerate development by reducing the design space. In this study, we propose a probabilistic generative model (PGM) for navigation in uncertain environments by integrating the neuroscientific knowledge of hippocampal formation (HF) and the engineering knowledge in robotics and AI, namely, simultaneous localization and mapping (SLAM). We follow the approach of brain reference architecture (BRA) (Yamakawa, 2021) to compose the PGM and outline how to verify the model. To this end, we survey and discuss the relationship between the HF findings and SLAM models. The proposed hippocampal formation-inspired probabilistic generative model (HF-PGM) is designed to be highly consistent with the anatomical structure and functions of the HF. By referencing the brain, we elaborate on the importance of integration of egocentric/allocentric information from the entorhinal cortex to the hippocampus and the use of discrete-event queues.
2101.03391
Jules Jacobs
Jules Jacobs
Paradoxes of Probabilistic Programming
null
null
10.1145/3434339
null
cs.PL
http://creativecommons.org/licenses/by/4.0/
Probabilistic programming languages allow programmers to write down conditional probability distributions that represent statistical and machine learning models as programs that use observe statements. These programs are run by accumulating likelihood at each observe statement, and using the likelihood to steer random choices and weigh results with inference algorithms such as importance sampling or MCMC. We argue that naive likelihood accumulation does not give desirable semantics and leads to paradoxes when an observe statement is used to condition on a measure-zero event, particularly when the observe statement is executed conditionally on random data. We show that the paradoxes disappear if we explicitly model measure-zero events as a limit of positive measure events, and that we can execute these type of probabilistic programs by accumulating infinitesimal probabilities rather than probability densities. Our extension improves probabilistic programming languages as an executable notation for probability distributions by making it more well-behaved and more expressive, by allowing the programmer to be explicit about which limit is intended when conditioning on an event of measure zero.
[ { "created": "Sat, 9 Jan 2021 16:58:55 GMT", "version": "v1" }, { "created": "Fri, 22 Jan 2021 13:09:13 GMT", "version": "v2" } ]
2021-01-25
[ [ "Jacobs", "Jules", "" ] ]
Probabilistic programming languages allow programmers to write down conditional probability distributions that represent statistical and machine learning models as programs that use observe statements. These programs are run by accumulating likelihood at each observe statement, and using the likelihood to steer random choices and weigh results with inference algorithms such as importance sampling or MCMC. We argue that naive likelihood accumulation does not give desirable semantics and leads to paradoxes when an observe statement is used to condition on a measure-zero event, particularly when the observe statement is executed conditionally on random data. We show that the paradoxes disappear if we explicitly model measure-zero events as a limit of positive measure events, and that we can execute these type of probabilistic programs by accumulating infinitesimal probabilities rather than probability densities. Our extension improves probabilistic programming languages as an executable notation for probability distributions by making it more well-behaved and more expressive, by allowing the programmer to be explicit about which limit is intended when conditioning on an event of measure zero.
2403.07632
Gregory Kyro
Gregory W. Kyro, Matthew T. Martin, Eric D. Watt, Victor S. Batista
CardioGenAI: A Machine Learning-Based Framework for Re-Engineering Drugs for Reduced hERG Liability
null
null
null
null
cs.LG q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
The link between in vitro hERG ion channel inhibition and subsequent in vivo QT interval prolongation, a critical risk factor for the development of arrhythmias such as Torsade de Pointes, is so well established that in vitro hERG activity alone is often sufficient to end the development of an otherwise promising drug candidate. It is therefore of tremendous interest to develop advanced methods for identifying hERG-active compounds in the early stages of drug development, as well as for proposing redesigned compounds with reduced hERG liability and preserved on-target potency. In this work, we present CardioGenAI, a machine learning-based framework for re-engineering both developmental and commercially available drugs for reduced hERG activity while preserving their pharmacological activity. The framework incorporates novel state-of-the-art discriminative models for predicting hERG channel activity, as well as activity against the voltage-gated NaV1.5 and CaV1.2 channels due to their potential implications in modulating the arrhythmogenic potential induced by hERG channel blockade. We applied the complete framework to pimozide, an FDA-approved antipsychotic agent that demonstrates high affinity to the hERG channel, and generated 100 refined candidates. Remarkably, among the candidates is fluspirilene, a compound which is of the same class of drugs (diphenylmethanes) as pimozide and therefore has similar pharmacological activity, yet exhibits over 700-fold weaker binding to hERG. We envision that this method can effectively be applied to developmental compounds exhibiting hERG liabilities to provide a means of rescuing drug development programs that have stalled due to hERG-related safety concerns. We have made all of our software open-source to facilitate integration of the CardioGenAI framework for molecular hypothesis generation into drug discovery workflows.
[ { "created": "Tue, 12 Mar 2024 13:12:24 GMT", "version": "v1" }, { "created": "Fri, 10 May 2024 15:19:22 GMT", "version": "v2" }, { "created": "Tue, 6 Aug 2024 22:37:21 GMT", "version": "v3" } ]
2024-08-08
[ [ "Kyro", "Gregory W.", "" ], [ "Martin", "Matthew T.", "" ], [ "Watt", "Eric D.", "" ], [ "Batista", "Victor S.", "" ] ]
The link between in vitro hERG ion channel inhibition and subsequent in vivo QT interval prolongation, a critical risk factor for the development of arrhythmias such as Torsade de Pointes, is so well established that in vitro hERG activity alone is often sufficient to end the development of an otherwise promising drug candidate. It is therefore of tremendous interest to develop advanced methods for identifying hERG-active compounds in the early stages of drug development, as well as for proposing redesigned compounds with reduced hERG liability and preserved on-target potency. In this work, we present CardioGenAI, a machine learning-based framework for re-engineering both developmental and commercially available drugs for reduced hERG activity while preserving their pharmacological activity. The framework incorporates novel state-of-the-art discriminative models for predicting hERG channel activity, as well as activity against the voltage-gated NaV1.5 and CaV1.2 channels due to their potential implications in modulating the arrhythmogenic potential induced by hERG channel blockade. We applied the complete framework to pimozide, an FDA-approved antipsychotic agent that demonstrates high affinity to the hERG channel, and generated 100 refined candidates. Remarkably, among the candidates is fluspirilene, a compound which is of the same class of drugs (diphenylmethanes) as pimozide and therefore has similar pharmacological activity, yet exhibits over 700-fold weaker binding to hERG. We envision that this method can effectively be applied to developmental compounds exhibiting hERG liabilities to provide a means of rescuing drug development programs that have stalled due to hERG-related safety concerns. We have made all of our software open-source to facilitate integration of the CardioGenAI framework for molecular hypothesis generation into drug discovery workflows.
2006.03666
Jason Schoeters
Arnaud Casteigts, Mathieu Raffinot, Jason Schoeters
VectorTSP: A Traveling Salesperson Problem with Racetrack-like acceleration constraints
25 pages, 27 pages with bibliography, 19 figures
null
null
null
cs.DS cs.CC math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a new version of the Euclidean TSP called VectorTSP (VTSP for short) where a mobile entity is allowed to move according to a set of physical constraints inspired from the pen-and-pencil game Racetrack (also known as Vector Racer ). In contrast to other versions of TSP accounting for physical constraints, such as Dubins TSP, the spirit of this model is that (1) no speed limitations apply, and (2) inertia depends on the current velocity. As such, this model is closer to typical models considered in path planning problems, although applied here to the visit of n cities in a non-predetermined order. We motivate and introduce the VectorTSP problem, discussing fundamental differences with previous versions of TSP. In particular, an optimal visit order for ETSP may not be optimal for VTSP. We show that VectorTSP is NP-hard, and in the other direction, that VectorTSP reduces to GroupTSP in polynomial time (although with a significant blow-up in size). On the algorithmic side, we formulate the search for a solution as an interactive scheme between a high-level algorithm and a trajectory oracle, the former being responsible for computing the visit order and the latter for computing the cost (or the trajectory) for a given visit order. We present algorithms for both, and we demonstrate and quantify through experiments that this approach frequently finds a better solution than the optimal trajectory realizing an optimal ETSP tour, which legitimates the problem itself and (we hope) motivates further algorithmic developments.
[ { "created": "Fri, 5 Jun 2020 20:17:06 GMT", "version": "v1" }, { "created": "Tue, 18 Aug 2020 08:45:25 GMT", "version": "v2" }, { "created": "Mon, 16 Aug 2021 14:27:07 GMT", "version": "v3" }, { "created": "Fri, 20 Aug 2021 12:54:32 GMT", "version": "v4" } ]
2021-08-23
[ [ "Casteigts", "Arnaud", "" ], [ "Raffinot", "Mathieu", "" ], [ "Schoeters", "Jason", "" ] ]
We study a new version of the Euclidean TSP called VectorTSP (VTSP for short) where a mobile entity is allowed to move according to a set of physical constraints inspired from the pen-and-pencil game Racetrack (also known as Vector Racer). In contrast to other versions of TSP accounting for physical constraints, such as Dubins TSP, the spirit of this model is that (1) no speed limitations apply, and (2) inertia depends on the current velocity. As such, this model is closer to typical models considered in path planning problems, although applied here to the visit of n cities in a non-predetermined order. We motivate and introduce the VectorTSP problem, discussing fundamental differences with previous versions of TSP. In particular, an optimal visit order for ETSP may not be optimal for VTSP. We show that VectorTSP is NP-hard, and in the other direction, that VectorTSP reduces to GroupTSP in polynomial time (although with a significant blow-up in size). On the algorithmic side, we formulate the search for a solution as an interactive scheme between a high-level algorithm and a trajectory oracle, the former being responsible for computing the visit order and the latter for computing the cost (or the trajectory) for a given visit order. We present algorithms for both, and we demonstrate and quantify through experiments that this approach frequently finds a better solution than the optimal trajectory realizing an optimal ETSP tour, which legitimizes the problem itself and (we hope) motivates further algorithmic developments.
2404.12995
Herman Bi{\o}rn Amundsen
Herman B. Amundsen, Marios Xanthidis, Martin F{\o}re, Sveinung J. Ohrem and Eleni Kelasidi
Aquaculture field robotics: Applications, lessons learned and future prospects
Accepted to the IEEE ICRA Workshop on Field Robotics 2024
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Aquaculture is a big marine industry and contributes to securing global food demands. Underwater vehicles such as remotely operated vehicles (ROVs) are commonly used for inspection, maintenance, and intervention (IMR) tasks in fish farms. However, underwater vehicle operations in aquaculture face several unique and demanding challenges, such as navigation in dynamically changing environments with time-varying sealoads and poor hydroacoustic sensor capabilities, challenges yet to be properly addressed in research. This paper will present various endeavors to address these questions and improve the overall autonomy level in aquaculture robotics, with a focus on field experiments. We will also discuss lessons learned during field trials and potential future prospects in aquaculture robotics.
[ { "created": "Fri, 19 Apr 2024 16:46:29 GMT", "version": "v1" } ]
2024-04-22
[ [ "Amundsen", "Herman B.", "" ], [ "Xanthidis", "Marios", "" ], [ "Føre", "Martin", "" ], [ "Ohrem", "Sveinung J.", "" ], [ "Kelasidi", "Eleni", "" ] ]
Aquaculture is a major marine industry and contributes to securing global food demands. Underwater vehicles such as remotely operated vehicles (ROVs) are commonly used for inspection, maintenance, and intervention (IMR) tasks in fish farms. However, underwater vehicle operations in aquaculture face several unique and demanding challenges, such as navigation in dynamically changing environments with time-varying sea loads and poor hydroacoustic sensor capabilities, challenges yet to be properly addressed in research. This paper will present various endeavors to address these questions and improve the overall autonomy level in aquaculture robotics, with a focus on field experiments. We will also discuss lessons learned during field trials and potential future prospects in aquaculture robotics.
2204.02446
Birendra Jha
Birendra Jha, Medha Atre, Ashwini Rao
Detecting Cloud-Based Phishing Attacks by Combining Deep Learning Models
To be published in the Fourth IEEE International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (IEEE TPS 2022)
null
null
null
cs.CR cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Web-based phishing attacks nowadays exploit popular cloud web hosting services and apps such as Google Sites and Typeform for hosting their attacks. Since these attacks originate from reputable domains and IP addresses of the cloud services, traditional phishing detection methods such as IP reputation monitoring and blacklisting are not very effective. Here we investigate the effectiveness of deep learning models in detecting this class of cloud-based phishing attacks. Specifically, we evaluate deep learning models for three phishing detection methods--LSTM model for URL analysis, YOLOv2 model for logo analysis, and triplet network model for visual similarity analysis. We train the models using well-known datasets and test their performance on cloud-based phishing attacks in the wild. Our results qualitatively explain why the models succeed or fail. Furthermore, our results highlight how combining results from the individual models can improve the effectiveness of detecting cloud-based phishing attacks.
[ { "created": "Tue, 5 Apr 2022 18:47:57 GMT", "version": "v1" }, { "created": "Mon, 5 Sep 2022 20:54:13 GMT", "version": "v2" }, { "created": "Fri, 28 Oct 2022 00:07:31 GMT", "version": "v3" } ]
2022-10-31
[ [ "Jha", "Birendra", "" ], [ "Atre", "Medha", "" ], [ "Rao", "Ashwini", "" ] ]
Web-based phishing attacks nowadays exploit popular cloud web hosting services and apps such as Google Sites and Typeform for hosting their attacks. Since these attacks originate from reputable domains and IP addresses of the cloud services, traditional phishing detection methods such as IP reputation monitoring and blacklisting are not very effective. Here we investigate the effectiveness of deep learning models in detecting this class of cloud-based phishing attacks. Specifically, we evaluate deep learning models for three phishing detection methods--LSTM model for URL analysis, YOLOv2 model for logo analysis, and triplet network model for visual similarity analysis. We train the models using well-known datasets and test their performance on cloud-based phishing attacks in the wild. Our results qualitatively explain why the models succeed or fail. Furthermore, our results highlight how combining results from the individual models can improve the effectiveness of detecting cloud-based phishing attacks.
2406.16475
Hideo Bannai
Golnaz Badkobeh, Hideo Bannai, Dominik K\"oppl
Bijective BWT based compression schemes
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that for any string $w$ of length $n$, $r_B = O(z\log^2 n)$, where $r_B$ and $z$ are respectively the number of character runs in the bijective Burrows-Wheeler transform (BBWT), and the number of Lempel-Ziv 77 factors of $w$. We can further induce a bidirectional macro scheme of size $O(r_B)$ from the BBWT. Finally, there exists a family of strings with $r_B = \Omega(\log n)$ but having only $r=2$ character runs in the standard Burrows--Wheeler transform (BWT). However, a lower bound for $r$ is the minimal run-length of the BBWTs applied to the cyclic shifts of $w$, whose time complexity might be $o(n^2)$ in the light that we show how to compute the Lyndon factorization of all cyclic rotations in $O(n)$ time. Considering also the rotation operation performing cyclic shifts, we conjecture that we can transform two strings having the same Parikh vector to each other by BBWT and rotation operations, and prove this conjecture for the case of binary alphabets and permutations.
[ { "created": "Mon, 24 Jun 2024 09:26:58 GMT", "version": "v1" } ]
2024-06-25
[ [ "Badkobeh", "Golnaz", "" ], [ "Bannai", "Hideo", "" ], [ "Köppl", "Dominik", "" ] ]
We show that for any string $w$ of length $n$, $r_B = O(z\log^2 n)$, where $r_B$ and $z$ are respectively the number of character runs in the bijective Burrows-Wheeler transform (BBWT), and the number of Lempel-Ziv 77 factors of $w$. We can further induce a bidirectional macro scheme of size $O(r_B)$ from the BBWT. Finally, there exists a family of strings with $r_B = \Omega(\log n)$ but having only $r=2$ character runs in the standard Burrows--Wheeler transform (BWT). However, a lower bound for $r$ is the minimal run-length of the BBWTs applied to the cyclic shifts of $w$, whose time complexity might be $o(n^2)$ in light of the fact that we show how to compute the Lyndon factorization of all cyclic rotations in $O(n)$ time. Considering also the rotation operation performing cyclic shifts, we conjecture that we can transform two strings having the same Parikh vector to each other by BBWT and rotation operations, and prove this conjecture for the case of binary alphabets and permutations.
1811.09221
Benedikt Jahnel
Alexander Hinsen, Christian Hirsch, Benedikt Jahnel and Elie Cali
The typical cell in anisotropic tessellations
7 pages, 7 figures
null
null
null
cs.NI
http://creativecommons.org/licenses/by-sa/4.0/
The typical cell is a key concept for stochastic-geometry based modeling in communication networks, as it provides a rigorous framework for describing properties of a serving zone associated with a component selected at random in a large network. We consider a setting where network components are located on a large street network. While earlier investigations were restricted to street systems without preferred directions, in this paper we derive the distribution of the typical cell in Manhattan-type systems characterized by a pattern of horizontal and vertical streets. We explain how the mathematical description can be turned into a simulation algorithm and provide numerical results uncovering novel effects when compared to classical isotropic networks.
[ { "created": "Thu, 22 Nov 2018 16:07:28 GMT", "version": "v1" } ]
2018-11-26
[ [ "Hinsen", "Alexander", "" ], [ "Hirsch", "Christian", "" ], [ "Jahnel", "Benedikt", "" ], [ "Cali", "Elie", "" ] ]
The typical cell is a key concept for stochastic-geometry based modeling in communication networks, as it provides a rigorous framework for describing properties of a serving zone associated with a component selected at random in a large network. We consider a setting where network components are located on a large street network. While earlier investigations were restricted to street systems without preferred directions, in this paper we derive the distribution of the typical cell in Manhattan-type systems characterized by a pattern of horizontal and vertical streets. We explain how the mathematical description can be turned into a simulation algorithm and provide numerical results uncovering novel effects when compared to classical isotropic networks.
2002.04477
Ricardo Monge
Osvaldo Skliar, Sherry Gapper, Ricardo E. Monge
A One-to-One Correspondence between Natural Numbers and Binary Trees
30 pages
null
null
null
cs.AI math.NT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A characterization is provided for each natural number except one (1) by means of an ordered pair of elements. The first element is a natural number called the type of the natural number characterized, and the second is a natural number called the order of the number characterized within those of its type. A one-to-one correspondence is specified between the set of binary trees such that a) a given node has no child nodes (that is, it is a terminal node), or b) it has exactly two child nodes. Thus, binary trees such that one of their parent nodes has only one child node are excluded from the set considered here.
[ { "created": "Fri, 7 Feb 2020 03:00:36 GMT", "version": "v1" }, { "created": "Fri, 21 Feb 2020 01:43:15 GMT", "version": "v2" } ]
2020-02-24
[ [ "Skliar", "Osvaldo", "" ], [ "Gapper", "Sherry", "" ], [ "Monge", "Ricardo E.", "" ] ]
A characterization is provided for each natural number except one (1) by means of an ordered pair of elements. The first element is a natural number called the type of the natural number characterized, and the second is a natural number called the order of the number characterized within those of its type. A one-to-one correspondence is specified between the set of natural numbers and the set of binary trees such that a) a given node has no child nodes (that is, it is a terminal node), or b) it has exactly two child nodes. Thus, binary trees such that one of their parent nodes has only one child node are excluded from the set considered here.
1303.6224
Wilbert Samuel Rossi
Wilbert Samuel Rossi, Paolo Frasca, Fabio Fagnani
Limited benefit of cooperation in distributed relative localization
11 pages, 2 figures, submitted to conference
null
null
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Important applications in robotic and sensor networks require distributed algorithms to solve the so-called relative localization problem: a node-indexed vector has to be reconstructed from measurements of differences between neighbor nodes. In a recent note, we have studied the estimation error of a popular gradient descent algorithm showing that the mean square error has a minimum at a finite time, after which the performance worsens. This paper proposes a suitable modification of this algorithm incorporating more realistic "a priori" information on the position. The new algorithm presents a performance monotonically decreasing to the optimal one. Furthermore, we show that the optimal performance is approximated, up to a 1 + \eps factor, within a time which is independent of the graph and of the number of nodes. This convergence time is very much related to the minimum exhibited by the previous algorithm and both lead to the following conclusion: in the presence of noisy data, cooperation is only useful till a certain limit.
[ { "created": "Mon, 25 Mar 2013 17:31:06 GMT", "version": "v1" } ]
2013-03-26
[ [ "Rossi", "Wilbert Samuel", "" ], [ "Frasca", "Paolo", "" ], [ "Fagnani", "Fabio", "" ] ]
Important applications in robotic and sensor networks require distributed algorithms to solve the so-called relative localization problem: a node-indexed vector has to be reconstructed from measurements of differences between neighbor nodes. In a recent note, we have studied the estimation error of a popular gradient descent algorithm showing that the mean square error has a minimum at a finite time, after which the performance worsens. This paper proposes a suitable modification of this algorithm incorporating more realistic "a priori" information on the position. The new algorithm presents a performance monotonically decreasing to the optimal one. Furthermore, we show that the optimal performance is approximated, up to a 1 + \epsilon factor, within a time which is independent of the graph and of the number of nodes. This convergence time is very much related to the minimum exhibited by the previous algorithm and both lead to the following conclusion: in the presence of noisy data, cooperation is only useful up to a certain limit.
1911.03631
Yuwei Fang
Yuwei Fang, Siqi Sun, Zhe Gan, Rohit Pillai, Shuohang Wang, Jingjing Liu
Hierarchical Graph Network for Multi-hop Question Answering
Accepted to EMNLP 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present Hierarchical Graph Network (HGN) for multi-hop question answering. To aggregate clues from scattered texts across multiple paragraphs, a hierarchical graph is created by constructing nodes on different levels of granularity (questions, paragraphs, sentences, entities), the representations of which are initialized with pre-trained contextual encoders. Given this hierarchical graph, the initial node representations are updated through graph propagation, and multi-hop reasoning is performed via traversing through the graph edges for each subsequent sub-task (e.g., paragraph selection, supporting facts extraction, answer prediction). By weaving heterogeneous nodes into an integral unified graph, this hierarchical differentiation of node granularity enables HGN to support different question answering sub-tasks simultaneously. Experiments on the HotpotQA benchmark demonstrate that the proposed model achieves new state of the art, outperforming existing multi-hop QA approaches.
[ { "created": "Sat, 9 Nov 2019 07:18:47 GMT", "version": "v1" }, { "created": "Wed, 15 Apr 2020 19:40:03 GMT", "version": "v2" }, { "created": "Sun, 27 Sep 2020 05:00:08 GMT", "version": "v3" }, { "created": "Tue, 6 Oct 2020 08:17:58 GMT", "version": "v4" } ]
2020-10-07
[ [ "Fang", "Yuwei", "" ], [ "Sun", "Siqi", "" ], [ "Gan", "Zhe", "" ], [ "Pillai", "Rohit", "" ], [ "Wang", "Shuohang", "" ], [ "Liu", "Jingjing", "" ] ]
In this paper, we present Hierarchical Graph Network (HGN) for multi-hop question answering. To aggregate clues from scattered texts across multiple paragraphs, a hierarchical graph is created by constructing nodes on different levels of granularity (questions, paragraphs, sentences, entities), the representations of which are initialized with pre-trained contextual encoders. Given this hierarchical graph, the initial node representations are updated through graph propagation, and multi-hop reasoning is performed via traversing through the graph edges for each subsequent sub-task (e.g., paragraph selection, supporting facts extraction, answer prediction). By weaving heterogeneous nodes into an integral unified graph, this hierarchical differentiation of node granularity enables HGN to support different question answering sub-tasks simultaneously. Experiments on the HotpotQA benchmark demonstrate that the proposed model achieves new state of the art, outperforming existing multi-hop QA approaches.
2112.11447
Peng Liu
Peng Liu
Multi-Modality Distillation via Learning the teacher's modality-level Gram Matrix
10 pages
null
null
null
cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the context of multi-modality knowledge distillation research, the existing methods was mainly focus on the problem of only learning teacher final output. Thus, there are still deep differences between the teacher network and the student network. It is necessary to force the student network to learn the modality relationship information of the teacher network. To effectively exploit transfering knowledge from teachers to students, a novel modality relation distillation paradigm by modeling the relationship information among different modality are adopted, that is learning the teacher modality-level Gram Matrix.
[ { "created": "Tue, 21 Dec 2021 18:53:58 GMT", "version": "v1" } ]
2021-12-22
[ [ "Liu", "Peng", "" ] ]
In the context of multi-modality knowledge distillation research, existing methods mainly focus on the problem of only learning the teacher's final output. Thus, there are still deep differences between the teacher network and the student network. It is necessary to force the student network to learn the modality relationship information of the teacher network. To effectively exploit the transfer of knowledge from teachers to students, a novel modality relation distillation paradigm that models the relationship information among different modalities is adopted, that is, learning the teacher's modality-level Gram matrix.
2306.10037
Kostas Karpouzis
Fereniki Panagopoulou, Christina Parpoula, Kostas Karpouzis
Legal and ethical considerations regarding the use of ChatGPT in education
Accepted at the 1st International Conference of the Network of Learning and Teaching Centers in Greece: Transforming Higher Education Teaching Practice
null
null
null
cs.CY
http://creativecommons.org/licenses/by-sa/4.0/
Artificial intelligence has evolved enormously over the last two decades, becoming mainstream in different scientific domains including education, where so far, it is mainly utilized to enhance administrative and intelligent tutoring systems services and academic support. ChatGPT, an artificial intelligence-based chatbot, developed by OpenAI and released in November 2022, has rapidly gained attention from the entire international community for its impressive performance in generating comprehensive, systematic, and informative human-like responses to user input through natural language processing. Inevitably, it has also rapidly posed several challenges, opportunities, and potential issues and concerns raised regarding its use across various scientific disciplines. This paper aims to discuss the legal and ethical implications arising from this new technology, identify potential use cases, and enrich our understanding of Generative AI, such as ChatGPT, and its capabilities in education.
[ { "created": "Fri, 9 Jun 2023 14:54:09 GMT", "version": "v1" } ]
2023-06-21
[ [ "Panagopoulou", "Fereniki", "" ], [ "Parpoula", "Christina", "" ], [ "Karpouzis", "Kostas", "" ] ]
Artificial intelligence has evolved enormously over the last two decades, becoming mainstream in different scientific domains including education, where so far, it is mainly utilized to enhance administrative and intelligent tutoring systems services and academic support. ChatGPT, an artificial intelligence-based chatbot, developed by OpenAI and released in November 2022, has rapidly gained attention from the entire international community for its impressive performance in generating comprehensive, systematic, and informative human-like responses to user input through natural language processing. Inevitably, it has also rapidly posed several challenges, opportunities, and potential issues and concerns regarding its use across various scientific disciplines. This paper aims to discuss the legal and ethical implications arising from this new technology, identify potential use cases, and enrich our understanding of Generative AI, such as ChatGPT, and its capabilities in education.
1601.05516
Linqi Song
Linqi Song, and Christina Fragouli
A Deterministic Algorithm for Pliable Index Coding
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pliable index coding considers a server with m messages, and n clients where each has as side information a subset of the messages. We seek to minimize the number of transmissions the server should make, so that each client receives (any) one message she does not already have. Previous work has shown that the server can achieve this using O(\log^2(n)) transmissions and needs at least \Omega(log(n)) transmissions in the worst case, but finding a code of optimal length is NP-hard. In this paper, we propose a deterministic algorithm that we prove achieves this upper bound, that is, in an order almost as the worst-case optimal code length. We also establish a connection between the pliable index coding problem and the minrank problem over a family of mixed matrices.
[ { "created": "Thu, 21 Jan 2016 05:37:09 GMT", "version": "v1" } ]
2016-01-22
[ [ "Song", "Linqi", "" ], [ "Fragouli", "Christina", "" ] ]
Pliable index coding considers a server with m messages, and n clients where each has as side information a subset of the messages. We seek to minimize the number of transmissions the server should make, so that each client receives (any) one message she does not already have. Previous work has shown that the server can achieve this using O(\log^2(n)) transmissions and needs at least \Omega(\log(n)) transmissions in the worst case, but finding a code of optimal length is NP-hard. In this paper, we propose a deterministic algorithm that we prove achieves this upper bound, that is, in an order almost as the worst-case optimal code length. We also establish a connection between the pliable index coding problem and the minrank problem over a family of mixed matrices.
1202.1212
Yaniv Plan
Yaniv Plan and Roman Vershynin
Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach
25 pages, 1 figure, error fixed in Lemma 4.1
null
null
null
cs.IT math.IT math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper develops theoretical results regarding noisy 1-bit compressed sensing and sparse binomial regression. We show that a single convex program gives an accurate estimate of the signal, or coefficient vector, for both of these models. We demonstrate that an s-sparse signal in R^n can be accurately estimated from m = O(slog(n/s)) single-bit measurements using a simple convex program. This remains true even if each measurement bit is flipped with probability nearly 1/2. Worst-case (adversarial) noise can also be accounted for, and uniform results that hold for all sparse inputs are derived as well. In the terminology of sparse logistic regression, we show that O(slog(n/s)) Bernoulli trials are sufficient to estimate a coefficient vector in R^n which is approximately s-sparse. Moreover, the same convex program works for virtually all generalized linear models, in which the link function may be unknown. To our knowledge, these are the first results that tie together the theory of sparse logistic regression to 1-bit compressed sensing. Our results apply to general signal structures aside from sparsity; one only needs to know the size of the set K where signals reside. The size is given by the mean width of K, a computable quantity whose square serves as a robust extension of the dimension.
[ { "created": "Mon, 6 Feb 2012 17:23:47 GMT", "version": "v1" }, { "created": "Fri, 6 Jul 2012 00:11:56 GMT", "version": "v2" }, { "created": "Thu, 19 Jul 2012 16:19:42 GMT", "version": "v3" } ]
2012-07-20
[ [ "Plan", "Yaniv", "" ], [ "Vershynin", "Roman", "" ] ]
This paper develops theoretical results regarding noisy 1-bit compressed sensing and sparse binomial regression. We show that a single convex program gives an accurate estimate of the signal, or coefficient vector, for both of these models. We demonstrate that an s-sparse signal in R^n can be accurately estimated from m = O(slog(n/s)) single-bit measurements using a simple convex program. This remains true even if each measurement bit is flipped with probability nearly 1/2. Worst-case (adversarial) noise can also be accounted for, and uniform results that hold for all sparse inputs are derived as well. In the terminology of sparse logistic regression, we show that O(slog(n/s)) Bernoulli trials are sufficient to estimate a coefficient vector in R^n which is approximately s-sparse. Moreover, the same convex program works for virtually all generalized linear models, in which the link function may be unknown. To our knowledge, these are the first results that tie together the theory of sparse logistic regression to 1-bit compressed sensing. Our results apply to general signal structures aside from sparsity; one only needs to know the size of the set K where signals reside. The size is given by the mean width of K, a computable quantity whose square serves as a robust extension of the dimension.
2205.14039
Dustin Mixon
Jameson Cahill, Joseph W. Iverson, Dustin G. Mixon, Daniel Packer
Group-invariant max filtering
null
null
null
null
cs.IT cs.DS cs.LG math.FA math.IT
http://creativecommons.org/licenses/by/4.0/
Given a real inner product space $V$ and a group $G$ of linear isometries, we construct a family of $G$-invariant real-valued functions on $V$ that we call max filters. In the case where $V=\mathbb{R}^d$ and $G$ is finite, a suitable max filter bank separates orbits, and is even bilipschitz in the quotient metric. In the case where $V=L^2(\mathbb{R}^d)$ and $G$ is the group of translation operators, a max filter exhibits stability to diffeomorphic distortion like that of the scattering transform introduced by Mallat. We establish that max filters are well suited for various classification tasks, both in theory and in practice.
[ { "created": "Fri, 27 May 2022 15:18:08 GMT", "version": "v1" } ]
2022-05-30
[ [ "Cahill", "Jameson", "" ], [ "Iverson", "Joseph W.", "" ], [ "Mixon", "Dustin G.", "" ], [ "Packer", "Daniel", "" ] ]
Given a real inner product space $V$ and a group $G$ of linear isometries, we construct a family of $G$-invariant real-valued functions on $V$ that we call max filters. In the case where $V=\mathbb{R}^d$ and $G$ is finite, a suitable max filter bank separates orbits, and is even bilipschitz in the quotient metric. In the case where $V=L^2(\mathbb{R}^d)$ and $G$ is the group of translation operators, a max filter exhibits stability to diffeomorphic distortion like that of the scattering transform introduced by Mallat. We establish that max filters are well suited for various classification tasks, both in theory and in practice.
2203.07706
Ziyang Song
Liang Xu, Ziyang Song, Dongliang Wang, Jing Su, Zhicheng Fang, Chenjing Ding, Weihao Gan, Yichao Yan, Xin Jin, Xiaokang Yang, Wenjun Zeng, Wei Wu
ActFormer: A GAN-based Transformer towards General Action-Conditioned 3D Human Motion Generation
null
null
null
null
cs.CV cs.GR cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a GAN-based Transformer for general action-conditioned 3D human motion generation, including not only single-person actions but also multi-person interactive actions. Our approach consists of a powerful Action-conditioned motion TransFormer (ActFormer) under a GAN training scheme, equipped with a Gaussian Process latent prior. Such a design combines the strong spatio-temporal representation capacity of Transformer, superiority in generative modeling of GAN, and inherent temporal correlations from the latent prior. Furthermore, ActFormer can be naturally extended to multi-person motions by alternately modeling temporal correlations and human interactions with Transformer encoders. To further facilitate research on multi-person motion generation, we introduce a new synthetic dataset of complex multi-person combat behaviors. Extensive experiments on NTU-13, NTU RGB+D 120, BABEL and the proposed combat dataset show that our method can adapt to various human motion representations and achieve superior performance over the state-of-the-art methods on both single-person and multi-person motion generation tasks, demonstrating a promising step towards a general human motion generator.
[ { "created": "Tue, 15 Mar 2022 07:50:12 GMT", "version": "v1" }, { "created": "Wed, 23 Nov 2022 05:27:11 GMT", "version": "v2" } ]
2022-11-24
[ [ "Xu", "Liang", "" ], [ "Song", "Ziyang", "" ], [ "Wang", "Dongliang", "" ], [ "Su", "Jing", "" ], [ "Fang", "Zhicheng", "" ], [ "Ding", "Chenjing", "" ], [ "Gan", "Weihao", "" ], [ "Yan", "Yichao", "" ], [ "Jin", "Xin", "" ], [ "Yang", "Xiaokang", "" ], [ "Zeng", "Wenjun", "" ], [ "Wu", "Wei", "" ] ]
We present a GAN-based Transformer for general action-conditioned 3D human motion generation, including not only single-person actions but also multi-person interactive actions. Our approach consists of a powerful Action-conditioned motion TransFormer (ActFormer) under a GAN training scheme, equipped with a Gaussian Process latent prior. Such a design combines the strong spatio-temporal representation capacity of Transformer, superiority in generative modeling of GAN, and inherent temporal correlations from the latent prior. Furthermore, ActFormer can be naturally extended to multi-person motions by alternately modeling temporal correlations and human interactions with Transformer encoders. To further facilitate research on multi-person motion generation, we introduce a new synthetic dataset of complex multi-person combat behaviors. Extensive experiments on NTU-13, NTU RGB+D 120, BABEL and the proposed combat dataset show that our method can adapt to various human motion representations and achieve superior performance over the state-of-the-art methods on both single-person and multi-person motion generation tasks, demonstrating a promising step towards a general human motion generator.
2208.05092
Angela Zavaleta Bernuy
Angela Zavaleta-Bernuy, Qi Yin Zheng, Hammad Shaikh, Jacob Nogas, Anna Rafferty, Andrew Petersen, Joseph Jay Williams
Using Adaptive Experiments to Rapidly Help Students
International Conference on Artificial Intelligence in Education
null
10.1007/978-3-030-78270-2_75
null
cs.LG cs.CY cs.HC
http://creativecommons.org/licenses/by/4.0/
Adaptive experiments can increase the chance that current students obtain better outcomes from a field experiment of an instructional intervention. In such experiments, the probability of assigning students to conditions changes while more data is being collected, so students can be assigned to interventions that are likely to perform better. Digital educational environments lower the barrier to conducting such adaptive experiments, but they are rarely applied in education. One reason might be that researchers have access to few real-world case studies that illustrate the advantages and disadvantages of these experiments in a specific context. We evaluate the effect of homework email reminders on students by conducting an adaptive experiment using the Thompson Sampling algorithm and compare it to a traditional uniform random experiment. We present this as a case study on how to conduct such experiments, and we raise a range of open questions about the conditions under which adaptive randomized experiments may be more or less useful.
[ { "created": "Wed, 10 Aug 2022 00:43:05 GMT", "version": "v1" } ]
2022-08-11
[ [ "Zavaleta-Bernuy", "Angela", "" ], [ "Zheng", "Qi Yin", "" ], [ "Shaikh", "Hammad", "" ], [ "Nogas", "Jacob", "" ], [ "Rafferty", "Anna", "" ], [ "Petersen", "Andrew", "" ], [ "Williams", "Joseph Jay", "" ] ]
Adaptive experiments can increase the chance that current students obtain better outcomes from a field experiment of an instructional intervention. In such experiments, the probability of assigning students to conditions changes while more data is being collected, so students can be assigned to interventions that are likely to perform better. Digital educational environments lower the barrier to conducting such adaptive experiments, but they are rarely applied in education. One reason might be that researchers have access to few real-world case studies that illustrate the advantages and disadvantages of these experiments in a specific context. We evaluate the effect of homework email reminders on students by conducting an adaptive experiment using the Thompson Sampling algorithm and compare it to a traditional uniform random experiment. We present this as a case study on how to conduct such experiments, and we raise a range of open questions about the conditions under which adaptive randomized experiments may be more or less useful.
2302.01599
Mengxuan Li
Mengxuan Li, Peng Peng, Jingxin Zhang, Hongwei Wang, Weiming Shen
SCCAM: Supervised Contrastive Convolutional Attention Mechanism for Ante-hoc Interpretable Fault Diagnosis with Limited Fault Samples
null
null
10.1109/TNNLS.2023.3313728
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In real industrial processes, fault diagnosis methods are required to learn from limited fault samples since the procedures are mainly under normal conditions and the faults rarely occur. Although attention mechanisms have become popular in the field of fault diagnosis, the existing attention-based methods are still unsatisfactory for the above practical applications. First, pure attention-based architectures like transformers need a large number of fault samples to offset the lack of inductive biases, thus performing poorly under limited fault samples. Moreover, the poor fault classification dilemma further leads to the failure of the existing attention-based methods to identify the root causes. To address the aforementioned issues, we innovatively propose a supervised contrastive convolutional attention mechanism (SCCAM) with ante-hoc interpretability, which solves the root cause analysis problem under limited fault samples for the first time. The proposed SCCAM method is tested on a continuous stirred tank heater and the Tennessee Eastman industrial process benchmark. Three common fault diagnosis scenarios are covered, including a balanced scenario for additional verification and two scenarios with limited fault samples (i.e., imbalanced scenario and long-tail scenario). The comprehensive results demonstrate that the proposed SCCAM method can achieve better performance compared with the state-of-the-art methods on fault classification and root cause analysis.
[ { "created": "Fri, 3 Feb 2023 08:43:55 GMT", "version": "v1" }, { "created": "Fri, 17 Feb 2023 12:02:33 GMT", "version": "v2" } ]
2023-09-26
[ [ "Li", "Mengxuan", "" ], [ "Peng", "Peng", "" ], [ "Zhang", "Jingxin", "" ], [ "Wang", "Hongwei", "" ], [ "Shen", "Weiming", "" ] ]
In real industrial processes, fault diagnosis methods are required to learn from limited fault samples since the procedures are mainly under normal conditions and the faults rarely occur. Although attention mechanisms have become popular in the field of fault diagnosis, the existing attention-based methods are still unsatisfactory for the above practical applications. First, pure attention-based architectures like transformers need a large number of fault samples to offset the lack of inductive biases, thus performing poorly under limited fault samples. Moreover, the poor fault classification dilemma further leads to the failure of the existing attention-based methods to identify the root causes. To address the aforementioned issues, we innovatively propose a supervised contrastive convolutional attention mechanism (SCCAM) with ante-hoc interpretability, which solves the root cause analysis problem under limited fault samples for the first time. The proposed SCCAM method is tested on a continuous stirred tank heater and the Tennessee Eastman industrial process benchmark. Three common fault diagnosis scenarios are covered, including a balanced scenario for additional verification and two scenarios with limited fault samples (i.e., imbalanced scenario and long-tail scenario). The comprehensive results demonstrate that the proposed SCCAM method can achieve better performance compared with the state-of-the-art methods on fault classification and root cause analysis.
2402.07441
Sujoy Bhore
Sujoy Bhore, Timothy M. Chan
Fully Dynamic Geometric Vertex Cover and Matching
25 Pages
null
null
null
cs.CG
http://creativecommons.org/licenses/by/4.0/
In this work, we study two fundamental graph optimization problems, minimum vertex cover (MVC) and maximum-cardinality matching (MCM), for intersection graphs of geometric objects, e.g., disks, rectangles, hypercubes, etc., in $d$-dimensional Euclidean space. We consider the problems in fully dynamic settings, allowing insertions and deletions of objects. We develop a general framework for dynamic MVC in intersection graphs, achieving sublinear amortized update time for most natural families of geometric objects. In particular, we show that: - For a dynamic collection of disks in $\mathbb{R}^2$ or hypercubes in $\mathbb{R}^d$ (for constant $d$), it is possible to maintain a $(1+\varepsilon)$-approximate vertex cover in polylog amortized update time. These results also hold in the bipartite case. - For a dynamic collection of rectangles in $\mathbb{R}^2$, it is possible to maintain a $(\frac{3}{2}+\varepsilon)$-approximate vertex cover in polylog amortized update time. Along the way, we obtain the first near-linear time static algorithms for MVC in the above two cases with the same approximation factors. Next, we turn our attention to the MCM problem. Although our MVC algorithms automatically allow us to approximate the size of the MCM in bipartite geometric intersection graphs, they do not produce a matching. We give another general framework to maintain an approximate maximum matching, and further extend the approach to handle non-bipartite intersection graphs. In particular, we show that: - For a dynamic collection of (bichromatic or monochromatic) disks in $\mathbb{R}^2$ or hypercubes in $\mathbb{R}^d$ (for constant $d$), it is possible to maintain a $(1+\varepsilon)$-approximate matching in polylog amortized update time.
[ { "created": "Mon, 12 Feb 2024 06:48:57 GMT", "version": "v1" }, { "created": "Wed, 14 Feb 2024 04:48:32 GMT", "version": "v2" } ]
2024-02-15
[ [ "Bhore", "Sujoy", "" ], [ "Chan", "Timothy M.", "" ] ]
In this work, we study two fundamental graph optimization problems, minimum vertex cover (MVC) and maximum-cardinality matching (MCM), for intersection graphs of geometric objects, e.g., disks, rectangles, hypercubes, etc., in $d$-dimensional Euclidean space. We consider the problems in fully dynamic settings, allowing insertions and deletions of objects. We develop a general framework for dynamic MVC in intersection graphs, achieving sublinear amortized update time for most natural families of geometric objects. In particular, we show that: - For a dynamic collection of disks in $\mathbb{R}^2$ or hypercubes in $\mathbb{R}^d$ (for constant $d$), it is possible to maintain a $(1+\varepsilon)$-approximate vertex cover in polylog amortized update time. These results also hold in the bipartite case. - For a dynamic collection of rectangles in $\mathbb{R}^2$, it is possible to maintain a $(\frac{3}{2}+\varepsilon)$-approximate vertex cover in polylog amortized update time. Along the way, we obtain the first near-linear time static algorithms for MVC in the above two cases with the same approximation factors. Next, we turn our attention to the MCM problem. Although our MVC algorithms automatically allow us to approximate the size of the MCM in bipartite geometric intersection graphs, they do not produce a matching. We give another general framework to maintain an approximate maximum matching, and further extend the approach to handle non-bipartite intersection graphs. In particular, we show that: - For a dynamic collection of (bichromatic or monochromatic) disks in $\mathbb{R}^2$ or hypercubes in $\mathbb{R}^d$ (for constant $d$), it is possible to maintain a $(1+\varepsilon)$-approximate matching in polylog amortized update time.
2011.04483
Steve Hanneke
Olivier Bousquet, Steve Hanneke, Shay Moran, Ramon van Handel, Amir Yehudayoff
A Theory of Universal Learning
null
null
null
null
cs.LG cs.DS math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How quickly can a given class of concepts be learned from examples? It is common to measure the performance of a supervised machine learning algorithm by plotting its "learning curve", that is, the decay of the error rate as a function of the number of training examples. However, the classical theoretical framework for understanding learnability, the PAC model of Vapnik-Chervonenkis and Valiant, does not explain the behavior of learning curves: the distribution-free PAC model of learning can only bound the upper envelope of the learning curves over all possible data distributions. This does not match the practice of machine learning, where the data source is typically fixed in any given scenario, while the learner may choose the number of training examples on the basis of factors such as computational resources and desired accuracy. In this paper, we study an alternative learning model that better captures such practical aspects of machine learning, but still gives rise to a complete theory of the learnable in the spirit of the PAC model. More precisely, we consider the problem of universal learning, which aims to understand the performance of learning algorithms on every data distribution, but without requiring uniformity over the distribution. The main result of this paper is a remarkable trichotomy: there are only three possible rates of universal learning. More precisely, we show that the learning curves of any given concept class decay at either exponential, linear, or arbitrarily slow rates. Moreover, each of these cases is completely characterized by appropriate combinatorial parameters, and we exhibit optimal learning algorithms that achieve the best possible rate in each case. For concreteness, we consider in this paper only the realizable case, though analogous results are expected to extend to more general learning scenarios.
[ { "created": "Mon, 9 Nov 2020 15:10:32 GMT", "version": "v1" } ]
2020-11-10
[ [ "Bousquet", "Olivier", "" ], [ "Hanneke", "Steve", "" ], [ "Moran", "Shay", "" ], [ "van Handel", "Ramon", "" ], [ "Yehudayoff", "Amir", "" ] ]
How quickly can a given class of concepts be learned from examples? It is common to measure the performance of a supervised machine learning algorithm by plotting its "learning curve", that is, the decay of the error rate as a function of the number of training examples. However, the classical theoretical framework for understanding learnability, the PAC model of Vapnik-Chervonenkis and Valiant, does not explain the behavior of learning curves: the distribution-free PAC model of learning can only bound the upper envelope of the learning curves over all possible data distributions. This does not match the practice of machine learning, where the data source is typically fixed in any given scenario, while the learner may choose the number of training examples on the basis of factors such as computational resources and desired accuracy. In this paper, we study an alternative learning model that better captures such practical aspects of machine learning, but still gives rise to a complete theory of the learnable in the spirit of the PAC model. More precisely, we consider the problem of universal learning, which aims to understand the performance of learning algorithms on every data distribution, but without requiring uniformity over the distribution. The main result of this paper is a remarkable trichotomy: there are only three possible rates of universal learning. More precisely, we show that the learning curves of any given concept class decay at either exponential, linear, or arbitrarily slow rates. Moreover, each of these cases is completely characterized by appropriate combinatorial parameters, and we exhibit optimal learning algorithms that achieve the best possible rate in each case. For concreteness, we consider in this paper only the realizable case, though analogous results are expected to extend to more general learning scenarios.
1909.06692
Thorsten Wissmann
Johannes {\AA}man Pohjola
Psi-Calculi Revisited: Connectivity and Compositionality
null
Logical Methods in Computer Science, Volume 16, Issue 4 (December 15, 2020) lmcs:5767
10.23638/LMCS-16(4:16)2020
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Psi-calculi is a parametric framework for process calculi similar to popular pi-calculus extensions such as the explicit fusion calculus, the applied pi-calculus and the spi calculus. Mechanised proofs of standard algebraic and congruence properties of bisimilarity apply to all calculi within the framework. A limitation of psi-calculi is that communication channels must be symmetric and transitive. In this paper, we give a new operational semantics to psi-calculi that allows us to lift these restrictions and simplify some of the proofs. The key technical innovation is to annotate transitions with a provenance -- a description of the scope and channel they originate from. We give mechanised proofs that our extension is conservative, and that the standard algebraic and congruence properties of strong and weak bisimilarity are maintained. We show correspondence with a reduction semantics and barbed bisimulation. We show how a pi-calculus with preorders that was previously beyond the scope of psi-calculi can be captured, and how to encode mixed choice under very strong quality criteria.
[ { "created": "Sat, 14 Sep 2019 23:07:21 GMT", "version": "v1" }, { "created": "Tue, 24 Mar 2020 07:56:26 GMT", "version": "v2" }, { "created": "Wed, 11 Nov 2020 05:08:09 GMT", "version": "v3" }, { "created": "Mon, 14 Dec 2020 12:09:37 GMT", "version": "v4" } ]
2023-06-22
[ [ "Pohjola", "Johannes Åman", "" ] ]
Psi-calculi is a parametric framework for process calculi similar to popular pi-calculus extensions such as the explicit fusion calculus, the applied pi-calculus and the spi calculus. Mechanised proofs of standard algebraic and congruence properties of bisimilarity apply to all calculi within the framework. A limitation of psi-calculi is that communication channels must be symmetric and transitive. In this paper, we give a new operational semantics to psi-calculi that allows us to lift these restrictions and simplify some of the proofs. The key technical innovation is to annotate transitions with a provenance -- a description of the scope and channel they originate from. We give mechanised proofs that our extension is conservative, and that the standard algebraic and congruence properties of strong and weak bisimilarity are maintained. We show correspondence with a reduction semantics and barbed bisimulation. We show how a pi-calculus with preorders that was previously beyond the scope of psi-calculi can be captured, and how to encode mixed choice under very strong quality criteria.
1901.01492
Daniel Gordon
Daniel Gordon and Dieter Fox and Ali Farhadi
What Should I Do Now? Marrying Reinforcement Learning and Symbolic Planning
Currently under review
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long-term planning poses a major difficulty to many reinforcement learning algorithms. This problem becomes even more pronounced in dynamic visual environments. In this work we propose Hierarchical Planning and Reinforcement Learning (HIP-RL), a method for merging the benefits and capabilities of Symbolic Planning with the learning abilities of Deep Reinforcement Learning. We apply HIP-RL to the complex visual tasks of interactive question answering and visual semantic planning and achieve state-of-the-art results on three challenging datasets all while taking fewer steps at test time and training in fewer iterations. Sample results can be found at youtu.be/0TtWJ_0mPfI
[ { "created": "Sun, 6 Jan 2019 03:15:15 GMT", "version": "v1" } ]
2019-01-08
[ [ "Gordon", "Daniel", "" ], [ "Fox", "Dieter", "" ], [ "Farhadi", "Ali", "" ] ]
Long-term planning poses a major difficulty to many reinforcement learning algorithms. This problem becomes even more pronounced in dynamic visual environments. In this work we propose Hierarchical Planning and Reinforcement Learning (HIP-RL), a method for merging the benefits and capabilities of Symbolic Planning with the learning abilities of Deep Reinforcement Learning. We apply HIP-RL to the complex visual tasks of interactive question answering and visual semantic planning and achieve state-of-the-art results on three challenging datasets all while taking fewer steps at test time and training in fewer iterations. Sample results can be found at youtu.be/0TtWJ_0mPfI
2402.16828
Minyoung Huh
Minyoung Huh, Brian Cheung, Jeremy Bernstein, Phillip Isola, Pulkit Agrawal
Training Neural Networks from Scratch with Parallel Low-Rank Adapters
null
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
The scalability of deep learning models is fundamentally limited by computing resources, memory, and communication. Although methods like low-rank adaptation (LoRA) have reduced the cost of model finetuning, its application in model pre-training remains largely unexplored. This paper explores extending LoRA to model pre-training, identifying the inherent constraints and limitations of standard LoRA in this context. We introduce LoRA-the-Explorer (LTE), a novel bi-level optimization algorithm designed to enable parallel training of multiple low-rank heads across computing nodes, thereby reducing the need for frequent synchronization. Our approach includes extensive experimentation on vision transformers using various vision datasets, demonstrating that LTE is competitive with standard pre-training.
[ { "created": "Mon, 26 Feb 2024 18:55:13 GMT", "version": "v1" }, { "created": "Fri, 26 Jul 2024 21:56:47 GMT", "version": "v2" } ]
2024-07-30
[ [ "Huh", "Minyoung", "" ], [ "Cheung", "Brian", "" ], [ "Bernstein", "Jeremy", "" ], [ "Isola", "Phillip", "" ], [ "Agrawal", "Pulkit", "" ] ]
The scalability of deep learning models is fundamentally limited by computing resources, memory, and communication. Although methods like low-rank adaptation (LoRA) have reduced the cost of model finetuning, its application in model pre-training remains largely unexplored. This paper explores extending LoRA to model pre-training, identifying the inherent constraints and limitations of standard LoRA in this context. We introduce LoRA-the-Explorer (LTE), a novel bi-level optimization algorithm designed to enable parallel training of multiple low-rank heads across computing nodes, thereby reducing the need for frequent synchronization. Our approach includes extensive experimentation on vision transformers using various vision datasets, demonstrating that LTE is competitive with standard pre-training.
1602.00097
Haisheng Tan
Zhenhua Han and Haisheng Tan and Guihai Chen and Rui Wang and Yifan Chen and Francis C.M. Lau
Dynamic Virtual Machine Management via Approximate Markov Decision Process
Full version for the paper appeared in INFOCOM'16 with the same title
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Efficient virtual machine (VM) management can dramatically reduce energy consumption in data centers. Existing VM management algorithms fall into two categories based on whether the VMs' resource demands are assumed to be static or dynamic. The former category fails to maximize the resource utilization as they cannot adapt to the dynamic nature of VMs' resource demands. Most approaches in the latter category are heuristic and lack theoretical performance guarantees. In this work, we formulate dynamic VM management as a large-scale Markov Decision Process (MDP) problem and derive an optimal solution. Our analysis of real-world data traces supports our choice of the modeling approach. However, solving the large-scale MDP problem suffers from the curse of dimensionality. Therefore, we further exploit the special structure of the problem and propose an approximate MDP-based dynamic VM management method, called MadVM. We prove the convergence of MadVM and analyze the bound of its approximation error. Moreover, MadVM can be implemented in a distributed system, which should suit the needs of real data centers. Extensive simulations based on two real-world workload traces show that MadVM achieves significant performance gains over two existing baseline approaches in power consumption, resource shortage and the number of VM migrations. Specifically, the more intensely the resource demands fluctuate, the more MadVM outperforms the baselines.
[ { "created": "Sat, 30 Jan 2016 09:57:24 GMT", "version": "v1" } ]
2016-02-02
[ [ "Han", "Zhenhua", "" ], [ "Tan", "Haisheng", "" ], [ "Chen", "Guihai", "" ], [ "Wang", "Rui", "" ], [ "Chen", "Yifan", "" ], [ "Lau", "Francis C. M.", "" ] ]
Efficient virtual machine (VM) management can dramatically reduce energy consumption in data centers. Existing VM management algorithms fall into two categories based on whether the VMs' resource demands are assumed to be static or dynamic. The former category fails to maximize the resource utilization as they cannot adapt to the dynamic nature of VMs' resource demands. Most approaches in the latter category are heuristic and lack theoretical performance guarantees. In this work, we formulate dynamic VM management as a large-scale Markov Decision Process (MDP) problem and derive an optimal solution. Our analysis of real-world data traces supports our choice of the modeling approach. However, solving the large-scale MDP problem suffers from the curse of dimensionality. Therefore, we further exploit the special structure of the problem and propose an approximate MDP-based dynamic VM management method, called MadVM. We prove the convergence of MadVM and analyze the bound of its approximation error. Moreover, MadVM can be implemented in a distributed system, which should suit the needs of real data centers. Extensive simulations based on two real-world workload traces show that MadVM achieves significant performance gains over two existing baseline approaches in power consumption, resource shortage and the number of VM migrations. Specifically, the more intensely the resource demands fluctuate, the more MadVM outperforms the baselines.
1905.06235
Issam Damaj
Palwasha Shaikh (1), Issam Damaj (1) ((1) American University of Kuwait)
Analysis of Pipelined KATAN Ciphers under Handle-C for FPGAs
6 pages, 3 figures, 6 tables
13th International Conference on Innovations in Information Technology, IEEE, Al Ain, UAE (2018) 163-168
10.1109/INNOVATIONS.2018.8606012
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Embedded Systems are everywhere from the smartphones we hold in our hands to the satellites that hover around the earth. These embedded systems are being increasingly integrated into our personal and commercial infrastructures. More than 98% of all processors are implanted and used in embedded systems rather than traditional computers. As a result, security in embedded systems now more than ever has become a major concern. Since embedded systems are designed to be low-cost, fast and real-time, it would be appropriate to use tiny, lightweight and highly secure cryptographic algorithms. The KATAN and KTANTAN families of lightweight block ciphers are promising cryptographic options. In this paper, a sequential hardware design is developed under Handel-C. Taking a step further, Handel-C's parallel construct is taken advantage of to develop a parallel-pipelined hybrid implementation. Both sequential and parallel-pipelined implementations are tested under Altera Quartus to implement and analyze hardware designs in conjunction with DK Design Suite's Handel-C compiler. The developed designs are mapped to Altera's Stratix II that is one of the industry's highest bandwidth and density FPGAs. The results confirm that using Handel-C can provide faster implementations. The obtained results are promising and show better performance when compared with similar implementations, specifically the developed parallel-pipelined processor.
[ { "created": "Mon, 13 May 2019 12:52:28 GMT", "version": "v1" } ]
2019-05-16
[ [ "Shaikh", "Palwasha", "" ], [ "Damaj", "Issam", "" ] ]
Embedded Systems are everywhere from the smartphones we hold in our hands to the satellites that hover around the earth. These embedded systems are being increasingly integrated into our personal and commercial infrastructures. More than 98% of all processors are implanted and used in embedded systems rather than traditional computers. As a result, security in embedded systems now more than ever has become a major concern. Since embedded systems are designed to be low-cost, fast and real-time, it would be appropriate to use tiny, lightweight and highly secure cryptographic algorithms. The KATAN and KTANTAN families of lightweight block ciphers are promising cryptographic options. In this paper, a sequential hardware design is developed under Handel-C. Taking a step further, Handel-C's parallel construct is taken advantage of to develop a parallel-pipelined hybrid implementation. Both sequential and parallel-pipelined implementations are tested under Altera Quartus to implement and analyze hardware designs in conjunction with DK Design Suite's Handel-C compiler. The developed designs are mapped to Altera's Stratix II that is one of the industry's highest bandwidth and density FPGAs. The results confirm that using Handel-C can provide faster implementations. The obtained results are promising and show better performance when compared with similar implementations, specifically the developed parallel-pipelined processor.
2304.08924
Dr. Suryansh Kumar
Han Yao Choong, Suryansh Kumar, Luc Van Gool
Quantum Annealing for Single Image Super-Resolution
Accepted to IEEE/CVF CVPR 2023, NTIRE Challenge and Workshop. Draft info: 10 pages, 6 Figures, 2 Tables
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a quantum computing-based algorithm to solve the single image super-resolution (SISR) problem. One of the well-known classical approaches for SISR relies on the well-established patch-wise sparse modeling of the problem. Yet, this field's current state of affairs is that deep neural networks (DNNs) have demonstrated far superior results than traditional approaches. Nevertheless, quantum computing is expected to become increasingly prominent for machine learning problems soon. As a result, in this work, we take the opportunity to perform an early exploration of applying a quantum computing algorithm to this important image enhancement problem, i.e., SISR. Among the two paradigms of quantum computing, namely universal gate quantum computing and adiabatic quantum computing (AQC), the latter has been successfully applied to practical computer vision problems, in which quantum parallelism has been exploited to solve combinatorial optimization efficiently. This work demonstrates formulating quantum SISR as a sparse coding optimization problem, which is solved using quantum annealers accessed via the D-Wave Leap platform. The proposed AQC-based algorithm is demonstrated to achieve improved speed-up over a classical analog while maintaining comparable SISR accuracy.
[ { "created": "Tue, 18 Apr 2023 11:57:15 GMT", "version": "v1" } ]
2023-04-19
[ [ "Choong", "Han Yao", "" ], [ "Kumar", "Suryansh", "" ], [ "Van Gool", "Luc", "" ] ]
This paper proposes a quantum computing-based algorithm to solve the single image super-resolution (SISR) problem. One of the well-known classical approaches for SISR relies on the well-established patch-wise sparse modeling of the problem. Yet, this field's current state of affairs is that deep neural networks (DNNs) have demonstrated far superior results than traditional approaches. Nevertheless, quantum computing is expected to become increasingly prominent for machine learning problems soon. As a result, in this work, we take the opportunity to perform an early exploration of applying a quantum computing algorithm to this important image enhancement problem, i.e., SISR. Among the two paradigms of quantum computing, namely universal gate quantum computing and adiabatic quantum computing (AQC), the latter has been successfully applied to practical computer vision problems, in which quantum parallelism has been exploited to solve combinatorial optimization efficiently. This work demonstrates formulating quantum SISR as a sparse coding optimization problem, which is solved using quantum annealers accessed via the D-Wave Leap platform. The proposed AQC-based algorithm is demonstrated to achieve improved speed-up over a classical analog while maintaining comparable SISR accuracy.
1812.05647
Garegin Grigoryan
Garegin Grigoryan, Yaoqing Liu
LAMP: Prompt Layer 7 Attack Mitigation with Programmable Data Planes
null
null
10.1109/NCA.2018.8548136
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While there are various methods to detect application layer attacks or intrusion attempts on an individual end host, it is not efficient to provide all end hosts in the network with heavy-duty defense systems or software firewalls. In this work, we leverage the new concept of programmable data planes to react directly to alerts raised by a victim and prevent further attacks on the whole network by blocking the attack at the network edge. We call our design LAMP, Layer 7 Attack Mitigation with Programmable data planes. We implemented LAMP using the P4 data plane programming language and evaluated its effectiveness and efficiency in the Behavioral Model (bmv2) environment.
[ { "created": "Thu, 13 Dec 2018 19:30:46 GMT", "version": "v1" } ]
2018-12-17
[ [ "Grigoryan", "Garegin", "" ], [ "Liu", "Yaoqing", "" ] ]
While there are various methods to detect application layer attacks or intrusion attempts on an individual end host, it is not efficient to provide all end hosts in the network with heavy-duty defense systems or software firewalls. In this work, we leverage the new concept of programmable data planes to react directly to alerts raised by a victim and prevent further attacks on the whole network by blocking the attack at the network edge. We call our design LAMP, Layer 7 Attack Mitigation with Programmable data planes. We implemented LAMP using the P4 data plane programming language and evaluated its effectiveness and efficiency in the Behavioral Model (bmv2) environment.
1810.10656
Ben Zion Vatashsky
Ben Zion Vatashsky and Shimon Ullman
Understand, Compose and Respond - Answering Visual Questions by a Composition of Abstract Procedures
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An image-related question defines a specific visual task that is required in order to produce an appropriate answer. The answer may depend on a minor detail in the image and require complex reasoning and use of prior knowledge. When humans perform this task, they are able to do it in a flexible and robust manner, integrating modularly any novel visual capability with diverse options for various elaborations of the task. In contrast, current approaches to solve this problem by a machine are based on casting the problem as an end-to-end learning problem, which lacks such abilities. We present a different approach, inspired by the aforementioned human capabilities. The approach is based on the compositional structure of the question. The underlying idea is that a question has an abstract representation based on its structure, which is compositional in nature. The question can consequently be answered by a composition of procedures corresponding to its substructures. The basic elements of the representation are logical patterns, which are put together to represent the question. These patterns include a parametric representation for object classes, properties and relations. Each basic pattern is mapped into a basic procedure that includes meaningful visual tasks, and the patterns are composed to produce the overall answering procedure. The UnCoRd (Understand Compose and Respond) system, based on this approach, integrates existing detection and classification schemes for a set of object classes, properties and relations. These schemes are incorporated in a modular manner, providing elaborated answers and corrections for negative answers. In addition, an external knowledge base is queried for required common knowledge. We performed a qualitative analysis of the system, which demonstrates its representation capabilities and provides suggestions for future developments.
[ { "created": "Thu, 25 Oct 2018 00:03:09 GMT", "version": "v1" } ]
2018-10-26
[ [ "Vatashsky", "Ben Zion", "" ], [ "Ullman", "Shimon", "" ] ]
An image-related question defines a specific visual task that is required in order to produce an appropriate answer. The answer may depend on a minor detail in the image and require complex reasoning and use of prior knowledge. When humans perform this task, they are able to do it in a flexible and robust manner, integrating modularly any novel visual capability with diverse options for various elaborations of the task. In contrast, current approaches to solve this problem by a machine are based on casting the problem as an end-to-end learning problem, which lacks such abilities. We present a different approach, inspired by the aforementioned human capabilities. The approach is based on the compositional structure of the question. The underlying idea is that a question has an abstract representation based on its structure, which is compositional in nature. The question can consequently be answered by a composition of procedures corresponding to its substructures. The basic elements of the representation are logical patterns, which are put together to represent the question. These patterns include a parametric representation for object classes, properties and relations. Each basic pattern is mapped into a basic procedure that includes meaningful visual tasks, and the patterns are composed to produce the overall answering procedure. The UnCoRd (Understand Compose and Respond) system, based on this approach, integrates existing detection and classification schemes for a set of object classes, properties and relations. These schemes are incorporated in a modular manner, providing elaborated answers and corrections for negative answers. In addition, an external knowledge base is queried for required common knowledge. We performed a qualitative analysis of the system, which demonstrates its representation capabilities and provides suggestions for future developments.
2206.13057
Mohammad Roghani
Soheil Behnezhad, Mohammad Roghani, Aviad Rubinstein, Amin Saberi
Beating Greedy Matching in Sublinear Time
null
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
We study sublinear time algorithms for estimating the size of maximum matching in graphs. Our main result is a $(\frac{1}{2}+\Omega(1))$-approximation algorithm which can be implemented in $O(n^{1+\epsilon})$ time, where $n$ is the number of vertices and the constant $\epsilon > 0$ can be made arbitrarily small. The best known lower bound for the problem is $\Omega(n)$, which holds for any constant approximation. Existing algorithms either obtain the greedy bound of $\frac{1}{2}$-approximation [Behnezhad FOCS'21], or require some assumption on the maximum degree to run in $o(n^2)$-time [Yoshida, Yamamoto, and Ito STOC'09]. We improve over these by designing a less "adaptive" augmentation algorithm for maximum matching that might be of independent interest.
[ { "created": "Mon, 27 Jun 2022 05:45:03 GMT", "version": "v1" } ]
2022-06-28
[ [ "Behnezhad", "Soheil", "" ], [ "Roghani", "Mohammad", "" ], [ "Rubinstein", "Aviad", "" ], [ "Saberi", "Amin", "" ] ]
We study sublinear time algorithms for estimating the size of maximum matching in graphs. Our main result is a $(\frac{1}{2}+\Omega(1))$-approximation algorithm which can be implemented in $O(n^{1+\epsilon})$ time, where $n$ is the number of vertices and the constant $\epsilon > 0$ can be made arbitrarily small. The best known lower bound for the problem is $\Omega(n)$, which holds for any constant approximation. Existing algorithms either obtain the greedy bound of $\frac{1}{2}$-approximation [Behnezhad FOCS'21], or require some assumption on the maximum degree to run in $o(n^2)$-time [Yoshida, Yamamoto, and Ito STOC'09]. We improve over these by designing a less "adaptive" augmentation algorithm for maximum matching that might be of independent interest.
2001.04191
Johannes Klaus Fichte
Johannes K. Fichte, Markus Hecher, Patrick Thier, Stefan Woltran
Exploiting Database Management Systems and Treewidth for Counting
Under consideration in Theory and Practice of Logic Programming (TPLP)
Theory and Practice of Logic Programming 22 (2022) 128-157
10.1017/S147106842100003X
null
cs.AI cs.DS math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bounded treewidth is one of the most cited combinatorial invariants, which has been applied in the literature for solving several counting problems efficiently. A canonical counting problem is #SAT, which asks to count the satisfying assignments of a Boolean formula. Recent work shows that benchmarking instances for #SAT often have reasonably small treewidth. This paper deals with counting problems for instances of small treewidth. We introduce a general framework to solve counting questions based on state-of-the-art database management systems (DBMS). Our framework explicitly takes advantage of small treewidth by solving instances using dynamic programming (DP) on tree decompositions (TD). Therefore, we implement the concept of DP into a DBMS (PostgreSQL), since DP algorithms are already often given in terms of table manipulations in theory. This allows for elegant specifications of DP algorithms and the use of SQL to manipulate records and tables, which gives us a natural approach to bring DP algorithms into practice. To the best of our knowledge, we present the first approach to employ a DBMS for algorithms on TDs. A key advantage of our approach is that DBMS naturally allow one to deal with huge tables with a limited amount of main memory (RAM), parallelization, as well as suspending computation.
[ { "created": "Mon, 13 Jan 2020 12:45:22 GMT", "version": "v1" }, { "created": "Wed, 3 Feb 2021 16:54:41 GMT", "version": "v2" } ]
2023-06-22
[ [ "Fichte", "Johannes K.", "" ], [ "Hecher", "Markus", "" ], [ "Thier", "Patrick", "" ], [ "Woltran", "Stefan", "" ] ]
Bounded treewidth is one of the most cited combinatorial invariants, which has been applied in the literature for solving several counting problems efficiently. A canonical counting problem is #SAT, which asks to count the satisfying assignments of a Boolean formula. Recent work shows that benchmarking instances for #SAT often have reasonably small treewidth. This paper deals with counting problems for instances of small treewidth. We introduce a general framework to solve counting questions based on state-of-the-art database management systems (DBMS). Our framework explicitly takes advantage of small treewidth by solving instances using dynamic programming (DP) on tree decompositions (TD). Therefore, we implement the concept of DP into a DBMS (PostgreSQL), since DP algorithms are already often given in terms of table manipulations in theory. This allows for elegant specifications of DP algorithms and the use of SQL to manipulate records and tables, which gives us a natural approach to bring DP algorithms into practice. To the best of our knowledge, we present the first approach to employ a DBMS for algorithms on TDs. A key advantage of our approach is that DBMS naturally allow one to deal with huge tables with a limited amount of main memory (RAM), parallelization, as well as suspending computation.
1710.10057
Huang Lingxiao
L. Elisa Celis, Lingxiao Huang, Nisheeth K. Vishnoi
Multiwinner Voting with Fairness Constraints
The conference version of this paper appears in IJCAI-ECAI 2018
null
null
null
cs.CY cs.AI cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiwinner voting rules are used to select a small representative subset of candidates or items from a larger set given the preferences of voters. However, if candidates have sensitive attributes such as gender or ethnicity (when selecting a committee), or specified types such as political leaning (when selecting a subset of news items), an algorithm that chooses a subset by optimizing a multiwinner voting rule may be unbalanced in its selection -- it may under or over represent a particular gender or political orientation in the examples above. We introduce an algorithmic framework for multiwinner voting problems when there is an additional requirement that the selected subset should be "fair" with respect to a given set of attributes. Our framework provides the flexibility to (1) specify fairness with respect to multiple, non-disjoint attributes (e.g., ethnicity and gender) and (2) specify a score function. We study the computational complexity of this constrained multiwinner voting problem for monotone and submodular score functions and present several approximation algorithms and matching hardness of approximation results for various attribute group structure and types of score functions. We also present simulations that suggest that adding fairness constraints may not affect the scores significantly when compared to the unconstrained case.
[ { "created": "Fri, 27 Oct 2017 10:13:31 GMT", "version": "v1" }, { "created": "Mon, 18 Jun 2018 19:19:15 GMT", "version": "v2" } ]
2018-06-20
[ [ "Celis", "L. Elisa", "" ], [ "Huang", "Lingxiao", "" ], [ "Vishnoi", "Nisheeth K.", "" ] ]
Multiwinner voting rules are used to select a small representative subset of candidates or items from a larger set given the preferences of voters. However, if candidates have sensitive attributes such as gender or ethnicity (when selecting a committee), or specified types such as political leaning (when selecting a subset of news items), an algorithm that chooses a subset by optimizing a multiwinner voting rule may be unbalanced in its selection -- it may under or over represent a particular gender or political orientation in the examples above. We introduce an algorithmic framework for multiwinner voting problems when there is an additional requirement that the selected subset should be "fair" with respect to a given set of attributes. Our framework provides the flexibility to (1) specify fairness with respect to multiple, non-disjoint attributes (e.g., ethnicity and gender) and (2) specify a score function. We study the computational complexity of this constrained multiwinner voting problem for monotone and submodular score functions and present several approximation algorithms and matching hardness of approximation results for various attribute group structure and types of score functions. We also present simulations that suggest that adding fairness constraints may not affect the scores significantly when compared to the unconstrained case.
2105.14162
Zhibo Zhang
Ruiwen Li (co-first author), Zhibo Zhang (co-first author), Jiani Li, Chiheb Trabelsi, Scott Sanner, Jongseong Jang, Yeonjeong Jeong, Dongsub Shim
EDDA: Explanation-driven Data Augmentation to Improve Explanation Faithfulness
null
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent years have seen the introduction of a range of methods for post-hoc explainability of image classifier predictions. However, these post-hoc explanations may not always be faithful to classifier predictions, which poses a significant challenge when attempting to debug models based on such explanations. To this end, we seek a methodology that can improve the faithfulness of an explanation method with respect to model predictions which does not require ground truth explanations. We achieve this through a novel explanation-driven data augmentation (EDDA) technique that augments the training data with occlusions inferred from model explanations; this is based on the simple motivating principle that \emph{if} the explainer is faithful to the model \emph{then} occluding salient regions for the model prediction should decrease the model confidence in the prediction, while occluding non-salient regions should not change the prediction. To verify that the proposed augmentation method has the potential to improve faithfulness, we evaluate EDDA using a variety of datasets and classification models. We demonstrate empirically that our approach leads to a significant increase of faithfulness, which can facilitate better debugging and successful deployment of image classification models in real-world applications.
[ { "created": "Sat, 29 May 2021 00:42:42 GMT", "version": "v1" }, { "created": "Sat, 19 Jun 2021 00:01:42 GMT", "version": "v2" }, { "created": "Fri, 24 Sep 2021 22:20:02 GMT", "version": "v3" } ]
2021-09-28
[ [ "Li", "Ruiwen", "", "co-first author" ], [ "Zhang", "Zhibo", "", "co-first author" ], [ "Li", "Jiani", "" ], [ "Trabelsi", "Chiheb", "" ], [ "Sanner", "Scott", "" ], [ "Jang", "Jongseong", "" ], [ "Jeong", "Yeonjeong", "" ], [ "Shim", "Dongsub", "" ] ]
Recent years have seen the introduction of a range of methods for post-hoc explainability of image classifier predictions. However, these post-hoc explanations may not always be faithful to classifier predictions, which poses a significant challenge when attempting to debug models based on such explanations. To this end, we seek a methodology that can improve the faithfulness of an explanation method with respect to model predictions which does not require ground truth explanations. We achieve this through a novel explanation-driven data augmentation (EDDA) technique that augments the training data with occlusions inferred from model explanations; this is based on the simple motivating principle that \emph{if} the explainer is faithful to the model \emph{then} occluding salient regions for the model prediction should decrease the model confidence in the prediction, while occluding non-salient regions should not change the prediction. To verify that the proposed augmentation method has the potential to improve faithfulness, we evaluate EDDA using a variety of datasets and classification models. We demonstrate empirically that our approach leads to a significant increase of faithfulness, which can facilitate better debugging and successful deployment of image classification models in real-world applications.
2207.13440
Siddhesh Khandelwal
Siddhesh Khandelwal and Leonid Sigal
Iterative Scene Graph Generation
25 pages, 10 images, 9 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The task of scene graph generation entails identifying object entities and their corresponding interaction predicates in a given image (or video). Due to the combinatorially large solution space, existing approaches to scene graph generation assume certain factorization of the joint distribution to make the estimation feasible (e.g., assuming that objects are conditionally independent of predicate predictions). However, this fixed factorization is not ideal under all scenarios (e.g., for images where an object entailed in interaction is small and not discernible on its own). In this work, we propose a novel framework for scene graph generation that addresses this limitation and introduces dynamic conditioning on the image, using message passing in a Markov Random Field. This is implemented as an iterative refinement procedure wherein each modification is conditioned on the graph generated in the previous iteration. This conditioning across refinement steps allows joint reasoning over entities and relations. This framework is realized via a novel and end-to-end trainable transformer-based architecture. In addition, the proposed framework can improve the performance of existing approaches. Through extensive experiments on the Visual Genome and Action Genome benchmark datasets, we show improved performance on scene graph generation.
[ { "created": "Wed, 27 Jul 2022 10:37:29 GMT", "version": "v1" } ]
2022-07-28
[ [ "Khandelwal", "Siddhesh", "" ], [ "Sigal", "Leonid", "" ] ]
The task of scene graph generation entails identifying object entities and their corresponding interaction predicates in a given image (or video). Due to the combinatorially large solution space, existing approaches to scene graph generation assume certain factorization of the joint distribution to make the estimation feasible (e.g., assuming that objects are conditionally independent of predicate predictions). However, this fixed factorization is not ideal under all scenarios (e.g., for images where an object entailed in interaction is small and not discernible on its own). In this work, we propose a novel framework for scene graph generation that addresses this limitation and introduces dynamic conditioning on the image, using message passing in a Markov Random Field. This is implemented as an iterative refinement procedure wherein each modification is conditioned on the graph generated in the previous iteration. This conditioning across refinement steps allows joint reasoning over entities and relations. This framework is realized via a novel and end-to-end trainable transformer-based architecture. In addition, the proposed framework can improve the performance of existing approaches. Through extensive experiments on the Visual Genome and Action Genome benchmark datasets, we show improved performance on scene graph generation.
2301.11074
Justin Goldston
Justin Goldston, Tomer Jordi Chaffer, Justyna Osowska, and Charles von Goins II
Digital Inheritance in Web3: A Case Study of Soulbound Tokens and the Social Recovery Pallet within the Polkadot and Kusama Ecosystems
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
In recent years, discussions centered around digital inheritance have increased among social media users and across blockchain ecosystems. As a result, digital assets such as social media content, cryptocurrencies, and non-fungible tokens have become increasingly valuable and widespread, leading to the need for clear and secure mechanisms for transferring these assets upon the testator's death or incapacitation. This study proposes a framework for digital inheritance using soulbound tokens and the social recovery pallet as a use case in the Polkadot and Kusama blockchain networks. The findings discussed within this study suggest that while soulbound tokens and the social recovery pallet offer a promising solution for creating a digital inheritance plan, they also raise important considerations for testators, digital executors, and developers. While further research is needed to fully understand the potential impacts and risks of other technologies such as artificial intelligence and quantum computing, this study provides a primer for users to begin planning a digital inheritance strategy and for developers to develop a more intuitive solution.
[ { "created": "Thu, 26 Jan 2023 13:12:45 GMT", "version": "v1" }, { "created": "Thu, 30 May 2024 18:50:09 GMT", "version": "v2" }, { "created": "Thu, 6 Jun 2024 21:30:09 GMT", "version": "v3" } ]
2024-06-10
[ [ "Goldston", "Justin", "" ], [ "Chaffer", "Tomer Jordi", "" ], [ "Osowska", "Justyna", "" ], [ "Goins", "Charles von", "II" ] ]
In recent years, discussions centered around digital inheritance have increased among social media users and across blockchain ecosystems. As a result, digital assets such as social media content, cryptocurrencies, and non-fungible tokens have become increasingly valuable and widespread, leading to the need for clear and secure mechanisms for transferring these assets upon the testator's death or incapacitation. This study proposes a framework for digital inheritance using soulbound tokens and the social recovery pallet as a use case in the Polkadot and Kusama blockchain networks. The findings discussed within this study suggest that while soulbound tokens and the social recovery pallet offer a promising solution for creating a digital inheritance plan, they also raise important considerations for testators, digital executors, and developers. While further research is needed to fully understand the potential impacts and risks of other technologies such as artificial intelligence and quantum computing, this study provides a primer for users to begin planning a digital inheritance strategy and for developers to develop a more intuitive solution.
2301.06201
Yufei Huang
Yufei Huang, and Mohsen A. Jafari
Risk-aware Vehicle Motion Planning Using Bayesian LSTM-Based Model Predictive Control
12 pages, 17 figures
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the probabilistic traffic environment is a vital challenge for the motion planning of autonomous vehicles. To make feasible control decisions, forecasting future trajectories of adjacent cars is essential for intelligent vehicles to assess potential conflicts and react to reduce the risk. This paper first introduces a Bayesian Long Short-term Memory (BLSTM) model to learn human drivers' behaviors and habits from their historical trajectory data. The model predicts the probability distribution of surrounding vehicles' positions, which are used to estimate dynamic conflict risks. Next, a hybrid automaton is built to model the basic motions of a car, and the conflict risks are assessed for real-time state-space transitions based on environmental information. Finally, a BLSTM-based Model Predictive Control (MPC) is built to navigate vehicles through safe paths with the least predicted conflict risk. By merging BLSTM with MPC, the designed neural-based MPC overcomes the drawback that traditional MPC struggles to model uncertain conflict risks. The simulation results show that our proposed BLSTM-based MPC performs better than human drivers because it can foresee potential conflicts and take action to avoid them.
[ { "created": "Sun, 15 Jan 2023 22:11:14 GMT", "version": "v1" } ]
2023-01-18
[ [ "Huang", "Yufei", "" ], [ "Jafari", "Mohsen A.", "" ] ]
Understanding the probabilistic traffic environment is a vital challenge for the motion planning of autonomous vehicles. To make feasible control decisions, forecasting future trajectories of adjacent cars is essential for intelligent vehicles to assess potential conflicts and react to reduce the risk. This paper first introduces a Bayesian Long Short-term Memory (BLSTM) model to learn human drivers' behaviors and habits from their historical trajectory data. The model predicts the probability distribution of surrounding vehicles' positions, which are used to estimate dynamic conflict risks. Next, a hybrid automaton is built to model the basic motions of a car, and the conflict risks are assessed for real-time state-space transitions based on environmental information. Finally, a BLSTM-based Model Predictive Control (MPC) is built to navigate vehicles through safe paths with the least predicted conflict risk. By merging BLSTM with MPC, the designed neural-based MPC overcomes the drawback that traditional MPC struggles to model uncertain conflict risks. The simulation results show that our proposed BLSTM-based MPC performs better than human drivers because it can foresee potential conflicts and take action to avoid them.
1112.2257
Mina Rahbari
Mina Rahbari and Mohammad Ali Jabreil Jamali
Efficient Detection of Sybil Attack Based on Cryptography in Vanet
null
International Journal of Network Security & Its Applications (IJNSA), Vol.3, No.6, November 2011
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vehicular communications play a substantial role in providing safe transportation by means of safety message exchange. Researchers have proposed several solutions for securing safety messages. Protocols based on a fixed key infrastructure are more efficient to implement and maintain stronger security in comparison with dynamic structures. The purpose of this paper is to present a method based on a fixed key infrastructure for detecting the impersonation attack, in other words, the Sybil attack, in vehicular ad hoc networks. This attack has a great impact on the performance of the network. The proposed method uses a cryptography mechanism to detect the Sybil attack. Finally, the results of this approach are reviewed using the Matlab simulator. The method has low delay for Sybil attack detection, because most operations are done in the Certification Authority, so the proposed scheme is an efficient method for detecting the Sybil attack.
[ { "created": "Sat, 10 Dec 2011 07:43:42 GMT", "version": "v1" } ]
2011-12-13
[ [ "Rahbari", "Mina", "" ], [ "Jamali", "Mohammad Ali Jabreil", "" ] ]
Vehicular communications play a substantial role in providing safe transportation by means of safety message exchange. Researchers have proposed several solutions for securing safety messages. Protocols based on a fixed key infrastructure are more efficient to implement and maintain stronger security in comparison with dynamic structures. The purpose of this paper is to present a method based on a fixed key infrastructure for detecting the impersonation attack, in other words, the Sybil attack, in vehicular ad hoc networks. This attack has a great impact on the performance of the network. The proposed method uses a cryptography mechanism to detect the Sybil attack. Finally, the results of this approach are reviewed using the Matlab simulator. The method has low delay for Sybil attack detection, because most operations are done in the Certification Authority, so the proposed scheme is an efficient method for detecting the Sybil attack.
2211.15810
Vagner Santana
Leandro Marega Ferreira Otani and Vagner Figueredo de Santana
Practical Challenges in Indoor Mobile Recommendation
10 pages, 3 figures, 2 tables
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recommendation systems are present in multiple contexts such as e-commerce, websites, and media streaming services. As scenarios get more complex, techniques and tools have to consider a number of variables. When recommending services/products to mobile users while they are in indoor environments next to the object of the recommendation, variables such as location, interests, route, and interaction logs also need to be taken into account. In this context, this work discusses the practical challenges inherent to indoor mobile recommendation (e.g., mall, parking lot, museum, among others), grounded on a case study and a systematic review. With the presented results, we expect to support practitioners in the task of defining the proper approach, technology, and notification method when recommending services/products to mobile users in indoor environments.
[ { "created": "Mon, 28 Nov 2022 22:36:00 GMT", "version": "v1" } ]
2022-11-30
[ [ "Otani", "Leandro Marega Ferreira", "" ], [ "de Santana", "Vagner Figueredo", "" ] ]
Recommendation systems are present in multiple contexts such as e-commerce, websites, and media streaming services. As scenarios get more complex, techniques and tools have to consider a number of variables. When recommending services/products to mobile users while they are in indoor environments next to the object of the recommendation, variables such as location, interests, route, and interaction logs also need to be taken into account. In this context, this work discusses the practical challenges inherent to indoor mobile recommendation (e.g., mall, parking lot, museum, among others), grounded on a case study and a systematic review. With the presented results, we expect to support practitioners in the task of defining the proper approach, technology, and notification method when recommending services/products to mobile users in indoor environments.
1401.5555
Monowar Hasan
Hina Tabassum, Zaher Dawy, Ekram Hossain, Mohamed-Slim Alouini
Interference Statistics and Capacity Analysis for Uplink Transmission in Two-Tier Small Cell Networks: A Geometric Probability Approach
We have withdrawn the paper due to some limitations
null
null
null
cs.IT cs.NI math.IT math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Small cell networks are evolving as an economically viable solution to ameliorate the capacity and coverage of state-of-the-art wireless cellular systems. Nonetheless, the dense and unplanned deployment of the small cells (e.g., femtocells, picocells) with restricted user access significantly increases the impact of interference on the overall network performance. To this end, this paper presents a novel framework to derive the statistics of the interference considering dedicated and shared spectrum access for uplink transmissions in two-tier small cell networks such as the macrocell-femtocell networks. The derived expressions are validated by the Monte-Carlo simulations. Numerical results are generated to assess the feasibility of shared and dedicated spectrum access in femtocells under varying traffic load and spectral reuse scenarios.
[ { "created": "Wed, 22 Jan 2014 05:18:23 GMT", "version": "v1" }, { "created": "Thu, 30 Jan 2014 20:58:39 GMT", "version": "v2" } ]
2014-01-31
[ [ "Tabassum", "Hina", "" ], [ "Dawy", "Zaher", "" ], [ "Hossain", "Ekram", "" ], [ "Alouini", "Mohamed-Slim", "" ] ]
Small cell networks are evolving as an economically viable solution to ameliorate the capacity and coverage of state-of-the-art wireless cellular systems. Nonetheless, the dense and unplanned deployment of small cells (e.g., femtocells, picocells) with restricted user access significantly increases the impact of interference on the overall network performance. To this end, this paper presents a novel framework to derive the statistics of the interference, considering dedicated and shared spectrum access for uplink transmissions in two-tier small cell networks such as macrocell-femtocell networks. The derived expressions are validated by Monte Carlo simulations. Numerical results are generated to assess the feasibility of shared and dedicated spectrum access in femtocells under varying traffic load and spectral reuse scenarios.
2109.10376
Dominik Dold
Victor Caceres Chian, Marcel Hildebrandt, Thomas Runkler, Dominik Dold
Learning through structure: towards deep neuromorphic knowledge graph embeddings
Accepted for publication at the International Conference on Neuromorphic Computing (ICNC 2021)
null
10.1109/ICNC52316.2021.9607968
null
cs.NE cs.AI cs.LG q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computing latent representations for graph-structured data is a ubiquitous learning task in many industrial and academic applications, ranging from molecule synthesis to social network analysis and recommender systems. Knowledge graphs are among the most popular and widely used data representations related to the Semantic Web. Next to structuring factual knowledge in a machine-readable format, knowledge graphs serve as the backbone of many artificial intelligence applications and allow the ingestion of context information into various learning algorithms. Graph neural networks attempt to encode graph structures in low-dimensional vector spaces via a message passing heuristic between neighboring nodes. In recent years, a multitude of different graph neural network architectures have demonstrated ground-breaking performance in many learning tasks. In this work, we propose a strategy to map deep graph learning architectures for knowledge graph reasoning to neuromorphic architectures. Based on the insight that randomly initialized and untrained (i.e., frozen) graph neural networks are able to preserve local graph structures, we compose a frozen neural network with shallow knowledge graph embedding models. We experimentally show that already on conventional computing hardware, this leads to a significant speedup and memory reduction while maintaining a competitive performance level. Moreover, we extend the frozen architecture to spiking neural networks, introducing a novel, event-based and highly sparse knowledge graph embedding algorithm that is suitable for implementation in neuromorphic hardware.
[ { "created": "Tue, 21 Sep 2021 18:01:04 GMT", "version": "v1" } ]
2023-08-25
[ [ "Chian", "Victor Caceres", "" ], [ "Hildebrandt", "Marcel", "" ], [ "Runkler", "Thomas", "" ], [ "Dold", "Dominik", "" ] ]
Computing latent representations for graph-structured data is a ubiquitous learning task in many industrial and academic applications, ranging from molecule synthesis to social network analysis and recommender systems. Knowledge graphs are among the most popular and widely used data representations related to the Semantic Web. Next to structuring factual knowledge in a machine-readable format, knowledge graphs serve as the backbone of many artificial intelligence applications and allow the ingestion of context information into various learning algorithms. Graph neural networks attempt to encode graph structures in low-dimensional vector spaces via a message passing heuristic between neighboring nodes. In recent years, a multitude of different graph neural network architectures have demonstrated ground-breaking performance in many learning tasks. In this work, we propose a strategy to map deep graph learning architectures for knowledge graph reasoning to neuromorphic architectures. Based on the insight that randomly initialized and untrained (i.e., frozen) graph neural networks are able to preserve local graph structures, we compose a frozen neural network with shallow knowledge graph embedding models. We experimentally show that already on conventional computing hardware, this leads to a significant speedup and memory reduction while maintaining a competitive performance level. Moreover, we extend the frozen architecture to spiking neural networks, introducing a novel, event-based and highly sparse knowledge graph embedding algorithm that is suitable for implementation in neuromorphic hardware.
2011.13917
Jennifer J. Sun
Jennifer J. Sun, Ann Kennedy, Eric Zhan, David J. Anderson, Yisong Yue, Pietro Perona
Task Programming: Learning Data Efficient Behavior Representations
To appear in as an Oral in CVPR 2021. Code: https://github.com/neuroethology/TREBA. Project page: https://sites.google.com/view/task-programming
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Specialized domain knowledge is often necessary to accurately annotate training sets for in-depth analysis, but can be burdensome and time-consuming to acquire from domain experts. This issue arises prominently in automated behavior analysis, in which agent movements or actions of interest are detected from video tracking data. To reduce annotation effort, we present TREBA: a method to learn annotation-sample efficient trajectory embedding for behavior analysis, based on multi-task self-supervised learning. The tasks in our method can be efficiently engineered by domain experts through a process we call "task programming", which uses programs to explicitly encode structured knowledge from domain experts. Total domain expert effort can be reduced by exchanging data annotation time for the construction of a small number of programmed tasks. We evaluate this trade-off using data from behavioral neuroscience, in which specialized domain knowledge is used to identify behaviors. We present experimental results in three datasets across two domains: mice and fruit flies. Using embeddings from TREBA, we reduce annotation burden by up to a factor of 10 without compromising accuracy compared to state-of-the-art features. Our results thus suggest that task programming and self-supervision can be an effective way to reduce annotation effort for domain experts.
[ { "created": "Fri, 27 Nov 2020 18:58:32 GMT", "version": "v1" }, { "created": "Mon, 29 Mar 2021 17:59:47 GMT", "version": "v2" } ]
2021-03-30
[ [ "Sun", "Jennifer J.", "" ], [ "Kennedy", "Ann", "" ], [ "Zhan", "Eric", "" ], [ "Anderson", "David J.", "" ], [ "Yue", "Yisong", "" ], [ "Perona", "Pietro", "" ] ]
Specialized domain knowledge is often necessary to accurately annotate training sets for in-depth analysis, but can be burdensome and time-consuming to acquire from domain experts. This issue arises prominently in automated behavior analysis, in which agent movements or actions of interest are detected from video tracking data. To reduce annotation effort, we present TREBA: a method to learn annotation-sample efficient trajectory embedding for behavior analysis, based on multi-task self-supervised learning. The tasks in our method can be efficiently engineered by domain experts through a process we call "task programming", which uses programs to explicitly encode structured knowledge from domain experts. Total domain expert effort can be reduced by exchanging data annotation time for the construction of a small number of programmed tasks. We evaluate this trade-off using data from behavioral neuroscience, in which specialized domain knowledge is used to identify behaviors. We present experimental results in three datasets across two domains: mice and fruit flies. Using embeddings from TREBA, we reduce annotation burden by up to a factor of 10 without compromising accuracy compared to state-of-the-art features. Our results thus suggest that task programming and self-supervision can be an effective way to reduce annotation effort for domain experts.
1508.02812
Jiamou Liu
Jiamou Liu and Ziheng Wei
A Game of Attribute Decomposition for Software Architecture Design
23 pages, 5 figures, a shorter version to appear at 12th International Colloquium on Theoretical Aspects of Computing (ICTAC 2015)
null
null
null
cs.GT cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Attribute-driven software architecture design aims to provide decision support by taking into account the quality attributes of software. A central question in this process is: what architecture design best fulfills the desirable software requirements? To answer this question, a system designer needs to make tradeoffs among several potentially conflicting quality attributes. Such decisions are normally ad hoc and rely heavily on experience. We propose a mathematical approach to tackle this problem. Game theory naturally provides the basic language: players represent requirements, and strategies involve setting up coalitions among the players. In this way we propose a novel model, called the decomposition game, for attribute-driven design. We present its solution concept based on the notions of cohesion and expansion-freedom and prove that a solution always exists. We then investigate the computational complexity of obtaining a solution. The game model and the algorithms may serve as a general framework for providing useful guidance for software architecture design. We present our results through running examples and a case study on a real-life software project.
[ { "created": "Wed, 12 Aug 2015 05:07:04 GMT", "version": "v1" } ]
2015-08-13
[ [ "Liu", "Jiamou", "" ], [ "Wei", "Ziheng", "" ] ]
Attribute-driven software architecture design aims to provide decision support by taking into account the quality attributes of software. A central question in this process is: what architecture design best fulfills the desirable software requirements? To answer this question, a system designer needs to make tradeoffs among several potentially conflicting quality attributes. Such decisions are normally ad hoc and rely heavily on experience. We propose a mathematical approach to tackle this problem. Game theory naturally provides the basic language: players represent requirements, and strategies involve setting up coalitions among the players. In this way we propose a novel model, called the decomposition game, for attribute-driven design. We present its solution concept based on the notions of cohesion and expansion-freedom and prove that a solution always exists. We then investigate the computational complexity of obtaining a solution. The game model and the algorithms may serve as a general framework for providing useful guidance for software architecture design. We present our results through running examples and a case study on a real-life software project.
1607.02637
Shrisha Rao
Samiksha Sarwari, Shrisha Rao
Network Flows Under Thermal Restrictions
null
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We define a \emph{thermal network}, a network where the flow functionality of a node depends upon its temperature. This model is inspired by several types of real-life networks, and generalizes some conventional network models wherein nodes have fixed capacities and the problem is to maximize the flow through the network. In a thermal network, the temperature of a node increases as traffic moves through it, and nodes may also cool spontaneously over time, or by employing cooling packets. We analyze the problem of maximizing the flow from a source to a sink in both these cases, for a holistic view of the single-source-single-sink dynamic flow problem in a thermal network. We study certain properties that such a thermal network exhibits, and give closed-form solutions for the maximum flow that can be achieved through such a network.
[ { "created": "Sat, 9 Jul 2016 17:25:37 GMT", "version": "v1" } ]
2016-07-12
[ [ "Sarwari", "Samiksha", "" ], [ "Rao", "Shrisha", "" ] ]
We define a \emph{thermal network}, a network where the flow functionality of a node depends upon its temperature. This model is inspired by several types of real-life networks, and generalizes some conventional network models wherein nodes have fixed capacities and the problem is to maximize the flow through the network. In a thermal network, the temperature of a node increases as traffic moves through it, and nodes may also cool spontaneously over time, or by employing cooling packets. We analyze the problem of maximizing the flow from a source to a sink in both these cases, for a holistic view of the single-source-single-sink dynamic flow problem in a thermal network. We study certain properties that such a thermal network exhibits, and give closed-form solutions for the maximum flow that can be achieved through such a network.
1712.04581
Nikhil Bansal
Nikhil Bansal and Anupam Gupta
Potential-Function Proofs for First-Order Methods
null
null
null
null
cs.LG cs.DS math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This note discusses proofs for convergence of first-order methods based on simple potential-function arguments. We cover methods like gradient descent (for both smooth and non-smooth settings), mirror descent, and some accelerated variants.
[ { "created": "Wed, 13 Dec 2017 01:10:13 GMT", "version": "v1" }, { "created": "Thu, 7 Mar 2019 21:55:56 GMT", "version": "v2" }, { "created": "Sun, 2 Jun 2019 19:36:25 GMT", "version": "v3" } ]
2019-06-04
[ [ "Bansal", "Nikhil", "" ], [ "Gupta", "Anupam", "" ] ]
This note discusses proofs for convergence of first-order methods based on simple potential-function arguments. We cover methods like gradient descent (for both smooth and non-smooth settings), mirror descent, and some accelerated variants.
2309.07015
Rabih Zbib
Federico Retyk, Hermenegildo Fabregat, Juan Aizpuru, Mariana Taglio, Rabih Zbib
R\'esum\'e Parsing as Hierarchical Sequence Labeling: An Empirical Study
RecSys in HR'23: The 3rd Workshop on Recommender Systems for Human Resources, in conjunction with the 17th ACM Conference on Recommender Systems, September 18--22, 2023, Singapore, Singapore
null
null
null
cs.CL cs.AI cs.IR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Extracting information from r\'esum\'es is typically formulated as a two-stage problem, where the document is first segmented into sections and then each section is processed individually to extract the target entities. Instead, we cast the whole problem as sequence labeling in two levels -- lines and tokens -- and study model architectures for solving both tasks simultaneously. We build high-quality r\'esum\'e parsing corpora in English, French, Chinese, Spanish, German, Portuguese, and Swedish. Based on these corpora, we present experimental results that demonstrate the effectiveness of the proposed models for the information extraction task, outperforming approaches introduced in previous work. We conduct an ablation study of the proposed architectures. We also analyze both model performance and resource efficiency, and describe the trade-offs for model deployment in the context of a production environment.
[ { "created": "Wed, 13 Sep 2023 15:17:29 GMT", "version": "v1" } ]
2023-09-14
[ [ "Retyk", "Federico", "" ], [ "Fabregat", "Hermenegildo", "" ], [ "Aizpuru", "Juan", "" ], [ "Taglio", "Mariana", "" ], [ "Zbib", "Rabih", "" ] ]
Extracting information from r\'esum\'es is typically formulated as a two-stage problem, where the document is first segmented into sections and then each section is processed individually to extract the target entities. Instead, we cast the whole problem as sequence labeling in two levels -- lines and tokens -- and study model architectures for solving both tasks simultaneously. We build high-quality r\'esum\'e parsing corpora in English, French, Chinese, Spanish, German, Portuguese, and Swedish. Based on these corpora, we present experimental results that demonstrate the effectiveness of the proposed models for the information extraction task, outperforming approaches introduced in previous work. We conduct an ablation study of the proposed architectures. We also analyze both model performance and resource efficiency, and describe the trade-offs for model deployment in the context of a production environment.
0908.3315
Nelma Moreira
Marco Almeida, Nelma Moreira, and Rog\'erio Reis
Exact generation of acyclic deterministic finite automata
DCFS'08
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a canonical representation for trim acyclic deterministic finite automata (Adfa) with n states over an alphabet of k symbols. Using this normal form, we present a backtracking algorithm for the exact generation of Adfas. This algorithm is a non-trivial adaptation of the algorithm for the exact generation of minimal acyclic deterministic finite automata presented by Almeida et al.
[ { "created": "Sun, 23 Aug 2009 16:59:49 GMT", "version": "v1" } ]
2009-08-25
[ [ "Almeida", "Marco", "" ], [ "Moreira", "Nelma", "" ], [ "Reis", "Rogério", "" ] ]
We give a canonical representation for trim acyclic deterministic finite automata (Adfa) with n states over an alphabet of k symbols. Using this normal form, we present a backtracking algorithm for the exact generation of Adfas. This algorithm is a non-trivial adaptation of the algorithm for the exact generation of minimal acyclic deterministic finite automata presented by Almeida et al.
2303.10725
Md Yousuf Harun
Md Yousuf Harun, Jhair Gallardo, Tyler L. Hayes, Ronald Kemker, Christopher Kanan
SIESTA: Efficient Online Continual Learning with Sleep
Accepted to TMLR 2023
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In supervised continual learning, a deep neural network (DNN) is updated with an ever-growing data stream. Unlike the offline setting where data is shuffled, we cannot make any distributional assumptions about the data stream. Ideally, only one pass through the dataset is needed for computational efficiency. However, existing methods are inadequate and make many assumptions that cannot be made for real-world applications, while simultaneously failing to improve computational efficiency. In this paper, we propose SIESTA, a novel continual learning method based on a wake/sleep training framework, which is well aligned with the needs of on-device learning. The major goal of SIESTA is to advance compute-efficient continual learning so that DNNs can be updated efficiently using far less time and energy. The principal innovations of SIESTA are: 1) rapid online updates using a rehearsal-free, backpropagation-free, and data-driven network update rule during its wake phase, and 2) expedited memory consolidation using a compute-restricted rehearsal policy during its sleep phase. For memory efficiency, SIESTA adapts latent rehearsal using memory indexing from REMIND. Compared to REMIND and prior art, SIESTA is far more computationally efficient, enabling continual learning on ImageNet-1K in under 2 hours on a single GPU; moreover, in the augmentation-free setting it matches the performance of the offline learner, a milestone critical to driving adoption of continual learning in real-world applications.
[ { "created": "Sun, 19 Mar 2023 17:46:40 GMT", "version": "v1" }, { "created": "Fri, 25 Aug 2023 19:58:50 GMT", "version": "v2" }, { "created": "Thu, 2 Nov 2023 11:15:41 GMT", "version": "v3" } ]
2023-11-03
[ [ "Harun", "Md Yousuf", "" ], [ "Gallardo", "Jhair", "" ], [ "Hayes", "Tyler L.", "" ], [ "Kemker", "Ronald", "" ], [ "Kanan", "Christopher", "" ] ]
In supervised continual learning, a deep neural network (DNN) is updated with an ever-growing data stream. Unlike the offline setting where data is shuffled, we cannot make any distributional assumptions about the data stream. Ideally, only one pass through the dataset is needed for computational efficiency. However, existing methods are inadequate and make many assumptions that cannot be made for real-world applications, while simultaneously failing to improve computational efficiency. In this paper, we propose SIESTA, a novel continual learning method based on a wake/sleep training framework, which is well aligned with the needs of on-device learning. The major goal of SIESTA is to advance compute-efficient continual learning so that DNNs can be updated efficiently using far less time and energy. The principal innovations of SIESTA are: 1) rapid online updates using a rehearsal-free, backpropagation-free, and data-driven network update rule during its wake phase, and 2) expedited memory consolidation using a compute-restricted rehearsal policy during its sleep phase. For memory efficiency, SIESTA adapts latent rehearsal using memory indexing from REMIND. Compared to REMIND and prior art, SIESTA is far more computationally efficient, enabling continual learning on ImageNet-1K in under 2 hours on a single GPU; moreover, in the augmentation-free setting it matches the performance of the offline learner, a milestone critical to driving adoption of continual learning in real-world applications.
1805.07035
Morad Behandish
Morad Behandish, Saigopal Nelaturi, and Johan de Kleer
Automated Process Planning for Hybrid Manufacturing
Special Issue on symposium on Solid and Physical Modeling (SPM'2018)
Journal of Computer-Aided Design, 2018
10.1016/j.cad.2018.04.022
null
cs.CG cs.AI cs.GR cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hybrid manufacturing (HM) technologies combine additive and subtractive manufacturing (AM/SM) capabilities, leveraging AM's strengths in fabricating complex geometries and SM's precision and quality to produce finished parts. We present a systematic approach to automated computer-aided process planning (CAPP) for HM that can identify non-trivial, qualitatively distinct, and cost-optimal combinations of AM/SM modalities. A multimodal HM process plan is represented by a finite Boolean expression of AM and SM manufacturing primitives, such that the expression evaluates to an 'as-manufactured' artifact. We show that primitives that respect spatial constraints such as accessibility and collision avoidance may be constructed by solving inverse configuration space problems on the 'as-designed' artifact and manufacturing instruments. The primitives generate a finite Boolean algebra (FBA) that enumerates the entire search space for planning. The FBA's canonical intersection terms (i.e., 'atoms') provide the complete domain decomposition to reframe manufacturability analysis and process planning into purely symbolic reasoning, once a subcollection of atoms is found to be interchangeable with the design target. The approach subsumes unimodal (all-AM or all-SM) process planning as special cases. We demonstrate the practical potency of our framework and its computational efficiency when applied to process planning of complex 3D parts with dramatically different AM and SM instruments.
[ { "created": "Fri, 18 May 2018 03:27:35 GMT", "version": "v1" } ]
2018-05-21
[ [ "Behandish", "Morad", "" ], [ "Nelaturi", "Saigopal", "" ], [ "de Kleer", "Johan", "" ] ]
Hybrid manufacturing (HM) technologies combine additive and subtractive manufacturing (AM/SM) capabilities, leveraging AM's strengths in fabricating complex geometries and SM's precision and quality to produce finished parts. We present a systematic approach to automated computer-aided process planning (CAPP) for HM that can identify non-trivial, qualitatively distinct, and cost-optimal combinations of AM/SM modalities. A multimodal HM process plan is represented by a finite Boolean expression of AM and SM manufacturing primitives, such that the expression evaluates to an 'as-manufactured' artifact. We show that primitives that respect spatial constraints such as accessibility and collision avoidance may be constructed by solving inverse configuration space problems on the 'as-designed' artifact and manufacturing instruments. The primitives generate a finite Boolean algebra (FBA) that enumerates the entire search space for planning. The FBA's canonical intersection terms (i.e., 'atoms') provide the complete domain decomposition to reframe manufacturability analysis and process planning into purely symbolic reasoning, once a subcollection of atoms is found to be interchangeable with the design target. The approach subsumes unimodal (all-AM or all-SM) process planning as special cases. We demonstrate the practical potency of our framework and its computational efficiency when applied to process planning of complex 3D parts with dramatically different AM and SM instruments.
2006.03372
Qiangqiang Dai
Qiangqiang Dai, Rong-Hua Li, Lu Qin, Guoren Wang, Weihua Yang, Zhiwei Zhang and Ye Yuan
Scaling Up Distance-generalized Core Decomposition
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Core decomposition is a fundamental operator in network analysis. In this paper, we study the problem of computing distance-generalized core decomposition on a network. A distance-generalized core, also termed $(k, h)$-core, is a maximal subgraph in which every vertex has at least $k$ other vertices at distance no larger than $h$. The state-of-the-art algorithm for solving this problem is based on a peeling technique which iteratively removes the vertex (denoted by $v$) from the graph that has the smallest $h$-degree. The $h$-degree of a vertex $v$ denotes the number of other vertices that are reachable from $v$ within $h$ hops. Such a peeling algorithm, however, needs to frequently recompute the $h$-degrees of $v$'s neighbors after deleting $v$, which is typically very costly for a large $h$. To overcome this limitation, we propose an efficient peeling algorithm based on a novel $h$-degree updating technique. Instead of recomputing the $h$-degrees, our algorithm can dynamically maintain the $h$-degrees for all vertices via exploring a very small subgraph, after peeling a vertex. We show that such an $h$-degree updating procedure can be efficiently implemented by an elegant bitmap technique. In addition, we also propose a sampling-based algorithm and a parallelization technique to further improve the efficiency. Finally, we conduct extensive experiments on 12 real-world graphs to evaluate our algorithms. The results show that, when $h\ge 3$, our exact and sampling-based algorithms can achieve up to $10\times$ and $100\times$ speedup over the state-of-the-art algorithm, respectively.
[ { "created": "Fri, 5 Jun 2020 11:19:17 GMT", "version": "v1" }, { "created": "Fri, 22 Oct 2021 05:18:23 GMT", "version": "v2" } ]
2021-10-25
[ [ "Dai", "Qiangqiang", "" ], [ "Li", "Rong-Hua", "" ], [ "Qin", "Lu", "" ], [ "Wang", "Guoren", "" ], [ "Yang", "Weihua", "" ], [ "Zhang", "Zhiwei", "" ], [ "Yuan", "Ye", "" ] ]
Core decomposition is a fundamental operator in network analysis. In this paper, we study the problem of computing distance-generalized core decomposition on a network. A distance-generalized core, also termed $(k, h)$-core, is a maximal subgraph in which every vertex has at least $k$ other vertices at distance no larger than $h$. The state-of-the-art algorithm for solving this problem is based on a peeling technique which iteratively removes the vertex (denoted by $v$) from the graph that has the smallest $h$-degree. The $h$-degree of a vertex $v$ denotes the number of other vertices that are reachable from $v$ within $h$ hops. Such a peeling algorithm, however, needs to frequently recompute the $h$-degrees of $v$'s neighbors after deleting $v$, which is typically very costly for a large $h$. To overcome this limitation, we propose an efficient peeling algorithm based on a novel $h$-degree updating technique. Instead of recomputing the $h$-degrees, our algorithm can dynamically maintain the $h$-degrees for all vertices via exploring a very small subgraph, after peeling a vertex. We show that such an $h$-degree updating procedure can be efficiently implemented by an elegant bitmap technique. In addition, we also propose a sampling-based algorithm and a parallelization technique to further improve the efficiency. Finally, we conduct extensive experiments on 12 real-world graphs to evaluate our algorithms. The results show that, when $h\ge 3$, our exact and sampling-based algorithms can achieve up to $10\times$ and $100\times$ speedup over the state-of-the-art algorithm, respectively.
1507.03811
Liliana Lo Presti
Liliana Lo Presti and Marco La Cascia
Ensemble of Hankel Matrices for Face Emotion Recognition
Paper to appear in Proc. of ICIAP 2015. arXiv admin note: text overlap with arXiv:1506.05001
null
null
null
cs.CV cs.HC cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a face emotion is considered as the result of the composition of multiple concurrent signals, each corresponding to the movements of a specific facial muscle. These concurrent signals are represented by means of a set of multi-scale appearance features that might be correlated with one or more concurrent signals. The extraction of these appearance features from a sequence of face images yields a set of time series. This paper proposes to use the dynamics regulating each appearance feature time series to recognize different face emotions. To this purpose, an ensemble of Hankel matrices corresponding to the extracted time series is used for emotion classification within a framework that combines a nearest neighbor classifier and a majority vote scheme. Experimental results on a publicly available dataset show that the adopted representation is promising and yields state-of-the-art accuracy in emotion classification.
[ { "created": "Tue, 14 Jul 2015 11:26:31 GMT", "version": "v1" } ]
2015-07-19
[ [ "Presti", "Liliana Lo", "" ], [ "La Cascia", "Marco", "" ] ]
In this paper, a face emotion is considered as the result of the composition of multiple concurrent signals, each corresponding to the movements of a specific facial muscle. These concurrent signals are represented by means of a set of multi-scale appearance features that might be correlated with one or more concurrent signals. The extraction of these appearance features from a sequence of face images yields a set of time series. This paper proposes to use the dynamics regulating each appearance feature time series to recognize different face emotions. To this purpose, an ensemble of Hankel matrices corresponding to the extracted time series is used for emotion classification within a framework that combines a nearest neighbor classifier and a majority vote scheme. Experimental results on a publicly available dataset show that the adopted representation is promising and yields state-of-the-art accuracy in emotion classification.
1807.11694
Zenan Ling
Zenan Ling, Xing He, Robert C. Qiu
Spectrum concentration in deep residual learning: a free probability approach
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We revisit the initialization of deep residual networks (ResNets) by introducing a novel analytical tool from free probability to the deep learning community. This tool deals with non-Hermitian random matrices, rather than their conventional Hermitian counterparts in the literature. As a consequence, this new tool enables us to evaluate the singular value spectrum of the input-output Jacobian of a fully-connected deep ResNet in both the linear and nonlinear cases. With the powerful tool of free probability, we conduct an asymptotic analysis of the spectrum in the single-layer case, and then extend this analysis to the multi-layer case with an arbitrary number of layers. In particular, we propose to rescale the classical random initialization by the number of residual units, so that the spectrum has order $O(1)$ when compared with the large width and depth of the network. We empirically demonstrate that the proposed initialization scheme learns orders of magnitude faster than classical ones, attesting to the strong practical relevance of this investigation.
[ { "created": "Tue, 31 Jul 2018 07:49:59 GMT", "version": "v1" }, { "created": "Fri, 30 Nov 2018 10:34:20 GMT", "version": "v2" }, { "created": "Sun, 24 Feb 2019 09:43:46 GMT", "version": "v3" } ]
2019-02-26
[ [ "Ling", "Zenan", "" ], [ "He", "Xing", "" ], [ "Qiu", "Robert C.", "" ] ]
We revisit the initialization of deep residual networks (ResNets) by introducing a novel analytical tool from free probability to the deep learning community. This tool deals with non-Hermitian random matrices, rather than their conventional Hermitian counterparts in the literature. As a consequence, this new tool enables us to evaluate the singular value spectrum of the input-output Jacobian of a fully-connected deep ResNet for both linear and nonlinear cases. With the powerful tool of free probability, we conduct an asymptotic analysis of the spectrum in the single-layer case, and then extend this analysis to the multi-layer case with an arbitrary number of layers. In particular, we propose to rescale the classical random initialization by the number of residual units, so that the spectrum has order $O(1)$ with respect to the large width and depth of the network. We empirically demonstrate that the proposed initialization scheme learns orders of magnitude faster than the classical ones, which attests to the strong practical relevance of this investigation.
1808.04589
Andrew Beers
Andrew Beers, James Brown, Ken Chang, Katharina Hoebel, Elizabeth Gerstner, Bruce Rosen, Jayashree Kalpathy-Cramer
DeepNeuro: an open-source deep learning toolbox for neuroimaging
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Translating neural networks from theory to clinical practice has unique challenges, specifically in the field of neuroimaging. In this paper, we present DeepNeuro, a deep learning framework best suited to putting deep learning algorithms for neuroimaging into practical use with a minimum of friction. We show how this framework can be used to both design and train neural network architectures, as well as modify state-of-the-art architectures in a flexible and intuitive way. We present the pre- and postprocessing functions common in the medical imaging community that DeepNeuro offers to ensure consistent performance of networks across variable users, institutions, and scanners. Finally, we show how pipelines created in DeepNeuro can be concisely packaged into shareable Docker containers and command-line interfaces using DeepNeuro's pipeline resources.
[ { "created": "Tue, 14 Aug 2018 09:03:39 GMT", "version": "v1" } ]
2018-08-15
[ [ "Beers", "Andrew", "" ], [ "Brown", "James", "" ], [ "Chang", "Ken", "" ], [ "Hoebel", "Katharina", "" ], [ "Gerstner", "Elizabeth", "" ], [ "Rosen", "Bruce", "" ], [ "Kalpathy-Cramer", "Jayashree", "" ] ]
Translating neural networks from theory to clinical practice has unique challenges, specifically in the field of neuroimaging. In this paper, we present DeepNeuro, a deep learning framework best suited to putting deep learning algorithms for neuroimaging into practical use with a minimum of friction. We show how this framework can be used to both design and train neural network architectures, as well as modify state-of-the-art architectures in a flexible and intuitive way. We present the pre- and postprocessing functions common in the medical imaging community that DeepNeuro offers to ensure consistent performance of networks across variable users, institutions, and scanners. Finally, we show how pipelines created in DeepNeuro can be concisely packaged into shareable Docker containers and command-line interfaces using DeepNeuro's pipeline resources.
1204.1653
Ali Elouafiq
Ali Elouafiq
Machine Cognition Models: EPAM and GPS
EPAM, General Problem solver
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Throughout history, human beings have tried to delegate their daily tasks to other creatures, which was a main driver behind the rise of civilizations. It started with deploying animals to automate tasks in agriculture (bulls), transportation (e.g., horses and donkeys), and even communication (pigeons). Millennia later came the Golden Age of "Al-jazari" and other Muslim inventors, the pioneers of automation, which centuries later gave birth to the Industrial Revolution in Europe. At the end of the nineteenth century, a new era began: the computational era, the most advanced technological and scientific development driving mankind and underlying the evolution of sciences such as medicine, communication, education, and physics. At this edge of technology, engineers and scientists are trying to model a machine that behaves the same as they do, which pushed us to think about designing and implementing "things that think"; thus artificial intelligence was born. In this work we cover two major discoveries and studies in the field of machine cognition: the "Elementary Perceiver and Memorizer" (EPAM) and "The General Problem Solver" (GPS). The first focuses mainly on implementing human verbal-learning behavior, while the second tries to model an architecture able to solve problems in general (e.g., theorem proving, chess playing, and arithmetic). We cover the major goals and main ideas of each model, compare their strengths and weaknesses, and give their fields of application. Finally, we suggest a real-life implementation of a cognitive machine.
[ { "created": "Sat, 7 Apr 2012 16:34:20 GMT", "version": "v1" } ]
2012-04-10
[ [ "Elouafiq", "Ali", "" ] ]
Throughout history, human beings have tried to delegate their daily tasks to other creatures, which was a main driver behind the rise of civilizations. It started with deploying animals to automate tasks in agriculture (bulls), transportation (e.g., horses and donkeys), and even communication (pigeons). Millennia later came the Golden Age of "Al-jazari" and other Muslim inventors, the pioneers of automation, which centuries later gave birth to the Industrial Revolution in Europe. At the end of the nineteenth century, a new era began: the computational era, the most advanced technological and scientific development driving mankind and underlying the evolution of sciences such as medicine, communication, education, and physics. At this edge of technology, engineers and scientists are trying to model a machine that behaves the same as they do, which pushed us to think about designing and implementing "things that think"; thus artificial intelligence was born. In this work we cover two major discoveries and studies in the field of machine cognition: the "Elementary Perceiver and Memorizer" (EPAM) and "The General Problem Solver" (GPS). The first focuses mainly on implementing human verbal-learning behavior, while the second tries to model an architecture able to solve problems in general (e.g., theorem proving, chess playing, and arithmetic). We cover the major goals and main ideas of each model, compare their strengths and weaknesses, and give their fields of application. Finally, we suggest a real-life implementation of a cognitive machine.
2110.08525
Nathan Schucher
Nathan Schucher, Siva Reddy, Harm de Vries
The Power of Prompt Tuning for Low-Resource Semantic Parsing
ACL 2022 (main conference); updated results
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prompt tuning has recently emerged as an effective method for adapting pre-trained language models to a number of language understanding and generation tasks. In this paper, we investigate prompt tuning for semantic parsing -- the task of mapping natural language utterances onto formal meaning representations. On the low-resource splits of Overnight and TOPv2, we find that a prompt tuned T5-xl significantly outperforms its fine-tuned counterpart, as well as strong GPT-3 and BART baselines. We also conduct ablation studies across different model scales and target representations, finding that, with increasing model scale, prompt tuned T5 models improve at generating target representations that are far from the pre-training distribution.
[ { "created": "Sat, 16 Oct 2021 09:33:09 GMT", "version": "v1" }, { "created": "Fri, 1 Apr 2022 13:59:36 GMT", "version": "v2" } ]
2022-04-04
[ [ "Schucher", "Nathan", "" ], [ "Reddy", "Siva", "" ], [ "de Vries", "Harm", "" ] ]
Prompt tuning has recently emerged as an effective method for adapting pre-trained language models to a number of language understanding and generation tasks. In this paper, we investigate prompt tuning for semantic parsing -- the task of mapping natural language utterances onto formal meaning representations. On the low-resource splits of Overnight and TOPv2, we find that a prompt tuned T5-xl significantly outperforms its fine-tuned counterpart, as well as strong GPT-3 and BART baselines. We also conduct ablation studies across different model scales and target representations, finding that, with increasing model scale, prompt tuned T5 models improve at generating target representations that are far from the pre-training distribution.
2106.05731
Jingyi Cui
Hongwei Wen, Jingyi Cui, Hanyuan Hang, Jiabin Liu, Yisen Wang, Zhouchen Lin
Leveraged Weighted Loss for Partial Label Learning
Accepted to ICML2021 as long talk
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As an important branch of weakly supervised learning, partial label learning deals with data where each instance is assigned a set of candidate labels, of which only one is true. Despite many methodological studies on learning from partial labels, there is still a lack of theoretical understanding of their risk-consistency properties under relatively weak assumptions, especially regarding the link between theoretical results and the empirical choice of parameters. In this paper, we propose a family of loss functions named \textit{Leveraged Weighted} (LW) loss, which for the first time introduces the leverage parameter $\beta$ to consider the trade-off between losses on partial labels and non-partial ones. From the theoretical side, we derive a generalized result of risk consistency for the LW loss in learning from partial labels, based on which we provide guidance on the choice of the leverage parameter $\beta$. In experiments, we verify the theoretical guidance, and show the high effectiveness of our proposed LW loss on both benchmark and real datasets compared with other state-of-the-art partial label learning algorithms.
[ { "created": "Thu, 10 Jun 2021 13:25:13 GMT", "version": "v1" } ]
2021-06-11
[ [ "Wen", "Hongwei", "" ], [ "Cui", "Jingyi", "" ], [ "Hang", "Hanyuan", "" ], [ "Liu", "Jiabin", "" ], [ "Wang", "Yisen", "" ], [ "Lin", "Zhouchen", "" ] ]
As an important branch of weakly supervised learning, partial label learning deals with data where each instance is assigned a set of candidate labels, of which only one is true. Despite many methodological studies on learning from partial labels, there is still a lack of theoretical understanding of their risk-consistency properties under relatively weak assumptions, especially regarding the link between theoretical results and the empirical choice of parameters. In this paper, we propose a family of loss functions named \textit{Leveraged Weighted} (LW) loss, which for the first time introduces the leverage parameter $\beta$ to consider the trade-off between losses on partial labels and non-partial ones. From the theoretical side, we derive a generalized result of risk consistency for the LW loss in learning from partial labels, based on which we provide guidance on the choice of the leverage parameter $\beta$. In experiments, we verify the theoretical guidance, and show the high effectiveness of our proposed LW loss on both benchmark and real datasets compared with other state-of-the-art partial label learning algorithms.
2311.16079
Zeming Chen
Zeming Chen, Alejandro Hern\'andez Cano, Angelika Romanou, Antoine Bonnet, Kyle Matoba, Francesco Salvi, Matteo Pagliardini, Simin Fan, Andreas K\"opf, Amirkeivan Mohtashami, Alexandre Sallinen, Alireza Sakhaeirad, Vinitra Swamy, Igor Krawczuk, Deniz Bayazit, Axel Marmet, Syrielle Montariol, Mary-Anne Hartley, Martin Jaggi, Antoine Bosselut
MEDITRON-70B: Scaling Medical Pretraining for Large Language Models
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) can potentially democratize access to medical knowledge. While many efforts have been made to harness and improve LLMs' medical knowledge and reasoning capacities, the resulting models are either closed-source (e.g., PaLM, GPT-4) or limited in scale (<= 13B parameters), which restricts their abilities. In this work, we improve access to large-scale medical LLMs by releasing MEDITRON: a suite of open-source LLMs with 7B and 70B parameters adapted to the medical domain. MEDITRON builds on Llama-2 (through our adaptation of Nvidia's Megatron-LM distributed trainer), and extends pretraining on a comprehensively curated medical corpus, including selected PubMed articles, abstracts, and internationally-recognized medical guidelines. Evaluations using four major medical benchmarks show significant performance gains over several state-of-the-art baselines before and after task-specific finetuning. Overall, MEDITRON achieves a 6% absolute performance gain over the best public baseline in its parameter class and 3% over the strongest baseline we finetuned from Llama-2. Compared to closed-source LLMs, MEDITRON-70B outperforms GPT-3.5 and Med-PaLM and is within 5% of GPT-4 and 10% of Med-PaLM-2. We release our code for curating the medical pretraining corpus and the MEDITRON model weights to drive open-source development of more capable medical LLMs.
[ { "created": "Mon, 27 Nov 2023 18:49:43 GMT", "version": "v1" } ]
2023-11-28
[ [ "Chen", "Zeming", "" ], [ "Cano", "Alejandro Hernández", "" ], [ "Romanou", "Angelika", "" ], [ "Bonnet", "Antoine", "" ], [ "Matoba", "Kyle", "" ], [ "Salvi", "Francesco", "" ], [ "Pagliardini", "Matteo", "" ], [ "Fan", "Simin", "" ], [ "Köpf", "Andreas", "" ], [ "Mohtashami", "Amirkeivan", "" ], [ "Sallinen", "Alexandre", "" ], [ "Sakhaeirad", "Alireza", "" ], [ "Swamy", "Vinitra", "" ], [ "Krawczuk", "Igor", "" ], [ "Bayazit", "Deniz", "" ], [ "Marmet", "Axel", "" ], [ "Montariol", "Syrielle", "" ], [ "Hartley", "Mary-Anne", "" ], [ "Jaggi", "Martin", "" ], [ "Bosselut", "Antoine", "" ] ]
Large language models (LLMs) can potentially democratize access to medical knowledge. While many efforts have been made to harness and improve LLMs' medical knowledge and reasoning capacities, the resulting models are either closed-source (e.g., PaLM, GPT-4) or limited in scale (<= 13B parameters), which restricts their abilities. In this work, we improve access to large-scale medical LLMs by releasing MEDITRON: a suite of open-source LLMs with 7B and 70B parameters adapted to the medical domain. MEDITRON builds on Llama-2 (through our adaptation of Nvidia's Megatron-LM distributed trainer), and extends pretraining on a comprehensively curated medical corpus, including selected PubMed articles, abstracts, and internationally-recognized medical guidelines. Evaluations using four major medical benchmarks show significant performance gains over several state-of-the-art baselines before and after task-specific finetuning. Overall, MEDITRON achieves a 6% absolute performance gain over the best public baseline in its parameter class and 3% over the strongest baseline we finetuned from Llama-2. Compared to closed-source LLMs, MEDITRON-70B outperforms GPT-3.5 and Med-PaLM and is within 5% of GPT-4 and 10% of Med-PaLM-2. We release our code for curating the medical pretraining corpus and the MEDITRON model weights to drive open-source development of more capable medical LLMs.
2403.05334
Kartik Chandra
Kartik Chandra, Tzu-Mao Li, Rachit Nigam, Joshua Tenenbaum, Jonathan Ragan-Kelley
WatChat: Explaining perplexing programs by debugging mental models
null
null
null
null
cs.PL cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
Often, a good explanation for a program's unexpected behavior is a bug in the programmer's code. But sometimes, an even better explanation is a bug in the programmer's mental model of the language they are using. Instead of merely debugging our current code ("giving the programmer a fish"), what if our tools could directly debug our mental models ("teaching the programmer to fish")? In this paper, we apply ideas from computational cognitive science to do exactly that. Given a perplexing program, we use program synthesis techniques to automatically infer potential misconceptions that might cause the user to be surprised by the program's behavior. By analyzing these misconceptions, we provide succinct, useful explanations of the program's behavior. Our methods can even be inverted to synthesize pedagogical example programs for diagnosing and correcting misconceptions in students.
[ { "created": "Fri, 8 Mar 2024 14:10:25 GMT", "version": "v1" } ]
2024-03-11
[ [ "Chandra", "Kartik", "" ], [ "Li", "Tzu-Mao", "" ], [ "Nigam", "Rachit", "" ], [ "Tenenbaum", "Joshua", "" ], [ "Ragan-Kelley", "Jonathan", "" ] ]
Often, a good explanation for a program's unexpected behavior is a bug in the programmer's code. But sometimes, an even better explanation is a bug in the programmer's mental model of the language they are using. Instead of merely debugging our current code ("giving the programmer a fish"), what if our tools could directly debug our mental models ("teaching the programmer to fish")? In this paper, we apply ideas from computational cognitive science to do exactly that. Given a perplexing program, we use program synthesis techniques to automatically infer potential misconceptions that might cause the user to be surprised by the program's behavior. By analyzing these misconceptions, we provide succinct, useful explanations of the program's behavior. Our methods can even be inverted to synthesize pedagogical example programs for diagnosing and correcting misconceptions in students.
2006.00643
Juan Ungredda
Juan Ungredda, Michael Pearce, Juergen Branke
Bayesian Optimisation vs. Input Uncertainty Reduction
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simulators often require calibration inputs estimated from real-world data, and the quality of the estimate can significantly affect simulation output. Particularly when performing simulation optimisation to find an optimal solution, the uncertainty in the inputs significantly affects the quality of the found solution. One remedy is to search for the solution that has the best performance on average over the uncertain range of inputs, yielding an optimal compromise solution. We consider the more general setting where a user may choose between running simulations and collecting real-world data. A user may choose an input and a solution and observe the simulation output, or instead query an external data source to improve the input estimate, enabling the search for a more focused, less compromised solution. We explicitly examine the trade-off between simulation and real data collection in order to find the optimal solution of the simulator with the true inputs. Using a value of information procedure, we propose a novel unified simulation optimisation procedure called Bayesian Information Collection and Optimisation (BICO) that, in each iteration, automatically determines which of the two actions (running simulations or data collection) is more beneficial. Numerical experiments demonstrate that the proposed algorithm is able to automatically determine an appropriate balance between optimisation and data collection.
[ { "created": "Sun, 31 May 2020 23:42:22 GMT", "version": "v1" } ]
2020-06-02
[ [ "Ungredda", "Juan", "" ], [ "Pearce", "Michael", "" ], [ "Branke", "Juergen", "" ] ]
Simulators often require calibration inputs estimated from real-world data, and the quality of the estimate can significantly affect simulation output. Particularly when performing simulation optimisation to find an optimal solution, the uncertainty in the inputs significantly affects the quality of the found solution. One remedy is to search for the solution that has the best performance on average over the uncertain range of inputs, yielding an optimal compromise solution. We consider the more general setting where a user may choose between running simulations and collecting real-world data. A user may choose an input and a solution and observe the simulation output, or instead query an external data source to improve the input estimate, enabling the search for a more focused, less compromised solution. We explicitly examine the trade-off between simulation and real data collection in order to find the optimal solution of the simulator with the true inputs. Using a value of information procedure, we propose a novel unified simulation optimisation procedure called Bayesian Information Collection and Optimisation (BICO) that, in each iteration, automatically determines which of the two actions (running simulations or data collection) is more beneficial. Numerical experiments demonstrate that the proposed algorithm is able to automatically determine an appropriate balance between optimisation and data collection.
0907.2741
Stanley P. Y. Fung
Stanley P. Y. Fung
Bounded Delay Packet Scheduling in a Bounded Buffer
5 pages, 0 figures
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of buffer management in QoS-enabled network switches in the bounded delay model, where each packet is associated with a weight and a deadline. We consider the more realistic situation where the network switch has a finite buffer size. A 9.82-competitive algorithm is known for the case of multiple buffers (Azar and Levy, SWAT'06). Recently, for the case of a single buffer, a 3-competitive deterministic algorithm and a 2.618-competitive randomized algorithm were given (Li, INFOCOM'09). In this paper we give a simple deterministic 2-competitive algorithm for the case of a single buffer.
[ { "created": "Thu, 16 Jul 2009 04:05:05 GMT", "version": "v1" } ]
2009-07-17
[ [ "Fung", "Stanley P. Y.", "" ] ]
We study the problem of buffer management in QoS-enabled network switches in the bounded delay model, where each packet is associated with a weight and a deadline. We consider the more realistic situation where the network switch has a finite buffer size. A 9.82-competitive algorithm is known for the case of multiple buffers (Azar and Levy, SWAT'06). Recently, for the case of a single buffer, a 3-competitive deterministic algorithm and a 2.618-competitive randomized algorithm were given (Li, INFOCOM'09). In this paper we give a simple deterministic 2-competitive algorithm for the case of a single buffer.
1904.03589
Zhiyuan Fang
Zhiyuan Fang, Shu Kong, Charless Fowlkes, Yezhou Yang
Modularized Textual Grounding for Counterfactual Resilience
13 pages, 12 figures, IEEE Conference on Computer Vision and Pattern Recognition, 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computer Vision applications often require a textual grounding module with precision, interpretability, and resilience to counterfactual inputs/queries. To achieve high grounding precision, current textual grounding methods heavily rely on large-scale training data with manual annotations at the pixel level. Such annotations are expensive to obtain and thus severely narrow the model's scope of real-world applications. Moreover, most of these methods sacrifice interpretability and generalizability, and they neglect the importance of being resilient to counterfactual inputs. To address these issues, we propose a visual grounding system which is 1) end-to-end trainable in a weakly supervised fashion with only image-level annotations, and 2) counterfactually resilient owing to its modular design. Specifically, we decompose textual descriptions into three levels: entity, semantic attribute, and color information, and perform compositional grounding progressively. We validate our model through a series of experiments and demonstrate its improvement over the state-of-the-art methods. In particular, our model's performance not only surpasses other weakly/un-supervised methods and even approaches the strongly supervised ones, but also is interpretable for decision making and performs much better in the face of counterfactual classes than all the others.
[ { "created": "Sun, 7 Apr 2019 05:59:04 GMT", "version": "v1" }, { "created": "Mon, 1 Jul 2019 04:42:34 GMT", "version": "v2" } ]
2019-07-02
[ [ "Fang", "Zhiyuan", "" ], [ "Kong", "Shu", "" ], [ "Fowlkes", "Charless", "" ], [ "Yang", "Yezhou", "" ] ]
Computer Vision applications often require a textual grounding module with precision, interpretability, and resilience to counterfactual inputs/queries. To achieve high grounding precision, current textual grounding methods heavily rely on large-scale training data with manual annotations at the pixel level. Such annotations are expensive to obtain and thus severely narrow the model's scope of real-world applications. Moreover, most of these methods sacrifice interpretability and generalizability, and they neglect the importance of being resilient to counterfactual inputs. To address these issues, we propose a visual grounding system which is 1) end-to-end trainable in a weakly supervised fashion with only image-level annotations, and 2) counterfactually resilient owing to its modular design. Specifically, we decompose textual descriptions into three levels: entity, semantic attribute, and color information, and perform compositional grounding progressively. We validate our model through a series of experiments and demonstrate its improvement over the state-of-the-art methods. In particular, our model's performance not only surpasses other weakly/un-supervised methods and even approaches the strongly supervised ones, but also is interpretable for decision making and performs much better in the face of counterfactual classes than all the others.
1502.00143
Hossam Afifi
Thouraya Toukabri Gunes, Hossam Afifi
Hybrid model for LTE Network-Assisted D2D communications
null
null
10.1007/978-3-319-07425-2_8
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new architecture to support D2D communications, in which discovery is performed directly between devices while communications occur with the assistance of the eNodeB.
[ { "created": "Sat, 31 Jan 2015 18:39:46 GMT", "version": "v1" } ]
2015-02-03
[ [ "Gunes", "Thouraya Toukabri", "" ], [ "Afifi", "Hossam", "" ] ]
A new architecture to support D2D communications, in which discovery is performed directly between devices while communications occur with the assistance of the eNodeB.
1911.12162
Fran\c{c}ois Tessier
Fran\c{c}ois Tessier, Maxime Martinasso, Matteo Chesi, Mark Klein, Miguel Gila
Dynamically Provisioning Cray DataWarp Storage
null
null
null
null
cs.DC cs.PF
http://creativecommons.org/licenses/by-sa/4.0/
The needs of complex applications and workflows are often expressed exclusively in terms of computational resources on HPC systems. In many cases, other resources such as storage or network are not allocatable and are shared across the entire HPC system. Looking at the storage resource in particular, any workflow or application should be able to select both its preferred data manager and its required storage capability or capacity. To achieve this goal, new mechanisms need to be introduced. In this work, we introduce such a mechanism for dynamically provisioning a data management system on top of storage devices. We focus in particular on deploying a BeeGFS instance across multiple DataWarp nodes on a Cray XC50 system. However, we also demonstrate that the same mechanism can be used to deploy BeeGFS on non-Cray systems.
[ { "created": "Wed, 27 Nov 2019 14:08:36 GMT", "version": "v1" } ]
2020-01-10
[ [ "Tessier", "François", "" ], [ "Martinasso", "Maxime", "" ], [ "Chesi", "Matteo", "" ], [ "Klein", "Mark", "" ], [ "Gila", "Miguel", "" ] ]
The needs of complex applications and workflows are often expressed exclusively in terms of computational resources on HPC systems. In many cases, other resources such as storage or network are not allocatable and are shared across the entire HPC system. Looking at the storage resource in particular, any workflow or application should be able to select both its preferred data manager and its required storage capability or capacity. To achieve this goal, new mechanisms need to be introduced. In this work, we introduce such a mechanism for dynamically provisioning a data management system on top of storage devices. We focus in particular on deploying a BeeGFS instance across multiple DataWarp nodes on a Cray XC50 system. However, we also demonstrate that the same mechanism can be used to deploy BeeGFS on non-Cray systems.
2006.06264
Nitika Mathur
Nitika Mathur, Timothy Baldwin and Trevor Cohn
Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics
Accepted at ACL 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic metrics are fundamental for the development and evaluation of machine translation systems. Judging whether, and to what extent, automatic metrics concur with the gold standard of human evaluation is not a straightforward problem. We show that current methods for judging metrics are highly sensitive to the translations used for assessment, particularly the presence of outliers, which often leads to falsely confident conclusions about a metric's efficacy. Finally, we turn to pairwise system ranking, developing a method for thresholding performance improvement under an automatic metric against human judgements, which allows quantification of type I versus type II errors incurred, i.e., insignificant human differences in system quality that are accepted, and significant human differences that are rejected. Together, these findings suggest improvements to the protocols for metric evaluation and system performance evaluation in machine translation.
[ { "created": "Thu, 11 Jun 2020 09:12:53 GMT", "version": "v1" }, { "created": "Fri, 12 Jun 2020 04:35:41 GMT", "version": "v2" } ]
2020-06-15
[ [ "Mathur", "Nitika", "" ], [ "Baldwin", "Timothy", "" ], [ "Cohn", "Trevor", "" ] ]
Automatic metrics are fundamental for the development and evaluation of machine translation systems. Judging whether, and to what extent, automatic metrics concur with the gold standard of human evaluation is not a straightforward problem. We show that current methods for judging metrics are highly sensitive to the translations used for assessment, particularly the presence of outliers, which often leads to falsely confident conclusions about a metric's efficacy. Finally, we turn to pairwise system ranking, developing a method for thresholding performance improvement under an automatic metric against human judgements, which allows quantification of type I versus type II errors incurred, i.e., insignificant human differences in system quality that are accepted, and significant human differences that are rejected. Together, these findings suggest improvements to the protocols for metric evaluation and system performance evaluation in machine translation.
1708.09058
Shirin Nilizadeh
Shirin Nilizadeh, Francois Labreche, Alireza Sedighian, Ali Zand, Jose Fernandez, Christopher Kruegel, Gianluca Stringhini, and Giovanni Vigna
POISED: Spotting Twitter Spam Off the Beaten Paths
null
null
null
null
cs.CR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cybercriminals have found in online social networks a propitious medium to spread spam and malicious content. Existing techniques for detecting spam include predicting the trustworthiness of accounts and analyzing the content of these messages. However, advanced attackers can still successfully evade these defenses. Online social networks bring together people who have personal connections or share common interests to form communities. In this paper, we first show that users within a networked community share some topics of interest. Moreover, content shared on these social networks tends to propagate according to the interests of people. Dissemination paths may emerge where some communities post similar messages, based on the interests of those communities. Spam and other malicious content, on the other hand, follow different spreading patterns. In this paper, we follow this insight and present POISED, a system that leverages the differences in propagation between benign and malicious messages on social networks to identify spam and other unwanted content. We test our system on a dataset of 1.3M tweets collected from 64K users, and we show that our approach is effective in detecting malicious messages, reaching 91% precision and 93% recall. We also show that POISED's detection is more comprehensive than previous systems, by comparing it to three state-of-the-art spam detection systems that have been proposed by the research community in the past. POISED significantly outperforms each of these systems. Moreover, through simulations, we show how POISED is effective in the early detection of spam messages and how it is resilient against two well-known adversarial machine learning attacks.
[ { "created": "Tue, 29 Aug 2017 23:41:59 GMT", "version": "v1" } ]
2017-08-31
[ [ "Nilizadeh", "Shirin", "" ], [ "Labreche", "Francois", "" ], [ "Sedighian", "Alireza", "" ], [ "Zand", "Ali", "" ], [ "Fernandez", "Jose", "" ], [ "Kruegel", "Christopher", "" ], [ "Stringhini", "Gianluca", "" ], [ "Vigna", "Giovanni", "" ] ]
Cybercriminals have found in online social networks a propitious medium to spread spam and malicious content. Existing techniques for detecting spam include predicting the trustworthiness of accounts and analyzing the content of these messages. However, advanced attackers can still successfully evade these defenses. Online social networks bring people who have personal connections or share common interests to form communities. In this paper, we first show that users within a networked community share some topics of interest. Moreover, content shared on these social networks tends to propagate according to the interests of people. Dissemination paths may emerge where some communities post similar messages, based on the interests of those communities. Spam and other malicious content, on the other hand, follow different spreading patterns. In this paper, we follow this insight and present POISED, a system that leverages the differences in propagation between benign and malicious messages on social networks to identify spam and other unwanted content. We test our system on a dataset of 1.3M tweets collected from 64K users, and we show that our approach is effective in detecting malicious messages, reaching 91% precision and 93% recall. We also show that POISED's detection is more comprehensive than previous systems, by comparing it to three state-of-the-art spam detection systems that have been proposed by the research community in the past. POISED significantly outperforms each of these systems. Moreover, through simulations, we show how POISED is effective in the early detection of spam messages and how it is resilient against two well-known adversarial machine learning attacks.
2403.19728
Deshan Sumanathilaka Mr
Jayathi Hewapathirana and Deshan Sumanathilaka
EmoScan: Automatic Screening of Depression Symptoms in Romanized Sinhala Tweets
4 pages, 2 tables, 1 Figure , Preprint
null
null
null
cs.CL cs.CY cs.LG
http://creativecommons.org/licenses/by/4.0/
This work explores the utilization of Romanized Sinhala social media data to identify individuals at risk of depression. A machine learning-based framework is presented for the automatic screening of depression symptoms by analyzing language patterns, sentiment, and behavioural cues within a comprehensive dataset of social media posts. The research has been carried out to compare the suitability of Neural Networks against classical machine learning techniques. The proposed Neural Network with an attention layer, which is capable of handling long-sequence data, attains a remarkable accuracy of 93.25% in detecting depression symptoms, surpassing current state-of-the-art methods. These findings underscore the efficacy of this approach in pinpointing individuals in need of proactive interventions and support. Mental health professionals, policymakers, and social media companies can gain valuable insights through the proposed model. Leveraging natural language processing techniques and machine learning algorithms, this work offers a promising pathway for mental health screening in the digital era. By harnessing the potential of social media data, the framework introduces a proactive method for recognizing and assisting individuals at risk of depression. In conclusion, this research contributes to the advancement of proactive interventions and support systems for mental health, thereby influencing both research and practical applications in the field.
[ { "created": "Thu, 28 Mar 2024 10:31:09 GMT", "version": "v1" } ]
2024-04-01
[ [ "Hewapathirana", "Jayathi", "" ], [ "Sumanathilaka", "Deshan", "" ] ]
This work explores the utilization of Romanized Sinhala social media data to identify individuals at risk of depression. A machine learning-based framework is presented for the automatic screening of depression symptoms by analyzing language patterns, sentiment, and behavioural cues within a comprehensive dataset of social media posts. The research has been carried out to compare the suitability of Neural Networks against classical machine learning techniques. The proposed Neural Network with an attention layer, which is capable of handling long-sequence data, attains a remarkable accuracy of 93.25% in detecting depression symptoms, surpassing current state-of-the-art methods. These findings underscore the efficacy of this approach in pinpointing individuals in need of proactive interventions and support. Mental health professionals, policymakers, and social media companies can gain valuable insights through the proposed model. Leveraging natural language processing techniques and machine learning algorithms, this work offers a promising pathway for mental health screening in the digital era. By harnessing the potential of social media data, the framework introduces a proactive method for recognizing and assisting individuals at risk of depression. In conclusion, this research contributes to the advancement of proactive interventions and support systems for mental health, thereby influencing both research and practical applications in the field.
2205.07811
Colin Gordon
Colin S. Gordon, Sergey Matskevich
Natural Language Specifications in Proof Assistants
null
null
null
null
cs.PL cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interactive proof assistants are computer programs carefully constructed to check a human-designed proof of a mathematical claim with high confidence in the implementation. However, this only validates the truth of a formal claim, which may have been mistranslated from a claim made in natural language. This is especially problematic when using proof assistants to formally verify the correctness of software with respect to a natural language specification. The translation from informal to formal remains a challenging, time-consuming process that is difficult to audit for correctness. This paper argues that it is possible to build support for natural language specifications within existing proof assistants, in a way that complements the principles used to establish trust and auditability in proof assistants themselves.
[ { "created": "Mon, 16 May 2022 17:05:45 GMT", "version": "v1" } ]
2022-05-17
[ [ "Gordon", "Colin S.", "" ], [ "Matskevich", "Sergey", "" ] ]
Interactive proof assistants are computer programs carefully constructed to check a human-designed proof of a mathematical claim with high confidence in the implementation. However, this only validates the truth of a formal claim, which may have been mistranslated from a claim made in natural language. This is especially problematic when using proof assistants to formally verify the correctness of software with respect to a natural language specification. The translation from informal to formal remains a challenging, time-consuming process that is difficult to audit for correctness. This paper argues that it is possible to build support for natural language specifications within existing proof assistants, in a way that complements the principles used to establish trust and auditability in proof assistants themselves.
2309.04441
Jongwon Lee
Jongwon Lee, Su Yeon Choi, David Hanley, Timothy Bretl
Comparative Study of Visual SLAM-Based Mobile Robot Localization Using Fiducial Markers
IEEE 2023 IROS Workshop "Closing the Loop on Localization". For more information, see https://oravus.github.io/vpr-workshop/index
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents a comparative study of three modes for mobile robot localization based on visual SLAM using fiducial markers (i.e., square-shaped artificial landmarks with a black-and-white grid pattern): SLAM, SLAM with a prior map, and localization with a prior map. The reason for comparing the SLAM-based approaches leveraging fiducial markers is that previous work has shown their superior performance over feature-only methods, with less computational burden compared to methods that use both feature and marker detection, without compromising the localization performance. The evaluation is conducted using indoor image sequences captured with a hand-held camera containing multiple fiducial markers in the environment. The performance metrics include absolute trajectory error and runtime for the optimization process per frame. In particular, for the last two modes (SLAM and localization with a prior map), we evaluate their performance by perturbing the quality of the prior map to study the extent to which each mode is tolerant to such perturbations. Hardware experiments show consistent trajectory error levels across the three modes, with the localization mode exhibiting the shortest runtime among them. Yet, with map perturbations, SLAM with a prior map maintains performance, while localization mode degrades in both aspects.
[ { "created": "Fri, 8 Sep 2023 17:05:24 GMT", "version": "v1" } ]
2023-09-11
[ [ "Lee", "Jongwon", "" ], [ "Choi", "Su Yeon", "" ], [ "Hanley", "David", "" ], [ "Bretl", "Timothy", "" ] ]
This paper presents a comparative study of three modes for mobile robot localization based on visual SLAM using fiducial markers (i.e., square-shaped artificial landmarks with a black-and-white grid pattern): SLAM, SLAM with a prior map, and localization with a prior map. The reason for comparing the SLAM-based approaches leveraging fiducial markers is that previous work has shown their superior performance over feature-only methods, with less computational burden compared to methods that use both feature and marker detection, without compromising the localization performance. The evaluation is conducted using indoor image sequences captured with a hand-held camera containing multiple fiducial markers in the environment. The performance metrics include absolute trajectory error and runtime for the optimization process per frame. In particular, for the last two modes (SLAM and localization with a prior map), we evaluate their performance by perturbing the quality of the prior map to study the extent to which each mode is tolerant to such perturbations. Hardware experiments show consistent trajectory error levels across the three modes, with the localization mode exhibiting the shortest runtime among them. Yet, with map perturbations, SLAM with a prior map maintains performance, while localization mode degrades in both aspects.
2003.02955
Seid Muhie Yimam
Seid Muhie Yimam and Gopalakrishnan Venkatesh and John Sie Yuen Lee and Chris Biemann
Automatic Compilation of Resources for Academic Writing and Evaluating with Informal Word Identification and Paraphrasing System
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present the first approach to automatically building resources for academic writing. The aim is to build a writing aid system that automatically edits a text so that it better adheres to the academic style of writing. On top of existing academic resources, such as the Corpus of Contemporary American English (COCA) Academic Word List, the New Academic Word List, and the Academic Collocation List, we also explore how to dynamically build such resources that would be used to automatically identify informal or non-academic words or phrases. The resources are compiled using different generic approaches that can be extended for different domains and languages. We describe the evaluation of resources with a system implementation. The system consists of informal word identification (IWI), academic candidate paraphrase generation, and paraphrase ranking components. To generate candidates and rank them in context, we have used the PPDB and WordNet paraphrase resources. We use the Concepts in Context (CoInCO) "All-Words" lexical substitution dataset both for the informal word identification and paraphrase generation experiments. Our informal word identification component achieves an F-1 score of 82%, significantly outperforming a stratified classifier baseline. The main contribution of this work is a domain-independent methodology to build targeted resources for writing aids.
[ { "created": "Thu, 5 Mar 2020 22:55:45 GMT", "version": "v1" } ]
2020-03-09
[ [ "Yimam", "Seid Muhie", "" ], [ "Venkatesh", "Gopalakrishnan", "" ], [ "Lee", "John Sie Yuen", "" ], [ "Biemann", "Chris", "" ] ]
We present the first approach to automatically building resources for academic writing. The aim is to build a writing aid system that automatically edits a text so that it better adheres to the academic style of writing. On top of existing academic resources, such as the Corpus of Contemporary American English (COCA) Academic Word List, the New Academic Word List, and the Academic Collocation List, we also explore how to dynamically build such resources that would be used to automatically identify informal or non-academic words or phrases. The resources are compiled using different generic approaches that can be extended for different domains and languages. We describe the evaluation of resources with a system implementation. The system consists of informal word identification (IWI), academic candidate paraphrase generation, and paraphrase ranking components. To generate candidates and rank them in context, we have used the PPDB and WordNet paraphrase resources. We use the Concepts in Context (CoInCO) "All-Words" lexical substitution dataset both for the informal word identification and paraphrase generation experiments. Our informal word identification component achieves an F-1 score of 82%, significantly outperforming a stratified classifier baseline. The main contribution of this work is a domain-independent methodology to build targeted resources for writing aids.
1708.06834
Xavier Gir\'o-i-Nieto
Victor Campos, Brendan Jou, Xavier Giro-i-Nieto, Jordi Torres and Shih-Fu Chang
Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks
Accepted as conference paper at ICLR 2018
null
null
null
cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often faces challenges like slow inference, vanishing gradients and difficulty in capturing long-term dependencies. In backpropagation through time settings, these issues are tightly coupled with the large, sequential computational graph resulting from unfolding the RNN in time. We introduce the Skip RNN model which extends existing RNN models by learning to skip state updates and shortens the effective size of the computational graph. This model can also be encouraged to perform fewer state updates through a budget constraint. We evaluate the proposed model on various tasks and show how it can reduce the number of required RNN updates while preserving, and sometimes even improving, the performance of the baseline RNN models. Source code is publicly available at https://imatge-upc.github.io/skiprnn-2017-telecombcn/ .
[ { "created": "Tue, 22 Aug 2017 21:53:34 GMT", "version": "v1" }, { "created": "Thu, 24 Aug 2017 00:54:45 GMT", "version": "v2" }, { "created": "Mon, 5 Feb 2018 17:14:12 GMT", "version": "v3" } ]
2018-02-06
[ [ "Campos", "Victor", "" ], [ "Jou", "Brendan", "" ], [ "Giro-i-Nieto", "Xavier", "" ], [ "Torres", "Jordi", "" ], [ "Chang", "Shih-Fu", "" ] ]
Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often faces challenges like slow inference, vanishing gradients and difficulty in capturing long-term dependencies. In backpropagation through time settings, these issues are tightly coupled with the large, sequential computational graph resulting from unfolding the RNN in time. We introduce the Skip RNN model which extends existing RNN models by learning to skip state updates and shortens the effective size of the computational graph. This model can also be encouraged to perform fewer state updates through a budget constraint. We evaluate the proposed model on various tasks and show how it can reduce the number of required RNN updates while preserving, and sometimes even improving, the performance of the baseline RNN models. Source code is publicly available at https://imatge-upc.github.io/skiprnn-2017-telecombcn/ .
2108.03064
Shuvendu Roy
Shuvendu Roy, Ali Etemad
Spatiotemporal Contrastive Learning of Facial Expressions in Videos
Accepted by 9th International Conference on Affective Computing and Intelligent Interaction (ACII 2021)
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
We propose a self-supervised contrastive learning approach for facial expression recognition (FER) in videos. We propose a novel temporal sampling-based augmentation scheme to be utilized in addition to standard spatial augmentations used for contrastive learning. Our proposed temporal augmentation scheme randomly picks from one of three temporal sampling techniques: (1) pure random sampling, (2) uniform sampling, and (3) sequential sampling. This is followed by a combination of up to three standard spatial augmentations. We then use a deep R(2+1)D network for FER, which we train in a self-supervised fashion based on the augmentations and subsequently fine-tune. Experiments are performed on the Oulu-CASIA dataset and the performance is compared to other works in FER. The results indicate that our method achieves an accuracy of 89.4%, setting a new state-of-the-art by outperforming other works. Additional experiments and analysis confirm the considerable contribution of the proposed temporal augmentation versus the existing spatial ones.
[ { "created": "Fri, 6 Aug 2021 11:27:06 GMT", "version": "v1" } ]
2021-08-09
[ [ "Roy", "Shuvendu", "" ], [ "Etemad", "Ali", "" ] ]
We propose a self-supervised contrastive learning approach for facial expression recognition (FER) in videos. We propose a novel temporal sampling-based augmentation scheme to be utilized in addition to standard spatial augmentations used for contrastive learning. Our proposed temporal augmentation scheme randomly picks from one of three temporal sampling techniques: (1) pure random sampling, (2) uniform sampling, and (3) sequential sampling. This is followed by a combination of up to three standard spatial augmentations. We then use a deep R(2+1)D network for FER, which we train in a self-supervised fashion based on the augmentations and subsequently fine-tune. Experiments are performed on the Oulu-CASIA dataset and the performance is compared to other works in FER. The results indicate that our method achieves an accuracy of 89.4%, setting a new state-of-the-art by outperforming other works. Additional experiments and analysis confirm the considerable contribution of the proposed temporal augmentation versus the existing spatial ones.
2012.09409
David Deng
David Deng and Avideh Zakhor
Temporal LiDAR Frame Prediction for Autonomous Driving
In 3DV 2020
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anticipating the future in a dynamic scene is critical for many fields such as autonomous driving and robotics. In this paper we propose a class of novel neural network architectures to predict future LiDAR frames given previous ones. Since the ground truth in this application is simply the next frame in the sequence, we can train our models in a self-supervised fashion. Our proposed architectures are based on FlowNet3D and Dynamic Graph CNN. We use Chamfer Distance (CD) and Earth Mover's Distance (EMD) as loss functions and evaluation metrics. We train and evaluate our models using the newly released nuScenes dataset, and characterize their performance and complexity with several baselines. Compared to directly using FlowNet3D, our proposed architectures achieve CD and EMD nearly an order of magnitude lower. In addition, we show that our predictions generate reasonable scene flow approximations without using any labelled supervision.
[ { "created": "Thu, 17 Dec 2020 06:19:59 GMT", "version": "v1" } ]
2020-12-18
[ [ "Deng", "David", "" ], [ "Zakhor", "Avideh", "" ] ]
Anticipating the future in a dynamic scene is critical for many fields such as autonomous driving and robotics. In this paper we propose a class of novel neural network architectures to predict future LiDAR frames given previous ones. Since the ground truth in this application is simply the next frame in the sequence, we can train our models in a self-supervised fashion. Our proposed architectures are based on FlowNet3D and Dynamic Graph CNN. We use Chamfer Distance (CD) and Earth Mover's Distance (EMD) as loss functions and evaluation metrics. We train and evaluate our models using the newly released nuScenes dataset, and characterize their performance and complexity with several baselines. Compared to directly using FlowNet3D, our proposed architectures achieve CD and EMD nearly an order of magnitude lower. In addition, we show that our predictions generate reasonable scene flow approximations without using any labelled supervision.
2307.06698
Thiviyan Thanapalasingam
Thiviyan Thanapalasingam, Emile van Krieken, Peter Bloem, Paul Groth
IntelliGraphs: Datasets for Benchmarking Knowledge Graph Generation
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Knowledge Graph Embedding (KGE) models are used to learn continuous representations of entities and relations. A key task in the literature is predicting missing links between entities. However, Knowledge Graphs are not just sets of links but also have semantics underlying their structure. Semantics is crucial in several downstream tasks, such as query answering or reasoning. We introduce the subgraph inference task, where a model has to generate likely and semantically valid subgraphs. We propose IntelliGraphs, a set of five new Knowledge Graph datasets. The IntelliGraphs datasets contain subgraphs with semantics expressed in logical rules for evaluating subgraph inference. We also present the dataset generator that produced the synthetic datasets. We designed four novel baseline models, which include three models based on traditional KGEs. We evaluate their expressiveness and show that these models cannot capture the semantics. We believe this benchmark will encourage the development of machine learning models that emphasize semantic understanding.
[ { "created": "Thu, 13 Jul 2023 11:54:32 GMT", "version": "v1" }, { "created": "Wed, 19 Jul 2023 11:23:07 GMT", "version": "v2" }, { "created": "Fri, 25 Aug 2023 08:37:10 GMT", "version": "v3" } ]
2023-08-28
[ [ "Thanapalasingam", "Thiviyan", "" ], [ "van Krieken", "Emile", "" ], [ "Bloem", "Peter", "" ], [ "Groth", "Paul", "" ] ]
Knowledge Graph Embedding (KGE) models are used to learn continuous representations of entities and relations. A key task in the literature is predicting missing links between entities. However, Knowledge Graphs are not just sets of links but also have semantics underlying their structure. Semantics is crucial in several downstream tasks, such as query answering or reasoning. We introduce the subgraph inference task, where a model has to generate likely and semantically valid subgraphs. We propose IntelliGraphs, a set of five new Knowledge Graph datasets. The IntelliGraphs datasets contain subgraphs with semantics expressed in logical rules for evaluating subgraph inference. We also present the dataset generator that produced the synthetic datasets. We designed four novel baseline models, which include three models based on traditional KGEs. We evaluate their expressiveness and show that these models cannot capture the semantics. We believe this benchmark will encourage the development of machine learning models that emphasize semantic understanding.
2308.09234
Mohammad Saeed Ebrahimi Saadabadi
Sahar Rahimi Malakshan, Mohammad Saeed Ebrahimi Saadabadi, Nima Najafzadeh, Nasser M. Nasrabadi
Deep Boosting Multi-Modal Ensemble Face Recognition with Sample-Level Weighting
2023 IEEE International Joint Conference on Biometrics (IJCB)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep convolutional neural networks have achieved remarkable success in face recognition (FR), partly due to the abundant data availability. However, the current training benchmarks exhibit an imbalanced quality distribution; most images are of high quality. This poses issues for generalization on hard samples since they are underrepresented during training. In this work, we employ the multi-model boosting technique to deal with this issue. Inspired by the well-known AdaBoost, we propose a sample-level weighting approach to incorporate the importance of different samples into the FR loss. Individual models of the proposed framework are experts at distinct levels of sample hardness. Therefore, the combination of models leads to a robust feature extractor without losing the discriminability on the easy samples. Also, for incorporating the sample hardness into the training criterion, we analytically show the effect of sample mining on the important aspects of current angular margin loss functions, i.e., margin and scale. The proposed method shows superior performance in comparison with the state-of-the-art algorithms in extensive experiments on the CFP-FP, LFW, CPLFW, CALFW, AgeDB, TinyFace, IJB-B, and IJB-C evaluation datasets.
[ { "created": "Fri, 18 Aug 2023 01:44:54 GMT", "version": "v1" } ]
2023-08-21
[ [ "Malakshan", "Sahar Rahimi", "" ], [ "Saadabadi", "Mohammad Saeed Ebrahimi", "" ], [ "Najafzadeh", "Nima", "" ], [ "Nasrabadi", "Nasser M.", "" ] ]
Deep convolutional neural networks have achieved remarkable success in face recognition (FR), partly due to the abundant data availability. However, the current training benchmarks exhibit an imbalanced quality distribution; most images are of high quality. This poses issues for generalization on hard samples since they are underrepresented during training. In this work, we employ the multi-model boosting technique to deal with this issue. Inspired by the well-known AdaBoost, we propose a sample-level weighting approach to incorporate the importance of different samples into the FR loss. Individual models of the proposed framework are experts at distinct levels of sample hardness. Therefore, the combination of models leads to a robust feature extractor without losing the discriminability on the easy samples. Also, for incorporating the sample hardness into the training criterion, we analytically show the effect of sample mining on the important aspects of current angular margin loss functions, i.e., margin and scale. The proposed method shows superior performance in comparison with the state-of-the-art algorithms in extensive experiments on the CFP-FP, LFW, CPLFW, CALFW, AgeDB, TinyFace, IJB-B, and IJB-C evaluation datasets.
1410.4141
Shadman Sakib
Shadman Sakib, Rakibul Haq and Tariq Wazed
Unified mobile public health care system (UMPHCS) for underdeveloped countries
6 pages, 8 figures, 3 tables. Published in the proceedings of 3rd International Conference on Informatics, Electronics & Vision, Dhaka, Bangladesh. Available in IEEExplore. http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6850801&url=http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6850801
Proceedings of 3rd International Conference on Informatics, Electronics & Vision-2014, Dhaka, Bangladesh. Published in IEEExplore
10.1109/ICIEV.2014.6850801
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we have proposed a new smartphone-based system for health care, monitoring and diagnosis, which is specially designed to efficiently strengthen the public health care system in the distant, rural, unreached areas of the underdeveloped and developing countries. In this all-in-one system, we have digitized the health monitoring and diagnostic devices in a way so that each device works as a minimal `plug and play' sensor module of the total system, reducing the cost radically. Besides, the easy-to-use smartphone application for operating the whole system reduces the necessity of skilled and trained manpower, making it a perfect toolbox for the government health workers in the unreached rural areas.
[ { "created": "Wed, 15 Oct 2014 17:17:14 GMT", "version": "v1" } ]
2014-10-16
[ [ "Sakib", "Shadman", "" ], [ "Haq", "Rakibul", "" ], [ "Wazed", "Tariq", "" ] ]
In this paper, we have proposed a new smartphone-based system for health care, monitoring and diagnosis, which is specially designed to efficiently strengthen the public health care system in the distant, rural, unreached areas of the underdeveloped and developing countries. In this all-in-one system, we have digitized the health monitoring and diagnostic devices in a way so that each device works as a minimal `plug and play' sensor module of the total system, reducing the cost radically. Besides, the easy-to-use smartphone application for operating the whole system reduces the necessity of skilled and trained manpower, making it a perfect toolbox for the government health workers in the unreached rural areas.
1911.01310
Marco Gallieri
Simone Pozzoli, Marco Gallieri, Riccardo Scattolini
Tustin neural networks: a class of recurrent nets for adaptive MPC of mechanical systems
Under review
null
null
null
cs.NE cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of recurrent neural networks to represent the dynamics of unstable systems is difficult due to the need to properly initialize their internal states, which in most cases have no physical meaning, owing to the non-smoothness of the optimization problem. For this reason, in this paper focus is placed on mechanical systems characterized by a number of degrees of freedom, each one represented by two states, namely position and velocity. For these systems, a new recurrent neural network is proposed: Tustin-Net. Inspired by second-order dynamics, the network hidden states can be straightforwardly estimated, as their differential relationships with the measured states are hardcoded in the forward pass. The proposed structure is used to model a double inverted pendulum and for model-based Reinforcement Learning, where an adaptive Model Predictive Controller scheme using the Unscented Kalman Filter is proposed to deal with parameter changes in the system.
[ { "created": "Mon, 4 Nov 2019 16:21:27 GMT", "version": "v1" } ]
2019-11-05
[ [ "Pozzoli", "Simone", "" ], [ "Gallieri", "Marco", "" ], [ "Scattolini", "Riccardo", "" ] ]
The use of recurrent neural networks to represent the dynamics of unstable systems is difficult due to the need to properly initialize their internal states, which in most cases have no physical meaning, owing to the non-smoothness of the optimization problem. For this reason, in this paper focus is placed on mechanical systems characterized by a number of degrees of freedom, each one represented by two states, namely position and velocity. For these systems, a new recurrent neural network is proposed: Tustin-Net. Inspired by second-order dynamics, the network hidden states can be straightforwardly estimated, as their differential relationships with the measured states are hardcoded in the forward pass. The proposed structure is used to model a double inverted pendulum and for model-based Reinforcement Learning, where an adaptive Model Predictive Controller scheme using the Unscented Kalman Filter is proposed to deal with parameter changes in the system.
2404.08743
Xiaohang Tang
Xiaohang Tang, Sam Wong, Kevin Pu, Xi Chen, Yalong Yang, Yan Chen
VizGroup: An AI-Assisted Event-Driven System for Real-Time Collaborative Programming Learning Analytics
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Programming instructors often conduct collaborative learning activities, like Peer Instruction, to foster a deeper understanding in students and enhance their engagement with learning. These activities, however, may not always yield productive outcomes due to the diversity of student mental models and their ineffective collaboration. In this work, we introduce VizGroup, an AI-assisted system that enables programming instructors to easily oversee students' real-time collaborative learning behaviors during large programming courses. VizGroup leverages Large Language Models (LLMs) to recommend event specifications for instructors so that they can simultaneously track and receive alerts about key correlation patterns between various collaboration metrics and ongoing coding tasks. We evaluated VizGroup with 12 instructors using a dataset collected from a Peer Instruction activity that was conducted in a large programming lecture. The results showed that compared to a version of VizGroup without the suggested units, VizGroup with suggested units helped instructors create additional monitoring units on previously undetected patterns on their own, covered a more diverse range of metrics, and influenced the participants' subsequent notification-creation strategies.
[ { "created": "Fri, 12 Apr 2024 18:10:40 GMT", "version": "v1" } ]
2024-04-16
[ [ "Tang", "Xiaohang", "" ], [ "Wong", "Sam", "" ], [ "Pu", "Kevin", "" ], [ "Chen", "Xi", "" ], [ "Yang", "Yalong", "" ], [ "Chen", "Yan", "" ] ]
Programming instructors often conduct collaborative learning activities, like Peer Instruction, to foster a deeper understanding in students and enhance their engagement with learning. These activities, however, may not always yield productive outcomes due to the diversity of student mental models and their ineffective collaboration. In this work, we introduce VizGroup, an AI-assisted system that enables programming instructors to easily oversee students' real-time collaborative learning behaviors during large programming courses. VizGroup leverages Large Language Models (LLMs) to recommend event specifications for instructors so that they can simultaneously track and receive alerts about key correlation patterns between various collaboration metrics and ongoing coding tasks. We evaluated VizGroup with 12 instructors using a dataset collected from a Peer Instruction activity that was conducted in a large programming lecture. The results showed that compared to a version of VizGroup without the suggested units, VizGroup with suggested units helped instructors create additional monitoring units on previously undetected patterns on their own, covered a more diverse range of metrics, and influenced the participants' subsequent notification-creation strategies.
2407.00682
Yuqiao Yang
Yuqiao Yang, Zhongjie Wu, Yongzhao Zhang, Ting Chen, Jun Li, Jie Yang, Wenhao Liu, Xiaosong Zhang, Ruicong Shi, Jingwei Li, Yu Jiang, Zhuo Su
UWBAD: Towards Effective and Imperceptible Jamming Attacks Against UWB Ranging Systems with COTS Chips
Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security
null
10.1145/3658644.3670349
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
UWB ranging systems have been adopted in many critical and security-sensitive applications due to their precise positioning and secure ranging capabilities. We present a practical jamming attack, namely UWBAD, against commercial UWB ranging systems, which exploits a vulnerability in the normalized cross-correlation process adopted in UWB ranging and can selectively and quickly block ranging sessions without prior knowledge of the configurations of the victim devices, potentially leading to severe consequences such as property loss, unauthorized access, or vehicle theft. UWBAD achieves more effective and less perceptible jamming because: (i) it efficiently blocks every ranging session by leveraging field-level jamming, thereby exerting a tangible impact on commercial UWB ranging systems, and (ii) its compact, reactive, and selective system design based on COTS UWB chips makes it affordable and inconspicuous. We successfully conducted real attacks against commercial UWB ranging systems from the three largest UWB chip vendors on the market, e.g., Apple, NXP, and Qorvo. We reported our findings to Apple, related Original Equipment Manufacturers (OEMs), and the Automotive Security Research Group, triggering internal security incident response procedures at Volkswagen, Audi, Bosch, and NXP. As of the writing of this paper, the related OEM has acknowledged this vulnerability in their automotive systems and has offered a $5,000 reward as a bounty.
[ { "created": "Sun, 30 Jun 2024 12:42:11 GMT", "version": "v1" } ]
2024-07-02
[ [ "Yang", "Yuqiao", "" ], [ "Wu", "Zhongjie", "" ], [ "Zhang", "Yongzhao", "" ], [ "Chen", "Ting", "" ], [ "Li", "Jun", "" ], [ "Yang", "Jie", "" ], [ "Liu", "Wenhao", "" ], [ "Zhang", "Xiaosong", "" ], [ "Shi", "Ruicong", "" ], [ "Li", "Jingwei", "" ], [ "Jiang", "Yu", "" ], [ "Su", "Zhuo", "" ] ]
UWB ranging systems have been adopted in many critical and security-sensitive applications due to their precise positioning and secure ranging capabilities. We present a practical jamming attack, namely UWBAD, against commercial UWB ranging systems, which exploits a vulnerability in the normalized cross-correlation process adopted in UWB ranging and can selectively and quickly block ranging sessions without prior knowledge of the configurations of the victim devices, potentially leading to severe consequences such as property loss, unauthorized access, or vehicle theft. UWBAD achieves more effective and less perceptible jamming because: (i) it efficiently blocks every ranging session by leveraging field-level jamming, thereby exerting a tangible impact on commercial UWB ranging systems, and (ii) its compact, reactive, and selective system design based on COTS UWB chips makes it affordable and inconspicuous. We successfully conducted real attacks against commercial UWB ranging systems from the three largest UWB chip vendors on the market, e.g., Apple, NXP, and Qorvo. We reported our findings to Apple, related Original Equipment Manufacturers (OEMs), and the Automotive Security Research Group, triggering internal security incident response procedures at Volkswagen, Audi, Bosch, and NXP. As of the writing of this paper, the related OEM has acknowledged this vulnerability in their automotive systems and has offered a $5,000 reward as a bounty.
1907.06777
Jason Ku
Jason Ku, Alex D. Pon, Sean Walsh, and Steven L. Waslander
Improving 3D Object Detection for Pedestrians with Virtual Multi-View Synthesis Orientation Estimation
Accepted in IROS 2019
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Accurately estimating the orientation of pedestrians is an important and challenging task for autonomous driving because this information is essential for tracking and predicting pedestrian behavior. This paper presents a flexible Virtual Multi-View Synthesis module that can be adopted into 3D object detection methods to improve orientation estimation. The module uses a multi-step process to acquire the fine-grained semantic information required for accurate orientation estimation. First, the scene's point cloud is densified using a structure-preserving depth completion algorithm and each point is colorized using its corresponding RGB pixel. Next, virtual cameras are placed around each object in the densified point cloud to generate novel viewpoints, which preserve the object's appearance. We show that this module greatly improves the orientation estimation on the challenging pedestrian class on the KITTI benchmark. When used with the open-source 3D detector AVOD-FPN, we outperform all other published methods on the pedestrian Orientation, 3D, and Bird's Eye View benchmarks.
[ { "created": "Mon, 15 Jul 2019 22:27:16 GMT", "version": "v1" } ]
2019-07-17
[ [ "Ku", "Jason", "" ], [ "Pon", "Alex D.", "" ], [ "Walsh", "Sean", "" ], [ "Waslander", "Steven L.", "" ] ]
Accurately estimating the orientation of pedestrians is an important and challenging task for autonomous driving because this information is essential for tracking and predicting pedestrian behavior. This paper presents a flexible Virtual Multi-View Synthesis module that can be adopted into 3D object detection methods to improve orientation estimation. The module uses a multi-step process to acquire the fine-grained semantic information required for accurate orientation estimation. First, the scene's point cloud is densified using a structure-preserving depth completion algorithm and each point is colorized using its corresponding RGB pixel. Next, virtual cameras are placed around each object in the densified point cloud to generate novel viewpoints, which preserve the object's appearance. We show that this module greatly improves the orientation estimation on the challenging pedestrian class on the KITTI benchmark. When used with the open-source 3D detector AVOD-FPN, we outperform all other published methods on the pedestrian Orientation, 3D, and Bird's Eye View benchmarks.