column          type           min       max
id              stringlengths  9         10
submitter       stringlengths  1         64
authors         stringlengths  4         20.7k
title           stringlengths  4         246
comments        stringlengths  1         523
journal-ref     stringlengths  4         404
doi             stringlengths  11        153
report-no       stringlengths  2         254
categories      stringlengths  5         98
license         stringclasses  9 values
orig_abstract   stringlengths  14        3.35k
versions        listlengths    1         60
update_date     stringlengths  10        10
authors_parsed  listlengths    1         1.35k
abstract        stringlengths  11        3.34k
2312.11536
Litian Liu
Litian Liu and Yao Qin
Fast Decision Boundary based Out-of-Distribution Detector
ICML 2024 main conference paper
null
null
null
cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
Efficient and effective Out-of-Distribution (OOD) detection is essential for the safe deployment of AI systems. Existing feature space methods, while effective, often incur significant computational overhead due to their reliance on auxiliary models built from training features. In this paper, we propose a computationally-efficient OOD detector without using auxiliary models while still leveraging the rich information embedded in the feature space. Specifically, we detect OOD samples based on their feature distances to decision boundaries. To minimize computational cost, we introduce an efficient closed-form estimation, analytically proven to tightly lower bound the distance. Based on our estimation, we discover that In-Distribution (ID) features tend to be further from decision boundaries than OOD features. Additionally, ID and OOD samples are better separated when compared at equal deviation levels from the mean of training features. By regularizing the distances to decision boundaries based on feature deviation from the mean, we develop a hyperparameter-free, auxiliary model-free OOD detector. Our method matches or surpasses the effectiveness of state-of-the-art methods in extensive experiments while incurring negligible overhead in inference latency. Overall, our approach significantly improves the efficiency-effectiveness trade-off in OOD detection. Code is available at: https://github.com/litianliu/fDBD-OOD.
[ { "created": "Fri, 15 Dec 2023 19:50:32 GMT", "version": "v1" }, { "created": "Tue, 4 Jun 2024 16:01:27 GMT", "version": "v2" } ]
2024-06-05
[ [ "Liu", "Litian", "" ], [ "Qin", "Yao", "" ] ]
Efficient and effective Out-of-Distribution (OOD) detection is essential for the safe deployment of AI systems. Existing feature space methods, while effective, often incur significant computational overhead due to their reliance on auxiliary models built from training features. In this paper, we propose a computationally-efficient OOD detector without using auxiliary models while still leveraging the rich information embedded in the feature space. Specifically, we detect OOD samples based on their feature distances to decision boundaries. To minimize computational cost, we introduce an efficient closed-form estimation, analytically proven to tightly lower bound the distance. Based on our estimation, we discover that In-Distribution (ID) features tend to be further from decision boundaries than OOD features. Additionally, ID and OOD samples are better separated when compared at equal deviation levels from the mean of training features. By regularizing the distances to decision boundaries based on feature deviation from the mean, we develop a hyperparameter-free, auxiliary model-free OOD detector. Our method matches or surpasses the effectiveness of state-of-the-art methods in extensive experiments while incurring negligible overhead in inference latency. Overall, our approach significantly improves the efficiency-effectiveness trade-off in OOD detection. Code is available at: https://github.com/litianliu/fDBD-OOD.
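As context for the method this abstract summarizes: with a linear final layer, the feature distance to the boundary between the predicted class c and another class k has a closed form. The numpy sketch below is our illustrative reading of the scoring rule (variable names are ours; the authors' released implementation lives at the linked repo):

```python
import numpy as np

def fdbd_score(z, W, b, mu_train):
    """Score a feature vector z: mean distance to the decision boundaries
    of the predicted class, regularized by deviation from the training mean.

    z        : (d,) penultimate-layer feature
    W, b     : (C, d), (C,) weights/biases of the final linear layer
    mu_train : (d,) mean of training features
    Higher scores suggest in-distribution; lower scores suggest OOD.
    """
    logits = W @ z + b
    c = int(np.argmax(logits))                  # predicted class
    dists = []
    for k in range(W.shape[0]):
        if k == c:
            continue
        w_diff = W[c] - W[k]
        b_diff = b[c] - b[k]
        # closed-form distance to the c-vs-k decision boundary
        dists.append(abs(w_diff @ z + b_diff) / np.linalg.norm(w_diff))
    # regularize by the feature's deviation from the training mean
    return np.mean(dists) / np.linalg.norm(z - mu_train)
```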
2210.04441
Suayb Arslan
Osman B. Guney and Suayb S. Arslan
Fault-Tolerant Strassen-Like Matrix Multiplication
6 pages, 2 figures
null
10.1109/SIU49456.2020.9302383
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, we propose a simple method for fault-tolerant Strassen-like matrix multiplications. The proposed method is based on using two distinct Strassen-like algorithms instead of replicating a given one. We have realized that when two different algorithms are used, new check relations arise, resulting in more local computations. These local computations are found using a computer-aided search. To improve performance, two special parity (extra) sub-matrix multiplications (PSMMs) are generated, at the expense of increasing the communication/computation cost of the system. Our preliminary results demonstrate that the proposed method outperforms a Strassen-like algorithm with two copies and achieves performance very close to the three-copy version using only 2 PSMMs, reducing the total number of compute nodes by around 24\%, i.e., from 21 to 16.
[ { "created": "Mon, 10 Oct 2022 05:18:22 GMT", "version": "v1" } ]
2022-10-11
[ [ "Guney", "Osman B.", "" ], [ "Arslan", "Suayb S.", "" ] ]
In this study, we propose a simple method for fault-tolerant Strassen-like matrix multiplications. The proposed method is based on using two distinct Strassen-like algorithms instead of replicating a given one. We have realized that when two different algorithms are used, new check relations arise, resulting in more local computations. These local computations are found using a computer-aided search. To improve performance, two special parity (extra) sub-matrix multiplications (PSMMs) are generated, at the expense of increasing the communication/computation cost of the system. Our preliminary results demonstrate that the proposed method outperforms a Strassen-like algorithm with two copies and achieves performance very close to the three-copy version using only 2 PSMMs, reducing the total number of compute nodes by around 24\%, i.e., from 21 to 16.
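The check relations above are specific to the authors' pair of Strassen-like algorithms; as generic background, fault detection for any matrix multiplication can be sketched with a randomized checksum (Freivalds-style) test, shown below. This illustrates the fault-detection idea only, not the paper's PSMM construction:

```python
import numpy as np

def freivalds_check(A, B, C, trials=3, tol=1e-6):
    """Randomized check that C == A @ B without recomputing the product.

    Each trial costs three matrix-vector products (O(n^2)) instead of a
    full recomputation; a faulty C is caught with high probability.
    NOTE: generic fault detection, not the paper's PSMM scheme.
    """
    n = C.shape[1]
    for _ in range(trials):
        x = np.random.randn(n)
        if np.linalg.norm(A @ (B @ x) - C @ x) > tol * np.linalg.norm(C @ x) + tol:
            return False  # fault detected
    return True           # consistent on all trials

A, B = np.random.randn(64, 64), np.random.randn(64, 64)
C = A @ B
assert freivalds_check(A, B, C)
C[3, 5] += 1.0            # inject a fault
assert not freivalds_check(A, B, C)
```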
0710.4828
Hossein Hajiabolhassan
Hossein Hajiabolhassan and Abbas Cheraghi
Bounds for Visual Cryptography Schemes
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate the best pixel expansion of various models of visual cryptography schemes. In this regard, we consider the visual cryptography schemes introduced by Tzeng and Hu [13]. In such a model, only minimal qualified sets can recover the secret image, and the recovered secret image can be darker or lighter than the background. Blundo et al. [4] introduced a lower bound for the best pixel expansion of this scheme in terms of minimal qualified sets. We present another lower bound for the best pixel expansion of the scheme. As a corollary, we introduce a lower bound, based on an induced matching of the hypergraph of qualified sets, for the best pixel expansion of the aforementioned model and the traditional model of visual cryptography realized by basis matrices. Finally, we study access structures based on graphs and present an upper bound for the smallest pixel expansion in terms of the strong chromatic index.
[ { "created": "Thu, 25 Oct 2007 12:17:15 GMT", "version": "v1" }, { "created": "Mon, 20 Oct 2008 01:49:21 GMT", "version": "v2" }, { "created": "Sat, 6 Jun 2009 07:08:14 GMT", "version": "v3" }, { "created": "Mon, 7 Sep 2009 03:55:32 GMT", "version": "v4" }, { "created": "Thu, 3 Dec 2009 11:57:02 GMT", "version": "v5" } ]
2009-12-03
[ [ "Hajiabolhassan", "Hossein", "" ], [ "Cheraghi", "Abbas", "" ] ]
In this paper, we investigate the best pixel expansion of various models of visual cryptography schemes. In this regard, we consider the visual cryptography schemes introduced by Tzeng and Hu [13]. In such a model, only minimal qualified sets can recover the secret image, and the recovered secret image can be darker or lighter than the background. Blundo et al. [4] introduced a lower bound for the best pixel expansion of this scheme in terms of minimal qualified sets. We present another lower bound for the best pixel expansion of the scheme. As a corollary, we introduce a lower bound, based on an induced matching of the hypergraph of qualified sets, for the best pixel expansion of the aforementioned model and the traditional model of visual cryptography realized by basis matrices. Finally, we study access structures based on graphs and present an upper bound for the smallest pixel expansion in terms of the strong chromatic index.
1210.5454
Mina Guirguis
Mina Guirguis and George Atia
Stuck in Traffic (SiT) Attacks: A Framework for Identifying Stealthy Attacks that Cause Traffic Congestion
null
null
null
null
cs.NI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in wireless technologies have enabled many new applications in Intelligent Transportation Systems (ITS) such as collision avoidance, cooperative driving, congestion avoidance, and traffic optimization. Because wireless communication is vulnerable to interference and intentional jamming, ITS face new challenges in ensuring the reliability and safety of the overall system. In this paper, we expose a class of stealthy attacks -- Stuck in Traffic (SiT) attacks -- that aim to cause congestion by exploiting how drivers make decisions based on smart traffic signs. An attacker mounting a SiT attack solves a Markov Decision Process problem to find optimal/suboptimal attack policies in which he/she interferes with a well-chosen subset of signals based on the state of the system. We apply Approximate Policy Iteration (API) algorithms to derive potent attack policies. We evaluate their performance on a number of systems and compare them to other attack policies, including random, myopic, and DoS attack policies. The generated policies, albeit suboptimal, are shown to significantly outperform other attack policies, as they maximize the expected cumulative reward from the standpoint of the attacker.
[ { "created": "Fri, 19 Oct 2012 15:48:54 GMT", "version": "v1" } ]
2012-10-22
[ [ "Guirguis", "Mina", "" ], [ "Atia", "George", "" ] ]
Recent advances in wireless technologies have enabled many new applications in Intelligent Transportation Systems (ITS) such as collision avoidance, cooperative driving, congestion avoidance, and traffic optimization. Because wireless communication is vulnerable to interference and intentional jamming, ITS face new challenges in ensuring the reliability and safety of the overall system. In this paper, we expose a class of stealthy attacks -- Stuck in Traffic (SiT) attacks -- that aim to cause congestion by exploiting how drivers make decisions based on smart traffic signs. An attacker mounting a SiT attack solves a Markov Decision Process problem to find optimal/suboptimal attack policies in which he/she interferes with a well-chosen subset of signals based on the state of the system. We apply Approximate Policy Iteration (API) algorithms to derive potent attack policies. We evaluate their performance on a number of systems and compare them to other attack policies, including random, myopic, and DoS attack policies. The generated policies, albeit suboptimal, are shown to significantly outperform other attack policies, as they maximize the expected cumulative reward from the standpoint of the attacker.
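For readers unfamiliar with the MDP machinery the attacker solves, a tabular value-iteration sketch on a toy problem is below; the transition and reward arrays are hypothetical, and the paper itself resorts to Approximate Policy Iteration because realistic traffic systems are too large for tabular methods:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve a small tabular MDP exactly.

    P : (A, S, S) transition probabilities P[a, s, s']
    R : (A, S)    expected immediate reward for action a in state s
    Returns the optimal value function and a greedy policy.
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * (P @ V)        # (A, S): Bellman backup per action
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```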
2205.07149
Zishen Wan
Zishen Wan, Ashwin Lele, Bo Yu, Shaoshan Liu, Yu Wang, Vijay Janapa Reddi, Cong Hao, and Arijit Raychowdhury
Robotic Computing on FPGAs: Current Progress, Research Challenges, and Opportunities
2022 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), June 13-15, 2022, Incheon, Korea
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robotic computing has reached a tipping point, with a myriad of robots (e.g., drones, self-driving cars, logistic robots) being widely applied in diverse scenarios. The continuous proliferation of robotics, however, critically depends on efficient computing substrates, driven by real-time requirements, robotic size-weight-and-power constraints, cybersecurity considerations, and dynamically changing scenarios. Among all platforms, FPGAs are able to deliver both software and hardware solutions with low power, high performance, reconfigurability, reliability, and adaptivity, serving as a promising computing substrate for robotic applications. This paper highlights the current progress, design techniques, and open research challenges in the domain of robotic computing on FPGAs.
[ { "created": "Sat, 14 May 2022 23:19:33 GMT", "version": "v1" } ]
2022-05-17
[ [ "Wan", "Zishen", "" ], [ "Lele", "Ashwin", "" ], [ "Yu", "Bo", "" ], [ "Liu", "Shaoshan", "" ], [ "Wang", "Yu", "" ], [ "Reddi", "Vijay Janapa", "" ], [ "Hao", "Cong", "" ], [ "Raychowdhury", "Arijit", "" ] ]
Robotic computing has reached a tipping point, with a myriad of robots (e.g., drones, self-driving cars, logistic robots) being widely applied in diverse scenarios. The continuous proliferation of robotics, however, critically depends on efficient computing substrates, driven by real-time requirements, robotic size-weight-and-power constraints, cybersecurity considerations, and dynamically changing scenarios. Among all platforms, FPGAs are able to deliver both software and hardware solutions with low power, high performance, reconfigurability, reliability, and adaptivity, serving as a promising computing substrate for robotic applications. This paper highlights the current progress, design techniques, and open research challenges in the domain of robotic computing on FPGAs.
1802.08562
Artsiom Sanakoyeu
Artsiom Sanakoyeu, Miguel A. Bautista, Bj\"orn Ommer
Deep Unsupervised Learning of Visual Similarities
arXiv admin note: text overlap with arXiv:1608.08792
Pattern Recognition Volume 78, June 2018, Pages 331-343
10.1016/j.patcog.2018.01.036
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exemplar learning of visual similarities in an unsupervised manner is a problem of paramount importance to Computer Vision. In this context, however, the recent breakthrough in deep learning has not yet unfolded its full potential. With only a single positive sample, a great imbalance between one positive and many negatives, and unreliable relationships between most samples, the training of Convolutional Neural Networks (CNNs) is impaired. In this paper, we use weak estimates of local similarities and propose a single optimization problem to extract batches of samples with mutually consistent relations. Conflicting relations are distributed over different batches and similar samples are gathered into compact groups. Learning visual similarities is then framed as a sequence of categorization tasks. The CNN then consolidates transitivity relations within and between groups and learns a single representation for all samples without the need for labels. The proposed unsupervised approach has shown competitive performance on detailed posture analysis and object classification.
[ { "created": "Thu, 22 Feb 2018 04:11:59 GMT", "version": "v1" } ]
2018-02-26
[ [ "Sanakoyeu", "Artsiom", "" ], [ "Bautista", "Miguel A.", "" ], [ "Ommer", "Björn", "" ] ]
Exemplar learning of visual similarities in an unsupervised manner is a problem of paramount importance to Computer Vision. In this context, however, the recent breakthrough in deep learning has not yet unfolded its full potential. With only a single positive sample, a great imbalance between one positive and many negatives, and unreliable relationships between most samples, the training of Convolutional Neural Networks (CNNs) is impaired. In this paper, we use weak estimates of local similarities and propose a single optimization problem to extract batches of samples with mutually consistent relations. Conflicting relations are distributed over different batches and similar samples are gathered into compact groups. Learning visual similarities is then framed as a sequence of categorization tasks. The CNN then consolidates transitivity relations within and between groups and learns a single representation for all samples without the need for labels. The proposed unsupervised approach has shown competitive performance on detailed posture analysis and object classification.
2105.12309
Ayush Rajput
Sharan Balasubramanian, Ayush Rajput, Rodra W. Hascaryo, Chirag Rastogi, William R. Norris
Comparison of Dynamic and Kinematic Model Driven Extended Kalman Filters (EKF) for the Localization of Autonomous Underwater Vehicles
Preprint for ASME Journal for Mechanisms and Robotics, not peer reviewed yet
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Autonomous Underwater Vehicles (AUVs) and Remotely Operated Vehicles (ROVs) are used for a wide variety of missions related to exploration and scientific research. Successful navigation by these systems requires a good localization system. Kalman filter based localization techniques have been prevalent since the early 1960s, and extensive research has been carried out using them, both in development and in design. It has been found that the use of a dynamic model (instead of a kinematic model) in the Kalman filter can lead to more accurate predictions, as the dynamic model takes the forces acting on the AUV into account. Presented in this paper is a motion-predictive extended Kalman filter (EKF) for AUVs using a simplified dynamic model. The dynamic model is derived first and then simplified for a RexROV, a type of submarine vehicle used in simple underwater exploration and in the inspection of subsea structures, pipelines, and shipwrecks. The filter was implemented with a simulated vehicle in an open-source marine vehicle simulator called UUV Simulator, and the results were compared with the ground truth. The results show good prediction accuracy for the dynamic filter, though improvements are needed before the EKF can be used in real time. Some perspective and discussion on practical implementation are presented to show the next steps needed for this concept.
[ { "created": "Wed, 26 May 2021 03:05:03 GMT", "version": "v1" } ]
2021-05-27
[ [ "Balasubramanian", "Sharan", "" ], [ "Rajput", "Ayush", "" ], [ "Hascaryo", "Rodra W.", "" ], [ "Rastogi", "Chirag", "" ], [ "Norris", "William R.", "" ] ]
Autonomous Underwater Vehicles (AUVs) and Remotely Operated Vehicles (ROVs) are used for a wide variety of missions related to exploration and scientific research. Successful navigation by these systems requires a good localization system. Kalman filter based localization techniques have been prevalent since the early 1960s, and extensive research has been carried out using them, both in development and in design. It has been found that the use of a dynamic model (instead of a kinematic model) in the Kalman filter can lead to more accurate predictions, as the dynamic model takes the forces acting on the AUV into account. Presented in this paper is a motion-predictive extended Kalman filter (EKF) for AUVs using a simplified dynamic model. The dynamic model is derived first and then simplified for a RexROV, a type of submarine vehicle used in simple underwater exploration and in the inspection of subsea structures, pipelines, and shipwrecks. The filter was implemented with a simulated vehicle in an open-source marine vehicle simulator called UUV Simulator, and the results were compared with the ground truth. The results show good prediction accuracy for the dynamic filter, though improvements are needed before the EKF can be used in real time. Some perspective and discussion on practical implementation are presented to show the next steps needed for this concept.
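A generic EKF predict/update cycle makes the dynamic-vs-kinematic distinction concrete: the choice of model enters only through the process model f and its Jacobian. The sketch below is textbook EKF, not the paper's implementation:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One extended Kalman filter cycle.

    x, P   : state estimate and covariance
    u, z   : control input and measurement
    f, h   : process and measurement models (nonlinear callables)
    F_jac, H_jac : their Jacobians evaluated at the current estimate
    Q, R   : process and measurement noise covariances
    A dynamic model makes f include forces (thrust, drag, buoyancy),
    whereas a kinematic model just integrates velocities.
    """
    # predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # update
    H = H_jac(x_pred)
    y = z - h(x_pred)                 # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```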
2304.01171
Qinglin Liu
Qinglin Liu, Xiaoqian Lv, Quanling Meng, Zonglin Li, Xiangyuan Lan, Shuo Yang, Shengping Zhang, Liqiang Nie
Revisiting Context Aggregation for Image Matting
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Traditional studies emphasize the significance of context information in improving matting performance. Consequently, deep learning-based matting methods delve into designing pooling or affinity-based context aggregation modules to achieve superior results. However, these modules cannot properly handle the context scale shift caused by the difference in image size between training and inference, resulting in matting performance degradation. In this paper, we revisit the context aggregation mechanisms of matting networks and find that a basic encoder-decoder network without any context aggregation modules can actually learn more universal context aggregation, thereby achieving higher matting performance compared to existing methods. Building on this insight, we present AEMatter, a matting network that is straightforward yet very effective. AEMatter adopts a Hybrid-Transformer backbone with appearance-enhanced axis-wise learning (AEAL) blocks to build a basic network with strong context aggregation learning capability. Furthermore, AEMatter leverages a large-image training strategy to assist the network in learning context aggregation from data. Extensive experiments on five popular matting datasets demonstrate that the proposed AEMatter outperforms state-of-the-art matting methods by a large margin.
[ { "created": "Mon, 3 Apr 2023 17:40:30 GMT", "version": "v1" }, { "created": "Wed, 15 May 2024 02:24:58 GMT", "version": "v2" } ]
2024-05-16
[ [ "Liu", "Qinglin", "" ], [ "Lv", "Xiaoqian", "" ], [ "Meng", "Quanling", "" ], [ "Li", "Zonglin", "" ], [ "Lan", "Xiangyuan", "" ], [ "Yang", "Shuo", "" ], [ "Zhang", "Shengping", "" ], [ "Nie", "Liqiang", "" ] ]
Traditional studies emphasize the significance of context information in improving matting performance. Consequently, deep learning-based matting methods delve into designing pooling or affinity-based context aggregation modules to achieve superior results. However, these modules cannot properly handle the context scale shift caused by the difference in image size between training and inference, resulting in matting performance degradation. In this paper, we revisit the context aggregation mechanisms of matting networks and find that a basic encoder-decoder network without any context aggregation modules can actually learn more universal context aggregation, thereby achieving higher matting performance compared to existing methods. Building on this insight, we present AEMatter, a matting network that is straightforward yet very effective. AEMatter adopts a Hybrid-Transformer backbone with appearance-enhanced axis-wise learning (AEAL) blocks to build a basic network with strong context aggregation learning capability. Furthermore, AEMatter leverages a large-image training strategy to assist the network in learning context aggregation from data. Extensive experiments on five popular matting datasets demonstrate that the proposed AEMatter outperforms state-of-the-art matting methods by a large margin.
1508.00040
Heba Aly
Heba Aly, Moustafa Youssef
An Analysis of Device-Free and Device-Based WiFi-Localization Systems
Published in International Journal of Ambient Computing and Intelligence (IJACI) - Volume 6 Issue 1, January 2014
null
10.4018/ijaci.2014010101
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
WiFi-based localization has become one of the main indoor localization techniques due to the ubiquity of WiFi connectivity. However, indoor environments exhibit complex wireless propagation characteristics. Typically, these characteristics are captured by constructing a fingerprint map for the different locations in the area of interest. This fingerprint requires significant manual construction overhead, which has been one of the major drawbacks of WiFi-based localization. In this paper, we present an automated tool for fingerprint construction and leverage it to study novel scenarios for device-based and device-free WiFi-based localization that are difficult to evaluate in a real environment. In particular, we examine the effect of changing the access point (AP) mounting location, upgrading the AP technology, and the crowd's effect on calibration and operation, among other factors, on the accuracy of the localization system. We present the analysis for the two classes of WiFi-based localization: device-based and device-free. Our analysis highlights the factors affecting localization accuracy, shows how to tune the system for better localization, and provides insights for both researchers and practitioners.
[ { "created": "Fri, 31 Jul 2015 21:42:02 GMT", "version": "v1" } ]
2015-08-04
[ [ "Aly", "Heba", "" ], [ "Youssef", "Moustafa", "" ] ]
WiFi-based localization has become one of the main indoor localization techniques due to the ubiquity of WiFi connectivity. However, indoor environments exhibit complex wireless propagation characteristics. Typically, these characteristics are captured by constructing a fingerprint map for the different locations in the area of interest. This fingerprint requires significant manual construction overhead, which has been one of the major drawbacks of WiFi-based localization. In this paper, we present an automated tool for fingerprint construction and leverage it to study novel scenarios for device-based and device-free WiFi-based localization that are difficult to evaluate in a real environment. In particular, we examine the effect of changing the access point (AP) mounting location, upgrading the AP technology, and the crowd's effect on calibration and operation, among other factors, on the accuracy of the localization system. We present the analysis for the two classes of WiFi-based localization: device-based and device-free. Our analysis highlights the factors affecting localization accuracy, shows how to tune the system for better localization, and provides insights for both researchers and practitioners.
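For background, the fingerprint map described above is typically used at runtime via weighted nearest-neighbor matching of RSSI vectors; a minimal baseline sketch (not the authors' tool) follows:

```python
import numpy as np

def knn_localize(rssi, fingerprints, locations, k=3):
    """Estimate a position from a live RSSI vector.

    rssi         : (n_aps,) current signal strengths, one per AP
    fingerprints : (n_points, n_aps) RSSI vectors recorded at survey points
    locations    : (n_points, 2) (x, y) coordinates of those survey points
    Returns the distance-weighted average of the k closest survey points.
    """
    d = np.linalg.norm(fingerprints - rssi, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)             # closer fingerprints weigh more
    return (w[:, None] * locations[nearest]).sum(axis=0) / w.sum()
```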
2306.04459
Zhen Zhang
Mengting Hu, Zhen Zhang, Shiwan Zhao, Minlie Huang and Bingzhe Wu
Uncertainty in Natural Language Processing: Sources, Quantification, and Applications
This work has been submitted to the IEEE for possible publication
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
As a main field of artificial intelligence, natural language processing (NLP) has achieved remarkable success via deep neural networks. Plenty of NLP tasks have been addressed in a unified manner, with various tasks being associated with each other through sharing the same paradigm. However, neural networks are black boxes that rely on probability computation, and making mistakes is inevitable. Estimating the reliability and trustworthiness (in other words, the uncertainty) of neural networks has therefore become a key research direction, one that plays a crucial role in reducing models' risks and making better decisions. In this survey, we provide a comprehensive review of uncertainty-relevant works in the NLP field. Considering the characteristics of the data and paradigms involved, we first categorize the sources of uncertainty in natural language into three types: input, system, and output. Then, we systematically review uncertainty quantification approaches and their main applications. Finally, we discuss the challenges of uncertainty estimation in NLP and potential future directions, taking into account recent trends in the field. Though there have been a few surveys on uncertainty estimation, our work is the first to review uncertainty from the NLP perspective.
[ { "created": "Mon, 5 Jun 2023 06:46:53 GMT", "version": "v1" } ]
2023-06-08
[ [ "Hu", "Mengting", "" ], [ "Zhang", "Zhen", "" ], [ "Zhao", "Shiwan", "" ], [ "Huang", "Minlie", "" ], [ "Wu", "Bingzhe", "" ] ]
As a main field of artificial intelligence, natural language processing (NLP) has achieved remarkable success via deep neural networks. Plenty of NLP tasks have been addressed in a unified manner, with various tasks being associated with each other through sharing the same paradigm. However, neural networks are black boxes that rely on probability computation, and making mistakes is inevitable. Estimating the reliability and trustworthiness (in other words, the uncertainty) of neural networks has therefore become a key research direction, one that plays a crucial role in reducing models' risks and making better decisions. In this survey, we provide a comprehensive review of uncertainty-relevant works in the NLP field. Considering the characteristics of the data and paradigms involved, we first categorize the sources of uncertainty in natural language into three types: input, system, and output. Then, we systematically review uncertainty quantification approaches and their main applications. Finally, we discuss the challenges of uncertainty estimation in NLP and potential future directions, taking into account recent trends in the field. Though there have been a few surveys on uncertainty estimation, our work is the first to review uncertainty from the NLP perspective.
1910.10073
Maha Elbayad
Maha Elbayad and Jiatao Gu and Edouard Grave and Michael Auli
Depth-Adaptive Transformer
Published as a conference paper at ICLR 2020
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art sequence-to-sequence models for large-scale tasks perform a fixed number of computations for each input sequence, regardless of whether it is easy or hard to process. In this paper, we train Transformer models which can make output predictions at different stages of the network, and we investigate different ways to predict how much computation is required for a particular sequence. Unlike dynamic computation in Universal Transformers, which applies the same set of layers iteratively, we apply different layers at every step to adjust both the amount of computation and the model capacity. On IWSLT German-English translation, our approach matches the accuracy of a well-tuned baseline Transformer while using less than a quarter of the decoder layers.
[ { "created": "Tue, 22 Oct 2019 16:15:58 GMT", "version": "v1" }, { "created": "Mon, 16 Dec 2019 18:32:39 GMT", "version": "v2" }, { "created": "Thu, 19 Dec 2019 17:26:49 GMT", "version": "v3" }, { "created": "Fri, 14 Feb 2020 20:49:40 GMT", "version": "v4" } ]
2020-02-18
[ [ "Elbayad", "Maha", "" ], [ "Gu", "Jiatao", "" ], [ "Grave", "Edouard", "" ], [ "Auli", "Michael", "" ] ]
State-of-the-art sequence-to-sequence models for large-scale tasks perform a fixed number of computations for each input sequence, regardless of whether it is easy or hard to process. In this paper, we train Transformer models which can make output predictions at different stages of the network, and we investigate different ways to predict how much computation is required for a particular sequence. Unlike dynamic computation in Universal Transformers, which applies the same set of layers iteratively, we apply different layers at every step to adjust both the amount of computation and the model capacity. On IWSLT German-English translation, our approach matches the accuracy of a well-tuned baseline Transformer while using less than a quarter of the decoder layers.
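One way to picture the per-sequence computation decision is a confidence-thresholded early exit over decoder layers; the sketch below is our simplified reading, with hypothetical module names, not the paper's exact halting mechanisms:

```python
import torch

def depth_adaptive_step(x, layers, classifiers, threshold=0.9):
    """Run decoder layers one at a time, exiting early when confident.

    x           : (batch=1, d_model) current decoder state for one position
    layers      : list of decoder layer modules
    classifiers : per-layer output projections to the vocabulary
    Returns (token, n_layers_used).
    """
    h = x
    for n, (layer, clf) in enumerate(zip(layers, classifiers), start=1):
        h = layer(h)
        probs = torch.softmax(clf(h), dim=-1)
        conf, token = probs.max(dim=-1)
        if conf.item() >= threshold or n == len(layers):
            return token.item(), n    # confident enough, or out of layers
```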
2302.12392
Mehala Balamurali
Mehala Balamurali, Konstantin M. Seiler
Better Predict the Dynamic of Geometry of In-Pit Stockpiles Using Geospatial Data and Polygon Models
null
Proceedings of the 40th International Symposium on the Application of Computers and Operations Research in the Minerals Industries (APCOM, 2021), 257-267. Johannesburg: The Southern African Institute of Mining and Metallurgy, 2021
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Modelling stockpiles is a key factor in a mining project's economics and operation, because not all mined ore can be milled, for many reasons. Further, the financial value of the ore in the stockpile needs to be reflected on the balance sheet. Therefore, automatically tracking the frontiers of the stockpile helps mine scheduling engineers calculate the tonnage of ore remaining in the stockpile. This paper suggests how the dynamics of stockpile shape changes caused by dumping and reclaiming operations can be inferred using polygon models. The presented work also demonstrates how the geometry of stockpiles can be inferred in the absence of reclaimed bucket information, in which case the reclaim polygons are established using the diggers' GPS positional data at the time of truck loading. This work further compares two polygon models for creating 2D shapes.
[ { "created": "Fri, 24 Feb 2023 01:46:13 GMT", "version": "v1" } ]
2023-02-27
[ [ "Balamurali", "Mehala.", "" ], [ "Seiler", "Konstantin M.", "" ] ]
Modelling stockpiles is a key factor in a mining project's economics and operation, because not all mined ore can be milled, for many reasons. Further, the financial value of the ore in the stockpile needs to be reflected on the balance sheet. Therefore, automatically tracking the frontiers of the stockpile helps mine scheduling engineers calculate the tonnage of ore remaining in the stockpile. This paper suggests how the dynamics of stockpile shape changes caused by dumping and reclaiming operations can be inferred using polygon models. The presented work also demonstrates how the geometry of stockpiles can be inferred in the absence of reclaimed bucket information, in which case the reclaim polygons are established using the diggers' GPS positional data at the time of truck loading. This work further compares two polygon models for creating 2D shapes.
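The footprint bookkeeping the abstract describes (grow on dump, shrink on reclaim) can be sketched with 2D polygon set operations; the sketch below uses shapely, and the disc approximation of dump/reclaim events and the radii are our assumptions:

```python
from shapely.geometry import Point, Polygon

# initial stockpile footprint (coordinates are illustrative)
stockpile = Polygon([(0, 0), (20, 0), (20, 15), (0, 15)])

def apply_dump(footprint, x, y, radius=3.0):
    """Union a dump event (approximated as a disc) into the footprint."""
    return footprint.union(Point(x, y).buffer(radius))

def apply_reclaim(footprint, x, y, radius=3.0):
    """Subtract a reclaim polygon; with no bucket data, the disc is
    centred on the digger's GPS position at truck-loading time."""
    return footprint.difference(Point(x, y).buffer(radius))

stockpile = apply_dump(stockpile, 22, 8)      # dump extends the footprint
stockpile = apply_reclaim(stockpile, 5, 5)    # reclaim carves material out
print(round(stockpile.area, 1))               # remaining 2D footprint area
```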
2011.09140
Tomohide Shibata
Shogo Fujita and Tomohide Shibata and Manabu Okumura
Diverse and Non-redundant Answer Set Extraction on Community QA based on DPPs
COLING2020, 12 pages
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In community-based question answering (CQA) platforms, it takes time for a user to get useful information from among many answers. Although one solution is an answer ranking method, the user still needs to read through the top-ranked answers carefully. This paper proposes a new task of selecting a diverse and non-redundant answer set rather than ranking the answers. Our method is based on determinantal point processes (DPPs), and it calculates the answer importance and similarity between answers by using BERT. We built a dataset focusing on a Japanese CQA site, and the experiments on this dataset demonstrated that the proposed method outperformed several baseline methods.
[ { "created": "Wed, 18 Nov 2020 07:33:03 GMT", "version": "v1" } ]
2020-11-19
[ [ "Fujita", "Shogo", "" ], [ "Shibata", "Tomohide", "" ], [ "Okumura", "Manabu", "" ] ]
In community-based question answering (CQA) platforms, it takes time for a user to get useful information from among many answers. Although one solution is an answer ranking method, the user still needs to read through the top-ranked answers carefully. This paper proposes a new task of selecting a diverse and non-redundant answer set rather than ranking the answers. Our method is based on determinantal point processes (DPPs), and it calculates the answer importance and similarity between answers by using BERT. We built a dataset focusing on a Japanese CQA site, and the experiments on this dataset demonstrated that the proposed method outperformed several baseline methods.
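For intuition, a DPP with kernel L_ij = q_i * S_ij * q_j trades answer importance q against pairwise similarity S; a minimal greedy selection sketch follows (in the paper, q and S come from BERT; here they are plain arrays):

```python
import numpy as np

def greedy_dpp(q, S, k):
    """Greedily pick k items that are individually important yet
    mutually dissimilar, by maximizing det(L_Y) for the DPP kernel
    L_ij = q_i * S_ij * q_j.

    q : (n,) importance scores, S : (n, n) similarity matrix.
    """
    L = q[:, None] * S * q[None, :]
    selected = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for i in range(len(q)):
            if i in selected:
                continue
            idx = selected + [i]
            det = np.linalg.det(L[np.ix_(idx, idx)])
            if det > best_det:
                best, best_det = i, det
        selected.append(best)
    return selected
```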
2403.15402
Sourojit Ghosh
Sourojit Ghosh, Sarah Coppola
This Class Isn't Designed For Me: Recognizing Ableist Trends In Design Education, And Redesigning For An Inclusive And Sustainable Future
Upcoming Publication, Design Research Society 2024
null
10.21606/drs.2024.1070
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Traditional and currently-prevalent pedagogies of design perpetuate ableist and exclusionary notions of what it means to be a designer. In this paper, we trace such historically exclusionary norms of design education, and highlight modern-day instances from our own experiences as design educators in such epistemologies. Towards imagining a more inclusive and sustainable future of design education, we present three case studies from our own experience as design educators in redesigning course experiences for blind and low-vision (BLV), deaf and hard-of-hearing (DHH) students, and students with other disabilities. In documenting successful and unsuccessful practices, we imagine what a pedagogy of care in design education would look like.
[ { "created": "Mon, 19 Feb 2024 20:14:34 GMT", "version": "v1" } ]
2024-08-06
[ [ "Ghosh", "Sourojit", "" ], [ "Coppola", "Sarah", "" ] ]
Traditional and currently-prevalent pedagogies of design perpetuate ableist and exclusionary notions of what it means to be a designer. In this paper, we trace such historically exclusionary norms of design education, and highlight modern-day instances from our own experiences as design educators in such epistemologies. Towards imagining a more inclusive and sustainable future of design education, we present three case studies from our own experience as design educators in redesigning course experiences for blind and low-vision (BLV), deaf and hard-of-hearing (DHH) students, and students with other disabilities. In documenting successful and unsuccessful practices, we imagine what a pedagogy of care in design education would look like.
2111.05953
Giuseppina Carannante
Giuseppina Carannante, Dimah Dera, Ghulam Rasool, Nidhal C. Bouaynaya, and Lyudmila Mihaylova
Robust Learning via Ensemble Density Propagation in Deep Neural Networks
submitted to 2020 IEEE International Workshop on Machine Learning for Signal Processing
null
null
null
cs.LG cs.AI cs.CV math.PR
http://creativecommons.org/licenses/by/4.0/
Learning in uncertain, noisy, or adversarial environments is a challenging task for deep neural networks (DNNs). We propose a new theoretically grounded and efficient approach for robust learning that builds upon Bayesian estimation and Variational Inference. We formulate the problem of density propagation through layers of a DNN and solve it using an Ensemble Density Propagation (EnDP) scheme. The EnDP approach allows us to propagate moments of the variational probability distribution across the layers of a Bayesian DNN, enabling the estimation of the mean and covariance of the predictive distribution at the output of the model. Our experiments using MNIST and CIFAR-10 datasets show a significant improvement in the robustness of the trained models to random noise and adversarial attacks.
[ { "created": "Wed, 10 Nov 2021 21:26:08 GMT", "version": "v1" } ]
2021-11-12
[ [ "Carannante", "Giuseppina", "" ], [ "Dera", "Dimah", "" ], [ "Rasool", "Ghulam", "" ], [ "Bouaynaya", "Nidhal C.", "" ], [ "Mihaylova", "Lyudmila", "" ] ]
Learning in uncertain, noisy, or adversarial environments is a challenging task for deep neural networks (DNNs). We propose a new theoretically grounded and efficient approach for robust learning that builds upon Bayesian estimation and Variational Inference. We formulate the problem of density propagation through layers of a DNN and solve it using an Ensemble Density Propagation (EnDP) scheme. The EnDP approach allows us to propagate moments of the variational probability distribution across the layers of a Bayesian DNN, enabling the estimation of the mean and covariance of the predictive distribution at the output of the model. Our experiments using MNIST and CIFAR-10 datasets show a significant improvement in the robustness of the trained models to random noise and adversarial attacks.
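The moment propagation is easiest to see for a single affine layer, where it is exact: y = Wx + b maps mean mu to W mu + b and covariance Sigma to W Sigma W^T. The sketch below covers only this fixed-weight case; EnDP additionally handles random weights and nonlinearities via ensembles:

```python
import numpy as np

def propagate_linear(mu, Sigma, W, b):
    """Propagate the first two moments of x ~ (mu, Sigma) through y = W x + b.

    Exact for any input distribution, since the map is affine. Propagating
    through nonlinearities and random weights (as EnDP does) requires
    approximations, e.g. ensembles over sampled weight configurations.
    """
    mu_out = W @ mu + b
    Sigma_out = W @ Sigma @ W.T
    return mu_out, Sigma_out
```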
1109.0775
EPTCS
Azer Bestavros (Boston University), Assaf Kfoury (Boston University)
A Domain-Specific Language for Incremental and Modular Design of Large-Scale Verifiably-Safe Flow Networks (Preliminary Report)
In Proceedings DSL 2011, arXiv:1109.0323
EPTCS 66, 2011, pp. 24-47
10.4204/EPTCS.66.2
null
cs.PL cs.DC cs.LO cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We define a domain-specific language (DSL) to inductively assemble flow networks from small networks or modules to produce arbitrarily large ones, with interchangeable functionally-equivalent parts. Our small networks or modules are "small" only as the building blocks in this inductive definition (there is no limit on their size). Associated with our DSL is a type theory, a system of formal annotations to express desirable properties of flow networks together with rules that enforce them as invariants across their interfaces, i.e., the rules guarantee the properties are preserved as we build larger networks from smaller ones. A prerequisite for a type theory is a formal semantics, i.e., a rigorous definition of the entities that qualify as feasible flows through the networks, possibly restricted to satisfy additional efficiency or safety requirements. This can be carried out in one of two ways, as a denotational semantics or as an operational (or reduction) semantics; we choose the first in preference to the second, partly to avoid exponential-growth rewriting in the operational approach. We set up a typing system and prove its soundness for our DSL.
[ { "created": "Mon, 5 Sep 2011 01:56:15 GMT", "version": "v1" } ]
2011-09-06
[ [ "Bestavros", "Azer", "", "Boston University" ], [ "Kfoury", "Assaf", "", "Boston University" ] ]
We define a domain-specific language (DSL) to inductively assemble flow networks from small networks or modules to produce arbitrarily large ones, with interchangeable functionally-equivalent parts. Our small networks or modules are "small" only as the building blocks in this inductive definition (there is no limit on their size). Associated with our DSL is a type theory, a system of formal annotations to express desirable properties of flow networks together with rules that enforce them as invariants across their interfaces, i.e., the rules guarantee the properties are preserved as we build larger networks from smaller ones. A prerequisite for a type theory is a formal semantics, i.e., a rigorous definition of the entities that qualify as feasible flows through the networks, possibly restricted to satisfy additional efficiency or safety requirements. This can be carried out in one of two ways, as a denotational semantics or as an operational (or reduction) semantics; we choose the first in preference to the second, partly to avoid exponential-growth rewriting in the operational approach. We set up a typing system and prove its soundness for our DSL.
1910.08810
Benjamin Ramtoula
Benjamin Ramtoula, Ricardo de Azambuja, Giovanni Beltrame
CAPRICORN: Communication Aware Place Recognition using Interpretable Constellations of Objects in Robot Networks
8 pages, 6 figures, 1 table. 2020 IEEE International Conference on Robotics and Automation (ICRA)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using multiple robots for exploring and mapping environments can provide improved robustness and performance, but it can be difficult to implement. In particular, limited communication bandwidth is a considerable constraint when a robot needs to determine if it has visited a location that was previously explored by another robot, as it requires robots to share descriptions of places they have visited. One way to compress this description is to use constellations, groups of 3D points that correspond to the estimate of a set of relative object positions. Constellations maintain the same pattern from different viewpoints and can be robust to illumination changes or dynamic elements. We present a method to extract from these constellations compact spatial and semantic descriptors of the objects in a scene. We use this representation in a 2-step decentralized loop closure verification: first, we distribute the compact semantic descriptors to determine which other robots might have seen scenes with similar objects; then we query the matching robots with the full constellation to validate the match using geometric information. The proposed method requires less memory, is more interpretable than global image descriptors, and could be useful for other tasks and interactions with the environment. We validate our system's performance on a TUM RGB-D SLAM sequence and show its benefits in terms of bandwidth requirements.
[ { "created": "Sat, 19 Oct 2019 17:52:04 GMT", "version": "v1" }, { "created": "Thu, 26 Mar 2020 00:32:21 GMT", "version": "v2" } ]
2020-03-27
[ [ "Ramtoula", "Benjamin", "" ], [ "de Azambuja", "Ricardo", "" ], [ "Beltrame", "Giovanni", "" ] ]
Using multiple robots for exploring and mapping environments can provide improved robustness and performance, but it can be difficult to implement. In particular, limited communication bandwidth is a considerable constraint when a robot needs to determine if it has visited a location that was previously explored by another robot, as it requires robots to share descriptions of places they have visited. One way to compress this description is to use constellations, groups of 3D points that correspond to the estimate of a set of relative object positions. Constellations maintain the same pattern from different viewpoints and can be robust to illumination changes or dynamic elements. We present a method to extract from these constellations compact spatial and semantic descriptors of the objects in a scene. We use this representation in a 2-step decentralized loop closure verification: first, we distribute the compact semantic descriptors to determine which other robots might have seen scenes with similar objects; then we query the matching robots with the full constellation to validate the match using geometric information. The proposed method requires less memory, is more interpretable than global image descriptors, and could be useful for other tasks and interactions with the environment. We validate our system's performance on a TUM RGB-D SLAM sequence and show its benefits in terms of bandwidth requirements.
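The cheap first stage of the two-step verification can be pictured as comparing bags of object classes before any geometry is exchanged; the cosine pre-filter below is our illustration of that idea, not the exact CAPRICORN descriptor:

```python
import numpy as np

def class_histogram(object_labels, n_classes):
    """Compact semantic descriptor: counts of each object class in a scene."""
    h = np.bincount(object_labels, minlength=n_classes).astype(float)
    return h / (np.linalg.norm(h) + 1e-9)

def candidates(query_hist, robot_hists, threshold=0.8):
    """Step 1: broadcast only the tiny histogram and shortlist robots whose
    scenes look semantically similar. Step 2 (not shown) would exchange the
    full 3D constellation with just those robots for geometric validation."""
    sims = robot_hists @ query_hist            # cosine similarity (unit vectors)
    return np.where(sims >= threshold)[0]
```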
0711.4792
Sriram Sridharan
Sriram Sridharan and Sriram Vishwanath
On the Capacity of a Class of MIMO Cognitive Radios
13 pages, 8 figures, Accepted for publication in Journal of Selected Topics in Signal Processing (JSTSP) - Special Issue on Dynamic Spectrum Access
null
10.1109/JSTSP.2007.914890
null
cs.IT math.IT
null
Cognitive radios have been studied recently as a means to utilize spectrum in a more efficient manner. This paper focuses on the fundamental limits of operation of a MIMO cognitive radio network with a single licensed user and a single cognitive user. The channel setting is equivalent to an interference channel with degraded message sets (with the cognitive user having access to the licensed user's message). An achievable region and an outer bound are derived for such a network setting. It is shown that, under certain conditions, the achievable region is optimal for a portion of the capacity region that includes the sum capacity.
[ { "created": "Thu, 29 Nov 2007 18:28:00 GMT", "version": "v1" }, { "created": "Tue, 11 Dec 2007 20:54:34 GMT", "version": "v2" } ]
2009-11-13
[ [ "Sridharan", "Sriram", "" ], [ "Vishwanath", "Sriram", "" ] ]
Cognitive radios have been studied recently as a means to utilize spectrum in a more efficient manner. This paper focuses on the fundamental limits of operation of a MIMO cognitive radio network with a single licensed user and a single cognitive user. The channel setting is equivalent to an interference channel with degraded message sets (with the cognitive user having access to the licensed user's message). An achievable region and an outer bound are derived for such a network setting. It is shown that, under certain conditions, the achievable region is optimal for a portion of the capacity region that includes the sum capacity.
1909.02423
Vincent Labatut
Xavier Bost (LIA), Serigne Gueye (LIA), Vincent Labatut (LIA), Martha Larson (DMIR), Georges Linar\`es (LIA), Damien Malinas (CNELIAS), Rapha\"el Roth (CNELIAS)
Remembering Winter Was Coming: Character-Oriented Video Summaries of TV Series
null
Multimedia Tools and Applications, Springer, 2019, 78(24):35373-35399
10.1007/s11042-019-07969-4
null
cs.MM cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today's popular TV series tend to develop continuous, complex plots spanning several seasons, but are often viewed in controlled and discontinuous conditions. Consequently, most viewers need to be re-immersed in the story before watching a new season. Although discussions with friends and family can help, we observe that most viewers make extensive use of summaries to re-engage with the plot. Automatic generation of video summaries of TV series' complex stories requires, first, modeling the dynamics of the plot and, second, extracting relevant sequences. In this paper, we tackle plot modeling by considering the social network of interactions between the characters involved in the narrative: substantial, durable changes in a major character's social environment suggest a new development relevant for the summary. Once identified, these major stages in each character's storyline can be used as a basis for completing the summary with related sequences. Our algorithm combines such social network analysis with filmmaking grammar to automatically generate character-oriented video summaries of TV series from partially annotated data. We carry out evaluation with a user study in a real-world scenario: a large sample of viewers were asked to rank video summaries centered on five characters of the popular TV series Game of Thrones, a few weeks before the new, sixth season was released. Our results reveal the ability of character-oriented summaries to re-engage viewers in television series and confirm the contributions of modeling the plot content and exploiting stylistic patterns to identify salient sequences.
[ { "created": "Thu, 5 Sep 2019 14:00:45 GMT", "version": "v1" }, { "created": "Wed, 11 Dec 2019 15:24:57 GMT", "version": "v2" }, { "created": "Wed, 18 Mar 2020 07:10:52 GMT", "version": "v3" } ]
2020-03-19
[ [ "Bost", "Xavier", "", "LIA" ], [ "Gueye", "Serigne", "", "LIA" ], [ "Labatut", "Vincent", "", "LIA" ], [ "Larson", "Martha", "", "DMIR" ], [ "Linarès", "Georges", "", "LIA" ], [ "Malinas", "Damien", "", "CNELIAS" ], [ "Roth", "Raphaël", "", "CNELIAS" ] ]
Today's popular TV series tend to develop continuous, complex plots spanning several seasons, but are often viewed in controlled and discontinuous conditions. Consequently, most viewers need to be re-immersed in the story before watching a new season. Although discussions with friends and family can help, we observe that most viewers make extensive use of summaries to re-engage with the plot. Automatic generation of video summaries of TV series' complex stories requires, first, modeling the dynamics of the plot and, second, extracting relevant sequences. In this paper, we tackle plot modeling by considering the social network of interactions between the characters involved in the narrative: substantial, durable changes in a major character's social environment suggest a new development relevant for the summary. Once identified, these major stages in each character's storyline can be used as a basis for completing the summary with related sequences. Our algorithm combines such social network analysis with filmmaking grammar to automatically generate character-oriented video summaries of TV series from partially annotated data. We carry out evaluation with a user study in a real-world scenario: a large sample of viewers were asked to rank video summaries centered on five characters of the popular TV series Game of Thrones, a few weeks before the new, sixth season was released. Our results reveal the ability of character-oriented summaries to re-engage viewers in television series and confirm the contributions of modeling the plot content and exploiting stylistic patterns to identify salient sequences.
1806.09279
Amritpal Kaur
Amritpal Kaur and Harkiran Kaur
Framework for Opinion Mining Approach to Augment Education System Performance
5 pages, 2 figures
http://ijitce.co.uk/vol8n6.aspx June 2018 Issue Vol.8 No.6
null
null
cs.IR cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
The rapid growth of social networking sites allows people to share their views and experiences freely with their peers on the internet. As a result, a huge amount of data is generated every day, which can be used in opinion mining to extract people's views in a particular field. Opinion mining finds applications in many areas, such as tourism, politics, education, and entertainment, yet it has not been extensively applied to the education system. This paper discusses the malpractices in the present examination system. In the present scenario, opinion mining is widely used for decision making. The authors of this paper have designed a framework that applies the Na\"ive Bayes approach to an education dataset. The Na\"ive Bayes approach comprises three steps: converting the data into a frequency table, forming classes from the dataset, and applying the Na\"ive Bayes equation to calculate the probabilities of the classes. The class with the highest probability is the outcome of the prediction. These predictions are used to make improvements in the education system and help provide better education.
[ { "created": "Mon, 25 Jun 2018 04:17:44 GMT", "version": "v1" } ]
2018-06-26
[ [ "Kaur", "Amritpal", "" ], [ "Kaur", "Harkiran", "" ] ]
The rapid growth of social networking sites allows people to share their views and experiences freely with their peers on the internet. As a result, a huge amount of data is generated every day, which can be used in opinion mining to extract people's views in a particular field. Opinion mining finds applications in many areas, such as tourism, politics, education, and entertainment, yet it has not been extensively applied to the education system. This paper discusses the malpractices in the present examination system. In the present scenario, opinion mining is widely used for decision making. The authors of this paper have designed a framework that applies the Na\"ive Bayes approach to an education dataset. The Na\"ive Bayes approach comprises three steps: converting the data into a frequency table, forming classes from the dataset, and applying the Na\"ive Bayes equation to calculate the probabilities of the classes. The class with the highest probability is the outcome of the prediction. These predictions are used to make improvements in the education system and help provide better education.
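The three steps listed in the abstract map directly onto a tiny word-frequency Na\"ive Bayes classifier; the toy opinions below are hypothetical stand-ins for the education dataset:

```python
from collections import Counter, defaultdict
import math

train = [("exam pattern is unfair and outdated", "negative"),
         ("teachers give clear and fair evaluation", "positive"),
         ("paper checking is biased and unfair", "negative"),
         ("grading system is clear and helpful", "positive")]

# Steps 1-2: build per-class word-frequency tables and class priors
freq, class_counts = defaultdict(Counter), Counter()
for text, label in train:
    class_counts[label] += 1
    freq[label].update(text.split())

def predict(text, alpha=1.0):
    """Step 3: apply the Bayes equation with Laplace smoothing and return
    the highest-probability class."""
    vocab = {w for c in freq.values() for w in c}
    scores = {}
    for label in class_counts:
        total = sum(freq[label].values())
        logp = math.log(class_counts[label] / len(train))
        for w in text.split():
            logp += math.log((freq[label][w] + alpha) / (total + alpha * len(vocab)))
        scores[label] = logp
    return max(scores, key=scores.get)

print(predict("the evaluation is unfair"))   # -> negative
```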
2402.00958
Ignacio F\'abregas
Ignacio F\'abregas and Miguel Palomino and David de Frutos-Escrig
Reflection and Preservation of Properties in Coalgebraic (bi)Simulations
null
Theoretical Aspects of Computing (ICTAC) 2007. Lecture Notes in Computer Science volume 4711
10.1007/978-3-540-75292-9\_16
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Our objective is to extend the standard results of preservation and reflection of properties by bisimulations to the coalgebraic setting, as well as to study under what conditions these results hold for simulations. The notion of bisimulation is the classical one, while for simulations we use that proposed by Hughes and Jacobs. As for properties, we start by using a generalization of linear temporal logic to arbitrary coalgebras suggested by Jacobs, and then an extension by Kurtz which includes atomic propositions too.
[ { "created": "Thu, 1 Feb 2024 19:26:17 GMT", "version": "v1" } ]
2024-02-05
[ [ "Fábregas", "Ignacio", "" ], [ "Palomino", "Miguel", "" ], [ "de Frutos-Escrig", "David", "" ] ]
Our objective is to extend the standard results of preservation and reflection of properties by bisimulations to the coalgebraic setting, as well as to study under what conditions these results hold for simulations. The notion of bisimulation is the classical one, while for simulations we use that proposed by Hughes and Jacobs. As for properties, we start by using a generalization of linear temporal logic to arbitrary coalgebras suggested by Jacobs, and then an extension by Kurtz which includes atomic propositions too.
1909.07140
Thomas Parnell
Dimitrios Sarigiannis, Thomas Parnell, Haris Pozidis
Weighted Sampling for Combined Model Selection and Hyperparameter Tuning
Accepted for presentation at The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020)
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The combined algorithm selection and hyperparameter tuning (CASH) problem is characterized by large hierarchical hyperparameter spaces. Model-free hyperparameter tuning methods can explore such large spaces efficiently since they are highly parallelizable across multiple machines. When no prior knowledge or meta-data exists to boost their performance, these methods commonly sample random configurations following a uniform distribution. In this work, we propose a novel sampling distribution as an alternative to uniform sampling and prove theoretically that it has a better chance of finding the best configuration in a worst-case setting. In order to compare competing methods rigorously in an experimental setting, one must perform statistical hypothesis testing. We show that there is little-to-no agreement in the automated machine learning literature regarding which methods should be used. We contrast this disparity with the methods recommended by the broader statistics literature, and identify a suitable approach. We then select three popular model-free solutions to CASH and evaluate their performance, with uniform sampling as well as the proposed sampling scheme, across 67 datasets from the OpenML platform. We investigate the trade-off between exploration and exploitation across the three algorithms, and verify empirically that the proposed sampling distribution improves performance in all cases.
[ { "created": "Mon, 16 Sep 2019 12:01:12 GMT", "version": "v1" }, { "created": "Tue, 17 Sep 2019 07:57:49 GMT", "version": "v2" }, { "created": "Thu, 21 Nov 2019 12:19:57 GMT", "version": "v3" } ]
2019-11-22
[ [ "Sarigiannis", "Dimitrios", "" ], [ "Parnell", "Thomas", "" ], [ "Pozidis", "Haris", "" ] ]
The combined algorithm selection and hyperparameter tuning (CASH) problem is characterized by large hierarchical hyperparameter spaces. Model-free hyperparameter tuning methods can explore such large spaces efficiently since they are highly parallelizable across multiple machines. When no prior knowledge or meta-data exists to boost their performance, these methods commonly sample random configurations following a uniform distribution. In this work, we propose a novel sampling distribution as an alternative to uniform sampling and prove theoretically that it has a better chance of finding the best configuration in a worst-case setting. In order to compare competing methods rigorously in an experimental setting, one must perform statistical hypothesis testing. We show that there is little-to-no agreement in the automated machine learning literature regarding which methods should be used. We contrast this disparity with the methods recommended by the broader statistics literature, and identify a suitable approach. We then select three popular model-free solutions to CASH and evaluate their performance, with uniform sampling as well as the proposed sampling scheme, across 67 datasets from the OpenML platform. We investigate the trade-off between exploration and exploitation across the three algorithms, and verify empirically that the proposed sampling distribution improves performance in all cases.
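As a rough sketch of the setting, the snippet below samples configurations from a hierarchical CASH space with a non-uniform weighting over model families; the families, ranges, and weights are illustrative stand-ins, and the paper's actual sampling distribution (with its worst-case guarantee) is not reproduced here:

```python
import random

# Hierarchical CASH search space: model choice on top, each model
# with its own hyperparameter ranges (illustrative values; treated
# as continuous for simplicity).
space = {
    "svm": {"C": (1e-3, 1e3)},
    "rf":  {"n_estimators": (10, 500)},
    "knn": {"n_neighbors": (1, 50)},
}

# Hypothetical non-uniform weights over models; the paper derives a
# specific distribution with worst-case guarantees, not shown here.
weights = {"svm": 0.5, "rf": 0.3, "knn": 0.2}

def sample_configuration(rng=random):
    model = rng.choices(list(space), weights=[weights[m] for m in space])[0]
    params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space[model].items()}
    return model, params

for _ in range(3):
    print(sample_configuration())
```

Because each draw is independent, such samplers parallelize trivially across machines, which is the property that makes model-free tuning attractive at this scale.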
2004.14164
Xiaoqing Geng
Xiaoqing Geng, Xiwen Chen, Kenny Q. Zhu, Libin Shen, Yinggong Zhao
MICK: A Meta-Learning Framework for Few-shot Relation Classification with Small Training Data
null
CIKM 2020: The 29th ACM International Conference on Information and Knowledge Management
10.1145/3340531.3411858
null
cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Few-shot relation classification seeks to classify incoming query instances after seeing only a few support instances. This ability is gained by training with a large amount of in-domain annotated data. In this paper, we tackle an even harder problem by further limiting the amount of data available at training time. We propose a few-shot learning framework for relation classification, which is particularly powerful when the training data is very small. In this framework, models not only strive to classify query instances, but also seek underlying knowledge about the support instances to obtain better instance representations. The framework also includes a method for aggregating cross-domain knowledge into models by open-source task enrichment. Additionally, we construct a brand new dataset: the TinyRel-CM dataset, a few-shot relation classification dataset in the health domain with purposely small training data and challenging relation classes. Experimental results demonstrate that our framework brings performance gains for most underlying classification models, outperforms the state-of-the-art results given small training data, and achieves competitive results with sufficiently large training data.
[ { "created": "Sun, 26 Apr 2020 06:23:38 GMT", "version": "v1" }, { "created": "Mon, 14 Dec 2020 15:54:51 GMT", "version": "v2" } ]
2020-12-15
[ [ "Geng", "Xiaoqing", "" ], [ "Chen", "Xiwen", "" ], [ "Zhu", "Kenny Q.", "" ], [ "Shen", "Libin", "" ], [ "Zhao", "Yinggong", "" ] ]
Few-shot relation classification seeks to classify incoming query instances after seeing only a few support instances. This ability is gained by training with a large amount of in-domain annotated data. In this paper, we tackle an even harder problem by further limiting the amount of data available at training time. We propose a few-shot learning framework for relation classification, which is particularly powerful when the training data is very small. In this framework, models not only strive to classify query instances, but also seek underlying knowledge about the support instances to obtain better instance representations. The framework also includes a method for aggregating cross-domain knowledge into models by open-source task enrichment. Additionally, we construct a brand new dataset: the TinyRel-CM dataset, a few-shot relation classification dataset in the health domain with purposely small training data and challenging relation classes. Experimental results demonstrate that our framework brings performance gains for most underlying classification models, outperforms the state-of-the-art results given small training data, and achieves competitive results with sufficiently large training data.
2303.13355
Son Tran
Son Quoc Tran, Phong Nguyen-Thuan Do, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen
Revealing Weaknesses of Vietnamese Language Models Through Unanswerable Questions in Machine Reading Comprehension
Accepted at The 2023 EACL Student Research Workshop
null
null
null
cs.CL cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Although the curse of multilinguality significantly restricts the language abilities of multilingual models in monolingual settings, researchers still have to rely on multilingual models to develop state-of-the-art systems in Vietnamese Machine Reading Comprehension. This difficulty stems from the limited number of high-quality works on developing Vietnamese language models. In order to encourage more work in this research field, we present a comprehensive analysis of the linguistic weaknesses and strengths of current Vietnamese monolingual models using the downstream task of Machine Reading Comprehension. From the analysis results, we suggest new directions for developing Vietnamese language models. Besides this main contribution, we also reveal the existence of artifacts in Vietnamese Machine Reading Comprehension benchmarks and suggest an urgent need for new high-quality benchmarks to track the progress of Vietnamese Machine Reading Comprehension. Moreover, we introduce a minor but valuable modification to the process of annotating unanswerable questions for Machine Reading Comprehension from previous work. Our proposed modification helps improve the quality of unanswerable questions to a higher level of difficulty for Machine Reading Comprehension systems to solve.
[ { "created": "Thu, 16 Mar 2023 20:32:58 GMT", "version": "v1" } ]
2023-03-24
[ [ "Tran", "Son Quoc", "" ], [ "Do", "Phong Nguyen-Thuan", "" ], [ "Van Nguyen", "Kiet", "" ], [ "Nguyen", "Ngan Luu-Thuy", "" ] ]
Although the curse of multilinguality significantly restricts the language abilities of multilingual models in monolingual settings, researchers still have to rely on multilingual models to develop state-of-the-art systems in Vietnamese Machine Reading Comprehension. This difficulty stems from the limited number of high-quality works on developing Vietnamese language models. In order to encourage more work in this research field, we present a comprehensive analysis of the linguistic weaknesses and strengths of current Vietnamese monolingual models using the downstream task of Machine Reading Comprehension. From the analysis results, we suggest new directions for developing Vietnamese language models. Besides this main contribution, we also reveal the existence of artifacts in Vietnamese Machine Reading Comprehension benchmarks and suggest an urgent need for new high-quality benchmarks to track the progress of Vietnamese Machine Reading Comprehension. Moreover, we introduce a minor but valuable modification to the process of annotating unanswerable questions for Machine Reading Comprehension from previous work. Our proposed modification helps improve the quality of unanswerable questions to a higher level of difficulty for Machine Reading Comprehension systems to solve.
1905.12688
Graham Neubig
Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, Graham Neubig
Choosing Transfer Languages for Cross-Lingual Learning
Proceedings of ACL 2019
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-lingual transfer, where a high-resource transfer language is used to improve the accuracy of a low-resource task language, is now an invaluable tool for improving the performance of natural language processing (NLP) on low-resource languages. However, given a particular task language, it is not clear which language to transfer from, and the standard strategy is to select languages based on ad hoc criteria, usually the intuition of the experimenter. Since a large number of features contribute to the success of cross-lingual transfer (including phylogenetic similarity, typological properties, lexical overlap, or size of available data), even the most enlightened experimenter rarely considers all these factors for the particular task at hand. In this paper, we consider this task of automatically selecting optimal transfer languages as a ranking problem, and build models that consider the aforementioned features to perform this prediction. In experiments on representative NLP tasks, we demonstrate that our model predicts good transfer languages much better than ad hoc baselines considering single features in isolation, and glean insights on which features are most informative for different NLP tasks, which may inform future ad hoc selection even without use of our method. Code, data, and pre-trained models are available at https://github.com/neulab/langrank
[ { "created": "Wed, 29 May 2019 19:19:47 GMT", "version": "v1" }, { "created": "Fri, 7 Jun 2019 03:37:25 GMT", "version": "v2" } ]
2019-06-10
[ [ "Lin", "Yu-Hsiang", "" ], [ "Chen", "Chian-Yu", "" ], [ "Lee", "Jean", "" ], [ "Li", "Zirui", "" ], [ "Zhang", "Yuyan", "" ], [ "Xia", "Mengzhou", "" ], [ "Rijhwani", "Shruti", "" ], [ "He", "Junxian", "" ], [ "Zhang", "Zhisong", "" ], [ "Ma", "Xuezhe", "" ], [ "Anastasopoulos", "Antonios", "" ], [ "Littell", "Patrick", "" ], [ "Neubig", "Graham", "" ] ]
Cross-lingual transfer, where a high-resource transfer language is used to improve the accuracy of a low-resource task language, is now an invaluable tool for improving the performance of natural language processing (NLP) on low-resource languages. However, given a particular task language, it is not clear which language to transfer from, and the standard strategy is to select languages based on ad hoc criteria, usually the intuition of the experimenter. Since a large number of features contribute to the success of cross-lingual transfer (including phylogenetic similarity, typological properties, lexical overlap, or size of available data), even the most enlightened experimenter rarely considers all these factors for the particular task at hand. In this paper, we consider this task of automatically selecting optimal transfer languages as a ranking problem, and build models that consider the aforementioned features to perform this prediction. In experiments on representative NLP tasks, we demonstrate that our model predicts good transfer languages much better than ad hoc baselines considering single features in isolation, and glean insights on which features are most informative for different NLP tasks, which may inform future ad hoc selection even without use of our method. Code, data, and pre-trained models are available at https://github.com/neulab/langrank
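To make the ranking formulation concrete, here is a small sketch that scores candidate transfer languages from pairwise features, using a gradient-boosted regressor as a stand-in ranker; the feature names, values, and language codes are all hypothetical, not the paper's actual model or data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy ranking setup: each row describes a (task language, candidate
# transfer language) pair with hypothetical features, and the target
# is observed downstream transfer performance.
# columns: [phylogenetic_sim, typological_sim, lexical_overlap, data_size_ratio]
X = np.array([
    [0.9, 0.8, 0.6, 1.2],
    [0.2, 0.4, 0.1, 5.0],
    [0.7, 0.7, 0.5, 0.8],
    [0.1, 0.3, 0.0, 3.0],
])
y = np.array([0.71, 0.40, 0.66, 0.35])  # observed transfer accuracy

# Stand-in for the paper's ranking model: a regressor whose scores
# induce a ranking over candidate transfer languages.
model = GradientBoostingRegressor().fit(X, y)

candidates = ["spa", "zho", "por", "ara"]  # hypothetical ISO codes
scores = model.predict(X)
ranked = sorted(zip(candidates, scores), key=lambda t: -t[1])
print(ranked)
```

The point of learning a ranker rather than thresholding any single feature is exactly what the abstract argues: no one feature (family, typology, overlap, or data size) dominates across tasks.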
1403.6167
Utkarsh R. Patel
Utkarsh R. Patel and Piero Triverio
MoM-SO: a Complete Method for Computing the Impedance of Cable Systems Including Skin, Proximity, and Ground Return Effects
This paper has now been published in the IEEE Trans. on Power Delivery in Oct. 2015, vol. 30, no. 5, pp. 2110-2118. DOI: 10.1109/TPWRD.2014.2378594
IEEE Trans. on Power Delivery in Oct. 2015, vol. 30, no. 5, pp. 2110-2118
10.1109/TPWRD.2014.2378594
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The availability of accurate and broadband models for underground and submarine cable systems is of paramount importance for the correct prediction of electromagnetic transients in power grids. Recently, we proposed the MoM-SO method for extracting the series impedance of power cables while accounting for skin and proximity effects in the conductors. In this paper, we extend the method to include ground return effects and to handle cables placed inside a tunnel. Numerical tests show that the proposed method is more accurate than widely-used analytic formulas, and is much faster than existing proximity-aware approaches like finite elements. For a three-phase cable system in a tunnel, the proposed method requires only 0.3 seconds of CPU time per frequency point, against the 8.3 minutes taken by finite elements, for a speed-up beyond 1000X.
[ { "created": "Mon, 24 Mar 2014 22:03:07 GMT", "version": "v1" }, { "created": "Tue, 29 Apr 2014 18:26:42 GMT", "version": "v2" }, { "created": "Mon, 28 Sep 2015 15:51:35 GMT", "version": "v3" } ]
2016-06-29
[ [ "Patel", "Utkarsh R.", "" ], [ "Triverio", "Piero", "" ] ]
The availability of accurate and broadband models for underground and submarine cable systems is of paramount importance for the correct prediction of electromagnetic transients in power grids. Recently, we proposed the MoM-SO method for extracting the series impedance of power cables while accounting for skin and proximity effects in the conductors. In this paper, we extend the method to include ground return effects and to handle cables placed inside a tunnel. Numerical tests show that the proposed method is more accurate than widely-used analytic formulas, and is much faster than existing proximity-aware approaches like finite elements. For a three-phase cable system in a tunnel, the proposed method requires only 0.3 seconds of CPU time per frequency point, against the 8.3 minutes taken by finite elements, for a speed-up beyond 1000X.
2002.01913
Augusto Luis Ballardini PhD
Augusto Luis Ballardini, Daniele Cattaneo, Rub\'en Izquierdo, Ignacio Parra Alonso, Andrea Piazzoni, Miguel \'Angel Sotelo, Domenico Giorgio Sorrenti
Vehicle Ego-Lane Estimation with Sensor Failure Modeling
preprint
null
null
null
cs.RO cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present a probabilistic ego-lane estimation algorithm for highway-like scenarios that is designed to increase the accuracy of the ego-lane estimate, which can be obtained relying only on a noisy line detector and tracker. The contribution relies on a Hidden Markov Model (HMM) with a transient failure model. The proposed algorithm exploits the OpenStreetMap (or other cartographic services) road property lane number as the expected number of lanes and leverages consecutive, possibly incomplete, observations. The algorithm's effectiveness is demonstrated by employing different line detectors and showing that we achieve much more usable, i.e., stable and reliable, ego-lane estimates over more than 100 km of highway scenarios, recorded both in Italy and Spain. Moreover, as we could not find a suitable dataset for a quantitative comparison with other approaches, we collected datasets and manually annotated the ground truth about the vehicle ego-lane. These datasets are made publicly available for use by the scientific community.
[ { "created": "Wed, 5 Feb 2020 18:32:00 GMT", "version": "v1" }, { "created": "Thu, 6 Feb 2020 15:06:49 GMT", "version": "v2" } ]
2020-02-07
[ [ "Ballardini", "Augusto Luis", "" ], [ "Cattaneo", "Daniele", "" ], [ "Izquierdo", "Rubén", "" ], [ "Alonso", "Ignacio Parra", "" ], [ "Piazzoni", "Andrea", "" ], [ "Sotelo", "Miguel Ángel", "" ], [ "Sorrenti", "Domenico Giorgio", "" ] ]
We present a probabilistic ego-lane estimation algorithm for highway-like scenarios that is designed to increase the accuracy of the ego-lane estimate, which can be obtained relying only on a noisy line detector and tracker. The contribution relies on a Hidden Markov Model (HMM) with a transient failure model. The proposed algorithm exploits the OpenStreetMap (or other cartographic services) road property lane number as the expected number of lanes and leverages consecutive, possibly incomplete, observations. The algorithm's effectiveness is demonstrated by employing different line detectors and showing that we achieve much more usable, i.e., stable and reliable, ego-lane estimates over more than 100 km of highway scenarios, recorded both in Italy and Spain. Moreover, as we could not find a suitable dataset for a quantitative comparison with other approaches, we collected datasets and manually annotated the ground truth about the vehicle ego-lane. These datasets are made publicly available for use by the scientific community.
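A minimal sketch of the underlying HMM filtering idea, assuming three lanes (as if read from map data) and illustrative transition/emission values; the paper's transient failure model is approximated here by treating a detector dropout as an uninformative observation:

```python
import numpy as np

# Lane changes are rare, so the transition matrix strongly favors
# staying in the current lane (values illustrative).
n_lanes = 3
T = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])

def emission(obs):
    # obs: per-lane likelihoods from a noisy line detector. A transient
    # failure (obs is None) yields a flat, uninformative likelihood,
    # one simple way to model detector dropouts.
    return np.ones(n_lanes) / n_lanes if obs is None else np.asarray(obs)

belief = np.ones(n_lanes) / n_lanes
for obs in [[0.7, 0.2, 0.1], None, [0.6, 0.3, 0.1]]:  # None = detector failure
    belief = emission(obs) * (T.T @ belief)   # predict, then update
    belief /= belief.sum()
    print("ego-lane belief:", np.round(belief, 3))
```

Because the belief carries over through a dropout, a single failed frame degrades the estimate gracefully instead of resetting it.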
2403.05221
Wolfgang H\"ohl
Wolfgang H\"ohl
Understanding Hybrid Spaces: Designing a Spacetime Model to Represent Dynamic Topologies of Hybrid Spaces
82 pages, 22 figures, 19 tables
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper develops a spatiotemporal model for the visualization of dynamic topologies of hybrid spaces. The visualization of spatiotemporal data is a well-known problem, for example in digital twins in urban planning. There is also a lack of a basic ontology for understanding hybrid spaces. The developed spatiotemporal model has three levels: a level of places and media types, a level of perception, and a level of time and interaction. Existing concepts and types of representation of hybrid spaces are presented. The space-time model is tested on the basis of an art exhibition. Two hypotheses guide the accompanying online survey: (A) there are correlations between media use (modality), the participants' interactions (creativity), and their perception (understanding of art), and (B) individual parameters (demographic data, location and situation, individual knowledge) influence perception (understanding of art). The range, the number of interactions, and the response rate were also evaluated. The online survey generally showed a positive correlation between media use (modality) and individual activity (creativity). However, due to the low participation rate ($P_{TN} = 14$), the survey is unfortunately not very representative. Various dynamic topologies of hybrid spaces were successfully visualized. The joint representation of real and virtual places and media types conveys a new basic understanding of place, range, and urban density. Relationships between modality, mobility, and communicative interaction become visible. The current phenomenon of multilocality has been successfully mapped. The space-time model enables more precise class and structure formation, for example in the development of digital twins. Dynamic topologies of hybrid spaces, such as in social media, at events, or in urban development, can thus be better represented and compared.
[ { "created": "Fri, 8 Mar 2024 11:18:27 GMT", "version": "v1" } ]
2024-03-11
[ [ "Höhl", "Wolfgang", "" ] ]
This paper develops a spatiotemporal model for the visualization of dynamic topologies of hybrid spaces. The visualization of spatiotemporal data is a well-known problem, for example in digital twins in urban planning. There is also a lack of a basic ontology for understanding hybrid spaces. The developed spatiotemporal model has three levels: a level of places and media types, a level of perception, and a level of time and interaction. Existing concepts and types of representation of hybrid spaces are presented. The space-time model is tested on the basis of an art exhibition. Two hypotheses guide the accompanying online survey: (A) there are correlations between media use (modality), the participants' interactions (creativity), and their perception (understanding of art), and (B) individual parameters (demographic data, location and situation, individual knowledge) influence perception (understanding of art). The range, the number of interactions, and the response rate were also evaluated. The online survey generally showed a positive correlation between media use (modality) and individual activity (creativity). However, due to the low participation rate ($P_{TN} = 14$), the survey is unfortunately not very representative. Various dynamic topologies of hybrid spaces were successfully visualized. The joint representation of real and virtual places and media types conveys a new basic understanding of place, range, and urban density. Relationships between modality, mobility, and communicative interaction become visible. The current phenomenon of multilocality has been successfully mapped. The space-time model enables more precise class and structure formation, for example in the development of digital twins. Dynamic topologies of hybrid spaces, such as in social media, at events, or in urban development, can thus be better represented and compared.
1903.12266
Maciej Zamorski
Maciej Zamorski, Adrian Zdobylak, Maciej Zi\k{e}ba, Jerzy \'Swi\k{a}tek
Generative Adversarial Networks: recent developments
10 pages
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In traditional generative modeling, good data representation is very often a base for a good machine learning model. It can be linked to good representations encoding more explanatory factors that are hidden in the original data. With the invention of Generative Adversarial Networks (GANs), a subclass of generative models that are able to learn representations in an unsupervised and semi-supervised fashion, we are now able to adversarially learn good mappings from a simple prior distribution to a target data distribution. This paper presents an overview of recent developments in GANs with a focus on learning latent space representations.
[ { "created": "Sat, 16 Mar 2019 18:10:35 GMT", "version": "v1" } ]
2019-04-01
[ [ "Zamorski", "Maciej", "" ], [ "Zdobylak", "Adrian", "" ], [ "Zięba", "Maciej", "" ], [ "Świątek", "Jerzy", "" ] ]
In traditional generative modeling, good data representation is very often a base for a good machine learning model. It can be linked to good representations encoding more explanatory factors that are hidden in the original data. With the invention of Generative Adversarial Networks (GANs), a subclass of generative models that are able to learn representations in an unsupervised and semi-supervised fashion, we are now able to adversarially learn good mappings from a simple prior distribution to a target data distribution. This paper presents an overview of recent developments in GANs with a focus on learning latent space representations.
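To ground the adversarial mapping described above, a minimal GAN sketch in PyTorch that learns to map a simple uniform prior to a one-dimensional Gaussian target; the network sizes, learning rates, and target distribution are illustrative:

```python
import torch
import torch.nn as nn

# Generator maps an 8-D uniform prior to 1-D samples; the
# discriminator scores samples as real (1) or fake (0).
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # target: N(2, 0.5^2)
    z = torch.rand(64, 8)                   # simple prior
    fake = G(z)

    # Discriminator step: push real toward 1 and fake toward 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator (adversarial objective).
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(G(torch.rand(1000, 8)).mean().item())  # should approach 2.0
```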
2207.02295
Benjamin Fuhrer
Benjamin Fuhrer, Yuval Shpigelman, Chen Tessler, Shie Mannor, Gal Chechik, Eitan Zahavi, Gal Dalal
Implementing Reinforcement Learning Datacenter Congestion Control in NVIDIA NICs
null
null
10.1109/CCGrid57682.2023.00039
null
cs.NI cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As communication protocols evolve, datacenter network utilization increases. As a result, congestion is more frequent, causing higher latency and packet loss. Combined with the increasing complexity of workloads, manual design of congestion control (CC) algorithms becomes extremely difficult. This calls for the development of AI approaches to replace the human effort. Unfortunately, it is currently not possible to deploy AI models on network devices due to their limited computational capabilities. Here, we offer a solution to this problem by building a computationally-light solution based on a recent reinforcement learning CC algorithm [arXiv:2207.02295]. We reduce the inference time of RL-CC by x500 by distilling its complex neural network into decision trees. This transformation enables real-time inference within the $\mu$-sec decision-time requirement, with a negligible effect on quality. We deploy the transformed policy on NVIDIA NICs in a live cluster. Compared to popular CC algorithms used in production, RL-CC is the only method that performs well on all benchmarks tested over a large range of number of flows. It balances multiple metrics simultaneously: bandwidth, latency, and packet drops. These results suggest that data-driven methods for CC are feasible, challenging the prior belief that handcrafted heuristics are necessary to achieve optimal performance.
[ { "created": "Tue, 5 Jul 2022 20:42:24 GMT", "version": "v1" }, { "created": "Thu, 1 Dec 2022 20:56:23 GMT", "version": "v2" }, { "created": "Tue, 3 Jan 2023 16:00:27 GMT", "version": "v3" }, { "created": "Sun, 30 Apr 2023 13:12:49 GMT", "version": "v4" }, { "created": "Sat, 1 Jun 2024 13:45:22 GMT", "version": "v5" } ]
2024-06-04
[ [ "Fuhrer", "Benjamin", "" ], [ "Shpigelman", "Yuval", "" ], [ "Tessler", "Chen", "" ], [ "Mannor", "Shie", "" ], [ "Chechik", "Gal", "" ], [ "Zahavi", "Eitan", "" ], [ "Dalal", "Gal", "" ] ]
As communication protocols evolve, datacenter network utilization increases. As a result, congestion is more frequent, causing higher latency and packet loss. Combined with the increasing complexity of workloads, manual design of congestion control (CC) algorithms becomes extremely difficult. This calls for the development of AI approaches to replace the human effort. Unfortunately, it is currently not possible to deploy AI models on network devices due to their limited computational capabilities. Here, we offer a solution to this problem by building a computationally-light solution based on a recent reinforcement learning CC algorithm [arXiv:2207.02295]. We reduce the inference time of RL-CC by x500 by distilling its complex neural network into decision trees. This transformation enables real-time inference within the $\mu$-sec decision-time requirement, with a negligible effect on quality. We deploy the transformed policy on NVIDIA NICs in a live cluster. Compared to popular CC algorithms used in production, RL-CC is the only method that performs well on all benchmarks tested over a large range of number of flows. It balances multiple metrics simultaneously: bandwidth, latency, and packet drops. These results suggest that data-driven methods for CC are feasible, challenging the prior belief that handcrafted heuristics are necessary to achieve optimal performance.
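A toy sketch of the distillation step, replacing a slow neural policy with a decision tree that imitates its actions; the teacher below is a synthetic stand-in, not the actual RL-CC policy, and the congestion-control state features are hypothetical:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

# Synthetic "teacher" policy: a small MLP trained on a made-up
# state-to-action mapping over 4 state features (e.g. RTT, rate, ...).
rng = np.random.default_rng(0)
states = rng.uniform(0, 1, size=(5000, 4))
teacher = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
teacher.fit(states, states @ np.array([0.5, -1.0, 0.3, 0.1]))

# Distillation: label states with the teacher's actions, then fit a
# shallow tree to imitate them.
actions = teacher.predict(states)
student = DecisionTreeRegressor(max_depth=8).fit(states, actions)

# The tree gives near-identical actions at a fraction of the inference
# cost, which is the property that makes on-NIC deployment feasible.
print(np.abs(student.predict(states) - actions).mean())
```

A tree evaluation is a handful of comparisons rather than dense matrix multiplies, which is where the large inference-time reduction comes from.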
2404.08887
Jinhao Pan
Jinhao Pan, Ziwei Zhu, Jianling Wang, Allen Lin, James Caverlee
Countering Mainstream Bias via End-to-End Adaptive Local Learning
ECIR 2024
In European Conference on Information Retrieval 2024, vol 14612 (pp. 75-89)
10.1007/978-3-031-56069-9_6
null
cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
Collaborative filtering (CF) based recommendations suffer from mainstream bias -- where mainstream users are favored over niche users, leading to poor recommendation quality for many long-tail users. In this paper, we identify two root causes of this mainstream bias: (i) discrepancy modeling, whereby CF algorithms focus on modeling mainstream users while neglecting niche users with unique preferences; and (ii) unsynchronized learning, where niche users require more training epochs than mainstream users to reach peak performance. Targeting these causes, we propose a novel end-To-end Adaptive Local Learning (TALL) framework to provide high-quality recommendations to both mainstream and niche users. TALL uses a loss-driven Mixture-of-Experts module to adaptively ensemble experts to provide customized local models for different users. Further, it contains an adaptive weight module to synchronize the learning paces of different users by dynamically adjusting weights in the loss. Extensive experiments demonstrate the state-of-the-art performance of the proposed model. Code and data are provided at \url{https://github.com/JP-25/end-To-end-Adaptive-Local-Leanring-TALL-}
[ { "created": "Sat, 13 Apr 2024 03:17:33 GMT", "version": "v1" } ]
2024-04-16
[ [ "Pan", "Jinhao", "" ], [ "Zhu", "Ziwei", "" ], [ "Wang", "Jianling", "" ], [ "Lin", "Allen", "" ], [ "Caverlee", "James", "" ] ]
Collaborative filtering (CF) based recommendations suffer from mainstream bias -- where mainstream users are favored over niche users, leading to poor recommendation quality for many long-tail users. In this paper, we identify two root causes of this mainstream bias: (i) discrepancy modeling, whereby CF algorithms focus on modeling mainstream users while neglecting niche users with unique preferences; and (ii) unsynchronized learning, where niche users require more training epochs than mainstream users to reach peak performance. Targeting these causes, we propose a novel end-To-end Adaptive Local Learning (TALL) framework to provide high-quality recommendations to both mainstream and niche users. TALL uses a loss-driven Mixture-of-Experts module to adaptively ensemble experts to provide customized local models for different users. Further, it contains an adaptive weight module to synchronize the learning paces of different users by dynamically adjusting weights in the loss. Extensive experiments demonstrate the state-of-the-art performance of the proposed model. Code and data are provided at \url{https://github.com/JP-25/end-To-end-Adaptive-Local-Leanring-TALL-}
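As a sketch of the mixture-of-experts idea at the heart of the framework, the snippet below gates per-expert predictions with per-user weights to form a customized local model; the dimensions and the gating signal are illustrative, and the paper's loss-driven gating and adaptive loss-weight module are not reproduced:

```python
import torch
import torch.nn as nn

# Illustrative sizes for a toy collaborative-filtering setup.
n_users, n_items, dim, n_experts = 100, 50, 16, 4

experts = nn.ModuleList(
    nn.Sequential(nn.Linear(2 * dim, 32), nn.ReLU(), nn.Linear(32, 1))
    for _ in range(n_experts)
)
gate = nn.Sequential(nn.Linear(dim, n_experts), nn.Softmax(dim=-1))
user_emb = nn.Embedding(n_users, dim)
item_emb = nn.Embedding(n_items, dim)

def predict(u, i):
    # Each expert scores the (user, item) pair; the gate mixes the
    # expert outputs with per-user weights, so mainstream and niche
    # users can rely on different experts.
    x = torch.cat([user_emb(u), item_emb(i)], dim=-1)
    preds = torch.stack([e(x).squeeze(-1) for e in experts], dim=-1)
    w = gate(user_emb(u))          # per-user expert weights
    return (w * preds).sum(-1)     # adaptive local ensemble

u = torch.tensor([0, 7])
i = torch.tensor([3, 12])
print(predict(u, i))
```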
2211.12173
Poulami Sinhamahapatra
Poulami Sinhamahapatra, Lena Heidemann, Maureen Monnet, Karsten Roscher
Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models
null
Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, 878-887, 2023
10.5220/0011894900003417
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Explaining black-box Artificial Intelligence (AI) models is a cornerstone for trustworthy AI and a prerequisite for its use in safety-critical applications, such that AI models can reliably assist humans in critical decisions. However, instead of trying to explain our models post-hoc, we need models which are interpretable-by-design, built on a reasoning process similar to humans that exploits meaningful high-level concepts such as shapes, texture, or object parts. Learning such concepts is often hindered by the need for explicit specification and annotation up front. Instead, prototype-based learning approaches such as ProtoPNet claim to discover visually meaningful prototypes in an unsupervised way. In this work, we propose a set of properties that those prototypes have to fulfill to enable human analysis, e.g. as part of a reliable model assessment case, and analyse existing methods in the light of these properties. Given a 'Guess who?' game, we find that these prototypes still have a long way to go towards definitive explanations. We quantitatively validate our findings by conducting a user study indicating that many of the learnt prototypes are not considered useful for human understanding. We discuss the missing links in the existing methods and present a potential real-world application motivating the need to progress towards truly human-interpretable prototypes.
[ { "created": "Tue, 22 Nov 2022 11:01:22 GMT", "version": "v1" } ]
2023-03-10
[ [ "Sinhamahapatra", "Poulami", "" ], [ "Heidemann", "Lena", "" ], [ "Monnet", "Maureen", "" ], [ "Roscher", "Karsten", "" ] ]
Explaining black-box Artificial Intelligence (AI) models is a cornerstone for trustworthy AI and a prerequisite for its use in safety-critical applications, such that AI models can reliably assist humans in critical decisions. However, instead of trying to explain our models post-hoc, we need models which are interpretable-by-design, built on a reasoning process similar to humans that exploits meaningful high-level concepts such as shapes, texture, or object parts. Learning such concepts is often hindered by the need for explicit specification and annotation up front. Instead, prototype-based learning approaches such as ProtoPNet claim to discover visually meaningful prototypes in an unsupervised way. In this work, we propose a set of properties that those prototypes have to fulfill to enable human analysis, e.g. as part of a reliable model assessment case, and analyse existing methods in the light of these properties. Given a 'Guess who?' game, we find that these prototypes still have a long way to go towards definitive explanations. We quantitatively validate our findings by conducting a user study indicating that many of the learnt prototypes are not considered useful for human understanding. We discuss the missing links in the existing methods and present a potential real-world application motivating the need to progress towards truly human-interpretable prototypes.
1202.1340
Jie Xu Mr.
Yi Huang and Jie Xu and Ling Qiu
An Energy Efficient Semi-static Power Control and Link Adaptation Scheme in UMTS HSDPA
9 pages, 11 figures, accepted in EURASIP Journal on Wireless Communications and Networking, special issue on Green Radio
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High speed downlink packet access (HSDPA) has been successfully applied in commercial systems and improves user experience significantly. However, it incurs substantial energy consumption. In this paper, we address this issue by proposing a novel energy-efficient semi-static power control and link adaptation scheme in HSDPA. By estimating the energy efficiency (EE) under different modulation and coding schemes (MCSs) and corresponding transmit powers, the proposed scheme can determine the most energy-efficient MCS level and transmit power at the Node B. The Node B then configures the optimal MCS level and transmit power. In order to decrease the signaling overhead caused by the configuration, a dual trigger mechanism is employed. After that, we extend the proposed scheme to multiple input multiple output (MIMO) scenarios. Simulation results confirm the significant EE improvement of our proposed scheme. Finally, we discuss the potential EE gain and the challenges of energy-efficient mode switching between single input multiple output (SIMO) and MIMO configurations in HSDPA.
[ { "created": "Tue, 7 Feb 2012 03:31:30 GMT", "version": "v1" } ]
2012-02-08
[ [ "Huang", "Yi", "" ], [ "Xu", "Jie", "" ], [ "Qiu", "Ling", "" ] ]
High speed downlink packet access (HSDPA) has been successfully applied in commercial systems and improves user experience significantly. However, it incurs substantial energy consumption. In this paper, we address this issue by proposing a novel energy-efficient semi-static power control and link adaptation scheme in HSDPA. By estimating the energy efficiency (EE) under different modulation and coding schemes (MCSs) and corresponding transmit powers, the proposed scheme can determine the most energy-efficient MCS level and transmit power at the Node B. The Node B then configures the optimal MCS level and transmit power. In order to decrease the signaling overhead caused by the configuration, a dual trigger mechanism is employed. After that, we extend the proposed scheme to multiple input multiple output (MIMO) scenarios. Simulation results confirm the significant EE improvement of our proposed scheme. Finally, we discuss the potential EE gain and the challenges of energy-efficient mode switching between single input multiple output (SIMO) and MIMO configurations in HSDPA.
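The selection rule can be sketched as an argmax of estimated energy efficiency (bits per joule) over candidate MCS/power pairs; the rate, power, and circuit-overhead figures below are illustrative, not 3GPP values:

```python
# Candidate (MCS, transmit power in watts, estimated throughput in bps)
# tuples; all numbers are made up for illustration.
candidates = [
    ("QPSK 1/2",  5.0,  1.8e6),
    ("16QAM 1/2", 8.0,  3.6e6),
    ("16QAM 3/4", 10.0, 5.4e6),
    ("64QAM 3/4", 14.0, 8.1e6),
]
circuit_power = 20.0  # assumed fixed overhead at the Node B

def energy_efficiency(tx_power, throughput):
    # Bits delivered per joule consumed.
    return throughput / (tx_power + circuit_power)

best = max(candidates, key=lambda c: energy_efficiency(c[1], c[2]))
print("most energy-efficient MCS:", best[0])
```

Because the fixed circuit power dominates at low rates, the most energy-efficient operating point is not necessarily the lowest-power one, which is why the EE estimate must be computed per candidate.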
1808.06853
Seid Muhie Yimam
Seid Muhie Yimam, Chris Biemann
Demonstrating PAR4SEM - A Semantic Writing Aid with Adaptive Paraphrasing
EMNLP Demo paper
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In this paper, we present Par4Sem, a semantic writing aid tool based on adaptive paraphrasing. Unlike many annotation tools that are primarily used to collect training examples, Par4Sem is integrated into a real-world application, in this case a writing aid tool, in order to collect training examples from usage data. Par4Sem is a tool that supports an adaptive, iterative, and interactive process where the underlying machine learning models are updated for each iteration using new training examples from usage data. After motivating the use of ever-learning tools in NLP applications, we evaluate Par4Sem by adapting it to a text simplification task through mere usage.
[ { "created": "Tue, 21 Aug 2018 11:37:57 GMT", "version": "v1" } ]
2018-08-22
[ [ "Yimam", "Seid Muhie", "" ], [ "Biemann", "Chris", "" ] ]
In this paper, we present Par4Sem, a semantic writing aid tool based on adaptive paraphrasing. Unlike many annotation tools that are primarily used to collect training examples, Par4Sem is integrated into a real-world application, in this case a writing aid tool, in order to collect training examples from usage data. Par4Sem is a tool that supports an adaptive, iterative, and interactive process where the underlying machine learning models are updated for each iteration using new training examples from usage data. After motivating the use of ever-learning tools in NLP applications, we evaluate Par4Sem by adapting it to a text simplification task through mere usage.
2307.05126
Cec\'ilia Coelho
C. Coelho, M. Fernanda P. Costa, L.L. Ferr\'as
Enhancing Continuous Time Series Modelling with a Latent ODE-LSTM Approach
null
null
10.1016/j.amc.2024.128727
null
cs.LG math.OC
http://creativecommons.org/licenses/by-sa/4.0/
Due to their dynamic properties, such as irregular sampling rate and high-frequency sampling, Continuous Time Series (CTS) are found in many applications. Since CTS with irregular sampling rates are difficult to model with standard Recurrent Neural Networks (RNNs), RNNs have been generalised to have continuous-time hidden dynamics defined by a Neural Ordinary Differential Equation (Neural ODE), leading to the ODE-RNN model. Another approach that provides better modelling is that of the Latent ODE model, which constructs a continuous-time model where a latent state is defined at all times. The Latent ODE model uses a standard RNN as the encoder and a Neural ODE as the decoder. However, since the RNN encoder leads to difficulties with missing data and ill-defined latent variables, a Latent ODE-RNN model has recently been proposed that uses an ODE-RNN model as the encoder instead. Both the Latent ODE and Latent ODE-RNN models are difficult to train due to the vanishing and exploding gradients problem. To overcome this problem, the main contribution of this paper is to propose and illustrate a new model based on a new Latent ODE using an ODE-LSTM (Long Short-Term Memory) network as the encoder -- the Latent ODE-LSTM model. To limit the growth of the gradients, the Norm Gradient Clipping strategy was embedded in the Latent ODE-LSTM model. The performance of the new Latent ODE-LSTM (with and without Norm Gradient Clipping) for modelling CTS with regular and irregular sampling rates is then evaluated. Numerical experiments show that the new Latent ODE-LSTM performs better than Latent ODE-RNNs and can avoid the vanishing and exploding gradients during training.
[ { "created": "Tue, 11 Jul 2023 09:01:49 GMT", "version": "v1" } ]
2024-07-02
[ [ "Coelho", "C.", "" ], [ "Costa", "M. Fernanda P.", "" ], [ "Ferrás", "L. L.", "" ] ]
Due to their dynamic properties, such as irregular sampling rate and high-frequency sampling, Continuous Time Series (CTS) are found in many applications. Since CTS with irregular sampling rates are difficult to model with standard Recurrent Neural Networks (RNNs), RNNs have been generalised to have continuous-time hidden dynamics defined by a Neural Ordinary Differential Equation (Neural ODE), leading to the ODE-RNN model. Another approach that provides better modelling is that of the Latent ODE model, which constructs a continuous-time model where a latent state is defined at all times. The Latent ODE model uses a standard RNN as the encoder and a Neural ODE as the decoder. However, since the RNN encoder leads to difficulties with missing data and ill-defined latent variables, a Latent ODE-RNN model has recently been proposed that uses an ODE-RNN model as the encoder instead. Both the Latent ODE and Latent ODE-RNN models are difficult to train due to the vanishing and exploding gradients problem. To overcome this problem, the main contribution of this paper is to propose and illustrate a new model based on a new Latent ODE using an ODE-LSTM (Long Short-Term Memory) network as the encoder -- the Latent ODE-LSTM model. To limit the growth of the gradients, the Norm Gradient Clipping strategy was embedded in the Latent ODE-LSTM model. The performance of the new Latent ODE-LSTM (with and without Norm Gradient Clipping) for modelling CTS with regular and irregular sampling rates is then evaluated. Numerical experiments show that the new Latent ODE-LSTM performs better than Latent ODE-RNNs and can avoid the vanishing and exploding gradients during training.
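A minimal sketch of norm gradient clipping inside a training loop, using a plain LSTM as a stand-in for the ODE-LSTM encoder; shapes and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

# Stand-in sequence model: the actual encoder in the paper is an
# ODE-LSTM, but the clipping step is the same.
model = nn.LSTM(input_size=4, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)
params = list(model.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params)

x = torch.randn(16, 50, 4)   # batch of time-series surrogates
y = torch.randn(16, 1)

for _ in range(100):
    opt.zero_grad()
    out, _ = model(x)
    loss = nn.functional.mse_loss(head(out[:, -1]), y)
    loss.backward()
    # Norm gradient clipping: rescale the global gradient norm to at
    # most max_norm before stepping, bounding gradient explosions.
    torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)
    opt.step()
```

Clipping leaves small gradients untouched and only rescales when the global norm exceeds the threshold, so it bounds explosions without biasing ordinary updates.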
2407.13811
Gertjan Burghouts
Anne Kemmeren, Gertjan Burghouts, Michael van Bekkum, Wouter Meijer, Jelle van Mil
Which objects help me to act effectively? Reasoning about physically-grounded affordances
10 pages
Robotics: Science and Systems. Semantic Reasoning and Goal Understanding in Robotics 2024
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
For effective interactions with the open world, robots should understand how interactions with known and novel objects help them towards their goal. A key aspect of this understanding lies in detecting an object's affordances, which represent the potential effects that can be achieved by manipulating the object in various ways. Our approach leverages a dialogue of large language models (LLMs) and vision-language models (VLMs) to achieve open-world affordance detection. Given open-vocabulary descriptions of intended actions and effects, the useful objects in the environment are found. By grounding our system in the physical world, we account for the robot's embodiment and the intrinsic properties of the objects it encounters. In our experiments, we have shown that our method produces tailored outputs based on different embodiments or intended effects. The method was able to select a useful object from a set of distractors. Finetuning the VLM for physical properties improved overall performance. These results underline the importance of grounding the affordance search in the physical world, by taking into account robot embodiment and the physical properties of objects.
[ { "created": "Thu, 18 Jul 2024 11:08:57 GMT", "version": "v1" } ]
2024-07-22
[ [ "Kemmeren", "Anne", "" ], [ "Burghouts", "Gertjan", "" ], [ "van Bekkum", "Michael", "" ], [ "Meijer", "Wouter", "" ], [ "van Mil", "Jelle", "" ] ]
For effective interactions with the open world, robots should understand how interactions with known and novel objects help them towards their goal. A key aspect of this understanding lies in detecting an object's affordances, which represent the potential effects that can be achieved by manipulating the object in various ways. Our approach leverages a dialogue of large language models (LLMs) and vision-language models (VLMs) to achieve open-world affordance detection. Given open-vocabulary descriptions of intended actions and effects, the useful objects in the environment are found. By grounding our system in the physical world, we account for the robot's embodiment and the intrinsic properties of the objects it encounters. In our experiments, we have shown that our method produces tailored outputs based on different embodiments or intended effects. The method was able to select a useful object from a set of distractors. Finetuning the VLM for physical properties improved overall performance. These results underline the importance of grounding the affordance search in the physical world, by taking into account robot embodiment and the physical properties of objects.
1804.08859
Joshua Owoyemi
Joshua Owoyemi, Koichi Hashimoto
Spatiotemporal Learning of Dynamic Gestures from 3D Point Cloud Data
Accepted to ICRA2018, 6 Pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we demonstrate an end-to-end spatiotemporal gesture learning approach for 3D point cloud data using a new gesture dataset of point clouds acquired from a 3D sensor. Nine classes of gestures were learned from gesture sample data. We mapped point cloud data into dense occupancy grids, then time steps of the occupancy grids are used as inputs into a 3D convolutional neural network which learns the spatiotemporal features in the data without explicit modeling of gesture dynamics. We also introduced a 3D region-of-interest jittering approach for point cloud data augmentation. This resulted in an increased classification accuracy of up to 10% when the augmented data is added to the original training data. The developed model is able to classify gestures from the dataset with 84.44% accuracy. We propose that point cloud data will be a more viable data type for scene understanding and motion recognition, as 3D sensors become ubiquitous in years to come.
[ { "created": "Tue, 24 Apr 2018 06:48:56 GMT", "version": "v1" } ]
2018-04-25
[ [ "Owoyemi", "Joshua", "" ], [ "Hashimoto", "Koichi", "" ] ]
In this paper, we demonstrate an end-to-end spatiotemporal gesture learning approach for 3D point cloud data using a new gesture dataset of point clouds acquired from a 3D sensor. Nine classes of gestures were learned from gesture sample data. We mapped point cloud data into dense occupancy grids, then time steps of the occupancy grids are used as inputs into a 3D convolutional neural network which learns the spatiotemporal features in the data without explicit modeling of gesture dynamics. We also introduced a 3D region-of-interest jittering approach for point cloud data augmentation. This resulted in an increased classification accuracy of up to 10% when the augmented data is added to the original training data. The developed model is able to classify gestures from the dataset with 84.44% accuracy. We propose that point cloud data will be a more viable data type for scene understanding and motion recognition, as 3D sensors become ubiquitous in years to come.
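A small sketch of the two preprocessing ideas, mapping a point cloud to a dense occupancy grid and jittering the region of interest for augmentation; grid size, bounds, and jitter magnitude are illustrative:

```python
import numpy as np

def to_occupancy_grid(points, bounds=(-1.0, 1.0), shape=(32, 32, 32)):
    # Normalize points into voxel indices and mark occupied cells.
    grid = np.zeros(shape, dtype=np.float32)
    lo, hi = bounds
    idx = ((points - lo) / (hi - lo) * np.array(shape)).astype(int)
    idx = idx[((idx >= 0) & (idx < np.array(shape))).all(axis=1)]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

cloud = np.random.uniform(-1, 1, size=(2048, 3))   # stand-in sensor frame
print(to_occupancy_grid(cloud).sum(), "occupied voxels")

# Region-of-interest jittering for augmentation: shift the crop window
# slightly so the same gesture lands in different grid cells.
jitter = np.random.uniform(-0.05, 0.05, size=3)
augmented = to_occupancy_grid(cloud + jitter)
```

Stacking such grids over consecutive time steps yields the 4-D input (time x depth x height x width) that the 3D CNN consumes.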
1803.10561
Matthias Walter
Dominik Ermel and Matthias Walter
Parity Polytopes and Binarization
9 pages, 1 figure, presented at 15th Cologne-Twente Workshop on Graphs and Combinatorial Optimization 2017
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider generalizations of parity polytopes whose variables, in addition to a parity constraint, satisfy certain ordering constraints. More precisely, the variable domain is partitioned into $k$ contiguous groups, and within each group, we require that $x_i \geq x_{i+1}$ for all relevant $i$. Such constraints are used to break symmetry after replacing an integer variable by a sum of binary variables, so-called binarization. We provide extended formulations for such polytopes, derive a complete outer description, and present a separation algorithm for the new constraints. It turns out that applying binarization and only enforcing parity constraints on the new variables is often a bad idea. For our application, an integer programming model for the graphic traveling salesman problem, we observe that parity constraints do not improve the dual bounds, and we provide a theoretical explanation of this effect.
[ { "created": "Wed, 28 Mar 2018 12:37:47 GMT", "version": "v1" }, { "created": "Wed, 18 Apr 2018 14:35:46 GMT", "version": "v2" } ]
2018-04-19
[ [ "Ermel", "Dominik", "" ], [ "Walter", "Matthias", "" ] ]
We consider generalizations of parity polytopes whose variables, in addition to a parity constraint, satisfy certain ordering constraints. More precisely, the variable domain is partitioned into $k$ contiguous groups, and within each group, we require that $x_i \geq x_{i+1}$ for all relevant $i$. Such constraints are used to break symmetry after replacing an integer variable by a sum of binary variables, so-called binarization. We provide extended formulations for such polytopes, derive a complete outer description, and present a separation algorithm for the new constraints. It turns out that applying binarization and only enforcing parity constraints on the new variables is often a bad idea. For our application, an integer programming model for the graphic traveling salesman problem, we observe that parity constraints do not improve the dual bounds, and we provide a theoretical explanation of this effect.
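For orientation, a sketch of the classical outer description of the plain even-parity polytope, which the paper generalizes with the ordering constraints $x_i \geq x_{i+1}$ within groups (a standard result, stated here from memory rather than from the paper):

```latex
% Even-parity polytope P_n = conv{ x in {0,1}^n : sum_i x_i even }.
% Besides the box constraints, one odd-set inequality per odd S.
\[
  0 \le x_i \le 1 \quad (i = 1,\dots,n), \qquad
  \sum_{i \in S} x_i \;-\; \sum_{i \in [n] \setminus S} x_i \;\le\; |S| - 1
  \quad \text{for all } S \subseteq [n],\ |S| \text{ odd}.
\]
```

Each odd-set inequality cuts off exactly the 0/1 points whose support agrees with an odd set $S$ everywhere, i.e., the odd-parity vertices.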
2206.07669
Ting Chen
Ting Chen, Saurabh Saxena, Lala Li, Tsung-Yi Lin, David J. Fleet, Geoffrey Hinton
A Unified Sequence Interface for Vision Tasks
The first three authors contributed equally
null
null
null
cs.CV cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While language tasks are naturally expressed in a single, unified, modeling framework, i.e., generating sequences of tokens, this has not been the case in computer vision. As a result, there is a proliferation of distinct architectures and loss functions for different vision tasks. In this work we show that a diverse set of "core" computer vision tasks can also be unified if formulated in terms of a shared pixel-to-sequence interface. We focus on four tasks, namely, object detection, instance segmentation, keypoint detection, and image captioning, all with diverse types of outputs, e.g., bounding boxes or dense masks. Despite that, by formulating the output of each task as a sequence of discrete tokens with a unified interface, we show that one can train a neural network with a single model architecture and loss function on all these tasks, with no task-specific customization. To solve a specific task, we use a short prompt as task description, and the sequence output adapts to the prompt so it can produce task-specific output. We show that such a model can achieve competitive performance compared to well-established task-specific models.
[ { "created": "Wed, 15 Jun 2022 17:08:53 GMT", "version": "v1" }, { "created": "Sun, 16 Oct 2022 02:41:15 GMT", "version": "v2" } ]
2022-10-18
[ [ "Chen", "Ting", "" ], [ "Saxena", "Saurabh", "" ], [ "Li", "Lala", "" ], [ "Lin", "Tsung-Yi", "" ], [ "Fleet", "David J.", "" ], [ "Hinton", "Geoffrey", "" ] ]
While language tasks are naturally expressed in a single, unified, modeling framework, i.e., generating sequences of tokens, this has not been the case in computer vision. As a result, there is a proliferation of distinct architectures and loss functions for different vision tasks. In this work we show that a diverse set of "core" computer vision tasks can also be unified if formulated in terms of a shared pixel-to-sequence interface. We focus on four tasks, namely, object detection, instance segmentation, keypoint detection, and image captioning, all with diverse types of outputs, e.g., bounding boxes or dense masks. Despite that, by formulating the output of each task as a sequence of discrete tokens with a unified interface, we show that one can train a neural network with a single model architecture and loss function on all these tasks, with no task-specific customization. To solve a specific task, we use a short prompt as task description, and the sequence output adapts to the prompt so it can produce task-specific output. We show that such a model can achieve competitive performance compared to well-established task-specific models.
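To illustrate the pixel-to-sequence interface, the sketch below quantizes a bounding box into discrete coordinate tokens and prepends a task prompt, so detection becomes token-sequence generation; the bin count, vocabulary layout, and token ids are hypothetical, not the paper's exact scheme:

```python
# Hypothetical vocabulary layout: coordinate tokens first, then class
# tokens, then task-prompt tokens.
N_BINS = 1000                                # coordinate quantization bins
CLASS_OFFSET = N_BINS                        # class tokens follow coord tokens
PROMPT = {"detect": 2000, "caption": 2001}   # hypothetical task tokens

def box_to_tokens(box, label, img_w, img_h):
    # Normalize corner coordinates, quantize each into a discrete bin,
    # and append the class as one more token.
    x0, y0, x1, y1 = box
    coords = [x0 / img_w, y0 / img_h, x1 / img_w, y1 / img_h]
    tokens = [min(int(c * N_BINS), N_BINS - 1) for c in coords]
    return [PROMPT["detect"]] + tokens + [CLASS_OFFSET + label]

# A 640x480 image with one "class 3" object:
print(box_to_tokens((64, 48, 320, 240), 3, 640, 480))
# -> [2000, 100, 100, 500, 500, 1003]
```

Once every task's output is a token sequence like this, a single architecture and a single cross-entropy loss cover detection, segmentation, keypoints, and captioning alike.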
2009.06009
Jan Novotny
Karel Ad\'amek, Jan Novotn\'y, Jeyarajan Thiyagalingam, Wesley Armour
Efficiency Near the Edge: Increasing the Energy Efficiency of FFTs on GPUs for Real-time Edge Computing
published in IEEE Access
in IEEE Access, vol. 9, pp. 18167-18182, 2021
10.1109/ACCESS.2021.3053409
null
cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Square Kilometre Array (SKA) is an international initiative for developing the world's largest radio telescope with a total collecting area of over a million square meters. The scale of the operation, combined with the remote location of the telescope, requires the use of energy-efficient computational algorithms. This, along with the extreme data rates that will be produced by the SKA and the requirement for a real-time observing capability, necessitates in-situ data processing in an edge style computing solution. More generally, energy efficiency in the modern computing landscape is becoming of paramount concern. Whether it be the power budget that can limit some of the world's largest supercomputers, or the limited power available to the smallest Internet-of-Things devices. In this paper, we study the impact of hardware frequency scaling on the energy consumption and execution time of the Fast Fourier Transform (FFT) on NVIDIA GPUs using the cuFFT library. The FFT is used in many areas of science and it is one of the key algorithms used in radio astronomy data processing pipelines. Through the use of frequency scaling, we show that we can lower the power consumption of the NVIDIA V100 GPU when computing the FFT by up to 60% compared to the boost clock frequency, with less than a 10% increase in the execution time. Furthermore, using one common core clock frequency for all tested FFT lengths, we show on average a 50% reduction in power consumption compared to the boost core clock frequency with an increase in the execution time still below 10%. We demonstrate how these results can be used to lower the power consumption of existing data processing pipelines. These savings, when considered over years of operation, can yield significant financial savings, but can also lead to a significant reduction of greenhouse gas emissions.
[ { "created": "Sun, 13 Sep 2020 14:48:16 GMT", "version": "v1" }, { "created": "Tue, 9 Nov 2021 21:13:56 GMT", "version": "v2" } ]
2021-11-11
[ [ "Adámek", "Karel", "" ], [ "Novotný", "Jan", "" ], [ "Thiyagalingam", "Jeyarajan", "" ], [ "Armour", "Wesley", "" ] ]
The Square Kilometre Array (SKA) is an international initiative for developing the world's largest radio telescope with a total collecting area of over a million square meters. The scale of the operation, combined with the remote location of the telescope, requires the use of energy-efficient computational algorithms. This, along with the extreme data rates that will be produced by the SKA and the requirement for a real-time observing capability, necessitates in-situ data processing in an edge style computing solution. More generally, energy efficiency in the modern computing landscape is becoming of paramount concern. Whether it be the power budget that can limit some of the world's largest supercomputers, or the limited power available to the smallest Internet-of-Things devices. In this paper, we study the impact of hardware frequency scaling on the energy consumption and execution time of the Fast Fourier Transform (FFT) on NVIDIA GPUs using the cuFFT library. The FFT is used in many areas of science and it is one of the key algorithms used in radio astronomy data processing pipelines. Through the use of frequency scaling, we show that we can lower the power consumption of the NVIDIA V100 GPU when computing the FFT by up to 60% compared to the boost clock frequency, with less than a 10% increase in the execution time. Furthermore, using one common core clock frequency for all tested FFT lengths, we show on average a 50% reduction in power consumption compared to the boost core clock frequency with an increase in the execution time still below 10%. We demonstrate how these results can be used to lower the power consumption of existing data processing pipelines. These savings, when considered over years of operation, can yield significant financial savings, but can also lead to a significant reduction of greenhouse gas emissions.
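A back-of-envelope check of the headline numbers: since energy is the product of power and time, a 60% power reduction combined with a sub-10% slowdown still cuts energy by more than half:

```python
# Normalized boost-clock run: power = 1, time = 1, energy = 1.
baseline_power, baseline_time = 1.0, 1.0
scaled_power = baseline_power * (1 - 0.60)   # up to 60% less power
scaled_time = baseline_time * 1.10           # <10% longer runtime
energy_ratio = (scaled_power * scaled_time) / (baseline_power * baseline_time)
print(f"energy vs. boost clock: {energy_ratio:.2f}")  # ~0.44, i.e. ~56% saved
```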
1901.11173
Anusha Lalitha
Anusha Lalitha, Osman Cihan Kilinc, Tara Javidi, Farinaz Koushanfar
Peer-to-peer Federated Learning on Graphs
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of training a machine learning model over a network of nodes in a fully decentralized framework. The nodes take a Bayesian-like approach via the introduction of a belief over the model parameter space. We propose a distributed learning algorithm in which nodes update their beliefs by aggregating information from their one-hop neighbors to learn a model that best fits the observations over the entire network. In addition, we also obtain sufficient conditions to ensure that the probability of error is small for every node in the network. We discuss the approximations required for applying this algorithm to train Deep Neural Networks (DNNs). Experiments on training a linear regression model and a DNN show that the proposed learning rule provides a significant improvement in accuracy compared to the case where nodes learn without cooperation.
[ { "created": "Thu, 31 Jan 2019 02:18:45 GMT", "version": "v1" } ]
2019-02-01
[ [ "Lalitha", "Anusha", "" ], [ "Kilinc", "Osman Cihan", "" ], [ "Javidi", "Tara", "" ], [ "Koushanfar", "Farinaz", "" ] ]
We consider the problem of training a machine learning model over a network of nodes in a fully decentralized framework. The nodes take a Bayesian-like approach via the introduction of a belief over the model parameter space. We propose a distributed learning algorithm in which nodes update their belief by aggregating information from their one-hop neighbors to learn a model that best fits the observations over the entire network. In addition, we obtain sufficient conditions to ensure that the probability of error is small for every node in the network. We discuss the approximations required for applying this algorithm to train Deep Neural Networks (DNNs). Experiments on training a linear regression model and a DNN show that the proposed learning rule provides a significant improvement in accuracy compared to the case where nodes learn without cooperation.
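As a concrete illustration of the update described above, here is a toy sketch of decentralized belief learning: each node performs a local Bayesian update on its private observation and then pools the log-beliefs of its one-hop neighbors. The uniform pooling weights, three-node graph, and coin-flip likelihood are assumptions for illustration; the paper's exact rule and its error-probability conditions are not reproduced here.

```python
# Toy decentralized belief learning: local Bayes step, then log-linear pooling
# of neighbors' beliefs. All nodes should converge to the true parameter.
import numpy as np

rng = np.random.default_rng(0)
thetas = np.array([0.2, 0.5, 0.8])                  # candidate coin biases
true_theta = 0.5
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}    # path graph, self-loops included
beliefs = np.full((3, 3), 1.0 / 3.0)                # uniform initial beliefs

for t in range(300):
    obs = rng.random(3) < true_theta                        # one private coin flip per node
    lik = np.where(obs[:, None], thetas, 1.0 - thetas)      # likelihood of each candidate
    local = beliefs * lik
    local /= local.sum(axis=1, keepdims=True)               # local Bayes update
    new = np.empty_like(beliefs)
    for i, nbrs in neighbors.items():
        pooled = np.exp(np.log(local[nbrs]).mean(axis=0))   # pool neighbors' log-beliefs
        new[i] = pooled / pooled.sum()
    beliefs = new

print(beliefs.round(3))   # every row should concentrate on theta = 0.5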
2402.03618
Sreejan Kumar
Sreejan Kumar, Raja Marjieh, Byron Zhang, Declan Campbell, Michael Y. Hu, Umang Bhatt, Brenden Lake, Thomas L. Griffiths
Comparing Abstraction in Humans and Large Language Models Using Multimodal Serial Reproduction
null
null
null
null
cs.AI cs.CL q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Humans extract useful abstractions of the world from noisy sensory data. Serial reproduction allows us to study how people construe the world through a paradigm similar to the game of telephone, where one person observes a stimulus and reproduces it for the next to form a chain of reproductions. Past serial reproduction experiments typically employ a single sensory modality, but humans often communicate abstractions of the world to each other through language. To investigate the effect of language on the formation of abstractions, we implement a novel multimodal serial reproduction framework by asking people who receive a visual stimulus to reproduce it in a linguistic format, and vice versa. We ran unimodal and multimodal chains with both humans and GPT-4 and found that adding language as a modality has a larger effect on human reproductions than on GPT-4's. This suggests that human visual and linguistic representations are more dissociable than those of GPT-4.
[ { "created": "Tue, 6 Feb 2024 01:07:56 GMT", "version": "v1" } ]
2024-02-07
[ [ "Kumar", "Sreejan", "" ], [ "Marjieh", "Raja", "" ], [ "Zhang", "Byron", "" ], [ "Campbell", "Declan", "" ], [ "Hu", "Michael Y.", "" ], [ "Bhatt", "Umang", "" ], [ "Lake", "Brenden", "" ], [ "Griffiths", "Thomas L.", "" ] ]
Humans extract useful abstractions of the world from noisy sensory data. Serial reproduction allows us to study how people construe the world through a paradigm similar to the game of telephone, where one person observes a stimulus and reproduces it for the next to form a chain of reproductions. Past serial reproduction experiments typically employ a single sensory modality, but humans often communicate abstractions of the world to each other through language. To investigate the effect of language on the formation of abstractions, we implement a novel multimodal serial reproduction framework by asking people who receive a visual stimulus to reproduce it in a linguistic format, and vice versa. We ran unimodal and multimodal chains with both humans and GPT-4 and found that adding language as a modality has a larger effect on human reproductions than on GPT-4's. This suggests that human visual and linguistic representations are more dissociable than those of GPT-4.
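The telephone-game dynamic described above has a classic one-dimensional caricature: when each reproducer combines a noisy percept with a shared prior, the chain drifts from the initial stimulus toward the prior. The sketch below is a generic illustration of serial reproduction, not the paper's multimodal experiment; all numbers are invented.

```python
# Serial reproduction converges toward the reproducer's prior: each
# "participant" remembers the stimulus with Gaussian noise and combines it
# with a shared prior via the Bayesian posterior mean.
import numpy as np

rng = np.random.default_rng(1)
prior_mean, prior_var, noise_var = 0.0, 1.0, 0.5
x = 5.0                                   # initial stimulus, far from the prior
chain = [x]
for _ in range(20):
    percept = x + rng.normal(0, np.sqrt(noise_var))   # noisy observation
    w = prior_var / (prior_var + noise_var)           # weight on the percept
    x = w * percept + (1 - w) * prior_mean            # reproduction passed on
    chain.append(x)
print([round(v, 2) for v in chain])       # drifts from 5.0 toward the prior mean 0
```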
2107.05202
Daniil Pakhomov
Sanchit Hira, Ritwik Das, Abhinav Modi, Daniil Pakhomov
Delta Sampling R-BERT for limited data and low-light action recognition
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present an approach to perform supervised action recognition in the dark. In this work, we present our results on the ARID dataset. Most previous works only evaluate performance on large, well-illuminated datasets like Kinetics and HMDB51. We demonstrate that our approach is able to achieve a very low error rate while being trained on a much smaller dataset of dark videos. We also explore a variety of training and inference strategies, including domain transfer methodologies, and propose a simple but useful frame selection strategy. Our empirical results demonstrate that we beat previously published baseline models by 11%.
[ { "created": "Mon, 12 Jul 2021 05:35:51 GMT", "version": "v1" } ]
2021-07-13
[ [ "Hira", "Sanchit", "" ], [ "Das", "Ritwik", "" ], [ "Modi", "Abhinav", "" ], [ "Pakhomov", "Daniil", "" ] ]
We present an approach to perform supervised action recognition in the dark. In this work, we present our results on the ARID dataset. Most previous works only evaluate performance on large, well-illuminated datasets like Kinetics and HMDB51. We demonstrate that our approach is able to achieve a very low error rate while being trained on a much smaller dataset of dark videos. We also explore a variety of training and inference strategies, including domain transfer methodologies, and propose a simple but useful frame selection strategy. Our empirical results demonstrate that we beat previously published baseline models by 11%.
1203.4367
Nasrin Jaberi
Hamidreza Barati, Nasrin Jaberi
Thesis Report: Resource Utilization Provisioning in MapReduce
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this thesis report, we survey state-of-the-art methods for modelling the resource utilization of MapReduce applications with regard to their configuration parameters. After implementing one of the algorithms from the literature, we investigate whether a CPU usage model of one MapReduce application can be used to predict the CPU usage of another MapReduce application.
[ { "created": "Tue, 20 Mar 2012 10:06:24 GMT", "version": "v1" } ]
2012-03-21
[ [ "Barati", "Hamidreza", "" ], [ "Jaberi", "Nasrin", "" ] ]
In this thesis report, we survey state-of-the-art methods for modelling the resource utilization of MapReduce applications with regard to their configuration parameters. After implementing one of the algorithms from the literature, we investigate whether a CPU usage model of one MapReduce application can be used to predict the CPU usage of another MapReduce application.
2002.12674
Sebastian Lunz
Sebastian Lunz, Yingzhen Li, Andrew Fitzgibbon, Nate Kushman
Inverse Graphics GAN: Learning to Generate 3D Shapes from Unstructured 2D Data
8 pages paper, 3 pages references, 18 pages appendix
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work has shown the ability to learn generative models for 3D shapes from only unstructured 2D images. However, training such models requires differentiating through the rasterization step of the rendering process; therefore, past work has focused on developing bespoke rendering models which smooth over this non-differentiable process in various ways. Such models are thus unable to take advantage of the photo-realistic, fully featured, industrial renderers built by the gaming and graphics industry. In this paper we introduce the first scalable training technique for 3D generative models from 2D data which utilizes an off-the-shelf non-differentiable renderer. To account for the non-differentiability, we introduce a proxy neural renderer to match the output of the non-differentiable renderer. We further propose discriminator output matching to ensure that the neural renderer learns to smooth over the rasterization appropriately. We evaluate our model on images rendered from our generated 3D shapes, and show that our model can consistently learn to generate better shapes than existing models when trained with exclusively unstructured 2D images.
[ { "created": "Fri, 28 Feb 2020 12:28:12 GMT", "version": "v1" } ]
2020-03-02
[ [ "Lunz", "Sebastian", "" ], [ "Li", "Yingzhen", "" ], [ "Fitzgibbon", "Andrew", "" ], [ "Kushman", "Nate", "" ] ]
Recent work has shown the ability to learn generative models for 3D shapes from only unstructured 2D images. However, training such models requires differentiating through the rasterization step of the rendering process; therefore, past work has focused on developing bespoke rendering models which smooth over this non-differentiable process in various ways. Such models are thus unable to take advantage of the photo-realistic, fully featured, industrial renderers built by the gaming and graphics industry. In this paper we introduce the first scalable training technique for 3D generative models from 2D data which utilizes an off-the-shelf non-differentiable renderer. To account for the non-differentiability, we introduce a proxy neural renderer to match the output of the non-differentiable renderer. We further propose discriminator output matching to ensure that the neural renderer learns to smooth over the rasterization appropriately. We evaluate our model on images rendered from our generated 3D shapes, and show that our model can consistently learn to generate better shapes than existing models when trained with exclusively unstructured 2D images.
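The two matching objectives described above can be written down compactly. The PyTorch sketch below shows a plausible form of the pixel-matching and discriminator-output-matching losses for the proxy renderer; the tensor shapes and the toy discriminator are placeholders, not the paper's architecture.

```python
# Sketch of proxy-renderer training signals: (i) match the off-the-shelf
# renderer's pixels, (ii) match the discriminator's response to those pixels,
# so gradients can flow back to the 3-D generator through the proxy.
import torch

def proxy_losses(proxy_render, true_render, disc):
    pixel_match = torch.nn.functional.mse_loss(proxy_render, true_render)
    disc_match = torch.nn.functional.mse_loss(disc(proxy_render),
                                              disc(true_render).detach())
    return pixel_match, disc_match

# Usage sketch with placeholder tensors and a trivial discriminator.
disc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 1))
true_render = torch.rand(8, 1, 64, 64)                        # non-differentiable renderer output
proxy_render = torch.rand(8, 1, 64, 64, requires_grad=True)   # proxy network output
pm, dm = proxy_losses(proxy_render, true_render, disc)
(pm + dm).backward()                      # gradients reach the proxy (and hence the generator)
print(pm.item(), dm.item())
```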
2212.00855
Luis Alvarez
Cooper Cone, Michael Owen, Luis Alvarez, Marc Brittain
Reward Function Optimization of a Deep Reinforcement Learning Collision Avoidance System
null
null
null
null
cs.AI cs.RO
http://creativecommons.org/licenses/by-sa/4.0/
The proliferation of unmanned aircraft systems (UAS) has caused airspace regulation authorities to examine the interoperability of these aircraft with collision avoidance systems initially designed for large transport category aircraft. Limitations in the currently mandated TCAS led the Federal Aviation Administration to commission the development of a new solution, the Airborne Collision Avoidance System X (ACAS X), designed to enable a collision avoidance capability for multiple aircraft platforms, including UAS. While prior research explored using deep reinforcement learning (DRL) algorithms for collision avoidance, DRL did not perform as well as existing solutions. This work explores the benefits of using a DRL collision avoidance system whose parameters are tuned using a surrogate optimizer. We show that the use of a surrogate optimizer leads to a DRL approach that can increase safety and operational viability and support future capability development for UAS collision avoidance.
[ { "created": "Thu, 1 Dec 2022 20:20:41 GMT", "version": "v1" } ]
2022-12-05
[ [ "Cone", "Cooper", "" ], [ "Owen", "Michael", "" ], [ "Alvarez", "Luis", "" ], [ "Brittain", "Marc", "" ] ]
The proliferation of unmanned aircraft systems (UAS) has caused airspace regulation authorities to examine the interoperability of these aircraft with collision avoidance systems initially designed for large transport category aircraft. Limitations in the currently mandated TCAS led the Federal Aviation Administration to commission the development of a new solution, the Airborne Collision Avoidance System X (ACAS X), designed to enable a collision avoidance capability for multiple aircraft platforms, including UAS. While prior research explored using deep reinforcement learning (DRL) algorithms for collision avoidance, DRL did not perform as well as existing solutions. This work explores the benefits of using a DRL collision avoidance system whose parameters are tuned using a surrogate optimizer. We show that the use of a surrogate optimizer leads to a DRL approach that can increase safety and operational viability and support future capability development for UAS collision avoidance.
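The outer tuning loop described above can be sketched with an off-the-shelf surrogate optimizer. Below, Gaussian-process Bayesian optimization from scikit-optimize searches over three hypothetical reward weights; the train_and_evaluate stub stands in for the expensive step of training a DRL avoidance policy and scoring it on safety and operational metrics, which the abstract does not specify.

```python
# Surrogate (Bayesian) optimization over reward-function weights. The inner
# objective is a cheap placeholder for "train policy, evaluate safety/ops cost".
from skopt import gp_minimize

def train_and_evaluate(weights):
    w_separation, w_alert, w_maneuver = weights
    # Hypothetical stand-in for: train a DRL avoidance policy with these
    # reward weights, run Monte Carlo encounters, and return a scalar cost
    # combining collision risk with operational disruption.
    return (w_separation - 1.0) ** 2 + 0.1 * w_alert + 0.05 * w_maneuver

result = gp_minimize(
    train_and_evaluate,
    dimensions=[(0.1, 5.0), (0.0, 1.0), (0.0, 1.0)],   # assumed search ranges
    n_calls=30,
    random_state=0,
)
print("best weights:", result.x, "best cost:", result.fun)
```

The surrogate is what makes this affordable: each objective evaluation is a full DRL training run, so the optimizer must propose promising weight settings from few samples.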
2308.14562
Philip Tobuschat
Philip Tobuschat, Hao Ma, Dieter B\"uchler, Bernhard Sch\"olkopf, Michael Muehlebach
Data-Efficient Online Learning of Ball Placement in Robot Table Tennis
7 pages, 6 figures, to be published in proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2023
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
We present an implementation of an online optimization algorithm for hitting a predefined target when returning ping-pong balls with a table tennis robot. The online algorithm optimizes over so-called interception policies, which define the manner in which the robot arm intercepts the ball. In our case, these are composed of the state of the robot arm (position and velocity) at interception time. Gradient information is provided to the optimization algorithm via the mapping from the interception policy to the landing point of the ball on the table, which is approximated with a black-box and a grey-box approach. Our algorithm is applied to a robotic arm with four degrees of freedom that is driven by pneumatic artificial muscles. As a result, the robot arm is able to return the ball onto any predefined target on the table after about 2-5 iterations. We highlight the robustness of our approach by showing rapid convergence with both the black-box and the grey-box gradients. In addition, the small number of iterations required to reach close proximity to the target also underlines the sample efficiency. A demonstration video can be found here: https://youtu.be/VC3KJoCss0k.
[ { "created": "Mon, 28 Aug 2023 13:24:58 GMT", "version": "v1" } ]
2023-08-29
[ [ "Tobuschat", "Philip", "" ], [ "Ma", "Hao", "" ], [ "Büchler", "Dieter", "" ], [ "Schölkopf", "Bernhard", "" ], [ "Muehlebach", "Michael", "" ] ]
We present an implementation of an online optimization algorithm for hitting a predefined target when returning ping-pong balls with a table tennis robot. The online algorithm optimizes over so-called interception policies, which define the manner in which the robot arm intercepts the ball. In our case, these are composed of the state of the robot arm (position and velocity) at interception time. Gradient information is provided to the optimization algorithm via the mapping from the interception policy to the landing point of the ball on the table, which is approximated with a black-box and a grey-box approach. Our algorithm is applied to a robotic arm with four degrees of freedom that is driven by pneumatic artificial muscles. As a result, the robot arm is able to return the ball onto any predefined target on the table after about 2-5 iterations. We highlight the robustness of our approach by showing rapid convergence with both the black-box and the grey-box gradients. In addition, the small number of iterations required to reach close proximity to the target also underlines the sample efficiency. A demonstration video can be found here: https://youtu.be/VC3KJoCss0k.
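The black-box variant of the gradient estimation described above reduces to finite differences through the policy-to-landing-point map. In the sketch below, land() is an invented linear stand-in for the real robot-and-ball system, and the step size and dimensions are illustrative; it only shows the shape of the iterative correction toward a target.

```python
# Black-box online optimization of an interception policy: estimate the
# Jacobian of the landing point w.r.t. the policy by finite differences,
# then take a gradient step on the squared distance to the target.
import numpy as np

def land(policy):
    # Hypothetical smooth map from interception state (position, velocity)
    # to the 2-D landing point on the table.
    A = np.array([[0.8, 0.1, 0.3, 0.0],
                  [0.0, 0.7, 0.0, 0.4]])
    return A @ policy

target = np.array([0.5, -0.3])
policy = np.zeros(4)                      # arm position/velocity at interception
lr, eps = 0.5, 1e-4

for it in range(10):
    err = land(policy) - target
    J = np.zeros((2, 4))                  # finite-difference Jacobian estimate
    for j in range(4):
        d = np.zeros(4)
        d[j] = eps
        J[:, j] = (land(policy + d) - land(policy - d)) / (2 * eps)
    policy -= lr * J.T @ err              # gradient step on 0.5 * ||err||^2
    print(it, np.round(err, 4))           # error shrinks over a few iterations
```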
2406.15265
Charlotte Pouw
Charlotte Pouw, Marianne de Heer Kloots, Afra Alishahi, Willem Zuidema
Perception of Phonological Assimilation by Neural Speech Recognition Models
Accepted for publication in Computational Linguistics (Special Issue on Language Learning, Representation, and Processing in Humans and Machines)
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Human listeners effortlessly compensate for phonological changes during speech perception, often unconsciously inferring the intended sounds. For example, listeners infer the underlying /n/ when hearing an utterance such as "clea[m] pan", where [m] arises from place assimilation to the following labial [p]. This article explores how the neural speech recognition model Wav2Vec2 perceives assimilated sounds, and identifies the linguistic knowledge that is implemented by the model to compensate for assimilation during Automatic Speech Recognition (ASR). Using psycholinguistic stimuli, we systematically analyze how various linguistic context cues influence compensation patterns in the model's output. Complementing these behavioral experiments, our probing experiments indicate that the model shifts its interpretation of assimilated sounds from their acoustic form to their underlying form in its final layers. Finally, our causal intervention experiments suggest that the model relies on minimal phonological context cues to accomplish this shift. These findings represent a step towards better understanding the similarities and differences in phonological processing between neural ASR models and humans.
[ { "created": "Fri, 21 Jun 2024 15:58:22 GMT", "version": "v1" } ]
2024-06-24
[ [ "Pouw", "Charlotte", "" ], [ "Kloots", "Marianne de Heer", "" ], [ "Alishahi", "Afra", "" ], [ "Zuidema", "Willem", "" ] ]
Human listeners effortlessly compensate for phonological changes during speech perception, often unconsciously inferring the intended sounds. For example, listeners infer the underlying /n/ when hearing an utterance such as "clea[m] pan", where [m] arises from place assimilation to the following labial [p]. This article explores how the neural speech recognition model Wav2Vec2 perceives assimilated sounds, and identifies the linguistic knowledge that is implemented by the model to compensate for assimilation during Automatic Speech Recognition (ASR). Using psycholinguistic stimuli, we systematically analyze how various linguistic context cues influence compensation patterns in the model's output. Complementing these behavioral experiments, our probing experiments indicate that the model shifts its interpretation of assimilated sounds from their acoustic form to their underlying form in its final layers. Finally, our causal intervention experiments suggest that the model relies on minimal phonological context cues to accomplish this shift. These findings represent a step towards better understanding the similarities and differences in phonological processing between neural ASR models and humans.
2003.06880
Liat Peterfreund
Liat Peterfreund
Grammars for Document Spanners
null
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
We propose a new grammar-based language for defining information extractors from documents (text) that is built upon the well-studied framework of document spanners for extracting structured data from text. While previously studied formalisms for document spanners are mainly based on regular expressions, we use an extension of context-free grammars, called extraction grammars, to define the new class of context-free spanners. Extraction grammars are simply context-free grammars extended with variables that capture interval positions of the document, namely spans. While regular expressions are efficient for tokenizing and tagging, context-free grammars are also efficient for capturing structural properties. Indeed, we show that context-free spanners are strictly more expressive than their regular counterparts. We reason about the expressive power of our new class and present a pushdown-automata model that captures it. We show that extraction grammars can be evaluated with polynomial data complexity. Nevertheless, as the degree of the polynomial depends on the query, we present an enumeration algorithm for unambiguous extraction grammars that, after quintic preprocessing, outputs the results sequentially, without repetitions, with a constant delay between every two consecutive ones.
[ { "created": "Sun, 15 Mar 2020 17:50:18 GMT", "version": "v1" }, { "created": "Tue, 24 Mar 2020 11:36:38 GMT", "version": "v2" }, { "created": "Mon, 20 Apr 2020 17:00:06 GMT", "version": "v3" }, { "created": "Thu, 12 Nov 2020 11:10:52 GMT", "version": "v4" }, { "created": "Sat, 13 Mar 2021 10:15:15 GMT", "version": "v5" }, { "created": "Tue, 24 Jan 2023 15:32:26 GMT", "version": "v6" } ]
2023-01-25
[ [ "Peterfreund", "Liat", "" ] ]
We propose a new grammar-based language for defining information extractors from documents (text) that is built upon the well-studied framework of document spanners for extracting structured data from text. While previously studied formalisms for document spanners are mainly based on regular expressions, we use an extension of context-free grammars, called extraction grammars, to define the new class of context-free spanners. Extraction grammars are simply context-free grammars extended with variables that capture interval positions of the document, namely spans. While regular expressions are efficient for tokenizing and tagging, context-free grammars are also efficient for capturing structural properties. Indeed, we show that context-free spanners are strictly more expressive than their regular counterparts. We reason about the expressive power of our new class and present a pushdown-automata model that captures it. We show that extraction grammars can be evaluated with polynomial data complexity. Nevertheless, as the degree of the polynomial depends on the query, we present an enumeration algorithm for unambiguous extraction grammars that, after quintic preprocessing, outputs the results sequentially, without repetitions, with a constant delay between every two consecutive ones.
2102.05346
Michael K\"olle
Michael K\"olle, Dominik Laupheimer, Stefan Schmohl, Norbert Haala, Franz Rottensteiner, Jan Dirk Wegner, Hugo Ledoux
The Hessigheim 3D (H3D) Benchmark on Semantic Segmentation of High-Resolution 3D Point Clouds and Textured Meshes from UAV LiDAR and Multi-View-Stereo
H3D can be retrieved from https://ifpwww.ifp.uni-stuttgart.de/benchmark/hessigheim/default.aspx
null
10.1016/j.ophoto.2021.100001
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Automated semantic segmentation and object detection are of great importance in geospatial data analysis. However, supervised machine learning systems such as convolutional neural networks require large corpora of annotated training data. Especially in the geospatial domain, such datasets are quite scarce. Within this paper, we aim to alleviate this issue by introducing a new annotated 3D dataset that is unique in three ways: i) The dataset consists of both an Unmanned Aerial Vehicle (UAV) laser scanning point cloud and a 3D textured mesh. ii) The point cloud features a mean point density of about 800 pts/sqm and the oblique imagery used for 3D mesh texturing realizes a ground sampling distance of about 2-3 cm. This enables the identification of fine-grained structures and represents the state of the art in UAV-based mapping. iii) Both data modalities will be published for a total of three epochs allowing applications such as change detection. The dataset depicts the village of Hessigheim (Germany), henceforth referred to as H3D. It is designed to promote research in the field of 3D data analysis on one hand and to evaluate and rank existing and emerging approaches for semantic segmentation of both data modalities on the other hand. Ultimately, we hope that H3D will become a widely used benchmark dataset in company with the well-established ISPRS Vaihingen 3D Semantic Labeling Challenge benchmark (V3D). The dataset can be downloaded from https://ifpwww.ifp.uni-stuttgart.de/benchmark/hessigheim/default.aspx.
[ { "created": "Wed, 10 Feb 2021 09:33:48 GMT", "version": "v1" }, { "created": "Thu, 25 Feb 2021 19:25:51 GMT", "version": "v2" } ]
2021-07-20
[ [ "Kölle", "Michael", "" ], [ "Laupheimer", "Dominik", "" ], [ "Schmohl", "Stefan", "" ], [ "Haala", "Norbert", "" ], [ "Rottensteiner", "Franz", "" ], [ "Wegner", "Jan Dirk", "" ], [ "Ledoux", "Hugo", "" ] ]
Automated semantic segmentation and object detection are of great importance in geospatial data analysis. However, supervised machine learning systems such as convolutional neural networks require large corpora of annotated training data. Especially in the geospatial domain, such datasets are quite scarce. Within this paper, we aim to alleviate this issue by introducing a new annotated 3D dataset that is unique in three ways: i) The dataset consists of both an Unmanned Aerial Vehicle (UAV) laser scanning point cloud and a 3D textured mesh. ii) The point cloud features a mean point density of about 800 pts/sqm and the oblique imagery used for 3D mesh texturing realizes a ground sampling distance of about 2-3 cm. This enables the identification of fine-grained structures and represents the state of the art in UAV-based mapping. iii) Both data modalities will be published for a total of three epochs allowing applications such as change detection. The dataset depicts the village of Hessigheim (Germany), henceforth referred to as H3D. It is designed to promote research in the field of 3D data analysis on one hand and to evaluate and rank existing and emerging approaches for semantic segmentation of both data modalities on the other hand. Ultimately, we hope that H3D will become a widely used benchmark dataset in company with the well-established ISPRS Vaihingen 3D Semantic Labeling Challenge benchmark (V3D). The dataset can be downloaded from https://ifpwww.ifp.uni-stuttgart.de/benchmark/hessigheim/default.aspx.
2402.03907
Efe Bozkir
Efe Bozkir and S\"uleyman \"Ozdel and Ka Hei Carrie Lau and Mengdi Wang and Hong Gao and Enkelejda Kasneci
Embedding Large Language Models into Extended Reality: Opportunities and Challenges for Inclusion, Engagement, and Privacy
ACM Conversational User Interfaces 2024
null
10.1145/3640794.3665563
null
cs.HC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advances in artificial intelligence and human-computer interaction will likely lead to extended reality (XR) becoming pervasive. While XR can provide users with interactive, engaging, and immersive experiences, non-player characters are often utilized in pre-scripted and conventional ways. This paper argues for using large language models (LLMs) in XR by embedding them in avatars or as narratives to facilitate inclusion through prompt engineering and fine-tuning the LLMs. We argue that this inclusion will promote diversity for XR use. Furthermore, the versatile conversational capabilities of LLMs will likely increase engagement in XR, helping XR become ubiquitous. Lastly, we speculate that combining the information provided to LLM-powered spaces by users and the biometric data obtained might lead to novel privacy invasions. While exploring potential privacy breaches, examining user privacy concerns and preferences is also essential. Therefore, despite challenges, LLM-powered XR is a promising area with several opportunities.
[ { "created": "Tue, 6 Feb 2024 11:19:40 GMT", "version": "v1" }, { "created": "Thu, 20 Jun 2024 10:02:30 GMT", "version": "v2" } ]
2024-06-21
[ [ "Bozkir", "Efe", "" ], [ "Özdel", "Süleyman", "" ], [ "Lau", "Ka Hei Carrie", "" ], [ "Wang", "Mengdi", "" ], [ "Gao", "Hong", "" ], [ "Kasneci", "Enkelejda", "" ] ]
Advances in artificial intelligence and human-computer interaction will likely lead to extended reality (XR) becoming pervasive. While XR can provide users with interactive, engaging, and immersive experiences, non-player characters are often utilized in pre-scripted and conventional ways. This paper argues for using large language models (LLMs) in XR by embedding them in avatars or as narratives to facilitate inclusion through prompt engineering and fine-tuning the LLMs. We argue that this inclusion will promote diversity for XR use. Furthermore, the versatile conversational capabilities of LLMs will likely increase engagement in XR, helping XR become ubiquitous. Lastly, we speculate that combining the information provided to LLM-powered spaces by users and the biometric data obtained might lead to novel privacy invasions. While exploring potential privacy breaches, examining user privacy concerns and preferences is also essential. Therefore, despite challenges, LLM-powered XR is a promising area with several opportunities.
2112.09196
Tong Xia
Tong Xia, Jing Han, Cecilia Mascolo
Benchmarking Uncertainty Quantification on Biosignal Classification Tasks under Dataset Shift
Accepted by The 6th International Workshop on Health Intelligence (W3PHIAI-22)
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A biosignal is a signal that can be continuously measured from human bodies, such as respiratory sounds, heart activity (ECG), brain waves (EEG), etc. Based on such signals, machine learning models have been developed with very promising performance for automatic disease detection and health status monitoring. However, dataset shift, i.e., the case where the data distribution at inference time differs from that of the training data, is not uncommon for real biosignal-based applications. To improve robustness, probabilistic models with uncertainty quantification are adopted to capture how reliable a prediction is. Yet, assessing the quality of the estimated uncertainty remains a challenge. In this work, we propose a framework to evaluate the capability of the estimated uncertainty in capturing different types of biosignal dataset shifts of varying degree. In particular, we use three classification tasks based on respiratory sounds and electrocardiography signals to benchmark five representative uncertainty quantification methods. Extensive experiments show that, although Ensemble and Bayesian models provide relatively better uncertainty estimates under dataset shifts, all tested models fall short of trustworthy prediction and model calibration. Our work paves the way for a comprehensive evaluation of any newly developed biosignal classifier.
[ { "created": "Thu, 16 Dec 2021 20:42:17 GMT", "version": "v1" }, { "created": "Tue, 25 Jan 2022 15:10:41 GMT", "version": "v2" } ]
2022-01-26
[ [ "Xia", "Tong", "" ], [ "Han", "Jing", "" ], [ "Mascolo", "Cecilia", "" ] ]
A biosignal is a signal that can be continuously measured from human bodies, such as respiratory sounds, heart activity (ECG), brain waves (EEG), etc. Based on such signals, machine learning models have been developed with very promising performance for automatic disease detection and health status monitoring. However, dataset shift, i.e., the case where the data distribution at inference time differs from that of the training data, is not uncommon for real biosignal-based applications. To improve robustness, probabilistic models with uncertainty quantification are adopted to capture how reliable a prediction is. Yet, assessing the quality of the estimated uncertainty remains a challenge. In this work, we propose a framework to evaluate the capability of the estimated uncertainty in capturing different types of biosignal dataset shifts of varying degree. In particular, we use three classification tasks based on respiratory sounds and electrocardiography signals to benchmark five representative uncertainty quantification methods. Extensive experiments show that, although Ensemble and Bayesian models provide relatively better uncertainty estimates under dataset shifts, all tested models fall short of trustworthy prediction and model calibration. Our work paves the way for a comprehensive evaluation of any newly developed biosignal classifier.
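A scaled-down version of the benchmark's core question looks like this: train an ensemble, then watch what its predictive entropy does as inputs move away from the training distribution. The synthetic two-class data below stands in for biosignals; note that an ensemble can remain confidently wrong far from the training data, which is the kind of calibration failure the abstract reports.

```python
# Probe an ensemble's predictive entropy under increasing covariate shift.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (2000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # toy 2-class "biosignal" task

ensemble = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                          random_state=s).fit(X, y) for s in range(5)]

def entropy(p):
    p = np.clip(p, 1e-9, 1.0)
    return -(p * np.log(p)).sum(axis=1)

for shift in [0.0, 1.0, 2.0, 4.0]:               # increasing mean shift
    X_test = rng.normal(0, 1, (500, 10)) + shift
    p = np.mean([m.predict_proba(X_test) for m in ensemble], axis=0)
    print(f"shift={shift}: mean predictive entropy {entropy(p).mean():.3f}")
```

Whether the entropy actually rises with shift, or the ensemble extrapolates confidently, is exactly what such a benchmark measures.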
2007.11930
Jaafar Elmirghani
Zaid H. Nasralla, Taisir E. H. Elgorashi and Jaafar M. H. Elmirghani
Blackout Resilient Optical Core Network
null
null
null
null
cs.NI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A disaster may not necessarily demolish the telecommunications infrastructure; instead, it might affect the national grid and cause blackouts, consequently disrupting network operation unless alternative power sources are available. In this paper, power outages are considered and the telecommunication network performance is evaluated during a blackout. Two approaches are presented to minimize the impact of a power outage and maximize the survival time of the blackout node. A mixed integer linear programming (MILP) model is developed to evaluate the network performance under a single-node blackout scenario. The model is used to evaluate the network under the two proposed scenarios. The results show that the proposed approach succeeds in extending the network lifetime while minimizing the required amount of backup energy.
[ { "created": "Thu, 23 Jul 2020 11:10:33 GMT", "version": "v1" } ]
2020-07-24
[ [ "Nasralla", "Zaid H.", "" ], [ "Elgorashi", "Taisir E. H.", "" ], [ "Elmirghani", "Jaafar M. H.", "" ] ]
A disaster may not necessarily demolish the telecommunications infrastructure; instead, it might affect the national grid and cause blackouts, consequently disrupting network operation unless alternative power sources are available. In this paper, power outages are considered and the telecommunication network performance is evaluated during a blackout. Two approaches are presented to minimize the impact of a power outage and maximize the survival time of the blackout node. A mixed integer linear programming (MILP) model is developed to evaluate the network performance under a single-node blackout scenario. The model is used to evaluate the network under the two proposed scenarios. The results show that the proposed approach succeeds in extending the network lifetime while minimizing the required amount of backup energy.
2106.10412
Devansh Jalota
Devansh Jalota, Marco Pavone, Qi Qi, Yinyu Ye
Fisher Markets with Linear Constraints: Equilibrium Properties and Efficient Distributed Algorithms
null
null
null
null
cs.GT
http://creativecommons.org/licenses/by/4.0/
The Fisher market is one of the most fundamental models for resource allocation problems in economic theory, wherein agents spend a budget of currency to buy goods that maximize their utilities, while producers sell capacity-constrained goods in exchange for currency. However, the consideration of only two types of constraints, i.e., budgets of individual buyers and capacities of goods, makes Fisher markets less amenable to resource allocation settings where agents have additional linear constraints, e.g., knapsack and proportionality constraints. In this work, we introduce a modified Fisher market, where each agent may have additional linear constraints, and show that this modification to classical Fisher markets fundamentally alters the properties of the market equilibrium as well as the optimal allocations. These properties of the modified Fisher market prompt us to introduce a budget perturbed social optimization problem (BP-SOP) and set prices based on the dual variables of BP-SOP's capacity constraints. To compute the budget perturbations, we develop a fixed point iterative scheme and validate its convergence through numerical experiments. Since this fixed point iterative scheme involves solving a centralized problem at each step, we propose a new class of distributed algorithms to compute equilibrium prices. In particular, we develop an Alternating Direction Method of Multipliers (ADMM) algorithm with strong convergence guarantees for Fisher markets with homogeneous linear constraints as well as for classical Fisher markets. In this algorithm, the prices are updated based on the tatonnement process, with a step size that is completely independent of the utilities of individual agents. Thus, our mechanism, both theoretically and computationally, overcomes a fundamental limitation of classical Fisher markets, which only consider capacity and budget constraints.
[ { "created": "Sat, 19 Jun 2021 03:43:43 GMT", "version": "v1" } ]
2021-06-22
[ [ "Jalota", "Devansh", "" ], [ "Pavone", "Marco", "" ], [ "Qi", "Qi", "" ], [ "Ye", "Yinyu", "" ] ]
The Fisher market is one of the most fundamental models for resource allocation problems in economic theory, wherein agents spend a budget of currency to buy goods that maximize their utilities, while producers sell capacity-constrained goods in exchange for currency. However, the consideration of only two types of constraints, i.e., budgets of individual buyers and capacities of goods, makes Fisher markets less amenable to resource allocation settings where agents have additional linear constraints, e.g., knapsack and proportionality constraints. In this work, we introduce a modified Fisher market, where each agent may have additional linear constraints, and show that this modification to classical Fisher markets fundamentally alters the properties of the market equilibrium as well as the optimal allocations. These properties of the modified Fisher market prompt us to introduce a budget perturbed social optimization problem (BP-SOP) and set prices based on the dual variables of BP-SOP's capacity constraints. To compute the budget perturbations, we develop a fixed point iterative scheme and validate its convergence through numerical experiments. Since this fixed point iterative scheme involves solving a centralized problem at each step, we propose a new class of distributed algorithms to compute equilibrium prices. In particular, we develop an Alternating Direction Method of Multipliers (ADMM) algorithm with strong convergence guarantees for Fisher markets with homogeneous linear constraints as well as for classical Fisher markets. In this algorithm, the prices are updated based on the tatonnement process, with a step size that is completely independent of the utilities of individual agents. Thus, our mechanism, both theoretically and computationally, overcomes a fundamental limitation of classical Fisher markets, which only consider capacity and budget constraints.
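For context on the price dynamic mentioned above, here is the classical tatonnement baseline for a tiny linear Fisher market: each buyer puts their whole budget on maximum bang-per-buck goods, and prices move with excess demand. This sketch is the textbook dynamic the paper builds on, not its ADMM algorithm or budget-perturbation scheme; the two-buyer instance is invented.

```python
# Tatonnement for a 2-buyer, 2-good linear Fisher market.
import numpy as np

u = np.array([[2.0, 1.0],                  # utilities: buyer i values good j
              [1.0, 3.0]])
budgets = np.array([1.0, 1.0])
supply = np.array([1.0, 1.0])
p = np.array([0.6, 1.4])                   # start away from equilibrium
step = 0.02

for t in range(2000):
    demand = np.zeros(2)
    for i in range(2):
        j = int(np.argmax(u[i] / p))       # linear utility: all budget on best bang-per-buck
        demand[j] += budgets[i] / p[j]
    p = np.maximum(1e-6, p * (1.0 + step * (demand - supply)))  # multiplicative price update

print("prices after tatonnement:", p.round(2))   # settles near the equilibrium (1, 1)
```

With all-or-nothing linear demands the dynamic can chatter near equilibrium; a small step size keeps it close, which is one reason the paper develops algorithms with stronger guarantees.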
2205.00691
Wei Jiang
Wei Jiang and Hans D. Schotten
Initial Beamforming for Millimeter-Wave and Terahertz Communications in 6G Mobile Systems
2022 IEEE Wireless Communications and Networking Conference (WCNC), April 2022, Austin, TX, USA
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
To meet the demand for data rates of terabits per second, the next-generation mobile system needs to exploit the abundant spectrum in the millimeter-wave and terahertz bands. However, high-frequency transmission relies heavily on large-scale antenna arrays to reap high beamforming gain, which is used to compensate for severe propagation loss. This raises the problem of omni-directional beamforming during the phase of initial access, where a base station is required to broadcast synchronization signals and system information to all users within its coverage. This paper proposes a novel initial beamforming scheme, which provides instantaneous gain equally in all directions by forming a pair of complementary beams. Numerical results verify that it can achieve omni-directional coverage with optimal performance, remarkably outperforming the previous scheme known as random beamforming. It is applicable to any form of large-scale array and to all three architectures, i.e., digital, analog, and hybrid beamforming.
[ { "created": "Mon, 2 May 2022 07:17:46 GMT", "version": "v1" } ]
2022-05-03
[ [ "Jiang", "Wei", "" ], [ "Schotten", "Hans D.", "" ] ]
To meet the demand for data rates of terabits per second, the next-generation mobile system needs to exploit the abundant spectrum in the millimeter-wave and terahertz bands. However, high-frequency transmission relies heavily on large-scale antenna arrays to reap high beamforming gain, which is used to compensate for severe propagation loss. This raises the problem of omni-directional beamforming during the phase of initial access, where a base station is required to broadcast synchronization signals and system information to all users within its coverage. This paper proposes a novel initial beamforming scheme, which provides instantaneous gain equally in all directions by forming a pair of complementary beams. Numerical results verify that it can achieve omni-directional coverage with optimal performance, remarkably outperforming the previous scheme known as random beamforming. It is applicable to any form of large-scale array and to all three architectures, i.e., digital, analog, and hybrid beamforming.
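The abstract does not spell out the construction, but one classical way to obtain a pair of complementary beams is to use a Golay complementary pair as the two weight vectors on a uniform linear array: because the pair's autocorrelations sum to a delta, the two array-factor powers sum to a constant in every direction. The sketch below verifies this numerically for a length-8 pair; whether the paper uses this exact construction is an assumption here.

```python
# Combined radiation power of two Golay-pair beams is flat over all angles.
import numpy as np

a = np.array([1, 1, 1, -1, 1, 1, -1, 1], dtype=float)     # Golay pair, length 8
b = np.array([1, 1, 1, -1, -1, -1, 1, -1], dtype=float)   # (recursive construction)

n = np.arange(8)
angles = np.linspace(-np.pi / 2, np.pi / 2, 721)
steer = np.exp(1j * np.pi * np.outer(np.sin(angles), n))  # half-wavelength ULA steering

pa = np.abs(steer @ a) ** 2                                # power of beam 1 vs angle
pb = np.abs(steer @ b) ** 2                                # power of beam 2 vs angle
total = pa + pb
print(round(total.min(), 6), round(total.max(), 6))        # both ~16 = 2N: flat coverage
```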
2110.02597
Kerstin Bongard-Blanchy Dr
Cristiana Santos, Arianna Rossi, Lorena S\'anchez Chamorro, Kerstin Bongard-Blanchy, Ruba Abu-Salma
Cookie Banners, What's the Purpose? Analyzing Cookie Banner Text Through a Legal Lens
null
null
10.1145/3463676.3485611
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
A cookie banner pops up when a user visits a website for the first time, requesting consent to the use of cookies and other trackers for a variety of purposes. Unlike prior work that has focused on evaluating the user interface (UI) design of cookie banners, this paper presents an in-depth analysis of what cookie banners say to users to get their consent. We took an interdisciplinary approach to determining what cookie banners should say. Following the legal requirements of the ePrivacy Directive (ePD) and the General Data Protection Regulation (GDPR), we manually annotated around 400 cookie banners presented on the most popular English-speaking websites visited by users residing in the EU. We focused on analyzing the purposes of cookie banners and how these purposes were expressed (e.g., any misleading or vague language, any use of jargon). We found that 89% of cookie banners violated applicable laws. In particular, 61% of banners violated the purpose specificity requirement by mentioning vague purposes, including "user experience enhancement". Further, 30% of banners used positive framing, breaching the freely given and informed consent requirements. Based on these findings, we provide recommendations that regulators can find useful. We also describe future research directions.
[ { "created": "Wed, 6 Oct 2021 09:07:47 GMT", "version": "v1" }, { "created": "Thu, 7 Oct 2021 11:09:44 GMT", "version": "v2" } ]
2021-10-08
[ [ "Santos", "Cristiana", "" ], [ "Rossi", "Arianna", "" ], [ "Chamorro", "Lorena Sánchez", "" ], [ "Bongard-Blanchy", "Kerstin", "" ], [ "Abu-Salma", "Ruba", "" ] ]
A cookie banner pops up when a user visits a website for the first time, requesting consent to the use of cookies and other trackers for a variety of purposes. Unlike prior work that has focused on evaluating the user interface (UI) design of cookie banners, this paper presents an in-depth analysis of what cookie banners say to users to get their consent. We took an interdisciplinary approach to determining what cookie banners should say. Following the legal requirements of the ePrivacy Directive (ePD) and the General Data Protection Regulation (GDPR), we manually annotated around 400 cookie banners presented on the most popular English-speaking websites visited by users residing in the EU. We focused on analyzing the purposes of cookie banners and how these purposes were expressed (e.g., any misleading or vague language, any use of jargon). We found that 89% of cookie banners violated applicable laws. In particular, 61% of banners violated the purpose specificity requirement by mentioning vague purposes, including "user experience enhancement". Further, 30% of banners used positive framing, breaching the freely given and informed consent requirements. Based on these findings, we provide recommendations that regulators can find useful. We also describe future research directions.
2201.08904
Jeffrey Zhao
Jeffrey Zhao, Raghav Gupta, Yuan Cao, Dian Yu, Mingqiu Wang, Harrison Lee, Abhinav Rastogi, Izhak Shafran, Yonghui Wu
Description-Driven Task-Oriented Dialog Modeling
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Task-oriented dialogue (TOD) systems are required to identify key information from conversations for the completion of given tasks. Such information is conventionally specified in terms of intents and slots contained in task-specific ontology or schemata. Since these schemata are designed by system developers, the naming convention for slots and intents is not uniform across tasks, and may not convey their semantics effectively. This can lead to models memorizing arbitrary patterns in data, resulting in suboptimal performance and generalization. In this paper, we propose that schemata should be modified by replacing names or notations entirely with natural language descriptions. We show that a language description-driven system exhibits better understanding of task specifications, higher performance on state tracking, improved data efficiency, and effective zero-shot transfer to unseen tasks. Following this paradigm, we present a simple yet effective Description-Driven Dialog State Tracking (D3ST) model, which relies purely on schema descriptions and an "index-picking" mechanism. We demonstrate the superiority in quality, data efficiency and robustness of our approach as measured on the MultiWOZ (Budzianowski et al., 2018), SGD (Rastogi et al., 2020), and the recent SGD-X (Lee et al., 2021) benchmarks.
[ { "created": "Fri, 21 Jan 2022 22:07:41 GMT", "version": "v1" } ]
2022-01-25
[ [ "Zhao", "Jeffrey", "" ], [ "Gupta", "Raghav", "" ], [ "Cao", "Yuan", "" ], [ "Yu", "Dian", "" ], [ "Wang", "Mingqiu", "" ], [ "Lee", "Harrison", "" ], [ "Rastogi", "Abhinav", "" ], [ "Shafran", "Izhak", "" ], [ "Wu", "Yonghui", "" ] ]
Task-oriented dialogue (TOD) systems are required to identify key information from conversations for the completion of given tasks. Such information is conventionally specified in terms of intents and slots contained in task-specific ontology or schemata. Since these schemata are designed by system developers, the naming convention for slots and intents is not uniform across tasks, and may not convey their semantics effectively. This can lead to models memorizing arbitrary patterns in data, resulting in suboptimal performance and generalization. In this paper, we propose that schemata should be modified by replacing names or notations entirely with natural language descriptions. We show that a language description-driven system exhibits better understanding of task specifications, higher performance on state tracking, improved data efficiency, and effective zero-shot transfer to unseen tasks. Following this paradigm, we present a simple yet effective Description-Driven Dialog State Tracking (D3ST) model, which relies purely on schema descriptions and an "index-picking" mechanism. We demonstrate the superiority in quality, data efficiency and robustness of our approach as measured on the MultiWOZ (Budzianowski et al., 2018), SGD (Rastogi et al., 2020), and the recent SGD-X (Lee et al., 2021) benchmarks.
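The index-picking idea can be made concrete with a small serialization example: slot names disappear in favor of indexed natural-language descriptions, and the model's output refers back to those indices. The exact format D3ST uses may differ; this is only an illustration of decoupling the schema from arbitrary names.

```python
# Description-driven input serialization for dialogue state tracking.
slot_descriptions = [
    "the area of town the user wants a restaurant in",
    "the type of cuisine the user prefers",
    "the number of people in the booking",
]
utterance = "I'd like a cheap Italian place in the centre for 4 people."

schema_prefix = " ".join(f"{i}: {d}" for i, d in enumerate(slot_descriptions))
model_input = f"{schema_prefix} [user] {utterance}"
print(model_input)
# A seq2seq model trained on such inputs would emit something like:
#   "0=centre 1=italian 2=4"
# so state tracking reduces to picking indices, not memorizing slot names.
```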
2309.02427
Shunyu Yao
Theodore R. Sumers, Shunyu Yao, Karthik Narasimhan, Thomas L. Griffiths
Cognitive Architectures for Language Agents
v3 is TMLR camera ready version. 19 pages of main content, 5 figures. The first two authors contributed equally, order decided by coin flip. A CoALA-based repo of recent work on language agents: https://github.com/ysymyth/awesome-language-agents
null
null
null
cs.AI cs.CL cs.LG cs.SC
http://creativecommons.org/licenses/by/4.0/
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
[ { "created": "Tue, 5 Sep 2023 17:56:20 GMT", "version": "v1" }, { "created": "Wed, 27 Sep 2023 15:27:25 GMT", "version": "v2" }, { "created": "Fri, 15 Mar 2024 15:44:11 GMT", "version": "v3" } ]
2024-03-18
[ [ "Sumers", "Theodore R.", "" ], [ "Yao", "Shunyu", "" ], [ "Narasimhan", "Karthik", "" ], [ "Griffiths", "Thomas L.", "" ] ]
Recent efforts have augmented large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning, leading to a new class of language agents. While these agents have achieved substantial empirical success, we lack a systematic framework to organize existing agents and plan future developments. In this paper, we draw on the rich history of cognitive science and symbolic artificial intelligence to propose Cognitive Architectures for Language Agents (CoALA). CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions. We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents. Taken together, CoALA contextualizes today's language agents within the broader history of AI and outlines a path towards language-based general intelligence.
2001.07607
Timothy LaRock
Timothy LaRock, Timothy Sakharov, Sahely Bhadra, Tina Eliassi-Rad
Understanding the Limitations of Network Online Learning
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Studies of networked phenomena, such as interactions in online social media, often rely on incomplete data, either because these phenomena are partially observed, or because the data is too large or expensive to acquire all at once. Analysis of incomplete data leads to skewed or misleading results. In this paper, we investigate limitations of learning to complete partially observed networks via node querying. Concretely, we study the following problem: given (i) a partially observed network, (ii) the ability to query nodes for their connections (e.g., by accessing an API), and (iii) a budget on the number of such queries, sequentially learn which nodes to query in order to maximally increase observability. We call this querying process Network Online Learning and present a family of algorithms called NOL*. These algorithms learn to choose which partially observed node to query next based on a parameterized model that is trained online through a process of exploration and exploitation. Extensive experiments on both synthetic and real world networks show that (i) it is possible to sequentially learn to choose which nodes are best to query in a network and (ii) some macroscopic properties of networks, such as the degree distribution and modular structure, impact the potential for learning and the optimal amount of random exploration.
[ { "created": "Thu, 9 Jan 2020 13:59:20 GMT", "version": "v1" } ]
2020-01-22
[ [ "LaRock", "Timothy", "" ], [ "Sakharov", "Timothy", "" ], [ "Bhadra", "Sahely", "" ], [ "Eliassi-Rad", "Tina", "" ] ]
Studies of networked phenomena, such as interactions in online social media, often rely on incomplete data, either because these phenomena are partially observed, or because the data is too large or expensive to acquire all at once. Analysis of incomplete data leads to skewed or misleading results. In this paper, we investigate limitations of learning to complete partially observed networks via node querying. Concretely, we study the following problem: given (i) a partially observed network, (ii) the ability to query nodes for their connections (e.g., by accessing an API), and (iii) a budget on the number of such queries, sequentially learn which nodes to query in order to maximally increase observability. We call this querying process Network Online Learning and present a family of algorithms called NOL*. These algorithms learn to choose which partially observed node to query next based on a parameterized model that is trained online through a process of exploration and exploitation. Extensive experiments on both synthetic and real world networks show that (i) it is possible to sequentially learn to choose which nodes are best to query in a network and (ii) some macroscopic properties of networks, such as the degree distribution and modular structure, impact the potential for learning and the optimal amount of random exploration.
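The explore/exploit loop described above can be sketched in a few lines. Below, a linear value model over two cheap structural features scores frontier nodes, an epsilon-greedy rule picks whom to query, and the model is regressed online toward the realized reward (newly revealed nodes). Features, reward, and update rule are simplified stand-ins for NOL*'s, and the synthetic graph replaces a real API.

```python
# Epsilon-greedy network online learning: learn which partially observed
# node to query next to maximally grow the observed graph.
import random
import networkx as nx

full = nx.barabasi_albert_graph(500, 3, seed=0)   # stands in for the hidden network
observed = nx.Graph()
observed.add_edges_from(full.edges(range(10)))    # small initial sample

w = [0.0, 0.0]                                    # linear value-model weights
eps, lr = 0.2, 0.01
queried = set()

def features(g, v):
    return [g.degree(v), nx.clustering(g, v)]     # cheap structural features

for step in range(100):
    frontier = [v for v in observed.nodes if v not in queried]
    if random.random() < eps:                     # explore
        v = random.choice(frontier)
    else:                                         # exploit the learned model
        v = max(frontier, key=lambda u: sum(
            wi * fi for wi, fi in zip(w, features(observed, u))))
    f = features(observed, v)                     # features before the query
    before = observed.number_of_nodes()
    observed.add_edges_from(full.edges(v))        # "query the API" for v's links
    queried.add(v)
    reward = observed.number_of_nodes() - before  # newly revealed nodes
    pred = sum(wi * fi for wi, fi in zip(w, f))
    w = [wi + lr * (reward - pred) * fi for wi, fi in zip(w, f)]  # online regression

print("observed", observed.number_of_nodes(), "of", full.number_of_nodes())
```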
2002.04741
Hao Chen
Hao Chen, Yali Wang, Guoyou Wang, Xiang Bai, and Yu Qiao
Progressive Object Transfer Detection
TIP 2019
null
10.1109/TIP.2019.2938680
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent development of object detection mainly depends on deep learning with large-scale benchmarks. However, collecting such fully-annotated data is often difficult or expensive for real-world applications, which restricts the power of deep neural networks in practice. Alternatively, humans can detect new objects with little annotation burden, since humans often use the prior knowledge to identify new objects with few elaborately-annotated examples, and subsequently generalize this capacity by exploiting objects from wild images. Inspired by this procedure of learning to detect, we propose a novel Progressive Object Transfer Detection (POTD) framework. Specifically, we make three main contributions in this paper. First, POTD can leverage various object supervision of different domains effectively into a progressive detection procedure. Via such human-like learning, one can boost a target detection task with few annotations. Second, POTD consists of two delicate transfer stages, i.e., Low-Shot Transfer Detection (LSTD), and Weakly-Supervised Transfer Detection (WSTD). In LSTD, we distill the implicit object knowledge of source detector to enhance target detector with few annotations. It can effectively warm up WSTD later on. In WSTD, we design a recurrent object labelling mechanism for learning to annotate weakly-labeled images. More importantly, we exploit the reliable object supervision from LSTD, which can further enhance the robustness of target detector in the WSTD stage. Finally, we perform extensive experiments on a number of challenging detection benchmarks with different settings. The results demonstrate that, our POTD outperforms the recent state-of-the-art approaches.
[ { "created": "Wed, 12 Feb 2020 00:16:24 GMT", "version": "v1" }, { "created": "Thu, 13 Feb 2020 05:06:51 GMT", "version": "v2" } ]
2020-02-19
[ [ "Chen", "Hao", "" ], [ "Wang", "Yali", "" ], [ "Wang", "Guoyou", "" ], [ "Bai", "Xiang", "" ], [ "Qiao", "Yu", "" ] ]
Recent development of object detection mainly depends on deep learning with large-scale benchmarks. However, collecting such fully-annotated data is often difficult or expensive for real-world applications, which restricts the power of deep neural networks in practice. Alternatively, humans can detect new objects with little annotation burden, since humans often use the prior knowledge to identify new objects with few elaborately-annotated examples, and subsequently generalize this capacity by exploiting objects from wild images. Inspired by this procedure of learning to detect, we propose a novel Progressive Object Transfer Detection (POTD) framework. Specifically, we make three main contributions in this paper. First, POTD can leverage various object supervision of different domains effectively into a progressive detection procedure. Via such human-like learning, one can boost a target detection task with few annotations. Second, POTD consists of two delicate transfer stages, i.e., Low-Shot Transfer Detection (LSTD), and Weakly-Supervised Transfer Detection (WSTD). In LSTD, we distill the implicit object knowledge of source detector to enhance target detector with few annotations. It can effectively warm up WSTD later on. In WSTD, we design a recurrent object labelling mechanism for learning to annotate weakly-labeled images. More importantly, we exploit the reliable object supervision from LSTD, which can further enhance the robustness of target detector in the WSTD stage. Finally, we perform extensive experiments on a number of challenging detection benchmarks with different settings. The results demonstrate that, our POTD outperforms the recent state-of-the-art approaches.
2108.08977
Zecheng He
Zecheng He, Ruby B. Lee
CloudShield: Real-time Anomaly Detection in the Cloud
null
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
In cloud computing, it is desirable if suspicious activities can be detected by automatic anomaly detection systems. Although anomaly detection has been investigated in the past, it remains unsolved in cloud computing. The challenges are: characterizing the normal behavior of a cloud server, distinguishing between benign and malicious anomalies (attacks), and preventing alert fatigue due to false alarms. We propose CloudShield, a practical and generalizable real-time anomaly and attack detection system for cloud computing. CloudShield uses a general, pretrained deep learning model with different cloud workloads to predict the normal behavior, and provides real-time and continuous detection by examining the model's reconstruction error distributions. Once an anomaly is detected, to reduce alert fatigue, CloudShield automatically distinguishes between benign programs, known attacks, and zero-day attacks by examining the prediction error distributions. We evaluate the proposed CloudShield on representative cloud benchmarks. Our evaluation shows that CloudShield, using model pretraining, can be applied to a wide range of cloud workloads. In particular, we observe that CloudShield can detect the recently proposed speculative execution attacks, e.g., the Spectre and Meltdown attacks, in milliseconds. Furthermore, we show that CloudShield accurately differentiates and prioritizes known attacks and potential zero-day attacks from benign programs. Thus, it significantly reduces false alarms by up to 99.0%.
[ { "created": "Fri, 20 Aug 2021 03:14:18 GMT", "version": "v1" }, { "created": "Wed, 25 Aug 2021 05:08:12 GMT", "version": "v2" } ]
2021-08-26
[ [ "He", "Zecheng", "" ], [ "Lee", "Ruby B.", "" ] ]
In cloud computing, it is desirable if suspicious activities can be detected by automatic anomaly detection systems. Although anomaly detection has been investigated in the past, it remains unsolved in cloud computing. Challenges are: characterizing the normal behavior of a cloud server, distinguishing between benign and malicious anomalies (attacks), and preventing alert fatigue due to false alarms. We propose CloudShield, a practical and generalizable real-time anomaly and attack detection system for cloud computing. CloudShield uses a general, pretrained deep learning model with different cloud workloads to predict the normal behavior and provide real-time and continuous detection by examining the model reconstruction error distributions. Once an anomaly is detected, to reduce alert fatigue, CloudShield automatically distinguishes between benign programs, known attacks, and zero-day attacks by examining the prediction error distributions. We evaluate the proposed CloudShield on representative cloud benchmarks. Our evaluation shows that CloudShield, using model pretraining, can be applied to a wide range of cloud workloads. In particular, we observe that CloudShield can detect the recently proposed speculative execution attacks, e.g., the Spectre and Meltdown attacks, in milliseconds. Furthermore, we show that CloudShield accurately differentiates and prioritizes known attacks and potential zero-day attacks from benign programs. Thus, it significantly reduces false alarms by up to 99.0%.
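The detection criterion above reduces to calibrating a threshold on prediction errors measured over benign runs and flagging windows whose error exceeds it. A minimal sketch of that idea follows; the `model.predict` interface and the 99.9th-percentile threshold are hypothetical stand-ins, not CloudShield's actual pretrained predictor or calibration procedure.

```python
import numpy as np

def fit_error_threshold(errors_normal, q=0.999):
    # Calibrate a threshold from the prediction-error distribution of benign runs.
    return float(np.quantile(errors_normal, q))

def detect(window, model, threshold):
    # window: sequence of measurements; predict the last one from the rest.
    pred = model.predict(window[:-1])            # hypothetical predictor interface
    err = float(np.mean((pred - window[-1]) ** 2))
    return err > threshold, err                  # (is_anomaly, error score)
```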
2408.02313
Benjamin Marais
Tony Quertier, Benjamin Marais, Gr\'egoire Barru\'e, St\'ephane Morucci, S\'evan Az\'e, S\'ebastien Salladin
A Lean Transformer Model for Dynamic Malware Analysis and Detection
null
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Malware is a fast-growing threat to the modern computing world and existing lines of defense are not efficient enough to address this issue. This is mainly due to the fact that many prevention solutions rely on signature-based detection methods that can easily be circumvented by hackers. Therefore, there is a recurrent need for behavior-based analysis, where a suspicious file is run in a secured environment and its traces are collected into reports for analysis. Previous works have shown some success leveraging Neural Networks and API call sequences extracted from these execution reports. Recently, Large Language Models and Generative AI have demonstrated impressive capabilities mainly in Natural Language Processing tasks and promising applications in the cybersecurity field for both attackers and defenders. In this paper, we design an Encoder-Only model, based on the Transformers architecture, to detect malicious files, digesting their API call sequences collected by an execution emulation solution. We also limit the size of the model architecture and the number of its parameters, since it is often considered that Large Language Models may be overkill for specific tasks such as the one we are dealing with here. In addition to achieving decent detection results, this approach has the advantage of reducing our carbon footprint by limiting training and inference times and facilitating technical operations with lower hardware requirements. We also carry out some analysis of our results and highlight the limits and possible improvements when using Transformers to analyze malicious files.
[ { "created": "Mon, 5 Aug 2024 08:46:46 GMT", "version": "v1" } ]
2024-08-06
[ [ "Quertier", "Tony", "" ], [ "Marais", "Benjamin", "" ], [ "Barrué", "Grégoire", "" ], [ "Morucci", "Stéphane", "" ], [ "Azé", "Sévan", "" ], [ "Salladin", "Sébastien", "" ] ]
Malware is a fast-growing threat to the modern computing world and existing lines of defense are not efficient enough to address this issue. This is mainly due to the fact that many prevention solutions rely on signature-based detection methods that can easily be circumvented by hackers. Therefore, there is a recurrent need for behavior-based analysis, where a suspicious file is run in a secured environment and its traces are collected into reports for analysis. Previous works have shown some success leveraging Neural Networks and API call sequences extracted from these execution reports. Recently, Large Language Models and Generative AI have demonstrated impressive capabilities mainly in Natural Language Processing tasks and promising applications in the cybersecurity field for both attackers and defenders. In this paper, we design an Encoder-Only model, based on the Transformers architecture, to detect malicious files, digesting their API call sequences collected by an execution emulation solution. We also limit the size of the model architecture and the number of its parameters, since it is often considered that Large Language Models may be overkill for specific tasks such as the one we are dealing with here. In addition to achieving decent detection results, this approach has the advantage of reducing our carbon footprint by limiting training and inference times and facilitating technical operations with lower hardware requirements. We also carry out some analysis of our results and highlight the limits and possible improvements when using Transformers to analyze malicious files.
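To make the encoder-only design concrete, here is a deliberately small PyTorch sketch of a Transformer classifier over tokenized API call sequences. All hyperparameters (d_model=128, 2 layers, mean pooling) are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class APICallClassifier(nn.Module):
    # Small encoder-only Transformer over tokenized API call sequences.
    def __init__(self, vocab_size, d_model=128, n_heads=4, n_layers=2, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)        # benign vs. malicious

    def forward(self, tokens):                    # tokens: (batch, seq_len) int64
        pos = torch.arange(tokens.size(1), device=tokens.device)
        h = self.embed(tokens) + self.pos(pos)
        h = self.encoder(h)
        return self.head(h.mean(dim=1))           # mean-pool, then classify
```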
2101.06067
Jean-Pierre Sleiman
Jean-Pierre Sleiman, Farbod Farshidian, Marco Hutter
Constraint Handling in Continuous-Time DDP-Based Model Predictive Control
null
null
null
null
cs.RO cs.SY eess.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Sequential Linear Quadratic (SLQ) algorithm is a continuous-time variant of the well-known Differential Dynamic Programming (DDP) technique with a Gauss-Newton Hessian approximation. This family of methods has gained popularity in the robotics community due to its efficiency in solving complex trajectory optimization problems. However, one major drawback of DDP-based formulations is their inability to properly incorporate path constraints. In this paper, we address this issue by devising a constrained SLQ algorithm that handles a mixture of constraints with a previously implemented projection technique and a new augmented-Lagrangian approach. By providing an appropriate multiplier update law, and by solving a single inner and outer loop iteration, we are able to retrieve suboptimal solutions at rates suitable for real-time model-predictive control applications. We particularly focus on the inequality-constrained case, where three augmented-Lagrangian penalty functions are introduced, along with their corresponding multiplier update rules. These are then benchmarked against a relaxed log-barrier formulation in a cart-pole swing-up example, an obstacle-avoidance task, and an object-pushing task with a quadrupedal mobile manipulator.
[ { "created": "Fri, 15 Jan 2021 11:29:11 GMT", "version": "v1" }, { "created": "Fri, 26 Mar 2021 11:33:11 GMT", "version": "v2" } ]
2021-03-29
[ [ "Sleiman", "Jean-Pierre", "" ], [ "Farshidian", "Farbod", "" ], [ "Hutter", "Marco", "" ] ]
The Sequential Linear Quadratic (SLQ) algorithm is a continuous-time variant of the well-known Differential Dynamic Programming (DDP) technique with a Gauss-Newton Hessian approximation. This family of methods has gained popularity in the robotics community due to its efficiency in solving complex trajectory optimization problems. However, one major drawback of DDP-based formulations is their inability to properly incorporate path constraints. In this paper, we address this issue by devising a constrained SLQ algorithm that handles a mixture of constraints with a previously implemented projection technique and a new augmented-Lagrangian approach. By providing an appropriate multiplier update law, and by solving a single inner and outer loop iteration, we are able to retrieve suboptimal solutions at rates suitable for real-time model-predictive control applications. We particularly focus on the inequality-constrained case, where three augmented-Lagrangian penalty functions are introduced, along with their corresponding multiplier update rules. These are then benchmarked against a relaxed log-barrier formulation in a cart-pole swing-up example, an obstacle-avoidance task, and an object-pushing task with a quadrupedal mobile manipulator.
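One common way to realize the augmented-Lagrangian treatment of inequality constraints described above is the classical quadratic penalty with a clipped multiplier update, sketched below for constraints written as g(x) <= 0. The paper benchmarks three such penalty functions; this is only the standard one and not necessarily any of the three used there.

```python
import numpy as np

def al_penalty(g, lam, rho):
    # Classical augmented-Lagrangian penalty for inequality constraints g(x) <= 0:
    # sum_i [ max(0, lam_i + rho*g_i)^2 - lam_i^2 ] / (2*rho), added to the cost.
    return float(((np.maximum(0.0, lam + rho * g) ** 2 - lam ** 2)).sum() / (2.0 * rho))

def update_multipliers(g, lam, rho):
    # First-order multiplier update applied after each outer iteration.
    return np.maximum(0.0, lam + rho * g)
```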
2106.03614
Mo Zhou
Mo Zhou, Le Wang, Zhenxing Niu, Qilin Zhang, Nanning Zheng, Gang Hua
Adversarial Attack and Defense in Deep Ranking
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Neural Network classifiers are vulnerable to adversarial attack, where an imperceptible perturbation could result in misclassification. However, the vulnerability of DNN-based image ranking systems remains under-explored. In this paper, we propose two attacks against deep ranking systems, i.e., Candidate Attack and Query Attack, that can raise or lower the rank of chosen candidates by adversarial perturbations. Specifically, the expected ranking order is first represented as a set of inequalities, and then a triplet-like objective function is designed to obtain the optimal perturbation. Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks, where the model learns to prevent the positive and negative samples from being pulled close to each other by adversarial attacks. To comprehensively measure the empirical adversarial robustness of a ranking model with our defense, we propose an empirical robustness score, which involves a set of representative attacks against ranking models. Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets. Experimental results demonstrate that a typical deep ranking system can be effectively compromised by our attacks. Nevertheless, our defense can significantly improve the ranking system robustness, and simultaneously mitigate a wide range of attacks.
[ { "created": "Mon, 7 Jun 2021 13:41:45 GMT", "version": "v1" } ]
2021-06-08
[ [ "Zhou", "Mo", "" ], [ "Wang", "Le", "" ], [ "Niu", "Zhenxing", "" ], [ "Zhang", "Qilin", "" ], [ "Zheng", "Nanning", "" ], [ "Hua", "Gang", "" ] ]
Deep Neural Network classifiers are vulnerable to adversarial attack, where an imperceptible perturbation could result in misclassification. However, the vulnerability of DNN-based image ranking systems remains under-explored. In this paper, we propose two attacks against deep ranking systems, i.e., Candidate Attack and Query Attack, that can raise or lower the rank of chosen candidates by adversarial perturbations. Specifically, the expected ranking order is first represented as a set of inequalities, and then a triplet-like objective function is designed to obtain the optimal perturbation. Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks, where the model learns to prevent the positive and negative samples from being pulled close to each other by adversarial attacks. To comprehensively measure the empirical adversarial robustness of a ranking model with our defense, we propose an empirical robustness score, which involves a set of representative attacks against ranking models. Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets. Experimental results demonstrate that a typical deep ranking system can be effectively compromised by our attacks. Nevertheless, our defense can significantly improve the ranking system robustness, and simultaneously mitigate a wide range of attacks.
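To illustrate the "ranking order as inequalities" idea, here is a PGD-style sketch of a Candidate-Attack-like perturbation that pushes a candidate's embedding closer to the queries than competing candidates via triplet hinge terms. The embedding function `emb_fn`, budget `eps`, and step count are illustrative assumptions, not the paper's exact attack.

```python
import torch

def candidate_attack(emb_fn, candidate, queries, others, eps=8/255, steps=10):
    # Raise a candidate's rank: enforce d(q, c+delta) < min_o d(q, o) for each query q.
    delta = torch.zeros_like(candidate, requires_grad=True)
    for _ in range(steps):
        c = emb_fn(candidate + delta)
        loss = torch.zeros(())
        for q in queries:
            d_pos = (q - c).norm()
            d_neg = torch.stack([(q - emb_fn(o)).norm() for o in others]).min()
            loss = loss + torch.relu(d_pos - d_neg)   # hinge on each inequality
        loss.backward()
        with torch.no_grad():
            delta -= (eps / steps) * delta.grad.sign()  # signed gradient descent
            delta.clamp_(-eps, eps)                     # stay in the L-inf ball
            delta.grad.zero_()
    return (candidate + delta).detach()
```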
1904.05530
Xiang Ren
Woojeong Jin, Meng Qu, Xisen Jin, Xiang Ren
Recurrent Event Network: Autoregressive Structure Inference over Temporal Knowledge Graphs
15 pages, 8 figures, accepted at as full paper in EMNLP 2020
null
null
null
cs.LG cs.AI cs.CL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge graph reasoning is a critical task in natural language processing. The task becomes more challenging on temporal knowledge graphs, where each fact is associated with a timestamp. Most existing methods focus on reasoning at past timestamps and they are not able to predict facts happening in the future. This paper proposes Recurrent Event Network (RE-NET), a novel autoregressive architecture for predicting future interactions. The occurrence of a fact (event) is modeled as a probability distribution conditioned on temporal sequences of past knowledge graphs. Specifically, our RE-NET employs a recurrent event encoder to encode past facts and uses a neighborhood aggregator to model the connection of facts at the same timestamp. Future facts can then be inferred in a sequential manner based on the two modules. We evaluate our proposed method via link prediction at future times on five public datasets. Through extensive experiments, we demonstrate the strength of RE-NET, especially on multi-step inference over future timestamps, and achieve state-of-the-art performance on all five datasets. Code and data can be found at https://github.com/INK-USC/RE-Net.
[ { "created": "Thu, 11 Apr 2019 04:45:42 GMT", "version": "v1" }, { "created": "Tue, 4 Jun 2019 19:06:37 GMT", "version": "v2" }, { "created": "Tue, 8 Oct 2019 03:32:40 GMT", "version": "v3" }, { "created": "Tue, 6 Oct 2020 18:40:59 GMT", "version": "v4" } ]
2020-10-08
[ [ "Jin", "Woojeong", "" ], [ "Qu", "Meng", "" ], [ "Jin", "Xisen", "" ], [ "Ren", "Xiang", "" ] ]
Knowledge graph reasoning is a critical task in natural language processing. The task becomes more challenging on temporal knowledge graphs, where each fact is associated with a timestamp. Most existing methods focus on reasoning at past timestamps and they are not able to predict facts happening in the future. This paper proposes Recurrent Event Network (RE-NET), a novel autoregressive architecture for predicting future interactions. The occurrence of a fact (event) is modeled as a probability distribution conditioned on temporal sequences of past knowledge graphs. Specifically, our RE-NET employs a recurrent event encoder to encode past facts and uses a neighborhood aggregator to model the connection of facts at the same timestamp. Future facts can then be inferred in a sequential manner based on the two modules. We evaluate our proposed method via link prediction at future times on five public datasets. Through extensive experiments, we demonstrate the strength of RE-NET, especially on multi-step inference over future timestamps, and achieve state-of-the-art performance on all five datasets. Code and data can be found at https://github.com/INK-USC/RE-Net.
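A schematic of the two-module design: a per-timestamp neighborhood aggregator feeds a recurrent encoder, whose state scores candidate objects for the next step. This toy sketch uses mean aggregation and a GRU; the actual RE-NET aggregators and scoring function are richer, and all dimensions here are illustrative.

```python
import torch
import torch.nn as nn

class TinyRENet(nn.Module):
    # Schematic RE-NET-style scorer: mean-aggregate the objects interacting with
    # subject s at each past timestamp, encode the sequence with a GRU, then
    # score candidate objects for the query (s, r, ?) at the next timestamp.
    def __init__(self, n_ent, n_rel, dim=64):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(3 * dim, n_ent)

    def forward(self, s, r, neighbor_ids):        # neighbor_ids: (T, k) past objects
        agg = self.ent(neighbor_ids).mean(dim=1)  # (T, dim) per-timestamp summary
        _, h = self.gru(agg.unsqueeze(0))          # history state, shape (1, 1, dim)
        z = torch.cat([self.ent(s), self.rel(r), h.squeeze(0).squeeze(0)], dim=-1)
        return self.head(z)                        # logits over candidate objects
```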
2005.10296
Ajith Suresh
Nishat Koti, Mahak Pancholi, Arpita Patra, Ajith Suresh
SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning
This article is the full and extended version of an article to appear in USENIX Security 2021
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
Performing machine learning (ML) computation on private data while maintaining data privacy, aka Privacy-preserving Machine Learning~(PPML), is an emergent field of research. Recently, PPML has seen a visible shift towards the adoption of the Secure Outsourced Computation~(SOC) paradigm due to the heavy computation that it entails. In the SOC paradigm, computation is outsourced to a set of powerful and specially equipped servers that provide service on a pay-per-use basis. In this work, we propose SWIFT, a robust PPML framework for a range of ML algorithms in the SOC setting, that guarantees output delivery to the users irrespective of any adversarial behaviour. Robustness, a highly desirable feature, evokes user participation without the fear of denial of service. At the heart of our framework lies a highly-efficient, maliciously-secure, three-party computation (3PC) over rings that provides guaranteed output delivery (GOD) in the honest-majority setting. To the best of our knowledge, SWIFT is the first robust and efficient PPML framework in the 3PC setting. SWIFT is as fast as (and is strictly better in some cases than) the best-known 3PC framework BLAZE (Patra et al. NDSS'20), which only achieves fairness. We extend our 3PC framework for four parties (4PC). In this regime, SWIFT is as fast as the best-known fair 4PC framework Trident (Chaudhari et al. NDSS'20) and twice as fast as the best-known robust 4PC framework FLASH (Byali et al. PETS'20). We demonstrate our framework's practical relevance by benchmarking popular ML algorithms such as Logistic Regression and deep Neural Networks such as VGG16 and LeNet, both over a 64-bit ring in a WAN setting. For deep NN, our results testify to our claims that we provide improved security guarantees while incurring no additional overhead for 3PC and obtaining a 2x improvement for 4PC.
[ { "created": "Wed, 20 May 2020 18:20:23 GMT", "version": "v1" }, { "created": "Fri, 30 Oct 2020 08:26:09 GMT", "version": "v2" }, { "created": "Wed, 17 Feb 2021 08:47:28 GMT", "version": "v3" } ]
2021-02-18
[ [ "Koti", "Nishat", "" ], [ "Pancholi", "Mahak", "" ], [ "Patra", "Arpita", "" ], [ "Suresh", "Ajith", "" ] ]
Performing machine learning (ML) computation on private data while maintaining data privacy, aka Privacy-preserving Machine Learning~(PPML), is an emergent field of research. Recently, PPML has seen a visible shift towards the adoption of the Secure Outsourced Computation~(SOC) paradigm due to the heavy computation that it entails. In the SOC paradigm, computation is outsourced to a set of powerful and specially equipped servers that provide service on a pay-per-use basis. In this work, we propose SWIFT, a robust PPML framework for a range of ML algorithms in the SOC setting, that guarantees output delivery to the users irrespective of any adversarial behaviour. Robustness, a highly desirable feature, evokes user participation without the fear of denial of service. At the heart of our framework lies a highly-efficient, maliciously-secure, three-party computation (3PC) over rings that provides guaranteed output delivery (GOD) in the honest-majority setting. To the best of our knowledge, SWIFT is the first robust and efficient PPML framework in the 3PC setting. SWIFT is as fast as (and is strictly better in some cases than) the best-known 3PC framework BLAZE (Patra et al. NDSS'20), which only achieves fairness. We extend our 3PC framework for four parties (4PC). In this regime, SWIFT is as fast as the best-known fair 4PC framework Trident (Chaudhari et al. NDSS'20) and twice as fast as the best-known robust 4PC framework FLASH (Byali et al. PETS'20). We demonstrate our framework's practical relevance by benchmarking popular ML algorithms such as Logistic Regression and deep Neural Networks such as VGG16 and LeNet, both over a 64-bit ring in a WAN setting. For deep NN, our results testify to our claims that we provide improved security guarantees while incurring no additional overhead for 3PC and obtaining a 2x improvement for 4PC.
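As background for the "computation over a 64-bit ring" above, here is a minimal additive secret-sharing sketch over Z_{2^64}: any two shares reveal nothing, and linear operations are local. SWIFT's actual protocol uses a more sophisticated replicated sharing with extra machinery for malicious security and guaranteed output delivery; this only illustrates the ring arithmetic.

```python
import secrets

MOD = 2 ** 64  # computation over the ring Z_{2^64}, as in ring-based MPC

def share(x):
    # Split x into three additive shares, one per server.
    r1, r2 = secrets.randbelow(MOD), secrets.randbelow(MOD)
    return [r1, r2, (x - r1 - r2) % MOD]

def reconstruct(shares):
    return sum(shares) % MOD

def add_shares(a, b):
    # Addition is local: each server adds its own shares, no communication.
    return [(x + y) % MOD for x, y in zip(a, b)]

assert reconstruct(add_shares(share(7), share(35))) == 42
```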
2102.11749
Ewan Dunbar
Louis Fournier and Ewan Dunbar
Paraphrases do not explain word analogies
To appear in Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Many types of distributional word embeddings (weakly) encode linguistic regularities as directions (the difference between "jump" and "jumped" will be in a similar direction to that of "walk" and "walked," and so on). Several attempts have been made to explain this fact. We respond to Allen and Hospedales' recent (ICML, 2019) theoretical explanation, which claims that word2vec and GloVe will encode linguistic regularities whenever a specific relation of paraphrase holds between the four words involved in the regularity. We demonstrate that the explanation does not go through: the paraphrase relations needed under this explanation do not hold empirically.
[ { "created": "Tue, 23 Feb 2021 15:25:10 GMT", "version": "v1" } ]
2021-02-24
[ [ "Fournier", "Louis", "" ], [ "Dunbar", "Ewan", "" ] ]
Many types of distributional word embeddings (weakly) encode linguistic regularities as directions (the difference between "jump" and "jumped" will be in a similar direction to that of "walk" and "walked," and so on). Several attempts have been made to explain this fact. We respond to Allen and Hospedales' recent (ICML, 2019) theoretical explanation, which claims that word2vec and GloVe will encode linguistic regularities whenever a specific relation of paraphrase holds between the four words involved in the regularity. We demonstrate that the explanation does not go through: the paraphrase relations needed under this explanation do not hold empirically.
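The "regularities as directions" claim above is usually tested with the vector-offset analogy: find the word closest to b - a + c for "a : b :: c : ?". A small sketch follows; `emb` is a hypothetical dict mapping words to vectors (e.g., loaded from word2vec or GloVe).

```python
import numpy as np

def analogy(emb, a, b, c):
    # Return the vocabulary word whose vector has highest cosine similarity
    # to emb[b] - emb[a] + emb[c], excluding the three input words.
    target = emb[b] - emb[a] + emb[c]
    target /= np.linalg.norm(target)
    best, best_sim = None, -1.0
    for w, v in emb.items():
        if w in (a, b, c):
            continue
        sim = float(v @ target / np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# Expected behavior on good embeddings: analogy(emb, "walk", "walked", "jump")
# returns "jumped" -- the regularity whose theoretical explanation is disputed here.
```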
2009.10333
Aanchal Mongia
Aanchal Mongia, Stuti Jain, Emilie Chouzenoux and Angshul Majumda
DeepVir -- Graphical Deep Matrix Factorization for "In Silico" Antiviral Repositioning: Application to COVID-19
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/publicdomain/zero/1.0/
This work formulates antiviral repositioning as a matrix completion problem where the antiviral drugs are along the rows and the viruses along the columns. The input matrix is partially filled, with ones in positions where the antiviral has been known to be effective against a virus. The curated metadata for antivirals (chemical structure and pathways) and viruses (genomic structure and symptoms) is encoded into our matrix completion framework as graph Laplacian regularization. We then frame the resulting multiple graph regularized matrix completion problem as deep matrix factorization. This is solved by using a novel optimization method called HyPALM (Hybrid Proximal Alternating Linearized Minimization). Results on our curated RNA drug-virus association (DVA) dataset show that the proposed approach excels over state-of-the-art graph regularized matrix completion techniques. When applied to "in silico" prediction of antivirals for COVID-19, our approach returns antivirals that are either used for treating patients or are in trials for the same.
[ { "created": "Tue, 22 Sep 2020 05:57:03 GMT", "version": "v1" } ]
2020-09-23
[ [ "Mongia", "Aanchal", "" ], [ "Jain", "Stuti", "" ], [ "Chouzenoux", "Emilie", "" ], [ "Majumda", "Angshul", "" ] ]
This work formulates antiviral repositioning as a matrix completion problem where the antiviral drugs are along the rows and the viruses along the columns. The input matrix is partially filled, with ones in positions where the antiviral has been known to be effective against a virus. The curated metadata for antivirals (chemical structure and pathways) and viruses (genomic structure and symptoms) is encoded into our matrix completion framework as graph Laplacian regularization. We then frame the resulting multiple graph regularized matrix completion problem as deep matrix factorization. This is solved by using a novel optimization method called HyPALM (Hybrid Proximal Alternating Linearized Minimization). Results on our curated RNA drug-virus association (DVA) dataset show that the proposed approach excels over state-of-the-art graph regularized matrix completion techniques. When applied to "in silico" prediction of antivirals for COVID-19, our approach returns antivirals that are either used for treating patients or are in trials for the same.
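To see what "graph Laplacian regularization" buys here, consider a single-layer masked factorization with Laplacian penalties on both factors; drugs that are neighbors in the drug graph are pushed toward similar rows of A, and likewise for viruses via B. This is a stand-in for the paper's deep factorization and HyPALM solver, with all symbols illustrative.

```python
import numpy as np

def loss(X, M, A, B, La, Lb, lam_a, lam_b):
    # X: observed association matrix, M: binary mask of known entries,
    # A (drugs x k), B (viruses x k): factors, La/Lb: graph Laplacians.
    R = M * (X - A @ B.T)                       # error only on observed entries
    return (0.5 * np.sum(R ** 2)
            + lam_a * np.trace(A.T @ La @ A)    # smoothness over the drug graph
            + lam_b * np.trace(B.T @ Lb @ B))   # smoothness over the virus graph
```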
2401.01353
Ralph Ankele
Ralph Ankele, Hamed Haddadi
The Boomerang protocol: A Decentralised Privacy-Preserving Verifiable Incentive Protocol
fix formatting issue in abstract
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
In the era of data-driven economies, incentive systems and loyalty programs have become ubiquitous in various sectors, including advertising, retail, travel, and financial services. While these systems offer advantages for both users and companies, they necessitate the transfer and analysis of substantial amounts of sensitive data. Privacy concerns have become increasingly pertinent, necessitating the development of privacy-preserving incentive protocols. Despite the rising demand for secure and decentralized systems, the existing landscape lacks a comprehensive solution. We propose the Boomerang protocol, a novel decentralized privacy-preserving incentive protocol that leverages cryptographic black box accumulators to securely store user interactions within the incentive system. Moreover, the protocol employs zero-knowledge proofs based on BulletProofs to transparently compute rewards for users, ensuring verifiability while preserving their privacy. To further enhance public verifiability and transparency, we utilize a smart contract on a Layer 1 blockchain to verify these zero-knowledge proofs. The careful combination of black box accumulators with selected elliptic curves in the zero-knowledge proofs makes the Boomerang protocol highly efficient. Our proof-of-concept implementation shows that we can handle up to 23.6 million users per day, on a single-threaded backend server with financial costs of approximately 2 USD. Using the Solana blockchain we can handle 15.5 million users per day with approximate costs of 0.00011 USD per user. The Boomerang protocol represents a significant advancement in privacy-preserving incentive protocols, laying the groundwork for a more secure and privacy-centric future.
[ { "created": "Wed, 6 Dec 2023 09:37:45 GMT", "version": "v1" }, { "created": "Tue, 9 Jan 2024 17:27:33 GMT", "version": "v2" } ]
2024-01-11
[ [ "Ankele", "Ralph", "" ], [ "Haddadi", "Hamed", "" ] ]
In the era of data-driven economies, incentive systems and loyalty programs have become ubiquitous in various sectors, including advertising, retail, travel, and financial services. While these systems offer advantages for both users and companies, they necessitate the transfer and analysis of substantial amounts of sensitive data. Privacy concerns have become increasingly pertinent, necessitating the development of privacy-preserving incentive protocols. Despite the rising demand for secure and decentralized systems, the existing landscape lacks a comprehensive solution. We propose the Boomerang protocol, a novel decentralized privacy-preserving incentive protocol that leverages cryptographic black box accumulators to securely store user interactions within the incentive system. Moreover, the protocol employs zero-knowledge proofs based on BulletProofs to transparently compute rewards for users, ensuring verifiability while preserving their privacy. To further enhance public verifiability and transparency, we utilize a smart contract on a Layer 1 blockchain to verify these zero-knowledge proofs. The careful combination of black box accumulators with selected elliptic curves in the zero-knowledge proofs makes the Boomerang protocol highly efficient. Our proof-of-concept implementation shows that we can handle up to 23.6 million users per day, on a single-threaded backend server with financial costs of approximately 2 USD. Using the Solana blockchain we can handle 15.5 million users per day with approximate costs of 0.00011 USD per user. The Boomerang protocol represents a significant advancement in privacy-preserving incentive protocols, laying the groundwork for a more secure and privacy-centric future.
2310.16361
Zhiyu Chen
Besnik Fetahu, Zhiyu Chen, Oleg Rokhlenko, Shervin Malmasi
InstructPTS: Instruction-Tuning LLMs for Product Title Summarization
Accepted by EMNLP 2023 (Industry Track)
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
E-commerce product catalogs contain billions of items. Most products have lengthy titles, as sellers pack them with product attributes to improve retrieval, and highlight key product aspects. This results in a gap between such unnatural product titles and how customers refer to them. It also limits how e-commerce stores can use these seller-provided titles for recommendation, QA, or review summarization. Inspired by recent work on instruction-tuned LLMs, we present InstructPTS, a controllable approach for the task of Product Title Summarization (PTS). Trained using a novel instruction fine-tuning strategy, our approach is able to summarize product titles according to various criteria (e.g. number of words in a summary, inclusion of specific phrases, etc.). Extensive evaluation on a real-world e-commerce catalog shows that compared to simple fine-tuning of LLMs, our proposed approach can generate more accurate product name summaries, with an improvement of over 14 and 8 BLEU and ROUGE points, respectively.
[ { "created": "Wed, 25 Oct 2023 04:56:07 GMT", "version": "v1" } ]
2023-10-26
[ [ "Fetahu", "Besnik", "" ], [ "Chen", "Zhiyu", "" ], [ "Rokhlenko", "Oleg", "" ], [ "Malmasi", "Shervin", "" ] ]
E-commerce product catalogs contain billions of items. Most products have lengthy titles, as sellers pack them with product attributes to improve retrieval, and highlight key product aspects. This results in a gap between such unnatural product titles and how customers refer to them. It also limits how e-commerce stores can use these seller-provided titles for recommendation, QA, or review summarization. Inspired by recent work on instruction-tuned LLMs, we present InstructPTS, a controllable approach for the task of Product Title Summarization (PTS). Trained using a novel instruction fine-tuning strategy, our approach is able to summarize product titles according to various criteria (e.g. number of words in a summary, inclusion of specific phrases, etc.). Extensive evaluation on a real-world e-commerce catalog shows that compared to simple fine-tuning of LLMs, our proposed approach can generate more accurate product name summaries, with an improvement of over 14 and 8 BLEU and ROUGE points, respectively.
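The controllability above comes from encoding the summarization criteria into the instruction itself, so one model serves many constraints. Below is a hypothetical template of that kind; the prompt wording, field names, and constraint set are assumptions for illustration, not the paper's templates.

```python
def build_example(title, summary, max_words=None, must_include=None):
    # Hypothetical instruction template: the criteria become part of the prompt.
    parts = ["Summarize the following product title."]
    if max_words is not None:
        parts.append(f"Use at most {max_words} words.")
    if must_include:
        parts.append(f"Include the phrase: '{must_include}'.")
    prompt = " ".join(parts) + f"\nTitle: {title}\nSummary:"
    return {"prompt": prompt, "completion": " " + summary}

ex = build_example(
    "Acme Stainless Steel Insulated Water Bottle 32oz Leakproof BPA-Free Sports",
    "Acme 32oz insulated water bottle", max_words=6)
```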
2005.00820
Clara Meister
Clara Meister, Elizabeth Salesky, Ryan Cotterell
Generalized Entropy Regularization or: There's Nothing Special about Label Smoothing
Published as long paper at ACL 2020
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prior work has explored directly regularizing the output distributions of probabilistic models to alleviate peaky (i.e. over-confident) predictions, a common sign of overfitting. This class of techniques, of which label smoothing is one, has a connection to entropy regularization. Despite the consistent success of label smoothing across architectures and data sets in language generation tasks, two problems remain open: (1) there is little understanding of the underlying effects entropy regularizers have on models, and (2) the full space of entropy regularization techniques is largely unexplored. We introduce a parametric family of entropy regularizers, which includes label smoothing as a special case, and use it to gain a better understanding of the relationship between the entropy of a model and its performance on language generation tasks. We also find that variance in model performance can be explained largely by the resulting entropy of the model. Lastly, we find that label smoothing provably does not allow for sparsity in an output distribution, an undesirable property for language generation models, and therefore advise the use of other entropy regularization methods in its place.
[ { "created": "Sat, 2 May 2020 12:46:28 GMT", "version": "v1" }, { "created": "Tue, 12 May 2020 06:22:06 GMT", "version": "v2" } ]
2020-05-13
[ [ "Meister", "Clara", "" ], [ "Salesky", "Elizabeth", "" ], [ "Cotterell", "Ryan", "" ] ]
Prior work has explored directly regularizing the output distributions of probabilistic models to alleviate peaky (i.e. over-confident) predictions, a common sign of overfitting. This class of techniques, of which label smoothing is one, has a connection to entropy regularization. Despite the consistent success of label smoothing across architectures and data sets in language generation tasks, two problems remain open: (1) there is little understanding of the underlying effects entropy regularizers have on models, and (2) the full space of entropy regularization techniques is largely unexplored. We introduce a parametric family of entropy regularizers, which includes label smoothing as a special case, and use it to gain a better understanding of the relationship between the entropy of a model and its performance on language generation tasks. We also find that variance in model performance can be explained largely by the resulting entropy of the model. Lastly, we find that label smoothing provably does not allow for sparsity in an output distribution, an undesirable property for language generation models, and therefore advise the use of other entropy regularization methods in its place.
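As a concrete anchor for the discussion above, label smoothing mixes the one-hot target with the uniform distribution; because the smoothed target has full support, the loss diverges if any output probability goes to zero, which is exactly the no-sparsity property noted in the abstract. A minimal numpy sketch, with the standard mixing convention as an assumption:

```python
import numpy as np

def smoothed_cross_entropy(logits, label, alpha=0.1):
    # Target = (1 - alpha) * one_hot(label) + alpha * uniform.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    k = len(logits)
    target = np.full(k, alpha / k)
    target[label] += 1.0 - alpha
    # Every class carries nonzero target mass, so log(0) anywhere blows up the loss.
    return float(-(target * np.log(probs)).sum())
```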
2301.11608
Lecheng Kong
Lecheng Kong, Christopher King, Bradley Fritz, Yixin Chen
A Multi-View Joint Learning Framework for Embedding Clinical Codes and Text Using Graph Neural Networks
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning to represent free text is a core task in many clinical machine learning (ML) applications, as clinical text contains observations and plans not otherwise available for inference. State-of-the-art methods use large language models developed with immense computational resources and training data; however, applying these models is challenging because of the highly varying syntax and vocabulary in clinical free text. Structured information such as International Classification of Diseases (ICD) codes often succinctly abstracts the most important facts of a clinical encounter and yields good performance, but is often not as available as clinical text in real-world scenarios. We propose a \textbf{multi-view learning framework} that jointly learns from codes and text to combine the availability and forward-looking nature of text and the better performance of ICD codes. The learned text embeddings can be used as inputs to predictive algorithms independent of the ICD codes during inference. Our approach uses a Graph Neural Network (GNN) to process ICD codes, and Bi-LSTM to process text. We apply Deep Canonical Correlation Analysis (DCCA) to enforce the two views to learn a similar representation of each patient. In experiments using planned surgical procedure text, our model outperforms BERT models fine-tuned to clinical data, and in experiments using diverse text in MIMIC-III, our model is competitive to a fine-tuned BERT at a tiny fraction of its computational effort.
[ { "created": "Fri, 27 Jan 2023 09:19:03 GMT", "version": "v1" } ]
2023-01-30
[ [ "Kong", "Lecheng", "" ], [ "King", "Christopher", "" ], [ "Fritz", "Bradley", "" ], [ "Chen", "Yixin", "" ] ]
Learning to represent free text is a core task in many clinical machine learning (ML) applications, as clinical text contains observations and plans not otherwise available for inference. State-of-the-art methods use large language models developed with immense computational resources and training data; however, applying these models is challenging because of the highly varying syntax and vocabulary in clinical free text. Structured information such as International Classification of Diseases (ICD) codes often succinctly abstracts the most important facts of a clinical encounter and yields good performance, but is often not as available as clinical text in real-world scenarios. We propose a \textbf{multi-view learning framework} that jointly learns from codes and text to combine the availability and forward-looking nature of text and the better performance of ICD codes. The learned text embeddings can be used as inputs to predictive algorithms independent of the ICD codes during inference. Our approach uses a Graph Neural Network (GNN) to process ICD codes, and Bi-LSTM to process text. We apply Deep Canonical Correlation Analysis (DCCA) to enforce the two views to learn a similar representation of each patient. In experiments using planned surgical procedure text, our model outperforms BERT models fine-tuned to clinical data, and in experiments using diverse text in MIMIC-III, our model is competitive to a fine-tuned BERT at a tiny fraction of its computational effort.
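The DCCA objective mentioned above maximizes the total correlation between the two views' embeddings. Here is a numpy sketch of that quantity for fixed embedding batches; the ridge term `eps` and the use of all singular values are standard conventions, assumed rather than taken from the paper.

```python
import numpy as np

def dcca_objective(H1, H2, eps=1e-4):
    # H1, H2: (n samples x d features) embeddings of the two views.
    # Returns the negative total correlation, the loss DCCA minimizes.
    n = H1.shape[0]
    H1c, H2c = H1 - H1.mean(0), H2 - H2.mean(0)
    S11 = H1c.T @ H1c / (n - 1) + eps * np.eye(H1.shape[1])
    S22 = H2c.T @ H2c / (n - 1) + eps * np.eye(H2.shape[1])
    S12 = H1c.T @ H2c / (n - 1)

    def inv_sqrt(S):                      # symmetric inverse square root
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    return -np.linalg.svd(T, compute_uv=False).sum()  # minus sum of correlations
```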
2010.02035
Aksel Wilhelm Wold Eide
Aksel Wilhelm Wold Eide, Eilif Solberg, Ingebj{\o}rg K{\aa}sen
Sample weighting as an explanation for mode collapse in generative adversarial networks
41 pages, 21 figures, preprint
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative adversarial networks were introduced with a logistic MiniMax cost formulation, which normally fails to train due to saturation, and a Non-Saturating reformulation. While addressing the saturation problem, NS-GAN also inverts the generator's sample weighting, implicitly shifting emphasis from higher-scoring to lower-scoring samples when updating parameters. We present both theory and empirical results suggesting that this makes NS-GAN prone to mode dropping. We design MM-nsat, which preserves MM-GAN sample weighting while avoiding saturation by rescaling the MM-GAN minibatch gradient such that its magnitude approximates NS-GAN's gradient magnitude. MM-nsat has qualitatively different training dynamics, and on MNIST and CIFAR-10 it is stronger in terms of mode coverage, stability and FID. While the empirical results for MM-nsat are promising and favorable also in comparison with the LS-GAN and Hinge-GAN formulations, our main contribution is to show how and why NS-GAN's sample weighting causes mode dropping and training collapse.
[ { "created": "Mon, 5 Oct 2020 14:13:45 GMT", "version": "v1" } ]
2020-10-06
[ [ "Eide", "Aksel Wilhelm Wold", "" ], [ "Solberg", "Eilif", "" ], [ "Kåsen", "Ingebjørg", "" ] ]
Generative adversarial networks were introduced with a logistic MiniMax cost formulation, which normally fails to train due to saturation, and a Non-Saturating reformulation. While addressing the saturation problem, NS-GAN also inverts the generator's sample weighting, implicitly shifting emphasis from higher-scoring to lower-scoring samples when updating parameters. We present both theory and empirical results suggesting that this makes NS-GAN prone to mode dropping. We design MM-nsat, which preserves MM-GAN sample weighting while avoiding saturation by rescaling the MM-GAN minibatch gradient such that its magnitude approximates NS-GAN's gradient magnitude. MM-nsat has qualitatively different training dynamics, and on MNIST and CIFAR-10 it is stronger in terms of mode coverage, stability and FID. While the empirical results for MM-nsat are promising and favorable also in comparison with the LS-GAN and Hinge-GAN formulations, our main contribution is to show how and why NS-GAN's sample weighting causes mode dropping and training collapse.
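The inverted sample weighting above can be made explicit: with D = sigmoid(a) for logit a, the per-sample gradient magnitude of the MM generator loss log(1 - D) is D, while that of the NS loss -log(D) is 1 - D. So MM emphasizes high-scoring (realistic) samples and NS emphasizes low-scoring ones. The snippet tabulates this, and the final lines give a schematic of the MM-nsat rescaling; the actual method rescales the minibatch gradient vector, not scalar weights.

```python
import numpy as np

d = np.linspace(0.01, 0.99, 5)   # discriminator scores D(G(z)) for 5 fake samples

mm_weight = d                    # MM-GAN:  |d/da log(1 - D)| = D
ns_weight = 1.0 - d              # NS-GAN:  |d/da -log(D)|    = 1 - D
print(np.c_[d, mm_weight, ns_weight])

# MM-nsat idea (schematic): keep MM's relative per-sample weighting, but rescale
# so the overall batch gradient magnitude matches NS-GAN's, avoiding saturation.
mm_nsat_weight = mm_weight * (ns_weight.sum() / mm_weight.sum())
```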
2103.02907
Qibin Hou
Qibin Hou, Daquan Zhou, Jiashi Feng
Coordinate Attention for Efficient Mobile Network Design
CVPR2021
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent studies on mobile network design have demonstrated the remarkable effectiveness of channel attention (e.g., the Squeeze-and-Excitation attention) for lifting model performance, but they generally neglect the positional information, which is important for generating spatially selective attention maps. In this paper, we propose a novel attention mechanism for mobile networks by embedding positional information into channel attention, which we call "coordinate attention". Unlike channel attention that transforms a feature tensor to a single feature vector via 2D global pooling, the coordinate attention factorizes channel attention into two 1D feature encoding processes that aggregate features along the two spatial directions, respectively. In this way, long-range dependencies can be captured along one spatial direction and meanwhile precise positional information can be preserved along the other spatial direction. The resulting feature maps are then encoded separately into a pair of direction-aware and position-sensitive attention maps that can be complementarily applied to the input feature map to augment the representations of the objects of interest. Our coordinate attention is simple and can be flexibly plugged into classic mobile networks, such as MobileNetV2, MobileNeXt, and EfficientNet with nearly no computational overhead. Extensive experiments demonstrate that our coordinate attention is not only beneficial to ImageNet classification but more interestingly, behaves better in downstream tasks, such as object detection and semantic segmentation. Code is available at https://github.com/Andrew-Qibin/CoordAttention.
[ { "created": "Thu, 4 Mar 2021 09:18:02 GMT", "version": "v1" } ]
2021-03-05
[ [ "Hou", "Qibin", "" ], [ "Zhou", "Daquan", "" ], [ "Feng", "Jiashi", "" ] ]
Recent studies on mobile network design have demonstrated the remarkable effectiveness of channel attention (e.g., the Squeeze-and-Excitation attention) for lifting model performance, but they generally neglect the positional information, which is important for generating spatially selective attention maps. In this paper, we propose a novel attention mechanism for mobile networks by embedding positional information into channel attention, which we call "coordinate attention". Unlike channel attention that transforms a feature tensor to a single feature vector via 2D global pooling, the coordinate attention factorizes channel attention into two 1D feature encoding processes that aggregate features along the two spatial directions, respectively. In this way, long-range dependencies can be captured along one spatial direction and meanwhile precise positional information can be preserved along the other spatial direction. The resulting feature maps are then encoded separately into a pair of direction-aware and position-sensitive attention maps that can be complementarily applied to the input feature map to augment the representations of the objects of interest. Our coordinate attention is simple and can be flexibly plugged into classic mobile networks, such as MobileNetV2, MobileNeXt, and EfficientNet with nearly no computational overhead. Extensive experiments demonstrate that our coordinate attention is not only beneficial to ImageNet classification but more interestingly, behaves better in downstream tasks, such as object detection and semantic segmentation. Code is available at https://github.com/Andrew-Qibin/CoordAttention.
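A simplified PyTorch sketch of the factorized pooling-encoding-splitting pipeline described above; mean pooling stands in for adaptive average pooling, and the reduction ratio, activation, and normalization choices here are simplifications of the released implementation.

```python
import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    # Pool along H and W separately, encode jointly, then split into two
    # 1D attention maps (one per spatial direction) that gate the input.
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                     # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2))) # (n, c, 1, w)
        return x * a_h * a_w                                      # gated features
```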
2308.16584
Zhongtao Jiang
Zhongtao Jiang, Yuanzhe Zhang, Yiming Ju, and Kang Liu
Unsupervised Text Style Transfer with Deep Generative Models
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a general framework for unsupervised text style transfer with deep generative models. The framework models each sentence-label pair in the non-parallel corpus as partially observed from a complete quadruplet which additionally contains two latent codes representing the content and style, respectively. These codes are learned by exploiting dependencies inside the observed data. Then a sentence is transferred by manipulating them. Our framework is able to unify previous embedding and prototype methods as two special forms. It also provides a principled perspective to explain previously proposed techniques in the field such as aligned encoder and adversarial training. We further conduct experiments on three benchmarks. Both automatic and human evaluation results show that our methods achieve better or competitive results compared to several strong baselines.
[ { "created": "Thu, 31 Aug 2023 09:29:35 GMT", "version": "v1" } ]
2023-09-01
[ [ "Jiang", "Zhongtao", "" ], [ "Zhang", "Yuanzhe", "" ], [ "Ju", "Yiming", "" ], [ "Liu", "Kang", "" ] ]
We present a general framework for unsupervised text style transfer with deep generative models. The framework models each sentence-label pair in the non-parallel corpus as partially observed from a complete quadruplet which additionally contains two latent codes representing the content and style, respectively. These codes are learned by exploiting dependencies inside the observed data. Then a sentence is transferred by manipulating them. Our framework is able to unify previous embedding and prototype methods as two special forms. It also provides a principled perspective to explain previously proposed techniques in the field such as aligned encoder and adversarial training. We further conduct experiments on three benchmarks. Both automatic and human evaluation results show that our methods achieve better or competitive results compared to several strong baselines.
2201.09081
Steve Huntsman
Steve Huntsman
Physical geometry of channel degradation
null
null
10.1109/CISS56502.2023.10089672
null
cs.IT cond-mat.stat-mech math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We outline a geometrical correspondence between capacity and effective free energy minima of discrete memoryless channels. This correspondence informs the behavior of a timescale that is important in effective statistical physics.
[ { "created": "Sat, 22 Jan 2022 15:36:59 GMT", "version": "v1" } ]
2023-04-18
[ [ "Huntsman", "Steve", "" ] ]
We outline a geometrical correspondence between capacity and effective free energy minima of discrete memoryless channels. This correspondence informs the behavior of a timescale that is important in effective statistical physics.
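For the capacity side of the correspondence above, capacity of a discrete memoryless channel is classically computed with the Blahut-Arimoto iteration; a compact numpy version follows (standard algorithm, results in nats, with a uniform initial input distribution as the usual convention).

```python
import numpy as np

def kl_rows(W, q):
    # D(W[x, :] || q) for every input symbol x, handling zero entries of W.
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(W > 0, W * np.log(W / q), 0.0)
    return t.sum(axis=1)

def blahut_arimoto(W, iters=500):
    # W: row-stochastic transition matrix W[x, y] = P(y | x).
    p = np.full(W.shape[0], 1.0 / W.shape[0])
    for _ in range(iters):
        d = kl_rows(W, p @ W)
        p = p * np.exp(d)      # multiplicative update toward the capacity-achieving input
        p /= p.sum()
    return float(p @ kl_rows(W, p @ W)), p   # (capacity in nats, optimal input)

C, p_star = blahut_arimoto(np.array([[0.9, 0.1], [0.2, 0.8]]))
```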
1609.08531
Anirban Bhattacharyya
Anirban Bhattacharyya and Andrey Mokhov and Ken Pierce
An Empirical Comparison of Formalisms for Modelling and Analysis of Dynamic Reconfiguration of Dependable Systems
84 pages including 4 appendices, journal paper
null
null
null
cs.SE cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper uses a case study to evaluate empirically three formalisms of different kinds for their suitability for the modelling and analysis of dynamic reconfiguration of dependable systems. The requirements on an ideal formalism for dynamic software reconfiguration are defined. The reconfiguration of an office workflow for order processing is described, and the requirements on the reconfiguration of the workflow are defined. The workflow is modelled using the Vienna Development Method ($\mathrm{VDM}$), conditional partial order graphs ($\mathrm{CPOGs}$), and the basic Calculus of Communicating Systems for dynamic process reconfiguration (basic $\mathrm{CCS^{dp}}$), and verification of the reconfiguration requirements is attempted using the models. The formalisms are evaluated according to their ability to model the reconfiguration of the workflow, to verify the requirements on the workflow's reconfiguration, and to meet the requirements on an ideal formalism.
[ { "created": "Tue, 27 Sep 2016 16:59:50 GMT", "version": "v1" } ]
2016-09-28
[ [ "Bhattacharyya", "Anirban", "" ], [ "Mokhov", "Andrey", "" ], [ "Pierce", "Ken", "" ] ]
This paper uses a case study to evaluate empirically three formalisms of different kinds for their suitability for the modelling and analysis of dynamic reconfiguration of dependable systems. The requirements on an ideal formalism for dynamic software reconfiguration are defined. The reconfiguration of an office workflow for order processing is described, and the requirements on the reconfiguration of the workflow are defined. The workflow is modelled using the Vienna Development Method ($\mathrm{VDM}$), conditional partial order graphs ($\mathrm{CPOGs}$), and the basic Calculus of Communicating Systems for dynamic process reconfiguration (basic $\mathrm{CCS^{dp}}$), and verification of the reconfiguration requirements is attempted using the models. The formalisms are evaluated according to their ability to model the reconfiguration of the workflow, to verify the requirements on the workflow's reconfiguration, and to meet the requirements on an ideal formalism.
2405.11537
Mikhail Konenkov
Mikhail Konenkov, Artem Lykov, Daria Trinitatova, Dzmitry Tsetserukou
VR-GPT: Visual Language Model for Intelligent Virtual Reality Applications
Updated version
null
null
null
cs.RO cs.AI cs.ET
http://creativecommons.org/licenses/by-nc-nd/4.0/
The advent of immersive Virtual Reality applications has transformed various domains, yet their integration with advanced artificial intelligence technologies like Visual Language Models remains underexplored. This study introduces a pioneering approach utilizing VLMs within VR environments to enhance user interaction and task efficiency. Leveraging the Unity engine and a custom-developed VLM, our system facilitates real-time, intuitive user interactions through natural language processing, without relying on visual text instructions. The incorporation of speech-to-text and text-to-speech technologies allows for seamless communication between the user and the VLM, enabling the system to guide users through complex tasks effectively. Preliminary experimental results indicate that utilizing VLMs not only reduces task completion times but also improves user comfort and task engagement compared to traditional VR interaction methods.
[ { "created": "Sun, 19 May 2024 12:56:00 GMT", "version": "v1" }, { "created": "Thu, 11 Jul 2024 07:46:14 GMT", "version": "v2" }, { "created": "Sat, 3 Aug 2024 10:19:54 GMT", "version": "v3" } ]
2024-08-06
[ [ "Konenkov", "Mikhail", "" ], [ "Lykov", "Artem", "" ], [ "Trinitatova", "Daria", "" ], [ "Tsetserukou", "Dzmitry", "" ] ]
The advent of immersive Virtual Reality applications has transformed various domains, yet their integration with advanced artificial intelligence technologies like Visual Language Models remains underexplored. This study introduces a pioneering approach utilizing VLMs within VR environments to enhance user interaction and task efficiency. Leveraging the Unity engine and a custom-developed VLM, our system facilitates real-time, intuitive user interactions through natural language processing, without relying on visual text instructions. The incorporation of speech-to-text and text-to-speech technologies allows for seamless communication between the user and the VLM, enabling the system to guide users through complex tasks effectively. Preliminary experimental results indicate that utilizing VLMs not only reduces task completion times but also improves user comfort and task engagement compared to traditional VR interaction methods.
2303.11011
Xinglong Luo
Xinglong Luo, Kunming Luo, Ao Luo, Zhengning Wang, Ping Tan, Shuaicheng Liu
Learning Optical Flow from Event Camera with Rendered Dataset
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of estimating optical flow from event cameras. One important issue is how to build a high-quality event-flow dataset with accurate event values and flow labels. Previous datasets are created by either capturing real scenes with event cameras or synthesizing from images with pasted foreground objects. The former case can produce real event values but with calculated flow labels, which are sparse and inaccurate. The latter case can generate dense flow labels, but the interpolated events are prone to errors. In this work, we propose to render a physically correct event-flow dataset using computer graphics models. In particular, we first create indoor and outdoor 3D scenes by Blender with rich scene content variations. Second, diverse camera motions are included for the virtual capture, producing images and accurate flow labels. Third, we render high-framerate videos between images for accurate events. The rendered dataset can adjust the density of events, based on which we further introduce an adaptive density module (ADM). Experiments show that our proposed dataset can facilitate event-flow learning, whereas previous approaches when trained on our dataset can improve their performances constantly by a relatively large margin. In addition, event-flow pipelines when equipped with our ADM can further improve performances.
[ { "created": "Mon, 20 Mar 2023 10:44:32 GMT", "version": "v1" } ]
2023-03-21
[ [ "Luo", "Xinglong", "" ], [ "Luo", "Kunming", "" ], [ "Luo", "Ao", "" ], [ "Wang", "Zhengning", "" ], [ "Tan", "Ping", "" ], [ "Liu", "Shuaicheng", "" ] ]
We study the problem of estimating optical flow from event cameras. One important issue is how to build a high-quality event-flow dataset with accurate event values and flow labels. Previous datasets are created by either capturing real scenes with event cameras or synthesizing from images with pasted foreground objects. The former case can produce real event values but with calculated flow labels, which are sparse and inaccurate. The latter case can generate dense flow labels, but the interpolated events are prone to errors. In this work, we propose to render a physically correct event-flow dataset using computer graphics models. In particular, we first create indoor and outdoor 3D scenes by Blender with rich scene content variations. Second, diverse camera motions are included for the virtual capture, producing images and accurate flow labels. Third, we render high-framerate videos between images for accurate events. The rendered dataset can adjust the density of events, based on which we further introduce an adaptive density module (ADM). Experiments show that our proposed dataset can facilitate event-flow learning, whereas previous approaches when trained on our dataset can improve their performances constantly by a relatively large margin. In addition, event-flow pipelines when equipped with our ADM can further improve performances.
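Generating events from high-framerate rendered video typically follows the idealized contrast-threshold model: a pixel fires whenever its log-intensity changes by more than a threshold C, and lowering C raises event density (the knob the dataset exposes). The sketch below is a simplification that emits at most one event per pixel per frame; real simulators interpolate and can emit several.

```python
import numpy as np

def frames_to_events(frames, times, C=0.2):
    # frames: list of 2D intensity arrays at high framerate; times: timestamps.
    ref = np.log(frames[0] + 1e-6)                 # per-pixel log-intensity reference
    events = []
    for f, t in zip(frames[1:], times[1:]):
        logf = np.log(f + 1e-6)
        diff = logf - ref
        ys, xs = np.nonzero(np.abs(diff) > C)      # pixels crossing the threshold
        for y, x in zip(ys, xs):
            events.append((t, x, y, float(np.sign(diff[y, x]))))
            ref[y, x] = logf[y, x]                 # reset reference at fired pixels
    return events                                  # (timestamp, x, y, polarity)
```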
1908.06724
Shreyas Kolala Venkataramanaiah
Shreyas Kolala Venkataramanaiah, Yufei Ma, Shihui Yin, Eriko Nurvithadhi, Aravind Dasu, Yu Cao, Jae-sun Seo
Automatic Compiler Based FPGA Accelerator for CNN Training
6 pages, 9 figures, paper accepted at FPL2019 conference
null
null
null
cs.LG cs.NE eess.SP
http://creativecommons.org/licenses/by/4.0/
Training of convolutional neural networks (CNNs) on embedded platforms to support on-device learning has gained vital importance in recent years. Designing flexible training hardware is much more challenging than inference hardware, due to design complexity and large computation/memory requirements. In this work, we present an automatic compiler-based FPGA accelerator with 16-bit fixed-point precision for complete CNN training, including Forward Pass (FP), Backward Pass (BP) and Weight Update (WU). We implemented an optimized RTL library to perform training-specific tasks and developed an RTL compiler to automatically generate FPGA-synthesizable RTL based on user-defined constraints. We present a new cyclic weight storage/access scheme for on-chip BRAM and off-chip DRAM to efficiently implement non-transpose and transpose operations during the FP and BP phases, respectively. Representative CNNs for the CIFAR-10 dataset are implemented and trained on an Intel Stratix 10-GX FPGA using the proposed hardware architecture, demonstrating up to 479 GOPS performance.
[ { "created": "Thu, 15 Aug 2019 18:49:38 GMT", "version": "v1" } ]
2019-08-20
[ [ "Venkataramanaiah", "Shreyas Kolala", "" ], [ "Ma", "Yufei", "" ], [ "Yin", "Shihui", "" ], [ "Nurvithadhi", "Eriko", "" ], [ "Dasu", "Aravind", "" ], [ "Cao", "Yu", "" ], [ "Seo", "Jae-sun", "" ] ]
Training of convolutional neural networks (CNNs) on embedded platforms to support on-device learning has gained vital importance in recent years. Designing flexible training hardware is much more challenging than inference hardware, due to design complexity and large computation/memory requirements. In this work, we present an automatic compiler-based FPGA accelerator with 16-bit fixed-point precision for complete CNN training, including Forward Pass (FP), Backward Pass (BP) and Weight Update (WU). We implemented an optimized RTL library to perform training-specific tasks and developed an RTL compiler to automatically generate FPGA-synthesizable RTL based on user-defined constraints. We present a new cyclic weight storage/access scheme for on-chip BRAM and off-chip DRAM to efficiently implement non-transpose and transpose operations during the FP and BP phases, respectively. Representative CNNs for the CIFAR-10 dataset are implemented and trained on an Intel Stratix 10-GX FPGA using the proposed hardware architecture, demonstrating up to 479 GOPS performance.
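The cyclic storage idea above can be illustrated with classic skewed banking: placing element (i, j) of an N x N weight tile in bank (i + j) mod N lets both a full row (needed in FP) and a full column (the transpose access needed in BP) touch all N banks without conflicts. The paper's exact scheme may differ; this Python model only demonstrates the access pattern.

```python
import numpy as np

def cyclic_store(W):
    # Element (i, j) goes to bank (i + j) % N at in-bank address j.
    N = W.shape[0]
    banks = np.empty_like(W)
    for i in range(N):
        for j in range(N):
            banks[(i + j) % N, j] = W[i, j]
    return banks

def read_row(banks, i):      # FP access: row i hits N distinct banks
    N = banks.shape[0]
    return np.array([banks[(i + j) % N, j] for j in range(N)])

def read_col(banks, j):      # BP (transpose) access: column j also hits N distinct banks
    N = banks.shape[0]
    return np.array([banks[(i + j) % N, j] for i in range(N)])

W = np.arange(16.0).reshape(4, 4)
B = cyclic_store(W)
assert np.allclose(read_row(B, 2), W[2]) and np.allclose(read_col(B, 1), W[:, 1])
```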
2012.13190
Yves Rychener
Yves Rychener, Xavier Renard, Djam\'e Seddah, Pascal Frossard, Marcin Detyniecki
QUACKIE: A NLP Classification Task With Ground Truth Explanations
null
null
null
null
cs.CL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
NLP Interpretability aims to increase trust in model predictions. This makes evaluating interpretability approaches a pressing issue. There are multiple datasets for evaluating NLP Interpretability, but their dependence on human-provided ground truths raises questions about their unbiasedness. In this work, we take a different approach and formulate a specific classification task by diverting question-answering datasets. For this custom classification task, the interpretability ground truth arises directly from the definition of the classification problem. We use this method to propose a benchmark and lay the groundwork for future research in NLP interpretability by evaluating a wide range of current state-of-the-art methods.
[ { "created": "Thu, 24 Dec 2020 10:43:20 GMT", "version": "v1" }, { "created": "Sun, 27 Dec 2020 18:04:17 GMT", "version": "v2" } ]
2020-12-29
[ [ "Rychener", "Yves", "" ], [ "Renard", "Xavier", "" ], [ "Seddah", "Djamé", "" ], [ "Frossard", "Pascal", "" ], [ "Detyniecki", "Marcin", "" ] ]
NLP Interpretability aims to increase trust in model predictions. This makes evaluating interpretability approaches a pressing issue. There are multiple datasets for evaluating NLP Interpretability, but their dependence on human-provided ground truths raises questions about their unbiasedness. In this work, we take a different approach and formulate a specific classification task by repurposing question-answering datasets. For this custom classification task, the interpretability ground truth arises directly from the definition of the classification problem. We use this method to propose a benchmark and lay the groundwork for future research in NLP interpretability by evaluating a wide range of current state-of-the-art methods.
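Editor's note: the construction described above can be sketched in a few lines of Python. The field names and the rule "the answer-bearing sentence is the ground-truth explanation" are an illustrative reading of the abstract, not the benchmark's actual schema.

```python
# Hypothetical repurposing of a QA example into a classification task whose
# interpretability ground truth is the set of answer-bearing sentences.
def make_classification_example(context_sentences, answer_text):
    rationale = [i for i, s in enumerate(context_sentences) if answer_text in s]
    return {"text": " ".join(context_sentences),
            "label": int(bool(rationale)),      # 1 iff the context answers the question
            "rationale_sentences": rationale}   # ground-truth explanation indices

ex = make_classification_example(
    ["The Eiffel Tower is in Paris.", "It was completed in 1889."], "1889")
print(ex)  # label 1, rationale [1]
```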
2307.06013
Li Cai
Li Cai, Xin Mao, Youshao Xiao, Changxu Wu, Man Lan
An Effective and Efficient Time-aware Entity Alignment Framework via Two-aspect Three-view Label Propagation
Accepted by IJCAI 2023
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Entity alignment (EA) aims to find the equivalent entity pairs between different knowledge graphs (KGs), which is crucial for promoting knowledge fusion. With the wide use of temporal knowledge graphs (TKGs), time-aware EA (TEA) methods have emerged to enhance EA. Existing TEA models are based on Graph Neural Networks (GNNs) and achieve state-of-the-art (SOTA) performance, but it is difficult to transfer them to large-scale TKGs due to the scalability issues of GNNs. In this paper, we propose an effective and efficient non-neural EA framework between TKGs, namely LightTEA, which consists of four essential components: (1) Two-aspect Three-view Label Propagation, (2) Sparse Similarity with Temporal Constraints, (3) Sinkhorn Operator, and (4) Temporal Iterative Learning. All of these modules work together to improve the performance of EA while reducing the time consumption of the model. Extensive experiments on public datasets indicate that our proposed model significantly outperforms the SOTA methods for EA between TKGs, and the time consumed by LightTEA is only dozens of seconds at most, no more than 10% of that of the most efficient TEA method.
[ { "created": "Wed, 12 Jul 2023 08:51:20 GMT", "version": "v1" } ]
2023-07-13
[ [ "Cai", "Li", "" ], [ "Mao", "Xin", "" ], [ "Xiao", "Youshao", "" ], [ "Wu", "Changxu", "" ], [ "Lan", "Man", "" ] ]
Entity alignment (EA) aims to find the equivalent entity pairs between different knowledge graphs (KGs), which is crucial for promoting knowledge fusion. With the wide use of temporal knowledge graphs (TKGs), time-aware EA (TEA) methods have emerged to enhance EA. Existing TEA models are based on Graph Neural Networks (GNNs) and achieve state-of-the-art (SOTA) performance, but it is difficult to transfer them to large-scale TKGs due to the scalability issues of GNNs. In this paper, we propose an effective and efficient non-neural EA framework between TKGs, namely LightTEA, which consists of four essential components: (1) Two-aspect Three-view Label Propagation, (2) Sparse Similarity with Temporal Constraints, (3) Sinkhorn Operator, and (4) Temporal Iterative Learning. All of these modules work together to improve the performance of EA while reducing the time consumption of the model. Extensive experiments on public datasets indicate that our proposed model significantly outperforms the SOTA methods for EA between TKGs, and the time consumed by LightTEA is only dozens of seconds at most, no more than 10% of that of the most efficient TEA method.
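Editor's note: of the four components, the Sinkhorn Operator is the most self-contained, so we add a generic sketch of it here. The temperature-free exponentiation and the iteration count are our choices; the paper's exact usage may differ.

```python
import numpy as np

def sinkhorn(S, n_iters=20):
    """Alternately normalize rows and columns so the matrix approaches a
    doubly stochastic one, turning raw entity similarities into a soft
    one-to-one alignment between two entity sets."""
    P = np.exp(S)                             # positive matrix from similarities
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)     # row normalization
        P /= P.sum(axis=0, keepdims=True)     # column normalization
    return P

P = sinkhorn(np.random.randn(4, 4))
print(P.sum(axis=0), P.sum(axis=1))           # both close to all-ones
```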
2311.16406
Arman Roohi
Sepehr Tabrizchi, Shaahin Angizi, Arman Roohi
DIAC: Design Exploration of Intermittent-Aware Computing Realizing Batteryless Systems
6 pages, to appear in Design, Automation and Test in Europe Conference 2024
null
null
null
cs.AR cs.ET
http://creativecommons.org/licenses/by-nc-sa/4.0/
Battery-powered IoT devices face challenges like cost, maintenance, and environmental sustainability, prompting the emergence of batteryless energy-harvesting systems that harness ambient sources. However, their intermittent behavior can disrupt program execution and cause data loss, leading to unpredictable outcomes. While prior studies have employed conventional checkpoint methods and intricate programming paradigms to address these pitfalls, this paper proposes an innovative systematic methodology, namely DIAC. The DIAC synthesis procedure enhances the performance and efficiency of intermittent computing systems, with a focus on maximizing forward progress and minimizing the energy overhead imposed by distinct memory arrays for backup. Then, a finite-state machine is delineated, encapsulating the core operations of an IoT node: sense, compute, transmit, and sleep states. First, we validate the robustness and functionalities of a DIAC-based design in the presence of power disruptions. DIAC is then applied to a wide range of benchmarks, including ISCAS-89, MCNC, and ITC-99. The simulation results substantiate the power-delay-product (PDP) benefits. For example, results for complex MCNC benchmarks indicate a PDP improvement of 61%, 56%, and 38% on average compared to three alternative techniques, evaluated at 45 nm.
[ { "created": "Tue, 28 Nov 2023 01:18:30 GMT", "version": "v1" } ]
2023-11-29
[ [ "Tabrizchi", "Sepehr", "" ], [ "Angizi", "Shaahin", "" ], [ "Roohi", "Arman", "" ] ]
Battery-powered IoT devices face challenges like cost, maintenance, and environmental sustainability, prompting the emergence of batteryless energy-harvesting systems that harness ambient sources. However, their intermittent behavior can disrupt program execution and cause data loss, leading to unpredictable outcomes. While prior studies have employed conventional checkpoint methods and intricate programming paradigms to address these pitfalls, this paper proposes an innovative systematic methodology, namely DIAC. The DIAC synthesis procedure enhances the performance and efficiency of intermittent computing systems, with a focus on maximizing forward progress and minimizing the energy overhead imposed by distinct memory arrays for backup. Then, a finite-state machine is delineated, encapsulating the core operations of an IoT node: sense, compute, transmit, and sleep states. First, we validate the robustness and functionalities of a DIAC-based design in the presence of power disruptions. DIAC is then applied to a wide range of benchmarks, including ISCAS-89, MCNC, and ITC-99. The simulation results substantiate the power-delay-product (PDP) benefits. For example, results for complex MCNC benchmarks indicate a PDP improvement of 61%, 56%, and 38% on average compared to three alternative techniques, evaluated at 45 nm.
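Editor's note: the sense/compute/transmit/sleep finite-state machine lends itself to a toy simulation. The checkpointing-by-index scheme and the power-loss probability below are our own illustration of forward progress under intermittency, not DIAC's actual mechanism.

```python
import random

STATES = ["sense", "compute", "transmit", "sleep"]

def run(cycles=8, p_power_loss=0.25):
    state_idx = 0                   # assumed to be restored from non-volatile memory
    for _ in range(cycles):
        if random.random() < p_power_loss:
            print("power lost; resuming at", STATES[state_idx])
            continue                # the checkpointed index preserves forward progress
        print("executing", STATES[state_idx])
        state_idx = (state_idx + 1) % len(STATES)   # checkpoint after each state

run()
```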
2201.00180
Mohammadhossein Ghahramani
Mohammadhossein Ghahramani, Mengchu Zhou, Anna Molter, Francesco Pilla
IoT-based Route Recommendation for an Intelligent Waste Management System
11
null
10.1109/JIOT.2021.3132126
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The Internet of Things (IoT) is a paradigm characterized by a network of embedded sensors and services. These sensors are deployed to collect various kinds of information, track physical conditions, e.g., waste bins' status, and exchange data with different centralized platforms. The need for such sensors is increasing; however, the proliferation of these technologies comes with various challenges. For example, how can IoT and its associated data be used to enhance waste management? In smart cities, an efficient waste management system is crucial. Artificial Intelligence (AI) and IoT-enabled approaches can empower cities to manage waste collection. This work proposes an intelligent approach to route recommendation in an IoT-enabled waste management system given spatial constraints. It performs a thorough analysis based on AI-based methods and compares their corresponding results. Our solution is based on a multiple-level decision-making process in which bins' status and coordinates are taken into account to address the routing problem. Such AI-based models can help engineers design a sustainable infrastructure system.
[ { "created": "Sat, 1 Jan 2022 12:36:22 GMT", "version": "v1" } ]
2022-01-04
[ [ "Ghahramani", "Mohammadhossein", "" ], [ "Zhou", "Mengchu", "" ], [ "Molter", "Anna", "" ], [ "Pilla", "Francesco", "" ] ]
The Internet of Things (IoT) is a paradigm characterized by a network of embedded sensors and services. These sensors are deployed to collect various kinds of information, track physical conditions, e.g., waste bins' status, and exchange data with different centralized platforms. The need for such sensors is increasing; however, the proliferation of these technologies comes with various challenges. For example, how can IoT and its associated data be used to enhance waste management? In smart cities, an efficient waste management system is crucial. Artificial Intelligence (AI) and IoT-enabled approaches can empower cities to manage waste collection. This work proposes an intelligent approach to route recommendation in an IoT-enabled waste management system given spatial constraints. It performs a thorough analysis based on AI-based methods and compares their corresponding results. Our solution is based on a multiple-level decision-making process in which bins' status and coordinates are taken into account to address the routing problem. Such AI-based models can help engineers design a sustainable infrastructure system.
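Editor's note: as a concrete (and deliberately simple) stand-in for the multiple-level decision process described above, the sketch below routes a collection vehicle through sufficiently full bins in nearest-neighbor order. The threshold and data layout are ours.

```python
import math

def greedy_route(depot, bins, fill_threshold=0.7):
    """Visit full-enough bins in nearest-neighbor order; a toy baseline,
    not the paper's AI-based model."""
    todo = [b for b in bins if b["fill"] >= fill_threshold]
    route, pos = [], depot
    while todo:
        nxt = min(todo, key=lambda b: math.dist(pos, b["xy"]))
        route.append(nxt["id"]); pos = nxt["xy"]; todo.remove(nxt)
    return route

bins = [{"id": 1, "xy": (0, 2), "fill": 0.9},
        {"id": 2, "xy": (5, 1), "fill": 0.4},
        {"id": 3, "xy": (1, 4), "fill": 0.8}]
print(greedy_route((0, 0), bins))   # [1, 3]; bin 2 skipped as not full enough
```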
1107.0919
Markus Lohrey
Stefan G\"oller (University of Bremen), Markus Lohrey (University of Leipzig)
The First-Order Theory of Ground Tree Rewrite Graphs
accepted for Logical Methods in Computer Science
Logical Methods in Computer Science, Volume 10, Issue 1 (February 12, 2014) lmcs:1223
10.2168/LMCS-10(1:7)2014
null
cs.LO cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove that the complexity of the uniform first-order theory of ground tree rewrite graphs is in ATIME(2^{2^{poly(n)}},O(n)). Providing a matching lower bound, we show that there is some fixed ground tree rewrite graph whose first-order theory is hard for ATIME(2^{2^{poly(n)}},poly(n)) with respect to logspace reductions. Finally, we prove that there exists a fixed ground tree rewrite graph together with a single unary predicate in the form of a regular tree language such that the resulting structure has a non-elementary first-order theory.
[ { "created": "Tue, 5 Jul 2011 16:32:12 GMT", "version": "v1" }, { "created": "Wed, 6 Jul 2011 22:30:54 GMT", "version": "v2" }, { "created": "Wed, 8 Jan 2014 09:22:16 GMT", "version": "v3" }, { "created": "Mon, 10 Feb 2014 10:39:12 GMT", "version": "v4" } ]
2015-07-01
[ [ "Göller", "Stefan", "", "University of Bremen" ], [ "Lohrey", "Markus", "", "University of\n Leipzig" ] ]
We prove that the complexity of the uniform first-order theory of ground tree rewrite graphs is in ATIME(2^{2^{poly(n)}},O(n)). Providing a matching lower bound, we show that there is some fixed ground tree rewrite graph whose first-order theory is hard for ATIME(2^{2^{poly(n)}},poly(n)) with respect to logspace reductions. Finally, we prove that there exists a fixed ground tree rewrite graph together with a single unary predicate in the form of a regular tree language such that the resulting structure has a non-elementary first-order theory.
2006.09108
Jinghua Yu
Jinghua Yu, Stefan Wagner, Feng Luo
An STPA-based Approach for Systematic Security Analysis of In-vehicle Diagnostic and Software Update Systems
6 pages, 7 figures, submitted to FISITA 2020 World Congress
null
null
F2020-VES-020, FISITA Web Congress 2020
cs.CR cs.SE cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The in-vehicle diagnostic and software update system, which supports remote diagnostics and Over-The-Air (OTA) software updates, is a critical attack target in automobiles. Adversaries can inject malicious software into vehicles or steal sensitive information through communication channels. Therefore, security analysis, which identifies potential security issues, needs to be conducted during system design. However, existing security analyses of in-vehicle systems are threat-oriented, starting with threat identification and assessing risks by brainstorming. In this paper, a system-oriented approach is proposed on the basis of the System-Theoretic Process Analysis (STPA). The proposed approach extends the original STPA from the perspective of data flows and is applicable to information-flow-based systems. Besides, we propose a general model for in-vehicle diagnostic and software update systems and use it to establish a security analysis guideline. In comparison with threat-oriented approaches, the proposed approach shifts the focus from threats to system vulnerabilities and appears effective in protecting the system from known and even unknown threats. Furthermore, as an extension of the STPA, which has been proven to be applicable to high-level designs, the proposed approach can be well integrated into high-level analyses and support co-design across different disciplines within a unified STPA framework.
[ { "created": "Tue, 16 Jun 2020 12:34:17 GMT", "version": "v1" } ]
2020-12-01
[ [ "Yu", "Jinghua", "" ], [ "Wagner", "Stefan", "" ], [ "Luo", "Feng", "" ] ]
The in-vehicle diagnostic and software update system, which supports remote diagnostics and Over-The-Air (OTA) software updates, is a critical attack target in automobiles. Adversaries can inject malicious software into vehicles or steal sensitive information through communication channels. Therefore, security analysis, which identifies potential security issues, needs to be conducted during system design. However, existing security analyses of in-vehicle systems are threat-oriented, starting with threat identification and assessing risks by brainstorming. In this paper, a system-oriented approach is proposed on the basis of the System-Theoretic Process Analysis (STPA). The proposed approach extends the original STPA from the perspective of data flows and is applicable to information-flow-based systems. Besides, we propose a general model for in-vehicle diagnostic and software update systems and use it to establish a security analysis guideline. In comparison with threat-oriented approaches, the proposed approach shifts the focus from threats to system vulnerabilities and appears effective in protecting the system from known and even unknown threats. Furthermore, as an extension of the STPA, which has been proven to be applicable to high-level designs, the proposed approach can be well integrated into high-level analyses and support co-design across different disciplines within a unified STPA framework.
2306.07084
Karin Festl
Karin Festl, Patrick Promitzer, Daniel Watzenig, Huilin Yin
Performance of Graph Database Management Systems as route planning solutions for different data and usage characteristics
Submitted to IEEE IAVVC 2023
null
null
null
cs.DB cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph databases have grown in popularity in recent years as they are able to efficiently store and query complex relationships between data. Incidentally, navigation data and road networks can be processed, sampled, or modified efficiently when stored as a graph. As a result, graph databases are a solution for route planning tasks that is attracting increasing attention from developers of autonomous vehicles. To achieve a computational performance that enables route planning on large road networks or for a great number of agents concurrently, several aspects need to be considered in the design of the solution. Based on a concrete use case for centralized route planning, we discuss the characteristics and properties of a use case that can significantly influence the computational effort or efficiency of the database management system. Subsequently, we evaluate the performance of both Neo4j and ArangoDB depending on these properties. With these results, it is not only possible to choose the most suitable database system but also to improve the resulting performance by addressing relevant aspects in the design of the application.
[ { "created": "Mon, 12 Jun 2023 12:55:09 GMT", "version": "v1" } ]
2023-06-13
[ [ "Festl", "Karin", "" ], [ "Promitzer", "Patrick", "" ], [ "Watzenig", "Daniel", "" ], [ "Yin", "Huilin", "" ] ]
Graph databases have grown in popularity in recent years as they are able to efficiently store and query complex relationships between data. Incidentally, navigation data and road networks can be processed, sampled, or modified efficiently when stored as a graph. As a result, graph databases are a solution for route planning tasks that is attracting increasing attention from developers of autonomous vehicles. To achieve a computational performance that enables route planning on large road networks or for a great number of agents concurrently, several aspects need to be considered in the design of the solution. Based on a concrete use case for centralized route planning, we discuss the characteristics and properties of a use case that can significantly influence the computational effort or efficiency of the database management system. Subsequently, we evaluate the performance of both Neo4j and ArangoDB depending on these properties. With these results, it is not only possible to choose the most suitable database system but also to improve the resulting performance by addressing relevant aspects in the design of the application.
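Editor's note: a minimal route query against one of the evaluated systems, using the official neo4j Python driver. The URI, credentials, and the :Junction/:ROAD schema are placeholders we introduce for illustration; the paper's actual data model may differ.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (a:Junction {id: $src}), (b:Junction {id: $dst}),
      p = shortestPath((a)-[:ROAD*]-(b))
RETURN [n IN nodes(p) | n.id] AS route
"""

with driver.session() as session:
    record = session.run(query, src=1, dst=42).single()  # one row expected
    print(record["route"] if record else "no route found")

driver.close()
```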
1901.08618
Shantanu Sharma
Nisha Panwar, Shantanu Sharma, Guoxi Wang, Sharad Mehrotra, Nalini Venkatasubramanian
Verifiable Round-Robin Scheme for Smart Homes
Accepted in ACM Conference on Data and Application Security and Privacy (CODASPY), 2019. 12 pages
null
10.1145/3292006.3300043
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advances in sensing, networking, and actuation technologies have resulted in the IoT wave that is expected to revolutionize all aspects of modern society. This paper focuses on the new privacy challenges that arise in IoT in the context of smart homes. Specifically, the paper focuses on protecting the user's privacy against inferences drawn from channel and in-home device activities. We propose a method for securely scheduling the devices while decoupling the device and channel activities. The proposed solution avoids any attacks that may reveal the coordinated schedule of the devices, and hence also assures that inferences that may compromise an individual's privacy are not leaked due to device and channel level activities. Our experiments also validate the proposed approach, and consequently, an adversary cannot infer device and channel activities by just observing the network traffic.
[ { "created": "Thu, 24 Jan 2019 19:20:22 GMT", "version": "v1" } ]
2019-01-28
[ [ "Panwar", "Nisha", "" ], [ "Sharma", "Shantanu", "" ], [ "Wang", "Guoxi", "" ], [ "Mehrotra", "Sharad", "" ], [ "Venkatasubramanian", "Nalini", "" ] ]
Advances in sensing, networking, and actuation technologies have resulted in the IoT wave that is expected to revolutionize all aspects of modern society. This paper focuses on the new privacy challenges that arise in IoT in the context of smart homes. Specifically, the paper focuses on protecting the user's privacy against inferences drawn from channel and in-home device activities. We propose a method for securely scheduling the devices while decoupling the device and channel activities. The proposed solution avoids any attacks that may reveal the coordinated schedule of the devices, and hence also assures that inferences that may compromise an individual's privacy are not leaked due to device and channel level activities. Our experiments also validate the proposed approach, and consequently, an adversary cannot infer device and channel activities by just observing the network traffic.
2205.00385
Jichao Yin
Jichao Yin and Hu Wang and Shuhao Li and Daozhen Guo
An efficient topology optimization method based on adaptive reanalysis with projection reduction
42 pages, 32 figures
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An efficient topology optimization method based on adaptive auxiliary reduced model reanalysis (AARMR) is proposed to improve computational efficiency and scale. In this method, a projection auxiliary reduced model (PARM) is integrated into the combined approximation reduced model (CARM) to reduce the dimension of the model in different aspects. First, the CARM restricts the solution space to avoid large matrix factorizations. Second, the PARM is proposed to construct the CARM dynamically to save computational cost. Furthermore, a multi-grid conjugate gradient method is suggested to update the PARM adaptively. Finally, several classic numerical examples are tested to show that the proposed method not only significantly improves computational efficiency, but can also solve large-scale problems that are difficult to solve with direct solvers due to memory limitations.
[ { "created": "Sun, 1 May 2022 02:55:02 GMT", "version": "v1" }, { "created": "Tue, 3 Jan 2023 07:29:47 GMT", "version": "v2" } ]
2023-01-04
[ [ "Yin", "Jichao", "" ], [ "Wang", "Hu", "" ], [ "Li", "Shuhao", "" ], [ "Guo", "Daozhen", "" ] ]
An efficient topology optimization method based on adaptive auxiliary reduced model reanalysis (AARMR) is proposed to improve computational efficiency and scale. In this method, a projection auxiliary reduced model (PARM) is integrated into the combined approximation reduced model (CARM) to reduce the dimension of the model in different aspects. First, the CARM restricts the solution space to avoid large matrix factorizations. Second, the PARM is proposed to construct the CARM dynamically to save computational cost. Furthermore, a multi-grid conjugate gradient method is suggested to update the PARM adaptively. Finally, several classic numerical examples are tested to show that the proposed method not only significantly improves computational efficiency, but can also solve large-scale problems that are difficult to solve with direct solvers due to memory limitations.
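Editor's note: the core saving of reduced-model reanalysis is solving a small projected system instead of factorizing the full one. The sketch below shows a generic Galerkin projection with NumPy; it is not the paper's exact CARM/PARM construction.

```python
import numpy as np

def reduced_solve(K, f, V):
    """Approximate K u = f by projecting onto the column space of a reduced
    basis V, avoiding a full factorization of K."""
    Kr = V.T @ K @ V                             # small reduced system
    fr = V.T @ f
    return V @ np.linalg.solve(Kr, fr)

n, k = 200, 10
K = np.diag(np.linspace(1.0, 5.0, n))            # toy SPD stiffness matrix
f = np.ones(n)
V, _ = np.linalg.qr(np.random.randn(n, k))       # orthonormal reduced basis
u = reduced_solve(K, f, V)
print(np.linalg.norm(K @ u - f))                 # residual of the reduced solution
```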
1403.7022
Hengjun Zhao
Jiang Liu and Naijun Zhan and Hengjun Zhao and Liang Zou
Abstraction of Elementary Hybrid Systems by Variable Transformation
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Elementary hybrid systems (EHSs) are those hybrid systems (HSs) containing elementary functions such as exp, ln, sin, cos, etc. EHSs are very common in practice, especially in safety-critical domains. Due to the non-polynomial expressions which lead to undecidable arithmetic, verification of EHSs is very hard. Existing approaches based on partition of state space or over-approximation of reachable sets suffer from state explosion or inflation of numerical errors. In this paper, we propose a symbolic abstraction approach that reduces EHSs to polynomial hybrid systems (PHSs), by replacing all non-polynomial terms with newly introduced variables. Thus the verification of EHSs is reduced to the one of PHSs, enabling us to apply all the well-established verification techniques and tools for PHSs to EHSs. In this way, it is possible to avoid the limitations of many existing methods. We illustrate the abstraction approach and its application in safety verification of EHSs by several real world examples.
[ { "created": "Thu, 27 Mar 2014 13:38:12 GMT", "version": "v1" }, { "created": "Mon, 20 Oct 2014 11:43:22 GMT", "version": "v2" }, { "created": "Tue, 13 Jan 2015 09:09:07 GMT", "version": "v3" }, { "created": "Wed, 14 Jan 2015 06:33:56 GMT", "version": "v4" } ]
2015-01-15
[ [ "Liu", "Jiang", "" ], [ "Zhan", "Naijun", "" ], [ "Zhao", "Hengjun", "" ], [ "Zou", "Liang", "" ] ]
Elementary hybrid systems (EHSs) are those hybrid systems (HSs) containing elementary functions such as exp, ln, sin, cos, etc. EHSs are very common in practice, especially in safety-critical domains. Due to the non-polynomial expressions which lead to undecidable arithmetic, verification of EHSs is very hard. Existing approaches based on partition of state space or over-approximation of reachable sets suffer from state explosion or inflation of numerical errors. In this paper, we propose a symbolic abstraction approach that reduces EHSs to polynomial hybrid systems (PHSs), by replacing all non-polynomial terms with newly introduced variables. Thus the verification of EHSs is reduced to the one of PHSs, enabling us to apply all the well-established verification techniques and tools for PHSs to EHSs. In this way, it is possible to avoid the limitations of many existing methods. We illustrate the abstraction approach and its application in safety verification of EHSs by several real world examples.
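Editor's note: the substitution step at the heart of this abstraction is easy to demonstrate with SymPy. A complete abstraction would also derive differential equations for the fresh variables; the snippet shows only the term replacement, with variable names of our choosing.

```python
import sympy as sp

x = sp.symbols('x')
w1, w2 = sp.symbols('w1 w2')          # fresh variables standing in for exp(x), sin(x)

rhs = x**2 + sp.exp(x) * sp.sin(x)    # an elementary, non-polynomial dynamics term
poly_rhs = rhs.subs({sp.exp(x): w1, sp.sin(x): w2})
print(poly_rhs)                       # x**2 + w1*w2, polynomial in (x, w1, w2)
```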
2406.16449
Mingrui Wu
Mingrui Wu, Jiayi Ji, Oucheng Huang, Jiale Li, Yuhang Wu, Xiaoshuai Sun, Rongrong Ji
Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models
ICML2024; Project Page:https://github.com/mrwu-mac/R-Bench
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The issue of hallucinations is a prevalent concern in existing Large Vision-Language Models (LVLMs). Previous efforts have primarily focused on investigating object hallucinations, which can be easily alleviated by introducing object detectors. However, these efforts neglect hallucinations in inter-object relationships, which are essential for visual comprehension. In this work, we introduce R-Bench, a novel benchmark for evaluating Vision Relationship Hallucination. R-Bench features image-level questions that focus on the existence of relationships and instance-level questions that assess local visual comprehension. We identify three types of relationship co-occurrences that lead to hallucinations: relationship-relationship, subject-relationship, and relationship-object. The visual instruction tuning dataset's long-tail distribution significantly impacts LVLMs' understanding of visual relationships. Furthermore, our analysis reveals that current LVLMs tend to disregard visual content and overly rely on the common sense knowledge of Large Language Models. They also struggle with reasoning about spatial relationships based on contextual information.
[ { "created": "Mon, 24 Jun 2024 08:42:42 GMT", "version": "v1" }, { "created": "Wed, 3 Jul 2024 03:02:35 GMT", "version": "v2" }, { "created": "Thu, 11 Jul 2024 06:48:39 GMT", "version": "v3" }, { "created": "Thu, 18 Jul 2024 04:39:29 GMT", "version": "v4" } ]
2024-07-19
[ [ "Wu", "Mingrui", "" ], [ "Ji", "Jiayi", "" ], [ "Huang", "Oucheng", "" ], [ "Li", "Jiale", "" ], [ "Wu", "Yuhang", "" ], [ "Sun", "Xiaoshuai", "" ], [ "Ji", "Rongrong", "" ] ]
The issue of hallucinations is a prevalent concern in existing Large Vision-Language Models (LVLMs). Previous efforts have primarily focused on investigating object hallucinations, which can be easily alleviated by introducing object detectors. However, these efforts neglect hallucinations in inter-object relationships, which are essential for visual comprehension. In this work, we introduce R-Bench, a novel benchmark for evaluating Vision Relationship Hallucination. R-Bench features image-level questions that focus on the existence of relationships and instance-level questions that assess local visual comprehension. We identify three types of relationship co-occurrences that lead to hallucinations: relationship-relationship, subject-relationship, and relationship-object. The visual instruction tuning dataset's long-tail distribution significantly impacts LVLMs' understanding of visual relationships. Furthermore, our analysis reveals that current LVLMs tend to disregard visual content and overly rely on the common sense knowledge of Large Language Models. They also struggle with reasoning about spatial relationships based on contextual information.
1909.13516
Yao Wan
Yao Wan, Jingdong Shu, Yulei Sui, Guandong Xu, Zhou Zhao, Jian Wu and Philip S. Yu
Multi-Modal Attention Network Learning for Semantic Source Code Retrieval
null
null
null
null
cs.SE cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Code retrieval techniques and tools have been playing a key role in helping software developers retrieve existing code fragments from available open-source repositories given a user query. Despite the existing efforts in improving the effectiveness of code retrieval, there are still two main issues hindering them from being used to accurately retrieve satisfiable code fragments from large-scale repositories when answering complicated queries. First, the existing approaches only consider shallow features of source code such as method names and code tokens, while ignoring structured features such as abstract syntax trees (ASTs) and control-flow graphs (CFGs) of source code, which contain rich and well-defined semantics. Second, although deep learning-based approaches perform well on the representation of source code, they lack explainability, making it hard to interpret the retrieval results and almost impossible to understand which features of source code contribute more to the final results. To tackle the two aforementioned issues, this paper proposes MMAN, a novel Multi-Modal Attention Network for semantic source code retrieval. A comprehensive multi-modal representation is developed for representing the unstructured and structured features of source code, with one LSTM for the sequential tokens of code, a Tree-LSTM for the AST of code and a GGNN (Gated Graph Neural Network) for the CFG of code. Furthermore, a multi-modal attention fusion layer is applied to assign weights to different parts of each modality of source code and then integrate them into a single hybrid representation. Comprehensive experiments and analysis on a large-scale real-world dataset show that our proposed model can accurately retrieve code snippets and outperforms the state-of-the-art methods.
[ { "created": "Mon, 30 Sep 2019 08:35:04 GMT", "version": "v1" } ]
2019-10-01
[ [ "Wan", "Yao", "" ], [ "Shu", "Jingdong", "" ], [ "Sui", "Yulei", "" ], [ "Xu", "Guandong", "" ], [ "Zhao", "Zhou", "" ], [ "Wu", "Jian", "" ], [ "Yu", "Philip S.", "" ] ]
Code retrieval techniques and tools have been playing a key role in helping software developers retrieve existing code fragments from available open-source repositories given a user query. Despite the existing efforts in improving the effectiveness of code retrieval, there are still two main issues hindering them from being used to accurately retrieve satisfiable code fragments from large-scale repositories when answering complicated queries. First, the existing approaches only consider shallow features of source code such as method names and code tokens, while ignoring structured features such as abstract syntax trees (ASTs) and control-flow graphs (CFGs) of source code, which contain rich and well-defined semantics. Second, although deep learning-based approaches perform well on the representation of source code, they lack explainability, making it hard to interpret the retrieval results and almost impossible to understand which features of source code contribute more to the final results. To tackle the two aforementioned issues, this paper proposes MMAN, a novel Multi-Modal Attention Network for semantic source code retrieval. A comprehensive multi-modal representation is developed for representing the unstructured and structured features of source code, with one LSTM for the sequential tokens of code, a Tree-LSTM for the AST of code and a GGNN (Gated Graph Neural Network) for the CFG of code. Furthermore, a multi-modal attention fusion layer is applied to assign weights to different parts of each modality of source code and then integrate them into a single hybrid representation. Comprehensive experiments and analysis on a large-scale real-world dataset show that our proposed model can accurately retrieve code snippets and outperforms the state-of-the-art methods.
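Editor's note: the fusion step can be sketched generically in PyTorch as learned attention weights over per-modality vectors. This is a plain attention-pooling layer under our own assumptions, not MMAN's published architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Weight per-modality encodings (e.g., token, AST, and CFG vectors)
    with learned attention and sum them into one hybrid representation."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, modalities):                    # list of (batch, dim) tensors
        stacked = torch.stack(modalities, dim=1)      # (batch, M, dim)
        weights = torch.softmax(self.score(stacked), dim=1)
        return (weights * stacked).sum(dim=1)         # (batch, dim)

fusion = AttentionFusion(dim=128)
mods = [torch.randn(4, 128) for _ in range(3)]        # three modality encodings
print(fusion(mods).shape)                             # torch.Size([4, 128])
```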
2406.18330
Matan Halfon
Matan Halfon, Eyal Rozenberg, Ehud Rivlin, Daniel Freedman
Molecular Diffusion Models with Virtual Receptors
null
https://neurips.cc/virtual/2023/77389
null
null
cs.LG q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Machine learning approaches to Structure-Based Drug Design (SBDD) have proven quite fertile over the last few years. In particular, diffusion-based approaches to SBDD have shown great promise. We present a technique which expands on this diffusion approach in two crucial ways. First, we address the size disparity between the drug molecule and the target/receptor, which makes learning more challenging and inference slower. We do so through the notion of a Virtual Receptor, which is a compressed version of the receptor; it is learned so as to preserve key aspects of the structural information of the original receptor, while respecting the relevant group equivariance. Second, we incorporate a protein language embedding used originally in the context of protein folding. We experimentally demonstrate the contributions of both the virtual receptors and the protein embeddings: in practice, they lead to both better performance, as well as significantly faster computations.
[ { "created": "Wed, 26 Jun 2024 13:18:42 GMT", "version": "v1" } ]
2024-07-01
[ [ "Halfon", "Matan", "" ], [ "Rozenberg", "Eyal", "" ], [ "Rivlin", "Ehud", "" ], [ "Freedman", "Daniel", "" ] ]
Machine learning approaches to Structure-Based Drug Design (SBDD) have proven quite fertile over the last few years. In particular, diffusion-based approaches to SBDD have shown great promise. We present a technique which expands on this diffusion approach in two crucial ways. First, we address the size disparity between the drug molecule and the target/receptor, which makes learning more challenging and inference slower. We do so through the notion of a Virtual Receptor, which is a compressed version of the receptor; it is learned so as to preserve key aspects of the structural information of the original receptor, while respecting the relevant group equivariance. Second, we incorporate a protein language embedding used originally in the context of protein folding. We experimentally demonstrate the contributions of both the virtual receptors and the protein embeddings: in practice, they lead to both better performance, as well as significantly faster computations.
2102.05638
Zach Wood-Doughty
Zach Wood-Doughty, Ilya Shpitser, Mark Dredze
Generating Synthetic Text Data to Evaluate Causal Inference Methods
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Drawing causal conclusions from observational data requires making assumptions about the true data-generating process. Causal inference research typically considers low-dimensional data, such as categorical or numerical fields in structured medical records. High-dimensional and unstructured data such as natural language complicates the evaluation of causal inference methods; such evaluations rely on synthetic datasets with known causal effects. Models for natural language generation have been widely studied and perform well empirically. However, existing methods are not immediately applicable to producing synthetic datasets for causal evaluations, as they do not allow for quantifying a causal effect on the text itself. In this work, we develop a framework for adapting existing generation models to produce synthetic text datasets with known causal effects. We use this framework to perform an empirical comparison of four recently-proposed methods for estimating causal effects from text data. We release our code and synthetic datasets.
[ { "created": "Wed, 10 Feb 2021 18:53:11 GMT", "version": "v1" } ]
2021-02-11
[ [ "Wood-Doughty", "Zach", "" ], [ "Shpitser", "Ilya", "" ], [ "Dredze", "Mark", "" ] ]
Drawing causal conclusions from observational data requires making assumptions about the true data-generating process. Causal inference research typically considers low-dimensional data, such as categorical or numerical fields in structured medical records. High-dimensional and unstructured data such as natural language complicates the evaluation of causal inference methods; such evaluations rely on synthetic datasets with known causal effects. Models for natural language generation have been widely studied and perform well empirically. However, existing methods are not immediately applicable to producing synthetic datasets for causal evaluations, as they do not allow for quantifying a causal effect on the text itself. In this work, we develop a framework for adapting existing generation models to produce synthetic text datasets with known causal effects. We use this framework to perform an empirical comparison of four recently-proposed methods for estimating causal effects from text data. We release our code and synthetic datasets.
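Editor's note: the key property, a known causal effect baked into generated text, can be shown with a toy generator. The template, effect size, and label model below are entirely ours; the paper adapts full neural generation models instead.

```python
import random

def sample(treated, effect=0.3, base=0.5):
    label = int(random.random() < base + effect * treated)    # known effect on the label
    word = "wonderful" if treated else "ordinary"
    text = f"The service was {word} and the staff was polite."
    return text, treated, label

random.seed(0)
data = [sample(random.random() < 0.5) for _ in range(10000)]
t1 = [y for _, t, y in data if t]
t0 = [y for _, t, y in data if not t]
print(sum(t1) / len(t1) - sum(t0) / len(t0))   # empirical ATE, close to 0.3
```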
2103.09583
Stefan Ohrhallinger
Stefan Ohrhallinger and Jiju Peethambaran and Amal D. Parakkat and Tamal K. Dey and Ramanathan Muthuganapathy
2D Points Curve Reconstruction Survey and Benchmark
24 pages, 22 figures, 5 tables
null
null
null
cs.GR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Curve reconstruction from unstructured points in a plane is a fundamental problem with many applications that has generated research interest for decades. Involved aspects like handling open, sharp, multiple, and non-manifold outlines, run-time and provability, as well as potential extension to 3D for surface reconstruction, have led to many different algorithms. We survey the literature on 2D curve reconstruction and then present an open-source benchmark for experimental study. Our unprecedented evaluation of a selected set of planar curve reconstruction algorithms aims to give an overview of both quantitative analysis and qualitative aspects, helping users to select the right algorithm for specific problems in the field. Our benchmark framework is available online to permit reproducing the results and easy integration of new algorithms.
[ { "created": "Wed, 17 Mar 2021 11:55:43 GMT", "version": "v1" } ]
2021-03-18
[ [ "Ohrhallinger", "Stefan", "" ], [ "Peethambaran", "Jiju", "" ], [ "Parakkat", "Amal D.", "" ], [ "Dey", "Tamal K.", "" ], [ "Muthuganapathy", "Ramanathan", "" ] ]
Curve reconstruction from unstructured points in a plane is a fundamental problem with many applications that has generated research interest for decades. Involved aspects like handling open, sharp, multiple, and non-manifold outlines, run-time and provability, as well as potential extension to 3D for surface reconstruction, have led to many different algorithms. We survey the literature on 2D curve reconstruction and then present an open-source benchmark for experimental study. Our unprecedented evaluation of a selected set of planar curve reconstruction algorithms aims to give an overview of both quantitative analysis and qualitative aspects, helping users to select the right algorithm for specific problems in the field. Our benchmark framework is available online to permit reproducing the results and easy integration of new algorithms.
2210.07547
Songyang Gao
Songyang Gao, Shihan Dou, Qi Zhang, Xuanjing Huang
Kernel-Whitening: Overcome Dataset Bias with Isotropic Sentence Embedding
Accepted by EMNLP2022
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Dataset bias has attracted increasing attention recently for its detrimental effect on the generalization ability of fine-tuned models. The current mainstream solution is designing an additional shallow model to pre-identify biased instances. However, such two-stage methods scale up the computational complexity of the training process and obstruct valid feature information while mitigating bias. To address this issue, we utilize a representation normalization method which aims at disentangling the correlations between features of encoded sentences. We find it also promising for eliminating the bias problem by providing an isotropic data distribution. We further propose Kernel-Whitening, a Nystrom kernel approximation method, to achieve more thorough debiasing of nonlinear spurious correlations. Our framework is end-to-end with similar time consumption to fine-tuning. Experiments show that Kernel-Whitening significantly improves the performance of BERT on out-of-distribution datasets while maintaining in-distribution accuracy.
[ { "created": "Fri, 14 Oct 2022 05:56:38 GMT", "version": "v1" } ]
2022-10-17
[ [ "Gao", "Songyang", "" ], [ "Dou", "Shihan", "" ], [ "Zhang", "Qi", "" ], [ "Huang", "Xuanjing", "" ] ]
Dataset bias has attracted increasing attention recently for its detrimental effect on the generalization ability of fine-tuned models. The current mainstream solution is designing an additional shallow model to pre-identify biased instances. However, such two-stage methods scale up the computational complexity of the training process and obstruct valid feature information while mitigating bias. To address this issue, we utilize a representation normalization method which aims at disentangling the correlations between features of encoded sentences. We find it also promising for eliminating the bias problem by providing an isotropic data distribution. We further propose Kernel-Whitening, a Nystrom kernel approximation method, to achieve more thorough debiasing of nonlinear spurious correlations. Our framework is end-to-end with similar time consumption to fine-tuning. Experiments show that Kernel-Whitening significantly improves the performance of BERT on out-of-distribution datasets while maintaining in-distribution accuracy.
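Editor's note: plain linear (ZCA) whitening of sentence embeddings already illustrates the isotropy idea; Kernel-Whitening additionally uses a Nystrom kernel approximation to handle nonlinear correlations, which we do not reproduce here.

```python
import numpy as np

def whiten(X, eps=1e-6):
    """ZCA-whiten embeddings so features are decorrelated and isotropic."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Xc @ W

X = np.random.randn(2000, 16) @ np.random.randn(16, 16)   # correlated features
Xw = whiten(X)
print(np.allclose(np.cov(Xw, rowvar=False), np.eye(16), atol=1e-1))   # True
```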
1903.12221
Alex Glikson
Ping-Min Lin, Alex Glikson
Mitigating Cold Starts in Serverless Platforms: A Pool-Based Approach
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rapid adoption of the serverless (or Function-as-a-Service, FaaS) paradigm, pioneered by Amazon with AWS Lambda and followed by numerous commercial offerings and open-source projects, introduces new challenges in designing the cloud infrastructure, balancing between performance and cost. While instant per-request elasticity that FaaS platforms typically offer application developers makes it possible to achieve high performance of bursty workloads without over-provisioning, such elasticity often involves extra latency associated with on-demand provisioning of individual runtime containers that serve the functions. This phenomenon is often called cold starts, as opposed to the situation when a function is served by a pre-provisioned "warm" container, ready to serve requests with close to zero overhead. Providers are constantly working on techniques aimed at reducing cold starts. A common approach to reduce cold starts is to maintain a pool of warm containers, in anticipation of future requests. In this report, we address the cold start problem in serverless architectures, specifically under the Knative Serving FaaS platform. We describe our implementation leveraging a pool of function instances, and evaluate the latency compared to the original implementation, resulting in an 85% reduction of P99 response time for a single-instance pool.
[ { "created": "Thu, 28 Mar 2019 18:55:30 GMT", "version": "v1" } ]
2019-04-01
[ [ "Lin", "Ping-Min", "" ], [ "Glikson", "Alex", "" ] ]
Rapid adoption of the serverless (or Function-as-a-Service, FaaS) paradigm, pioneered by Amazon with AWS Lambda and followed by numerous commercial offerings and open-source projects, introduces new challenges in designing the cloud infrastructure, balancing between performance and cost. While instant per-request elasticity that FaaS platforms typically offer application developers makes it possible to achieve high performance of bursty workloads without over-provisioning, such elasticity often involves extra latency associated with on-demand provisioning of individual runtime containers that serve the functions. This phenomenon is often called cold starts, as opposed to the situation when a function is served by a pre-provisioned "warm" container, ready to serve requests with close to zero overhead. Providers are constantly working on techniques aimed at reducing cold starts. A common approach to reduce cold starts is to maintain a pool of warm containers, in anticipation of future requests. In this report, we address the cold start problem in serverless architectures, specifically under the Knative Serving FaaS platform. We describe our implementation leveraging a pool of function instances, and evaluate the latency compared to the original implementation, resulting in an 85% reduction of P99 response time for a single-instance pool.
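Editor's note: the pool mechanism reduces to "serve from a pre-warmed container when one is free, otherwise pay the provisioning cost". The class below is a toy model with made-up latencies, not Knative Serving internals.

```python
from collections import deque

class WarmPool:
    def __init__(self, size):
        self.pool = deque(f"container-{i}" for i in range(size))

    def acquire(self):
        if self.pool:
            return self.pool.popleft(), 5     # warm path: near-zero startup (ms)
        return "cold-container", 800          # cold path: provision on demand

    def release(self, container):
        self.pool.append(container)           # keep the container for future reuse

pool = WarmPool(size=1)
c1, l1 = pool.acquire()                       # warm request
c2, l2 = pool.acquire()                       # pool empty while c1 is in flight
pool.release(c1); pool.release(c2)
print(l1, l2)                                 # 5 800
```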
2201.12011
Bendaoud Fayssal
Bendaoud Fayssal and Abdennebi Marwen and Didi Fedoua
A MADM method for network selection in heterogeneous wireless networks
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
The coexistence of different Radio Access Technologies (RATs) in the same area has enabled researchers to take advantage of the available networks by selecting the best RAT at each moment to satisfy the user's requirements. The challenge is to achieve the Always Best Connected (ABC) concept; the main issue is the automatic choice of the most suitable Radio Access Technology (RAT) from the list of available RATs. This decision is called network selection (NS). In this paper, we propose a modified Simple Additive Weighting (modified-SAW) function to deal with the drawbacks of the existing solutions. Indeed, the existing Multiple Attribute Decision Making (MADM) methods suffer mainly from the well-known problem of rank reversal once an alternative is added or removed; other problems also occur in the legacy MADM methods. We modify the SAW method and use it to solve the NS problem. Finally, we compare the performance of our solution with previous works in different scenarios; the simulations show that our proposal outperforms the other existing methods.
[ { "created": "Fri, 28 Jan 2022 09:47:29 GMT", "version": "v1" } ]
2022-01-31
[ [ "Fayssal", "Bendaoud", "" ], [ "Marwen", "Abdennebi", "" ], [ "Fedoua", "Didi", "" ] ]
The coexistence of different Radio Access Technologies (RATs) in the same area has enabled researchers to take advantage of the available networks by selecting the best RAT at each moment to satisfy the user's requirements. The challenge is to achieve the Always Best Connected (ABC) concept; the main issue is the automatic choice of the most suitable Radio Access Technology (RAT) from the list of available RATs. This decision is called network selection (NS). In this paper, we propose a modified Simple Additive Weighting (modified-SAW) function to deal with the drawbacks of the existing solutions. Indeed, the existing Multiple Attribute Decision Making (MADM) methods suffer mainly from the well-known problem of rank reversal once an alternative is added or removed; other problems also occur in the legacy MADM methods. We modify the SAW method and use it to solve the NS problem. Finally, we compare the performance of our solution with previous works in different scenarios; the simulations show that our proposal outperforms the other existing methods.
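Editor's note: for reference, the classic SAW baseline that the paper modifies can be written in a few lines. The normalization scheme below (divide by the column maximum for benefit criteria, divide the column minimum by the value for cost criteria) is the textbook version; the paper's modified-SAW alters it to avoid rank reversal.

```python
import numpy as np

def saw_scores(M, weights, benefit):
    """Classic Simple Additive Weighting over a decision matrix M
    (rows: alternatives, columns: criteria)."""
    M = np.asarray(M, dtype=float)
    N = np.where(benefit, M / M.max(axis=0), M.min(axis=0) / M)
    return N @ weights

# rows: candidate RATs; columns: bandwidth (benefit), delay (cost), price (cost)
M = [[54.0, 80.0, 0.6],
     [11.0, 40.0, 0.2],
     [100.0, 20.0, 0.9]]
w = np.array([0.5, 0.3, 0.2])
print(saw_scores(M, w, benefit=np.array([True, False, False])))
```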
1309.6849
Joris Mooij
Joris Mooij, Tom Heskes
Cyclic Causal Discovery from Continuous Equilibrium Data
Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI2013)
null
null
UAI-P-2013-PG-431-439
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a method for learning cyclic causal models from a combination of observational and interventional equilibrium data. Novel aspects of the proposed method are its ability to work with continuous data (without assuming linearity) and to deal with feedback loops. Within the context of biochemical reactions, we also propose a novel way of modeling interventions that modify the activity of compounds instead of their abundance. For computational reasons, we approximate the nonlinear causal mechanisms by (coupled) local linearizations, one for each experimental condition. We apply the method to reconstruct a cellular signaling network from the flow cytometry data measured by Sachs et al. (2005). We show that our method finds evidence in the data for feedback loops and that it gives a more accurate quantitative description of the data at comparable model complexity.
[ { "created": "Thu, 26 Sep 2013 12:45:43 GMT", "version": "v1" } ]
2013-09-27
[ [ "Mooij", "Joris", "" ], [ "Heskes", "Tom", "" ] ]
We propose a method for learning cyclic causal models from a combination of observational and interventional equilibrium data. Novel aspects of the proposed method are its ability to work with continuous data (without assuming linearity) and to deal with feedback loops. Within the context of biochemical reactions, we also propose a novel way of modeling interventions that modify the activity of compounds instead of their abundance. For computational reasons, we approximate the nonlinear causal mechanisms by (coupled) local linearizations, one for each experimental condition. We apply the method to reconstruct a cellular signaling network from the flow cytometry data measured by Sachs et al. (2005). We show that our method finds evidence in the data for feedback loops and that it gives a more accurate quantitative description of the data at comparable model complexity.
2405.20680
Mingda Li
Mingda Li, Xinyu Li, Yifan Chen, Wenfeng Xuan, Weinan Zhang
Unraveling and Mitigating Retriever Inconsistencies in Retrieval-Augmented Large Language Models
ACL 2024 (findings)
null
null
null
cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although Retrieval-Augmented Large Language Models (RALMs) demonstrate their superiority in terms of factuality, they do not consistently outperform the original retrieval-free Language Models (LMs). Our experiments reveal that this example-level performance inconsistency exists not only between retrieval-augmented and retrieval-free LMs but also among different retrievers. To understand this phenomenon, we investigate the degeneration behavior of RALMs and theoretically decompose it into four categories. Further analysis based on our decomposition reveals that the innate difference in knowledge sources and the unpredictable degeneration of the reader model contribute most to the inconsistency. Drawing from our analysis, we introduce Ensemble of Retrievers (EoR), a trainable framework that can adaptively retrieve from different knowledge sources and effectively decrease unpredictable reader errors. Our experiments on Open Domain Question Answering show that EoR substantially improves performance over the RALM with a single retriever by considerably reducing inconsistent behaviors.
[ { "created": "Fri, 31 May 2024 08:22:49 GMT", "version": "v1" }, { "created": "Mon, 3 Jun 2024 06:20:18 GMT", "version": "v2" }, { "created": "Tue, 4 Jun 2024 11:51:53 GMT", "version": "v3" } ]
2024-06-05
[ [ "Li", "Mingda", "" ], [ "Li", "Xinyu", "" ], [ "Chen", "Yifan", "" ], [ "Xuan", "Wenfeng", "" ], [ "Zhang", "Weinan", "" ] ]
Although Retrieval-Augmented Large Language Models (RALMs) demonstrate their superiority in terms of factuality, they do not consistently outperform the original retrieval-free Language Models (LMs). Our experiments reveal that this example-level performance inconsistency exists not only between retrieval-augmented and retrieval-free LMs but also among different retrievers. To understand this phenomenon, we investigate the degeneration behavior of RALMs and theoretically decompose it into four categories. Further analysis based on our decomposition reveals that the innate difference in knowledge sources and the unpredictable degeneration of the reader model contribute most to the inconsistency. Drawing from our analysis, we introduce Ensemble of Retrievers (EoR), a trainable framework that can adaptively retrieve from different knowledge sources and effectively decrease unpredictable reader errors. Our experiments on Open Domain Question Answering show that EoR substantially improves performance over the RALM with a single retriever by considerably reducing inconsistent behaviors.
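Editor's note: the ensemble idea can be caricatured as majority voting over per-retriever answers. The paper's EoR is a trainable framework; the stand-in functions below are placeholders of our own.

```python
from collections import Counter

def ensemble_answer(question, retrievers, reader):
    """Answer once per retrieved context and keep the majority answer."""
    answers = [reader(question, retrieve(question)) for retrieve in retrievers]
    return Counter(answers).most_common(1)[0][0]

# toy stand-ins: two retrievers agree, one is misleading
retrievers = [lambda q: "Paris is the capital of France.",
              lambda q: "France's capital city is Paris.",
              lambda q: "Lyon is a large French city."]
reader = lambda q, ctx: "Paris" if "Paris" in ctx else "Lyon"
print(ensemble_answer("What is the capital of France?", retrievers, reader))  # Paris
```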