Dataset schema (column: dtype, observed min-max length):

id: string (length 9–10)
submitter: string (length 1–64)
authors: string (length 4–20.7k)
title: string (length 4–246)
comments: string (length 1–523)
journal-ref: string (length 4–404)
doi: string (length 11–153)
report-no: string (length 2–254)
categories: string (length 5–98)
license: string (9 distinct values)
orig_abstract: string (length 14–3.35k)
versions: list (1–60 items)
update_date: string (length 10–10)
authors_parsed: list (1–1.35k items)
abstract: string (length 11–3.34k)
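To make the schema concrete, here is a minimal sketch of how records with these columns might be read in Python. It is an assumption on our part that the rows are stored as JSON Lines; the file name arxiv_metadata.jsonl is a hypothetical placeholder, not something named in this dump.

```python
# Minimal sketch (not from the dump itself): reading records that follow
# the schema above. Assumes a hypothetical JSON Lines file whose objects
# use exactly these column names; optional columns may hold null.
import json

with open("arxiv_metadata.jsonl", encoding="utf-8") as f:  # hypothetical path
    for line in f:
        record = json.loads(line)
        doi = record.get("doi")              # None when the field is null
        cats = record["categories"].split()  # e.g. "cs.IT math.IT" -> two tags
        print(record["id"], "|", record["title"], "|", cats, "|", doi)
```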
id: 2004.13354
submitter: Jinwoo Ahn
authors: Jinwoo Ahn, Seungjin Lee, Jinhoon Lee, Yungwoo Ko, Donghyun Min, Junghee Lee, Youngjae Kim
title: SGX-SSD: A Policy-based Versioning SSD with Intel SGX
comments: 7 pages, 4 figures
journal-ref: null
doi: null
report-no: null
categories: cs.CR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: This paper demonstrates that SSDs that perform device-level versioning can be exposed to data tampering attacks when the retention time of data is shorter than the malware's dwell time. To deal with this threat, we propose SGX-SSD, an SGX-based versioning SSD that selectively preserves file history according to a given policy. The proposed system adopts Intel SGX to implement a version policy management system that is safe from high-privileged malware. Based on the policy, only the necessary data is selectively preserved in the SSD, which prevents lower-priority files from wasting space and also ensures the integrity of important files.
[ { "created": "Tue, 28 Apr 2020 08:11:30 GMT", "version": "v1" }, { "created": "Wed, 29 Apr 2020 01:03:18 GMT", "version": "v2" } ]
2020-04-30
[ [ "Ahn", "Jinwoo", "" ], [ "Lee", "Seungjin", "" ], [ "Lee", "Jinhoon", "" ], [ "Ko", "Yungwoo", "" ], [ "Min", "Donghyun", "" ], [ "Lee", "Junghee", "" ], [ "Kim", "Youngjae", "" ] ]
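The versions and authors_parsed fields above are machine-readable lists. As a small illustrative sketch (values copied from the record above; the unpacking logic is our assumption, not part of the dataset), the RFC 2822 timestamps parse with Python's standard library:

```python
# Sketch: unpacking the "versions" and "authors_parsed" fields of the
# record above. Values are copied from that record; nothing here is
# prescribed by the dataset itself.
from email.utils import parsedate_to_datetime

versions = [
    {"created": "Tue, 28 Apr 2020 08:11:30 GMT", "version": "v1"},
    {"created": "Wed, 29 Apr 2020 01:03:18 GMT", "version": "v2"},
]
authors_parsed = [["Ahn", "Jinwoo", ""], ["Lee", "Seungjin", ""]]

v1_date = parsedate_to_datetime(versions[0]["created"])   # first submission
latest = parsedate_to_datetime(versions[-1]["created"])   # latest revision
# authors_parsed rows are [family, given, suffix]
names = [" ".join(part for part in (given, family, suffix) if part)
         for family, given, suffix in authors_parsed]
print(v1_date.date(), latest.date(), names)
```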
id: 1804.06454
submitter: Marco Baldi
authors: Mohammad H. Tadayon, Alireza Tasdighi, Massimo Battaglioni, Marco Baldi, Franco Chiaraluce
title: Efficient Search of Compact QC-LDPC and SC-LDPC Convolutional Codes with Large Girth
comments: 4 pages, 3 figures, 1 table, accepted for publication in IEEE Communications Letters
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We propose a low-complexity method to find quasi-cyclic low-density parity-check block codes with girth 10 or 12 and shorter length than those designed through classical approaches. The method is extended to time-invariant spatially coupled low-density parity-check convolutional codes, making it possible to achieve small syndrome former constraint lengths. Several numerical examples are given to show its effectiveness.
[ { "created": "Tue, 17 Apr 2018 19:47:42 GMT", "version": "v1" } ]
2018-04-19
[ [ "Tadayon", "Mohammad H.", "" ], [ "Tasdighi", "Alireza", "" ], [ "Battaglioni", "Massimo", "" ], [ "Baldi", "Marco", "" ], [ "Chiaraluce", "Franco", "" ] ]
id: 1710.09876
submitter: Samin Aref
authors: Samin Aref, Andrew J. Mason, Mark C. Wilson
title: Computing the Line Index of Balance Using Integer Programming Optimisation
comments: Accepted author copy, 20 pages, 4 tables and 3 figures. This work is followed up in another study with more focus on Operations Research aspects of the topic that can be found in arXiv:1611.09030
journal-ref: null
doi: null
report-no: null
categories: cs.SI math.OC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: An important measure of signed graphs is the line index of balance, which has applications in many fields. However, this graph-theoretic measure was underused for decades because of the inherent complexity of its computation, which is closely related to solving NP-hard graph optimisation problems such as MAXCUT. We develop new quadratic and linear programming models to compute the line index of balance exactly. Using the Gurobi integer programming optimisation solver, we evaluate the line index of balance on real-world and synthetic datasets. The synthetic data involve Erdős-Rényi graphs, Barabási-Albert graphs, and specially structured random graphs. We also use well-known datasets from the sociology literature, such as signed graphs inferred from students' choice and rejection, as well as datasets from the biology literature, including gene regulatory networks. The results show that exact values of the line index of balance in relatively large signed graphs can be efficiently computed using our suggested optimisation models. We find that most real-world social networks and some biological networks have a small line index of balance, which indicates that they are close to balanced.
[ { "created": "Thu, 26 Oct 2017 19:09:57 GMT", "version": "v1" }, { "created": "Tue, 6 Feb 2018 00:34:19 GMT", "version": "v2" }, { "created": "Wed, 7 Feb 2018 05:19:46 GMT", "version": "v3" } ]
2018-02-08
[ [ "Aref", "Samin", "" ], [ "Mason", "Andrew J.", "" ], [ "Wilson", "Mark C.", "" ] ]
id: 1903.00951
submitter: Babak Alipour
authors: Babak Alipour, Leonardo Tonetto, Roozbeh Ketabi, Aaron Yi Ding, Jörg Ott, Ahmed Helmy
title: Practical Prediction of Human Movements Across Device Types and Spatiotemporal Granularities
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.NI
license: http://creativecommons.org/licenses/by/4.0/
abstract: Understanding and predicting mobility are essential for the design and evaluation of future mobile edge caching and networking. Consequently, research on prediction of human mobility has drawn significant attention in the last decade. Employing information-theoretic concepts and machine learning methods, earlier research has shown evidence that human behavior can be highly predictable. Despite existing studies, more investigations are needed to capture the intrinsic mobility characteristics constraining predictability, and to explore more dimensions (e.g., device types) and spatio-temporal granularities, especially given changes in human behavior and technology. We analyze extensive longitudinal datasets with fine spatial granularity (AP level) covering 16 months. The study reveals device type as an important factor affecting predictability. Ultra-portable devices such as smartphones have an "on-the-go" mode of usage (and are hence dubbed "Flutes"), whereas laptops are "sit-to-use" (dubbed "Cellos"). The goal of this study is to investigate practical prediction mechanisms to quantify predictability as an aspect of human mobility modeling, across time, space, and device types. We apply our systematic analysis to wireless traces from a large university campus. We compare several algorithms using varying degrees of temporal and spatial granularity for the two modes of devices: Flutes vs. Cellos. Through our analysis, we quantify how the mobility of Flutes is less predictable than the mobility of Cellos. In addition, this pattern is consistent across various spatio-temporal granularities and for different methods (Markov chains, neural networks/deep learning, entropy-based estimators). This work substantiates the importance of predictability as an essential aspect of human mobility, with direct application in predictive caching, user behavior modeling, and mobility simulations.
[ { "created": "Sun, 3 Mar 2019 17:46:27 GMT", "version": "v1" } ]
2019-03-05
[ [ "Alipour", "Babak", "" ], [ "Tonetto", "Leonardo", "" ], [ "Ketabi", "Roozbeh", "" ], [ "Ding", "Aaron Yi", "" ], [ "Ott", "Jörg", "" ], [ "Helmy", "Ahmed", "" ] ]
id: 2109.00895
submitter: Yushan Zhu
authors: Yushan Zhu, Huaixiao Tou, Wen Zhang, Ganqiang Ye, Hui Chen, Ningyu Zhang and Huajun Chen
title: Knowledge Perceived Multi-modal Pretraining in E-commerce
comments: Accepted to ACM MM 2021
journal-ref: null
doi: 10.1145/3474085.3475648
report-no: null
categories: cs.CV cs.AI cs.CL
license: http://creativecommons.org/licenses/by/4.0/
abstract: In this paper, we address multi-modal pretraining of product data in the field of E-commerce. Current multi-modal pretraining methods proposed for image and text modalities lack robustness in the face of modality-missing and modality-noise, two pervasive problems of multi-modal product data in real E-commerce scenarios. To this end, we propose a novel method, K3M, which introduces a knowledge modality into multi-modal pretraining to correct noise and supplement missing information in the image and text modalities. The modal-encoding layer extracts the features of each modality. The modal-interaction layer effectively models the interaction of multiple modalities, where an initial-interactive feature fusion model is designed to maintain the independence of the image and text modalities, and a structure aggregation module is designed to fuse the information of the image, text, and knowledge modalities. We pretrain K3M with three pretraining tasks: masked object modeling (MOM), masked language modeling (MLM), and link prediction modeling (LPM). Experimental results on a real-world E-commerce dataset and a series of product-based downstream tasks demonstrate that K3M achieves significant performance improvements over baseline and state-of-the-art methods when modality-noise or modality-missing exists.
versions: [ { "created": "Fri, 20 Aug 2021 08:01:28 GMT", "version": "v1" } ]
update_date: 2021-09-03
authors_parsed: [ [ "Zhu", "Yushan", "" ], [ "Tou", "Huaixiao", "" ], [ "Zhang", "Wen", "" ], [ "Ye", "Ganqiang", "" ], [ "Chen", "Hui", "" ], [ "Zhang", "Ningyu", "" ], [ "Chen", "Huajun", "" ] ]
id: 1605.02041
submitter: David Guillermo Fajardo Ortiz
authors: David Fajardo-Ortiz, Luis Duran, Laura Moreno, Hector Ochoa, Victor-M Castano
title: Mapping knowledge translation and innovation processes in Cancer Drug Development: the case of liposomal doxorubicin
comments: null
journal-ref: Journal of Translational Medicine 2014, 12:227
doi: 10.1186/s12967-014-0227-9
report-no: null
categories: cs.DL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We explored how knowledge translation and innovation processes are structured when they result in innovations, as in the case of liposomal doxorubicin research. To map the processes, a literature network analysis was carried out with Cytoscape, and semantic analysis was performed with GOPubmed, which is based on the controlled vocabularies MeSH (Medical Subject Headings) and GO (Gene Ontology). We found clusters related to different stages of technological development (invention, innovation, and imitation) and of the knowledge translation process (preclinical, translational, and clinical research), and we were able to map the historic emergence of Doxil as a paradigmatic nanodrug. This approach could be a powerful methodological tool for decision-making and innovation management in drug delivery research.
versions: [ { "created": "Tue, 12 Apr 2016 05:55:21 GMT", "version": "v1" } ]
update_date: 2016-05-09
authors_parsed: [ [ "Fajardo-Ortiz", "David", "" ], [ "Duran", "Luis", "" ], [ "Moreno", "Laura", "" ], [ "Ochoa", "Hector", "" ], [ "Castano", "Victor-M", "" ] ]
id: 2108.11887
submitter: Lei Lei
authors: Jiaju Qi, Qihao Zhou, Lei Lei, Kan Zheng
title: Federated Reinforcement Learning: Techniques, Applications, and Open Challenges
comments: null
journal-ref: Intelligence & Robotics. 2021; 1(1):18-57
doi: 10.20517/ir.2021.02
report-no: null
categories: cs.LG cs.AI
license: http://creativecommons.org/licenses/by/4.0/
abstract: This paper presents a comprehensive survey of Federated Reinforcement Learning (FRL), an emerging and promising field in Reinforcement Learning (RL). Starting with a tutorial on Federated Learning (FL) and RL, we then introduce FRL as a new method with great potential, which leverages the basic idea of FL to improve the performance of RL while preserving data privacy. According to the distribution characteristics of the agents in the framework, FRL algorithms can be divided into two categories: Horizontal Federated Reinforcement Learning (HFRL) and Vertical Federated Reinforcement Learning (VFRL). We provide detailed formal definitions of each category, investigate the evolution of FRL from a technical perspective, and highlight its advantages over previous RL algorithms. In addition, the existing work on FRL is summarized by application field, including edge computing, communication, control optimization, and attack detection. Finally, we describe and discuss several key research directions that are crucial to solving the open problems in FRL.
versions: [ { "created": "Thu, 26 Aug 2021 16:22:49 GMT", "version": "v1" }, { "created": "Sun, 24 Oct 2021 19:02:03 GMT", "version": "v2" } ]
update_date: 2023-05-12
authors_parsed: [ [ "Qi", "Jiaju", "" ], [ "Zhou", "Qihao", "" ], [ "Lei", "Lei", "" ], [ "Zheng", "Kan", "" ] ]
id: 2303.05946
submitter: Kyle Hart
authors: Kyle M. Hart (1 and 2), Brendan Englot (2), Ryan P. O'Shea (1), John D. Kelly (1), David Martinez (1) ((1) Naval Air Warfare Center Aircraft Division Lakehurst, (2) Stevens Institute of Technology)
title: Monocular Simultaneous Localization and Mapping using Ground Textures
comments: 7 pages, 9 figures. To appear at ICRA 2023, London, UK. Distribution Statement A: Approved for public release; distribution is unlimited, as submitted under NAVAIR Public Release Authorization 2022-0586. The views expressed here are those of the authors and do not reflect the official policy or position of the U.S. Navy, Department of Defense, or U.S. Government
journal-ref: null
doi: null
report-no: null
categories: cs.RO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Recent work has shown impressive localization performance using only images of ground textures taken with a downward-facing monocular camera. This provides a reliable navigation method that is robust to feature-sparse environments and challenging lighting conditions. However, these localization methods require an existing map for comparison. Our work aims to relax the need for a map by introducing a full simultaneous localization and mapping (SLAM) system. By not requiring an existing map, setup times are minimized and the system is more robust to changing environments. This SLAM system uses a combination of several techniques to accomplish this. Image keypoints are identified and projected into the ground plane. These keypoints, visual bags of words, and several threshold parameters are then used to identify overlapping images and revisited areas. The system then uses robust M-estimators to estimate the transform between robot poses with overlapping images and revisited areas. These optimized estimates make up the map used for navigation. We show, through experimental data, that this system performs reliably on many ground textures, but not all.
versions: [ { "created": "Fri, 10 Mar 2023 14:27:31 GMT", "version": "v1" } ]
update_date: 2023-03-13
authors_parsed: [ [ "Hart", "Kyle M.", "", "1 and 2" ], [ "Englot", "Brendan", "" ], [ "O'Shea", "Ryan P.", "" ], [ "Kelly", "John D.", "" ], [ "Martinez", "David", "" ] ]
id: 1405.1129
submitter: Vikram Krishnamurthy
authors: Vikram Krishnamurthy and Omid Namvar Gharehshiran and Maziyar Hamdi
title: Interactive Sensing and Decision Making in Social Networks
comments: Foundations and Trends in Signal Processing, Now Publishers, 2014
journal-ref: null
doi: 10.1561/2000000048
report-no: null
categories: cs.SI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The proliferation of social media, such as real-time microblogging and online reputation systems, facilitates real-time sensing of social patterns and behavior. In the last decade, sensing and decision making in social networks have witnessed significant progress in the electrical engineering, computer science, economics, finance, and sociology research communities. Research in this area involves the interaction of dynamic random graphs, socio-economic analysis, and statistical inference algorithms. This monograph provides a survey, tutorial development, and discussion of four highly stylized examples: social learning for interactive sensing; tracking the degree distribution of social networks; sensing and information diffusion; and coordination of decision making via game-theoretic learning. Each of the four examples is motivated by practical applications and comprises a literature survey together with careful problem formulation and mathematical analysis. Despite being highly stylized, these examples provide a rich variety of models, algorithms, and analysis tools that are readily accessible to a signal processing, control/systems theory, and applied mathematics audience.
versions: [ { "created": "Tue, 6 May 2014 02:21:24 GMT", "version": "v1" } ]
update_date: 2014-05-07
authors_parsed: [ [ "Krishnamurthy", "Vikram", "" ], [ "Gharehshiran", "Omid Namvar", "" ], [ "Hamdi", "Maziyar", "" ] ]
id: 2404.09265
submitter: Mindaugas Budzys
authors: Tanveer Khan, Mindaugas Budzys, Antonis Michalas
title: Make Split, not Hijack: Preventing Feature-Space Hijacking Attacks in Split Learning
comments: Accepted in Proceedings of the 29th ACM Symposium on Access Control Models and Technologies (SACMAT '24)
journal-ref: null
doi: null
report-no: null
categories: cs.CR cs.AI
license: http://creativecommons.org/licenses/by/4.0/
abstract: The popularity of Machine Learning (ML) makes the privacy of sensitive data more imperative than ever. Collaborative learning techniques like Split Learning (SL) aim to protect client data while enhancing ML processes. Though promising, SL has been shown to be vulnerable to a plethora of attacks, raising concerns about its effectiveness for data privacy. In this work, we introduce a hybrid approach combining SL and Function Secret Sharing (FSS) to ensure client data privacy. The client adds a random mask to the activation map before sending it to the servers. The servers cannot access the original function but instead work with shares generated using FSS. Consequently, during both forward and backward propagation, the servers cannot reconstruct the client's raw data from the activation map. Furthermore, through visual invertibility, we demonstrate that the server is incapable of reconstructing the raw image data from the activation map when using FSS. This enhances privacy by reducing privacy leakage compared to other SL-based approaches, in which the server can access client input information. Our approach also ensures security against feature-space hijacking attacks, protecting sensitive information from potential manipulation. Our protocols yield promising results, reducing communication overhead by over 2x and training time by over 7x compared to the same model with FSS but without any SL. Also, we show that our approach achieves >96% accuracy, equivalent to the plaintext models.
versions: [ { "created": "Sun, 14 Apr 2024 14:14:31 GMT", "version": "v1" } ]
update_date: 2024-04-16
authors_parsed: [ [ "Khan", "Tanveer", "" ], [ "Budzys", "Mindaugas", "" ], [ "Michalas", "Antonis", "" ] ]
id: 2208.14925
submitter: Tim Schreiter
authors: Tim Schreiter, Tiago Rodrigues de Almeida, Yufei Zhu, Eduardo Gutierrez Maestro, Lucas Morillo-Mendez, Andrey Rudenko, Tomasz P. Kucner, Oscar Martinez Mozos, Martin Magnusson, Luigi Palmieri, Kai O. Arras, Achim J. Lilienthal
title: The Magni Human Motion Dataset: Accurate, Complex, Multi-Modal, Natural, Semantically-Rich and Contextualized
comments: in SIRRW Workshop held in conjunction with 31st IEEE International Conference on Robot & Human Interactive Communication, 29/08 - 02/09 2022, Naples (Italy)
journal-ref: null
doi: null
report-no: null
categories: cs.RO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Rapid development of social robots stimulates active research in human motion modeling, interpretation and prediction, proactive collision avoidance, human-robot interaction, and co-habitation in shared spaces. Modern approaches to this end require high-quality datasets for training and evaluation. However, the majority of available datasets suffer from either inaccurate tracking data or unnatural, scripted behavior of the tracked people. This paper attempts to fill this gap by providing high-quality tracking information from motion capture, eye-gaze trackers, and on-board robot sensors in a semantically-rich environment. To induce natural behavior of the recorded participants, we utilise loosely scripted task assignments, which lead the participants to navigate through the dynamic laboratory environment in a natural and purposeful way. The motion dataset presented in this paper sets a high quality standard, as the realistic and accurate data are enhanced with semantic information, enabling the development of new algorithms that rely not only on the tracking information but also on contextual cues of the moving agents and the static and dynamic environment.
versions: [ { "created": "Wed, 31 Aug 2022 15:37:45 GMT", "version": "v1" } ]
update_date: 2022-09-01
authors_parsed: [ [ "Schreiter", "Tim", "" ], [ "de Almeida", "Tiago Rodrigues", "" ], [ "Zhu", "Yufei", "" ], [ "Maestro", "Eduardo Gutierrez", "" ], [ "Morillo-Mendez", "Lucas", "" ], [ "Rudenko", "Andrey", "" ], [ "Kucner", "Tomasz P.", "" ], [ "Mozos", "Oscar Martinez", "" ], [ "Magnusson", "Martin", "" ], [ "Palmieri", "Luigi", "" ], [ "Arras", "Kai O.", "" ], [ "Lilienthal", "Achim J.", "" ] ]
id: 2306.08935
submitter: Ritu Yadav
authors: Ritu Yadav, Andrea Nascetti, Yifang Ban
title: Context-Aware Change Detection With Semi-Supervised Learning
comments: Paper Accepted in IGARSS 2023
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI cs.LG eess.IV
license: http://creativecommons.org/licenses/by/4.0/
abstract: Change detection using earth observation data plays a vital role in quantifying the impact of disasters in affected areas. While data sources like Sentinel-2 provide rich optical information, they are often hindered by cloud cover, limiting their usage in disaster scenarios. However, leveraging pre-disaster optical data can offer valuable contextual information about the area, such as land-cover type, vegetation cover, and soil type, enabling a better understanding of the disaster's impact. In this study, we develop a model to assess the contribution of pre-disaster Sentinel-2 data in change detection tasks, focusing on disaster-affected areas. The proposed Context-Aware Change Detection Network (CACDN) utilizes a combination of pre-disaster Sentinel-2 data, pre- and post-disaster Sentinel-1 data, and ancillary Digital Elevation Model (DEM) data. The model is validated on flood and landslide detection and evaluated using three metrics: Area Under the Precision-Recall Curve (AUPRC), Intersection over Union (IoU), and mean IoU. The preliminary results show significant improvements (4% AUPRC, 3-7% IoU, 3-6% mean IoU) in the model's change detection capabilities when pre-disaster optical data are incorporated, reflecting the effectiveness of using contextual information for accurate flood and landslide detection.
versions: [ { "created": "Thu, 15 Jun 2023 08:17:49 GMT", "version": "v1" } ]
update_date: 2023-06-16
authors_parsed: [ [ "Yadav", "Ritu", "" ], [ "Nascetti", "Andrea", "" ], [ "Ban", "Yifang", "" ] ]
id: 2001.10494
submitter: Feiyang Cai
authors: Feiyang Cai and Xenofon Koutsoukos
title: Real-time Out-of-distribution Detection in Learning-Enabled Cyber-Physical Systems
comments: Accepted by 11th International Conference on Cyber-Physical Systems (ICCPS2020)
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.SY eess.SY stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Cyber-physical systems (CPS) benefit greatly from machine learning components that can handle the uncertainty and variability of the real world. Typical components such as deep neural networks, however, introduce new types of hazards that may impact system safety. The system behavior depends on data that are available only at runtime and may differ from the data used for training. Out-of-distribution data may lead to large errors and compromise safety. This paper considers the problem of efficiently detecting out-of-distribution data in CPS control systems. Detection must be robust and limit the number of false alarms while being computationally efficient for real-time monitoring. The proposed approach leverages inductive conformal prediction and anomaly detection to develop a method with a well-calibrated false alarm rate. We use variational autoencoders and deep support vector data description to learn models that can be used to efficiently compute the nonconformity of new inputs relative to the training set and enable real-time detection of out-of-distribution high-dimensional inputs. We demonstrate the method using an advanced emergency braking system and a self-driving end-to-end controller implemented in an open-source simulator for self-driving cars. The simulation results show a very small number of false positives and a short detection delay, while the execution time is comparable to the execution time of the original machine learning components.
versions: [ { "created": "Tue, 28 Jan 2020 17:51:07 GMT", "version": "v1" } ]
update_date: 2020-01-29
authors_parsed: [ [ "Cai", "Feiyang", "" ], [ "Koutsoukos", "Xenofon", "" ] ]
id: 2212.07811
submitter: Mike Thelwall Prof
authors: Mike Thelwall, Kayvan Kousha, Mahshid Abdoli, Emma Stuart, Meiko Makita, Paul Wilson, Jonathan Levitt
title: Do altmetric scores reflect article quality? Evidence from the UK Research Excellence Framework 2021
comments: null
journal-ref: Journal of the Association for Information Science and Technology, 74(5), 582-593 (2023)
doi: 10.1002/asi.24751
report-no: null
categories: cs.DL
license: http://creativecommons.org/licenses/by/4.0/
abstract: Altmetrics are web-based quantitative impact or attention indicators for academic articles that have been proposed to supplement citation counts. This article reports the first assessment of the extent to which mature altmetrics from Altmetric.com and Mendeley associate with journal article quality. It exploits expert norm-referenced peer review scores from the UK Research Excellence Framework 2021 for 67,030+ journal articles in all fields 2014-17/18, split into 34 Units of Assessment (UoAs). The results show that altmetrics are better indicators of research quality than previously thought, although not as good as raw and field normalised Scopus citation counts. Surprisingly, field normalising citation counts can reduce their strength as a quality indicator for articles in a single field. For most UoAs, Mendeley reader counts are the best, tweet counts are also a relatively strong indicator in many fields, and Facebook, blogs and news citations are moderately strong indicators in some UoAs, at least in the UK. In general, altmetrics are the strongest indicators of research quality in the health and physical sciences and weakest in the arts and humanities. The Altmetric Attention Score, although hybrid, is almost as good as Mendeley reader counts as a quality indicator and reflects more non-scholarly impacts.
versions: [ { "created": "Sun, 11 Dec 2022 05:40:35 GMT", "version": "v1" } ]
update_date: 2023-08-01
authors_parsed: [ [ "Thelwall", "Mike", "" ], [ "Kousha", "Kayvan", "" ], [ "Abdoli", "Mahshid", "" ], [ "Stuart", "Emma", "" ], [ "Makita", "Meiko", "" ], [ "Wilson", "Paul", "" ], [ "Levitt", "Jonathan", "" ] ]
id: 1710.11213
submitter: Sahil Singla
authors: Soheil Ehsani, MohammadTaghi Hajiaghayi, Thomas Kesselheim, and Sahil Singla
title: Prophet Secretary for Combinatorial Auctions and Matroids
comments: Preliminary version appeared in SODA 2018. This version improves the writeup on Fixed-Threshold algorithms
journal-ref: null
doi: null
report-no: null
categories: cs.DS cs.GT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The secretary and prophet inequality problems are central to the field of Stopping Theory. Recently, there has been a lot of work on generalizing these models to multiple items because of their applications in mechanism design. The most important of these generalizations are to matroids and to combinatorial auctions (which extend bipartite matching). Kleinberg-Weinberg \cite{KW-STOC12} and Feldman et al. \cite{feldman2015combinatorial} show that for adversarial arrival orders of random variables, the optimal prophet inequalities give a $1/2$-approximation. For many settings, however, it is conceivable that the arrival order is chosen uniformly at random, akin to the secretary problem. For such a random arrival model, we improve upon the $1/2$-approximation and obtain $(1-1/e)$-approximation prophet inequalities for both matroids and combinatorial auctions. This also improves upon the results of Yan \cite{yan2011mechanism} and Esfandiari et al. \cite{esfandiari2015prophet}, who worked in the special cases where the arrival order can be fully controlled or where there is only a single item. Our techniques are threshold-based. We convert our discrete problem into a continuous setting and then give a generic template for dynamically adjusting these thresholds to lower bound the expected total welfare.
versions: [ { "created": "Mon, 30 Oct 2017 19:41:38 GMT", "version": "v1" }, { "created": "Sat, 17 Mar 2018 17:13:41 GMT", "version": "v2" } ]
update_date: 2018-03-20
authors_parsed: [ [ "Ehsani", "Soheil", "" ], [ "Hajiaghayi", "MohammadTaghi", "" ], [ "Kesselheim", "Thomas", "" ], [ "Singla", "Sahil", "" ] ]
id: 2304.02841
submitter: Zhijie Deng
authors: Zhijie Deng and Yucen Luo
title: Learning Neural Eigenfunctions for Unsupervised Semantic Segmentation
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Unsupervised semantic segmentation is a long-standing challenge in computer vision with great significance. Spectral clustering is a theoretically grounded solution to it where the spectral embeddings for pixels are computed to construct distinct clusters. Despite recent progress in enhancing spectral clustering with powerful pre-trained models, current approaches still suffer from inefficiencies in spectral decomposition and inflexibility in applying them to the test data. This work addresses these issues by casting spectral clustering as a parametric approach that employs neural network-based eigenfunctions to produce spectral embeddings. The outputs of the neural eigenfunctions are further restricted to discrete vectors that indicate clustering assignments directly. As a result, an end-to-end NN-based paradigm of spectral clustering emerges. In practice, the neural eigenfunctions are lightweight and take the features from pre-trained models as inputs, improving training efficiency and unleashing the potential of pre-trained models for dense prediction. We conduct extensive empirical studies to validate the effectiveness of our approach and observe significant performance gains over competitive baselines on Pascal Context, Cityscapes, and ADE20K benchmarks.
versions: [ { "created": "Thu, 6 Apr 2023 03:14:15 GMT", "version": "v1" } ]
update_date: 2023-04-07
authors_parsed: [ [ "Deng", "Zhijie", "" ], [ "Luo", "Yucen", "" ] ]
id: 2108.06812
submitter: Nikolai Karpov
authors: Nikolai Karpov, Qin Zhang
title: Batched Thompson Sampling for Multi-Armed Bandits
comments: 9 pages
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://creativecommons.org/licenses/by/4.0/
abstract: We study Thompson Sampling algorithms for stochastic multi-armed bandits in the batched setting, in which we want to minimize the regret over a sequence of arm pulls using a small number of policy changes (or batches). We propose two algorithms and demonstrate their effectiveness through experiments on both synthetic and real datasets. We also analyze the proposed algorithms from a theoretical perspective and obtain almost tight regret-batches tradeoffs for the two-arm case.
versions: [ { "created": "Sun, 15 Aug 2021 20:47:46 GMT", "version": "v1" } ]
update_date: 2021-08-17
authors_parsed: [ [ "Karpov", "Nikolai", "" ], [ "Zhang", "Qin", "" ] ]
id: 2104.13456
submitter: Adrian Łańcucki
authors: Paweł Rychlikowski, Bartłomiej Najdecki, Adrian Łańcucki, Adam Kaczmarek
title: Named Entity Recognition and Linking Augmented with Large-Scale Structured Data
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: In this paper we describe our submissions to the 2nd and 3rd SlavNER Shared Tasks held at BSNLP 2019 and BSNLP 2021, respectively. The tasks focused on the analysis of Named Entities in multilingual Web documents in Slavic languages with rich inflection. Our solution takes advantage of large collections of both unstructured and structured documents. The former serve as data for unsupervised training of language models and embeddings of lexical units. The latter refers to Wikipedia and its structured counterpart, Wikidata, which serve as our source of lemmatization rules and real-world entities. With the aid of these resources, our system could recognize, normalize, and link entities, while being trained with only small amounts of labeled data.
versions: [ { "created": "Tue, 27 Apr 2021 20:10:18 GMT", "version": "v1" } ]
update_date: 2021-04-29
authors_parsed: [ [ "Rychlikowski", "Paweł", "" ], [ "Najdecki", "Bartłomiej", "" ], [ "Łańcucki", "Adrian", "" ], [ "Kaczmarek", "Adam", "" ] ]
id: 2205.06910
submitter: Kanishka Misra
authors: Kanishka Misra, Julia Taylor Rayz, Allyson Ettinger
title: A Property Induction Framework for Neural Language Models
comments: CogSci 2022 camera ready version, with hyperref-compatible citations. Code and Supplemental Material can be found in https://github.com/kanishkamisra/lm-induction
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract: To what extent can experience from language contribute to our conceptual knowledge? Computational explorations of this question have shed light on the ability of powerful neural language models (LMs) -- informed solely through text input -- to encode and elicit information about concepts and properties. To extend this line of research, we present a framework that uses neural-network language models (LMs) to perform property induction -- a task in which humans generalize novel property knowledge (has sesamoid bones) from one or more concepts (robins) to others (sparrows, canaries). Patterns of property induction observed in humans have shed considerable light on the nature and organization of human conceptual knowledge. Inspired by this insight, we use our framework to explore the property inductions of LMs, and find that they show an inductive preference to generalize novel properties on the basis of category membership, suggesting the presence of a taxonomic bias in their representations.
versions: [ { "created": "Fri, 13 May 2022 22:05:49 GMT", "version": "v1" } ]
update_date: 2022-05-17
authors_parsed: [ [ "Misra", "Kanishka", "" ], [ "Rayz", "Julia Taylor", "" ], [ "Ettinger", "Allyson", "" ] ]
id: 0904.0352
submitter: Rami Puzis
authors: Shlomi Dolev, Yuval Elovici, Rami Puzis, Polina Zilberman
title: Incremental Deployment of Network Monitors Based on Group Betweenness Centrality
comments: null
journal-ref: Information Processing Letters, 109(20), 1172-1176 (2009)
doi: 10.1016/j.ipl.2009.07.019
report-no: null
categories: cs.DS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: In many applications we are required to increase the deployment of a distributed monitoring system on an evolving network. In this paper we present a new method for finding candidate locations for additional deployment in the network. This method is based on the Group Betweenness Centrality (GBC) measure that is used to estimate the influence of a group of nodes over the information flow in the network. The new method assists in finding the location of k additional monitors in the evolving network, such that the portion of additional traffic covered is at least (1-1/e) of the optimal.
versions: [ { "created": "Thu, 2 Apr 2009 09:32:51 GMT", "version": "v1" }, { "created": "Sun, 12 Jul 2009 10:01:36 GMT", "version": "v2" }, { "created": "Fri, 2 Oct 2020 13:32:31 GMT", "version": "v3" } ]
update_date: 2020-10-05
authors_parsed: [ [ "Dolev", "Shlomi", "" ], [ "Elovici", "Yuval", "" ], [ "Puzis", "Rami", "" ], [ "Zilberman", "Polina", "" ] ]
id: 2306.02500
submitter: Taylor Webb
authors: Taylor W. Webb, Shanka Subhra Mondal, Jonathan D. Cohen
title: Systematic Visual Reasoning through Object-Centric Relational Abstraction
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract: Human visual reasoning is characterized by an ability to identify abstract patterns from only a small number of examples, and to systematically generalize those patterns to novel inputs. This capacity depends in large part on our ability to represent complex visual inputs in terms of both objects and relations. Recent work in computer vision has introduced models with the capacity to extract object-centric representations, leading to the ability to process multi-object visual inputs, but falling short of the systematic generalization displayed by human reasoning. Other recent models have employed inductive biases for relational abstraction to achieve systematic generalization of learned abstract rules, but have generally assumed the presence of object-focused inputs. Here, we combine these two approaches, introducing Object-Centric Relational Abstraction (OCRA), a model that extracts explicit representations of both objects and abstract relations, and achieves strong systematic generalization in tasks (including a novel dataset, CLEVR-ART, with greater visual complexity) involving complex visual displays.
[ { "created": "Sun, 4 Jun 2023 22:47:17 GMT", "version": "v1" }, { "created": "Fri, 10 Nov 2023 22:22:44 GMT", "version": "v2" } ]
2023-11-14
[ [ "Webb", "Taylor W.", "" ], [ "Mondal", "Shanka Subhra", "" ], [ "Cohen", "Jonathan D.", "" ] ]
Human visual reasoning is characterized by an ability to identify abstract patterns from only a small number of examples, and to systematically generalize those patterns to novel inputs. This capacity depends in large part on our ability to represent complex visual inputs in terms of both objects and relations. Recent work in computer vision has introduced models with the capacity to extract object-centric representations, leading to the ability to process multi-object visual inputs, but falling short of the systematic generalization displayed by human reasoning. Other recent models have employed inductive biases for relational abstraction to achieve systematic generalization of learned abstract rules, but have generally assumed the presence of object-focused inputs. Here, we combine these two approaches, introducing Object-Centric Relational Abstraction (OCRA), a model that extracts explicit representations of both objects and abstract relations, and achieves strong systematic generalization in tasks (including a novel dataset, CLEVR-ART, with greater visual complexity) involving complex visual displays.
1909.04954
Philipp Mayr
Guillaume Cabanac, Ingo Frommholz, Philipp Mayr
Report on the 8th International Workshop on Bibliometric-enhanced Information Retrieval (BIR 2019)
8 pages, report to appear in ACM SIGIR Forum
null
null
null
cs.IR cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Bibliometric-enhanced Information Retrieval workshop series (BIR) at ECIR tackles issues related to academic search, at the crossroads between Information Retrieval and Bibliometrics. BIR is a hot topic investigated by both academia (e.g., ArnetMiner, CiteSeerx, DocEar) and industry (e.g., Google Scholar, Microsoft Academic Search, Semantic Scholar). This report presents the 8th iteration of the one-day BIR workshop held at ECIR 2019 in Cologne, Germany.
[ { "created": "Wed, 11 Sep 2019 10:07:59 GMT", "version": "v1" } ]
2019-09-12
[ [ "Cabanac", "Guillaume", "" ], [ "Frommholz", "Ingo", "" ], [ "Mayr", "Philipp", "" ] ]
The Bibliometric-enhanced Information Retrieval workshop series (BIR) at ECIR tackles issues related to academic search, at the crossroads between Information Retrieval and Bibliometrics. BIR is a hot topic investigated by both academia (e.g., ArnetMiner, CiteSeerx, DocEar) and industry (e.g., Google Scholar, Microsoft Academic Search, Semantic Scholar). This report presents the 8th iteration of the one-day BIR workshop held at ECIR 2019 in Cologne, Germany.
2204.02004
Chaim Baskin
Tal Rozen, Moshe Kimhi, Brian Chmiel, Avi Mendelson, Chaim Baskin
Bimodal Distributed Binarized Neural Networks
null
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Binary Neural Networks (BNNs) are an extremely promising method to massively reduce deep neural networks' complexity and power consumption. Binarization techniques, however, suffer from non-negligible performance degradation compared to their full-precision counterparts. Prior work mainly focused on strategies for sign function approximation during the forward and backward phases to reduce the quantization error introduced by the binarization process. In this work, we propose a Bi-Modal Distributed binarization method (BD-BNN) that imposes a bi-modal distribution on the network weights through kurtosis regularization. The proposed method consists of a training scheme that we call Weight Distribution Mimicking (WDM), which efficiently imitates the full-precision network weight distribution in its binary counterpart. Preserving this distribution during binarization-aware training creates robust and informative binary feature maps and significantly reduces the generalization error of the BNN. Extensive evaluations on CIFAR-10 and ImageNet demonstrate the superiority of our method over current state-of-the-art schemes. Our source code, experimental settings, training logs, and binary models are available at \url{https://github.com/BlueAnon/BD-BNN}.
[ { "created": "Tue, 5 Apr 2022 06:07:05 GMT", "version": "v1" } ]
2022-04-06
[ [ "Rozen", "Tal", "" ], [ "Kimhi", "Moshe", "" ], [ "Chmiel", "Brian", "" ], [ "Mendelson", "Avi", "" ], [ "Baskin", "Chaim", "" ] ]
Binary Neural Networks (BNNs) are an extremely promising method to massively reduce deep neural networks' complexity and power consumption. Binarization techniques, however, suffer from non-negligible performance degradation compared to their full-precision counterparts. Prior work mainly focused on strategies for sign function approximation during the forward and backward phases to reduce the quantization error introduced by the binarization process. In this work, we propose a Bi-Modal Distributed binarization method (BD-BNN) that imposes a bi-modal distribution on the network weights through kurtosis regularization. The proposed method consists of a training scheme that we call Weight Distribution Mimicking (WDM), which efficiently imitates the full-precision network weight distribution in its binary counterpart. Preserving this distribution during binarization-aware training creates robust and informative binary feature maps and significantly reduces the generalization error of the BNN. Extensive evaluations on CIFAR-10 and ImageNet demonstrate the superiority of our method over current state-of-the-art schemes. Our source code, experimental settings, training logs, and binary models are available at \url{https://github.com/BlueAnon/BD-BNN}.
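As a rough illustration of the kurtosis-regularization idea, the sketch below penalizes deviation of the weights' kurtosis from a low target, which pushes the distribution toward two modes. The target value, strength, and loss form are assumptions for illustration; the paper's exact regularizer may differ.

```python
# Kurtosis regularization pushing weights toward a bimodal shape (sketch).
import torch

def kurtosis(w: torch.Tensor) -> torch.Tensor:
    z = (w - w.mean()) / (w.std() + 1e-8)
    return (z ** 4).mean()

def kurtosis_reg_loss(weights, target=1.0, strength=1e-2):
    # A symmetric two-spike (fully bimodal) distribution has kurtosis 1,
    # well below the Gaussian value of 3, so penalizing deviation from a
    # low target encourages two modes around the binarization levels.
    return strength * sum((kurtosis(w) - target) ** 2 for w in weights)

w = torch.randn(256, 256, requires_grad=True)
loss = kurtosis_reg_loss([w])
loss.backward()                      # gradients flow into the weights
print(float(loss))
```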
2404.12703
Marius Kurz
Daniel Kempf, Marius Kurz, Marcel Blind, Patrick Kopper, Philipp Offenh\"auser, Anna Schwarz, Spencer Starr, Jens Keim, Andrea Beck
GAL{\AE}XI: Solving complex compressible flows with high-order discontinuous Galerkin methods on accelerator-based systems
19 pages, 12 figures, 3 tables. Code available at: https://github.com/flexi-framework/galaexi
null
null
null
cs.MS cs.CE
http://creativecommons.org/licenses/by-nc-nd/4.0/
This work presents GAL{\AE}XI as a novel, energy-efficient flow solver for the simulation of compressible flows on unstructured meshes leveraging the parallel computing power of modern Graphics Processing Units (GPUs). GAL{\AE}XI implements the high-order Discontinuous Galerkin Spectral Element Method (DGSEM) using shock capturing with a finite-volume subcell approach to ensure the stability of the high-order scheme near shocks. This work provides details on the general code design, the parallelization strategy, and the implementation approach for the compute kernels with a focus on the element local mappings between volume and surface data due to the unstructured mesh. GAL{\AE}XI exhibits excellent strong scaling properties up to 1024 GPUs if each GPU is assigned a minimum of one million degrees of freedom. To verify its implementation, a convergence study is performed that recovers the theoretical order of convergence of the implemented numerical schemes. Moreover, the solver is validated using both the incompressible and compressible formulation of the Taylor-Green-Vortex at a Mach number of 0.1 and 1.25, respectively. A mesh convergence study shows that the results converge to the high-fidelity reference solution and that the results match the original CPU implementation. Finally, GAL{\AE}XI is applied to a large-scale wall-resolved large eddy simulation of a linear cascade of the NASA Rotor 37. Here, the supersonic region and shocks at the leading edge are captured accurately and robustly by the implemented shock-capturing approach. It is demonstrated that GAL{\AE}XI requires less than half of the energy to carry out this simulation in comparison to the reference CPU implementation. This renders GAL{\AE}XI as a potent tool for accurate and efficient simulations of compressible flows in the realm of exascale computing and the associated new HPC architectures.
[ { "created": "Fri, 19 Apr 2024 08:21:05 GMT", "version": "v1" } ]
2024-04-22
[ [ "Kempf", "Daniel", "" ], [ "Kurz", "Marius", "" ], [ "Blind", "Marcel", "" ], [ "Kopper", "Patrick", "" ], [ "Offenhäuser", "Philipp", "" ], [ "Schwarz", "Anna", "" ], [ "Starr", "Spencer", "" ], [ "Keim", "Jens", "" ], [ "Beck", "Andrea", "" ] ]
This work presents GAL{\AE}XI as a novel, energy-efficient flow solver for the simulation of compressible flows on unstructured meshes leveraging the parallel computing power of modern Graphics Processing Units (GPUs). GAL{\AE}XI implements the high-order Discontinuous Galerkin Spectral Element Method (DGSEM) using shock capturing with a finite-volume subcell approach to ensure the stability of the high-order scheme near shocks. This work provides details on the general code design, the parallelization strategy, and the implementation approach for the compute kernels with a focus on the element local mappings between volume and surface data due to the unstructured mesh. GAL{\AE}XI exhibits excellent strong scaling properties up to 1024 GPUs if each GPU is assigned a minimum of one million degrees of freedom. To verify its implementation, a convergence study is performed that recovers the theoretical order of convergence of the implemented numerical schemes. Moreover, the solver is validated using both the incompressible and compressible formulation of the Taylor-Green-Vortex at a Mach number of 0.1 and 1.25, respectively. A mesh convergence study shows that the results converge to the high-fidelity reference solution and that the results match the original CPU implementation. Finally, GAL{\AE}XI is applied to a large-scale wall-resolved large eddy simulation of a linear cascade of the NASA Rotor 37. Here, the supersonic region and shocks at the leading edge are captured accurately and robustly by the implemented shock-capturing approach. It is demonstrated that GAL{\AE}XI requires less than half of the energy to carry out this simulation in comparison to the reference CPU implementation. This renders GAL{\AE}XI as a potent tool for accurate and efficient simulations of compressible flows in the realm of exascale computing and the associated new HPC architectures.
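For readers reproducing such a convergence study, the observed order of convergence is typically estimated from errors on successively refined meshes via e ≈ C·h^p. A small helper with illustrative numbers:

```python
# Estimate the observed order of convergence p from errors on refined meshes.
import math

def observed_order(errors, spacings):
    return [
        math.log(e1 / e2) / math.log(h1 / h2)
        for (e1, e2), (h1, h2) in zip(
            zip(errors, errors[1:]), zip(spacings, spacings[1:])
        )
    ]

errors = [1.2e-2, 7.6e-4, 4.8e-5]      # e.g., L2 errors on successive meshes
spacings = [1 / 8, 1 / 16, 1 / 32]
print(observed_order(errors, spacings))  # ~[3.98, 3.98] for a 4th-order scheme
```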
1007.2449
Kamran Karimi
Kamran Karimi
A Brief Introduction to Temporality and Causality
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Causality is a non-obvious concept that is often considered to be related to temporality. In this paper we present a number of past and present approaches to the definition of temporality and causality from philosophical, physical, and computational points of view. We note that time is an important ingredient in many relationships and phenomena. The topic is then divided into the two main areas of temporal discovery, which is concerned with finding relations that are stretched over time, and causal discovery, where a claim is made as to the causal influence of certain events on others. We present a number of computational tools used for attempting to automatically discover temporal and causal relations in data.
[ { "created": "Wed, 14 Jul 2010 22:41:30 GMT", "version": "v1" } ]
2010-07-16
[ [ "Karimi", "Kamran", "" ] ]
Causality is a non-obvious concept that is often considered to be related to temporality. In this paper we present a number of past and present approaches to the definition of temporality and causality from philosophical, physical, and computational points of view. We note that time is an important ingredient in many relationships and phenomena. The topic is then divided into the two main areas of temporal discovery, which is concerned with finding relations that are stretched over time, and causal discovery, where a claim is made as to the causal influence of certain events on others. We present a number of computational tools used for attempting to automatically discover temporal and causal relations in data.
2403.20195
Victor Silva Dos Santos
Victor Silva dos Santos, Erwan Gloaguen, Shiva Tirdad
Enhancing Lithological Mapping with Spatially Constrained Bayesian Network (SCB-Net): An Approach for Field Data-Constrained Predictions with Uncertainty Evaluation
17 pages, 3559 words, 14 figures
null
null
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
Geological maps are an extremely valuable source of information for the Earth sciences. They provide insights into mineral exploration, vulnerability to natural hazards, and many other applications. These maps are created using numerical or conceptual models that use geological observations to extrapolate data. Geostatistical techniques have traditionally been used to generate reliable predictions that take into account the spatial patterns inherent in the data. However, as the number of auxiliary variables increases, these methods become more labor-intensive. Additionally, traditional machine learning methods often struggle with spatially correlated data and extracting valuable non-linear information from geoscientific datasets. To address these limitations, a new architecture called the Spatially Constrained Bayesian Network (SCB-Net) has been developed. The SCB-Net aims to effectively exploit the information from auxiliary variables while producing spatially constrained predictions. It is made up of two parts: the first focuses on learning underlying patterns in the auxiliary variables, while the second integrates ground-truth data and the learned embeddings from the first part. Moreover, to assess model uncertainty, a technique called Monte Carlo dropout is used as a Bayesian approximation. The SCB-Net has been applied to two selected areas in northern Quebec, Canada, and has demonstrated its potential in generating field-data-constrained lithological maps while allowing assessment of prediction uncertainty for decision-making. This study highlights the promising advancements of deep neural networks in geostatistics, particularly in handling complex spatial feature learning tasks, leading to improved spatial information techniques.
[ { "created": "Fri, 29 Mar 2024 14:17:30 GMT", "version": "v1" } ]
2024-04-01
[ [ "Santos", "Victor Silva dos", "" ], [ "Gloaguen", "Erwan", "" ], [ "Tirdad", "Shiva", "" ] ]
Geological maps are an extremely valuable source of information for the Earth sciences. They provide insights into mineral exploration, vulnerability to natural hazards, and many other applications. These maps are created using numerical or conceptual models that use geological observations to extrapolate data. Geostatistical techniques have traditionally been used to generate reliable predictions that take into account the spatial patterns inherent in the data. However, as the number of auxiliary variables increases, these methods become more labor-intensive. Additionally, traditional machine learning methods often struggle with spatially correlated data and extracting valuable non-linear information from geoscientific datasets. To address these limitations, a new architecture called the Spatially Constrained Bayesian Network (SCB-Net) has been developed. The SCB-Net aims to effectively exploit the information from auxiliary variables while producing spatially constrained predictions. It is made up of two parts: the first focuses on learning underlying patterns in the auxiliary variables, while the second integrates ground-truth data and the learned embeddings from the first part. Moreover, to assess model uncertainty, a technique called Monte Carlo dropout is used as a Bayesian approximation. The SCB-Net has been applied to two selected areas in northern Quebec, Canada, and has demonstrated its potential in generating field-data-constrained lithological maps while allowing assessment of prediction uncertainty for decision-making. This study highlights the promising advancements of deep neural networks in geostatistics, particularly in handling complex spatial feature learning tasks, leading to improved spatial information techniques.
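A minimal sketch of the Monte Carlo dropout technique mentioned above: keep dropout stochastic at inference and aggregate repeated forward passes into a mean prediction and a dispersion-based uncertainty. The toy network below merely stands in for SCB-Net, whose architecture is not reproduced here.

```python
# MC dropout as a Bayesian approximation (sketch).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 5),               # e.g., 5 lithological classes
)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()                   # keeps Dropout stochastic (toy net has no BatchNorm)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(0), probs.std(0)   # prediction and per-class uncertainty

x = torch.randn(8, 16)
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)
```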
2006.03622
Saman Motamed
Saman Motamed and Patrik Rogalla and Farzad Khalvati
Data Augmentation using Generative Adversarial Networks (GANs) for GAN-based Detection of Pneumonia and COVID-19 in Chest X-ray Images
null
null
null
null
cs.CV cs.LG eess.IV q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Successful training of convolutional neural networks (CNNs) requires a substantial amount of data. With small datasets networks generalize poorly. Data Augmentation techniques improve the generalizability of neural networks by using existing training data more effectively. Standard data augmentation methods, however, produce only a limited set of plausible alternative data. Generative Adversarial Networks (GANs) have been utilized to generate new data and improve the performance of CNNs. Nevertheless, data augmentation techniques for training GANs are under-explored compared to CNNs. In this work, we propose a new GAN architecture for augmentation of chest X-rays for semi-supervised detection of pneumonia and COVID-19 using generative models. We show that the proposed GAN can be used to effectively augment data and improve classification accuracy of disease in chest X-rays for pneumonia and COVID-19. We compare our augmentation GAN model with Deep Convolutional GAN and traditional augmentation methods (rotate, zoom, etc.) on two different X-ray datasets and show that our GAN-based augmentation method surpasses other augmentation methods for training a GAN in detecting anomalies in X-ray images.
[ { "created": "Fri, 5 Jun 2020 18:30:20 GMT", "version": "v1" }, { "created": "Tue, 12 Jan 2021 20:27:04 GMT", "version": "v2" } ]
2021-01-14
[ [ "Motamed", "Saman", "" ], [ "Rogalla", "Patrik", "" ], [ "Khalvati", "Farzad", "" ] ]
Successful training of convolutional neural networks (CNNs) requires a substantial amount of data. With small datasets networks generalize poorly. Data Augmentation techniques improve the generalizability of neural networks by using existing training data more effectively. Standard data augmentation methods, however, produce only a limited set of plausible alternative data. Generative Adversarial Networks (GANs) have been utilized to generate new data and improve the performance of CNNs. Nevertheless, data augmentation techniques for training GANs are under-explored compared to CNNs. In this work, we propose a new GAN architecture for augmentation of chest X-rays for semi-supervised detection of pneumonia and COVID-19 using generative models. We show that the proposed GAN can be used to effectively augment data and improve classification accuracy of disease in chest X-rays for pneumonia and COVID-19. We compare our augmentation GAN model with Deep Convolutional GAN and traditional augmentation methods (rotate, zoom, etc.) on two different X-ray datasets and show that our GAN-based augmentation method surpasses other augmentation methods for training a GAN in detecting anomalies in X-ray images.
2308.02950
Louis Vervoort
Louis Vervoort, Vitaliy Mizyakov, Anastasia Ugleva
A criterion for Artificial General Intelligence: hypothetic-deductive reasoning, tested on ChatGPT
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
We argue that a key reasoning skill that any advanced AI, say GPT-4, should master in order to qualify as a 'thinking machine', or AGI, is hypothetic-deductive reasoning. Problem-solving or question-answering can quite generally be construed as involving two steps: hypothesizing that a certain set of hypotheses T applies to the problem or question at hand, and deducing the solution or answer from T - hence the term hypothetic-deductive reasoning. An elementary proxy of hypothetic-deductive reasoning is causal reasoning. We propose simple tests for both types of reasoning, and apply them to ChatGPT. Our study shows that, at present, the chatbot has a limited capacity for either type of reasoning as soon as the problems considered are somewhat complex. However, we submit that if an AI were capable of this type of reasoning in a sufficiently wide range of contexts, it would be an AGI.
[ { "created": "Sat, 5 Aug 2023 20:33:13 GMT", "version": "v1" } ]
2023-08-08
[ [ "Vervoort", "Louis", "" ], [ "Mizyakov", "Vitaliy", "" ], [ "Ugleva", "Anastasia", "" ] ]
We argue that a key reasoning skill that any advanced AI, say GPT-4, should master in order to qualify as a 'thinking machine', or AGI, is hypothetic-deductive reasoning. Problem-solving or question-answering can quite generally be construed as involving two steps: hypothesizing that a certain set of hypotheses T applies to the problem or question at hand, and deducing the solution or answer from T - hence the term hypothetic-deductive reasoning. An elementary proxy of hypothetic-deductive reasoning is causal reasoning. We propose simple tests for both types of reasoning, and apply them to ChatGPT. Our study shows that, at present, the chatbot has a limited capacity for either type of reasoning as soon as the problems considered are somewhat complex. However, we submit that if an AI were capable of this type of reasoning in a sufficiently wide range of contexts, it would be an AGI.
1812.10550
Huy-Hieu Pham
Huy-Hieu Pham and Louahdi Khoudour and Alain Crouzil and Pablo Zegers and Sergio A. Velastin
Learning to Recognize 3D Human Action from A New Skeleton-based Representation Using Deep Convolutional Neural Networks
This paper is a preprint of a paper published to IET Computer Vision. The copy of the record will be available at the IET Digital Library
null
10.1049/iet-cvi.2018.5014
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognizing human actions in untrimmed videos is an important and challenging task. An effective 3D motion representation and a powerful learning model are two key factors influencing recognition performance. In this paper we introduce a new skeleton-based representation for 3D action recognition in videos. The key idea of the proposed representation is to transform the 3D joint coordinates of the human body carried in skeleton sequences into RGB images via a color encoding process. By normalizing the 3D joint coordinates and dividing each skeleton frame into five parts, where the joints are concatenated according to the order of their physical connections, the color-coded representation is able to represent spatio-temporal evolutions of complex 3D motions, independently of the length of each sequence. We then design and train different Deep Convolutional Neural Networks (D-CNNs) based on the Residual Network architecture (ResNet) on the obtained image-based representations to learn 3D motion features and classify them into action classes. Our method is evaluated on two widely used action recognition benchmarks: MSR Action3D and NTU-RGB+D, a very large-scale dataset for 3D human action recognition. The experimental results demonstrate that the proposed method outperforms previous state-of-the-art approaches whilst requiring less computation for training and prediction.
[ { "created": "Wed, 26 Dec 2018 21:47:08 GMT", "version": "v1" } ]
2018-12-31
[ [ "Pham", "Huy-Hieu", "" ], [ "Khoudour", "Louahdi", "" ], [ "Crouzil", "Alain", "" ], [ "Zegers", "Pablo", "" ], [ "Velastin", "Sergio A.", "" ] ]
Recognizing human actions in untrimmed videos is an important and challenging task. An effective 3D motion representation and a powerful learning model are two key factors influencing recognition performance. In this paper we introduce a new skeleton-based representation for 3D action recognition in videos. The key idea of the proposed representation is to transform the 3D joint coordinates of the human body carried in skeleton sequences into RGB images via a color encoding process. By normalizing the 3D joint coordinates and dividing each skeleton frame into five parts, where the joints are concatenated according to the order of their physical connections, the color-coded representation is able to represent spatio-temporal evolutions of complex 3D motions, independently of the length of each sequence. We then design and train different Deep Convolutional Neural Networks (D-CNNs) based on the Residual Network architecture (ResNet) on the obtained image-based representations to learn 3D motion features and classify them into action classes. Our method is evaluated on two widely used action recognition benchmarks: MSR Action3D and NTU-RGB+D, a very large-scale dataset for 3D human action recognition. The experimental results demonstrate that the proposed method outperforms previous state-of-the-art approaches whilst requiring less computation for training and prediction.
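The core encoding step can be sketched in a few lines: normalize the 3D joint coordinates of a sequence and write them into a (frames × joints × 3) RGB image. The per-axis normalization and the joint-reordering hook below are simplifications of the paper's five-part scheme; shapes are illustrative.

```python
# Color-encode a 3D skeleton sequence as an RGB image (sketch).
import numpy as np

def skeleton_to_rgb(seq: np.ndarray, joint_order=None) -> np.ndarray:
    """seq: (T frames, J joints, 3 coords) -> uint8 image of shape (T, J, 3)."""
    if joint_order is not None:               # concatenate joints by body part
        seq = seq[:, joint_order, :]
    lo = seq.min(axis=(0, 1), keepdims=True)
    hi = seq.max(axis=(0, 1), keepdims=True)
    norm = (seq - lo) / (hi - lo + 1e-8)      # per-axis normalization to [0, 1]
    return (255 * norm).astype(np.uint8)      # x, y, z -> R, G, B

seq = np.random.randn(60, 25, 3)              # e.g., 60 frames, 25 Kinect joints
img = skeleton_to_rgb(seq)
print(img.shape, img.dtype)                   # (60, 25, 3) uint8
```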
1602.08456
Masaki Ogura Dr.
Masaki Ogura and Victor M. Preciado
Epidemic Processes over Adaptive State-Dependent Networks
null
Phys. Rev. E 93, 062316 (2016)
10.1103/PhysRevE.93.062316
null
cs.SI math.PR physics.soc-ph q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the dynamics of epidemic processes taking place in adaptive networks of arbitrary topology. We focus our study on the adaptive susceptible-infected-susceptible (ASIS) model, where healthy individuals are allowed to temporarily cut edges connecting them to infected nodes in order to prevent the spread of the infection. We derive a closed-form expression for a lower bound on the epidemic threshold of the ASIS model in arbitrary networks with heterogeneous node and edge dynamics. For networks with homogeneous node and edge dynamics, we show that the resulting lower bound is proportional to the epidemic threshold of the standard SIS model over static networks, with a proportionality constant that depends on the adaptation rates. Furthermore, based on our results, we propose an efficient algorithm to optimally tune the adaptation rates in order to eradicate epidemic outbreaks in arbitrary networks. We confirm the tightness of the proposed lower bounds with several numerical simulations and compare our optimal adaptation rates with popular centrality measures.
[ { "created": "Fri, 26 Feb 2016 19:56:41 GMT", "version": "v1" }, { "created": "Thu, 9 Jun 2016 17:30:07 GMT", "version": "v2" } ]
2016-06-29
[ [ "Ogura", "Masaki", "" ], [ "Preciado", "Victor M.", "" ] ]
In this paper, we study the dynamics of epidemic processes taking place in adaptive networks of arbitrary topology. We focus our study on the adaptive susceptible-infected-susceptible (ASIS) model, where healthy individuals are allowed to temporarily cut edges connecting them to infected nodes in order to prevent the spread of the infection. We derive a closed-form expression for a lower bound on the epidemic threshold of the ASIS model in arbitrary networks with heterogeneous node and edge dynamics. For networks with homogeneous node and edge dynamics, we show that the resulting lower bound is proportional to the epidemic threshold of the standard SIS model over static networks, with a proportionality constant that depends on the adaptation rates. Furthermore, based on our results, we propose an efficient algorithm to optimally tune the adaptation rates in order to eradicate epidemic outbreaks in arbitrary networks. We confirm the tightness of the proposed lower bounds with several numerical simulations and compare our optimal adaptation rates with popular centrality measures.
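A discrete-time toy simulation of the ASIS dynamics helps fix intuition: infection spreads along edges, susceptible nodes cut edges to infected neighbors, and cut edges are later restored. The per-step probabilities below approximate the continuous-time rates and are illustrative only.

```python
# Toy discrete-time ASIS simulation on a random graph.
import random
import networkx as nx

def asis_step(G, infected, cut, beta=0.1, delta=0.05, phi=0.2, psi=0.05):
    new_inf = set(infected)
    for u, v in list(G.edges):
        if (u in infected) != (v in infected):       # susceptible-infected edge
            s = u if v in infected else v
            if random.random() < beta:
                new_inf.add(s)                        # infection along the edge
            elif random.random() < phi:
                G.remove_edge(u, v); cut.add((u, v))  # adaptive edge cutting
    for u, v in list(cut):
        if random.random() < psi:
            G.add_edge(u, v); cut.discard((u, v))     # edge restoration
    return {v for v in new_inf if random.random() >= delta}, cut  # recovery

G = nx.erdos_renyi_graph(100, 0.05, seed=0)
infected, cut = {0, 1, 2}, set()
for _ in range(200):
    infected, cut = asis_step(G, infected, cut)
print("infected at end:", len(infected))
```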
1802.08984
Kalev Alpernas
Kalev Alpernas (Tel Aviv University), Cormac Flanagan (UC Santa Cruz), Sadjad Fouladi (Stanford University), Leonid Ryzhyk (VMware Research), Mooly Sagiv (Tel Aviv University), Thomas Schmitz (UC Santa Cruz) and Keith Winstein (Stanford University)
Secure Serverless Computing Using Dynamic Information Flow Control
null
null
null
null
cs.PL cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rise of serverless computing provides an opportunity to rethink cloud security. We present an approach for securing serverless systems using a novel form of dynamic information flow control (IFC). We show that in serverless applications, the termination channel found in most existing IFC systems can be arbitrarily amplified via multiple concurrent requests, necessitating a stronger termination-sensitive non-interference guarantee, which we achieve using a combination of static labeling of serverless processes and dynamic faceted labeling of persistent data. We describe our implementation of this approach on top of JavaScript for AWS Lambda and OpenWhisk serverless platforms, and present three realistic case studies showing that it can enforce important IFC security properties with low overhead.
[ { "created": "Sun, 25 Feb 2018 10:36:56 GMT", "version": "v1" } ]
2018-02-27
[ [ "Alpernas", "Kalev", "", "Tel Aviv University" ], [ "Flanagan", "Cormac", "", "UC Santa Cruz" ], [ "Fouladi", "Sadjad", "", "Stanford University" ], [ "Ryzhyk", "Leonid", "", "VMware Research" ], [ "Sagiv", "Mooly", "", "Tel Aviv University" ], [ "Schmitz", "Thomas", "", "UC Santa Cruz" ], [ "Winstein", "Keith", "", "Stanford University" ] ]
The rise of serverless computing provides an opportunity to rethink cloud security. We present an approach for securing serverless systems using a novel form of dynamic information flow control (IFC). We show that in serverless applications, the termination channel found in most existing IFC systems can be arbitrarily amplified via multiple concurrent requests, necessitating a stronger termination-sensitive non-interference guarantee, which we achieve using a combination of static labeling of serverless processes and dynamic faceted labeling of persistent data. We describe our implementation of this approach on top of JavaScript for AWS Lambda and OpenWhisk serverless platforms, and present three realistic case studies showing that it can enforce important IFC security properties with low overhead.
0906.5233
Toby Walsh
George Katsirelos, Sebastian Maneth, Nina Narodytska, Toby Walsh
Restricted Global Grammar Constraints
Proceedings of the 15th International Conference on Principles and Practice of Constraint Programming, Lisbon, Portugal. September 2009
null
null
null
cs.AI cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the global GRAMMAR constraint over restricted classes of context-free grammars, such as deterministic and unambiguous context-free grammars. We show that detecting disentailment for the GRAMMAR constraint in these cases is as hard as parsing an unrestricted context-free grammar. We also consider the class of linear grammars and give a propagator that runs in quadratic time. Finally, to demonstrate the use of linear grammars, we show that a weighted linear GRAMMAR constraint can efficiently encode the EDITDISTANCE constraint, as well as a conjunction of the EDITDISTANCE constraint and the REGULAR constraint.
[ { "created": "Mon, 29 Jun 2009 09:23:39 GMT", "version": "v1" } ]
2009-06-30
[ [ "Katsirelos", "George", "" ], [ "Maneth", "Sebastian", "" ], [ "Narodytska", "Nina", "" ], [ "Walsh", "Toby", "" ] ]
We investigate the global GRAMMAR constraint over restricted classes of context-free grammars, such as deterministic and unambiguous context-free grammars. We show that detecting disentailment for the GRAMMAR constraint in these cases is as hard as parsing an unrestricted context-free grammar. We also consider the class of linear grammars and give a propagator that runs in quadratic time. Finally, to demonstrate the use of linear grammars, we show that a weighted linear GRAMMAR constraint can efficiently encode the EDITDISTANCE constraint, as well as a conjunction of the EDITDISTANCE constraint and the REGULAR constraint.
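For reference, the EDITDISTANCE constraint mentioned above bounds the classic Levenshtein distance; the standard quadratic dynamic program is sketched below with unit costs (the weighted linear-grammar encoding generalizes these costs).

```python
# Classic O(n*m) Levenshtein edit distance with unit costs.
def edit_distance(a: str, b: str) -> int:
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + 1,                           # deletion
                d[i][j - 1] + 1,                           # insertion
                d[i - 1][j - 1] + (a[i - 1] != b[j - 1]),  # substitution
            )
    return d[n][m]

print(edit_distance("kitten", "sitting"))  # 3
```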
1006.5188
Nicola Di Mauro
Nicola Di Mauro and Teresa M.A. Basile and Stefano Ferilli and Floriana Esposito
Feature Construction for Relational Sequence Learning
15 pages
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We tackle the problem of multi-class relational sequence learning using relevant patterns discovered from a set of labelled sequences. To deal with this problem, each relational sequence is first mapped into a feature vector using the result of a feature construction method. Since the efficacy of sequence learning algorithms strongly depends on the features used to represent the sequences, the second step is to find an optimal subset of the constructed features leading to high classification accuracy. This feature selection task is solved by adopting a wrapper approach that uses a stochastic local search algorithm embedding a naive Bayes classifier. The performance of the proposed method applied to a real-world dataset shows an improvement when compared to other established methods, such as hidden Markov models, Fisher kernels and conditional random fields for relational sequences.
[ { "created": "Sun, 27 Jun 2010 08:56:11 GMT", "version": "v1" } ]
2010-06-29
[ [ "Di Mauro", "Nicola", "" ], [ "Basile", "Teresa M. A.", "" ], [ "Ferilli", "Stefano", "" ], [ "Esposito", "Floriana", "" ] ]
We tackle the problem of multi-class relational sequence learning using relevant patterns discovered from a set of labelled sequences. To deal with this problem, each relational sequence is first mapped into a feature vector using the result of a feature construction method. Since the efficacy of sequence learning algorithms strongly depends on the features used to represent the sequences, the second step is to find an optimal subset of the constructed features leading to high classification accuracy. This feature selection task is solved by adopting a wrapper approach that uses a stochastic local search algorithm embedding a naive Bayes classifier. The performance of the proposed method applied to a real-world dataset shows an improvement when compared to other established methods, such as hidden Markov models, Fisher kernels and conditional random fields for relational sequences.
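A compact sketch of the wrapper step, assuming scikit-learn: stochastic local search over feature-subset bitmasks, scoring each candidate with a cross-validated naive Bayes classifier. The synthetic data, the bit-flip neighborhood, and the acceptance rule are illustrative choices, not the paper's exact search.

```python
# Wrapper feature selection: stochastic local search + naive Bayes scoring.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
rng = np.random.default_rng(0)

def score(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask], y, cv=5).mean()

mask = rng.random(X.shape[1]) < 0.5           # random initial subset
best = score(mask)
for _ in range(100):                          # stochastic local search
    cand = mask.copy()
    cand[rng.integers(len(cand))] ^= True     # flip one feature in/out
    s = score(cand)
    if s >= best:                             # accept non-worsening moves
        mask, best = cand, s
print(best, mask.nonzero()[0])
```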
1905.10077
Lixin Su
Lixin Su, Jiafeng Guo, Yixing Fan, Yanyan Lan, and Xueqi Cheng
Controlling Risk of Web Question Answering
42nd International ACM SIGIR Conference on Research and Development in Information Retrieval
null
10.1145/3331184.3331261
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Web question answering (QA) has become an indispensable component in modern search systems, which can significantly improve users' search experience by providing a direct answer to users' information need. This could be achieved by applying machine reading comprehension (MRC) models over the retrieved passages to extract answers with respect to the search query. With the development of deep learning techniques, state-of-the-art MRC performances have been achieved by recent deep methods. However, existing studies on MRC seldom address the predictive uncertainty issue, i.e., how likely it is that the prediction of an MRC model is wrong, leading to uncontrollable risks in real-world Web QA applications. In this work, we first conduct an in-depth investigation of the risk of Web QA. We then introduce a novel risk control framework, which consists of a qualify model for uncertainty estimation based on the probe idea, and a decision model for selective output. For evaluation, we introduce risk-related metrics, rather than the traditional EM and F1 used in MRC, for the evaluation of risk-aware Web QA. The empirical results over both the real-world Web QA dataset and the academic MRC benchmark collection demonstrate the effectiveness of our approach.
[ { "created": "Fri, 24 May 2019 07:55:42 GMT", "version": "v1" }, { "created": "Mon, 27 May 2019 02:24:32 GMT", "version": "v2" }, { "created": "Thu, 11 Jul 2019 05:10:47 GMT", "version": "v3" } ]
2019-07-12
[ [ "Su", "Lixin", "" ], [ "Guo", "Jiafeng", "" ], [ "Fan", "Yixing", "" ], [ "Lan", "Yanyan", "" ], [ "Cheng", "Xueqi", "" ] ]
Web question answering (QA) has become an indispensable component in modern search systems, which can significantly improve users' search experience by providing a direct answer to users' information need. This could be achieved by applying machine reading comprehension (MRC) models over the retrieved passages to extract answers with respect to the search query. With the development of deep learning techniques, state-of-the-art MRC performances have been achieved by recent deep methods. However, existing studies on MRC seldom address the predictive uncertainty issue, i.e., how likely it is that the prediction of an MRC model is wrong, leading to uncontrollable risks in real-world Web QA applications. In this work, we first conduct an in-depth investigation of the risk of Web QA. We then introduce a novel risk control framework, which consists of a qualify model for uncertainty estimation based on the probe idea, and a decision model for selective output. For evaluation, we introduce risk-related metrics, rather than the traditional EM and F1 used in MRC, for the evaluation of risk-aware Web QA. The empirical results over both the real-world Web QA dataset and the academic MRC benchmark collection demonstrate the effectiveness of our approach.
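The decision step and the risk-related metrics can be sketched independently of any particular MRC model: answer only when the uncertainty estimate clears a threshold, then report risk (error rate among answered questions) against coverage. Scores and labels below are synthetic stand-ins for the qualify model's outputs.

```python
# Selective answering and risk/coverage metrics (sketch with synthetic data).
import numpy as np

rng = np.random.default_rng(0)
confidence = rng.random(1000)                 # stand-in for qualify-model scores
correct = rng.random(1000) < confidence       # toy: roughly calibrated answers

def risk_coverage(confidence, correct, threshold):
    answered = confidence >= threshold        # decision model: answer or abstain
    coverage = answered.mean()
    risk = (~correct[answered]).mean() if answered.any() else 0.0
    return risk, coverage

for t in (0.3, 0.5, 0.7, 0.9):
    r, c = risk_coverage(confidence, correct, t)
    print(f"threshold={t:.1f}  risk={r:.3f}  coverage={c:.3f}")
```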
2209.07220
Hyung-Il Kim
Hyung-Il Kim, Kimin Yun, Yong Man Ro
Face Shape-Guided Deep Feature Alignment for Face Recognition Robust to Face Misalignment
14 pages, 9 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For the past decades, face recognition (FR) has been actively studied in the computer vision and pattern recognition community. Recently, due to advances in deep learning, FR technology has shown high performance on most benchmark datasets. However, when an FR algorithm is applied to a real-world scenario, its performance is still known to be unsatisfactory. This is mainly attributed to the mismatch between training and testing sets. Among such mismatches, face misalignment between training and testing faces is one of the factors that hinder successful FR. To address this limitation, we propose a face shape-guided deep feature alignment framework for FR that is robust to face misalignment. Based on a face shape prior (e.g., face keypoints), we train the proposed deep network by introducing alignment processes, i.e., pixel and feature alignments, between well-aligned and misaligned face images. Through the pixel alignment process, which decodes the aggregated feature extracted from a face image and the face shape prior, we add an auxiliary task to reconstruct the well-aligned face image. Since the aggregated features are linked to the face feature extraction network as a guide via the feature alignment process, the learned face features are robust to face misalignment. Even though face shape estimation is required in the training stage, the additional face alignment process usually incorporated in the conventional FR pipeline is not needed in the testing phase. Through comparative experiments, we validate the effectiveness of the proposed method under face misalignment on FR datasets.
[ { "created": "Thu, 15 Sep 2022 11:23:51 GMT", "version": "v1" } ]
2022-09-16
[ [ "Kim", "Hyung-Il", "" ], [ "Yun", "Kimin", "" ], [ "Ro", "Yong Man", "" ] ]
For the past decades, face recognition (FR) has been actively studied in the computer vision and pattern recognition community. Recently, due to advances in deep learning, FR technology has shown high performance on most benchmark datasets. However, when an FR algorithm is applied to a real-world scenario, its performance is still known to be unsatisfactory. This is mainly attributed to the mismatch between training and testing sets. Among such mismatches, face misalignment between training and testing faces is one of the factors that hinder successful FR. To address this limitation, we propose a face shape-guided deep feature alignment framework for FR that is robust to face misalignment. Based on a face shape prior (e.g., face keypoints), we train the proposed deep network by introducing alignment processes, i.e., pixel and feature alignments, between well-aligned and misaligned face images. Through the pixel alignment process, which decodes the aggregated feature extracted from a face image and the face shape prior, we add an auxiliary task to reconstruct the well-aligned face image. Since the aggregated features are linked to the face feature extraction network as a guide via the feature alignment process, the learned face features are robust to face misalignment. Even though face shape estimation is required in the training stage, the additional face alignment process usually incorporated in the conventional FR pipeline is not needed in the testing phase. Through comparative experiments, we validate the effectiveness of the proposed method under face misalignment on FR datasets.
2302.10441
Zhuohang Li
Zhuohang Li, Jiaxin Zhang, Jian Liu
Speech Privacy Leakage from Shared Gradients in Distributed Learning
null
null
null
null
cs.LG cs.CR
http://creativecommons.org/licenses/by/4.0/
Distributed machine learning paradigms, such as federated learning, have been recently adopted in many privacy-critical applications for speech analysis. However, such frameworks are vulnerable to privacy leakage attacks from shared gradients. Despite extensive efforts in the image domain, the exploration of speech privacy leakage from gradients is quite limited. In this paper, we explore methods for recovering private speech/speaker information from the shared gradients in distributed learning settings. We conduct experiments on a keyword spotting model with two different types of speech features to quantify the amount of leaked information by measuring the similarity between the original and recovered speech signals. We further demonstrate the feasibility of inferring various levels of side-channel information, including speech content and speaker identity, under the distributed learning framework without accessing the user's data.
[ { "created": "Tue, 21 Feb 2023 04:48:29 GMT", "version": "v1" } ]
2023-02-22
[ [ "Li", "Zhuohang", "" ], [ "Zhang", "Jiaxin", "" ], [ "Liu", "Jian", "" ] ]
Distributed machine learning paradigms, such as federated learning, have been recently adopted in many privacy-critical applications for speech analysis. However, such frameworks are vulnerable to privacy leakage attacks from shared gradients. Despite extensive efforts in the image domain, the exploration of speech privacy leakage from gradients is quite limited. In this paper, we explore methods for recovering private speech/speaker information from the shared gradients in distributed learning settings. We conduct experiments on a keyword spotting model with two different types of speech features to quantify the amount of leaked information by measuring the similarity between the original and recovered speech signals. We further demonstrate the feasibility of inferring various levels of side-channel information, including speech content and speaker identity, under the distributed learning framework without accessing the user's data.
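A minimal sketch of gradient-based reconstruction in the spirit of deep-leakage-from-gradients attacks: optimize a dummy input so that its gradients match the shared ones. The tiny dense network stands in for a keyword-spotting model, the label is assumed known for simplicity, and the paper's actual attack details may differ.

```python
# Gradient-matching reconstruction of a private input (sketch).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(40, 32), nn.Tanh(), nn.Linear(32, 10))
loss_fn = nn.CrossEntropyLoss()

x_true = torch.randn(1, 40)                  # e.g., one frame of speech features
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

x_dummy = torch.randn(1, 40, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy])

def closure():
    opt.zero_grad()
    loss = loss_fn(model(x_dummy), y_true)   # label assumed known here
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    match = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
    match.backward()
    return match

for _ in range(20):
    opt.step(closure)
print("reconstruction MSE:", float(((x_dummy - x_true) ** 2).mean()))
```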
2403.05732
Nitsan Soffair
Nitsan Soffair, Shie Mannor
Conservative DDPG -- Pessimistic RL without Ensemble
Paper not ready
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
DDPG is hindered by the overestimation bias problem, wherein its $Q$-estimates tend to overstate the actual $Q$-values. Traditional solutions to this bias involve ensemble-based methods, which require significant computational resources, or complex log-policy-based approaches, which are difficult to understand and implement. In contrast, we propose a straightforward solution using a $Q$-target and incorporating a behavioral cloning (BC) loss penalty. This solution, acting as an uncertainty measure, can be easily implemented with minimal code and without the need for an ensemble. Our empirical findings strongly support the superiority of Conservative DDPG over DDPG across various MuJoCo and Bullet tasks. We consistently observe better performance in all evaluated tasks and even competitive or superior performance compared to TD3 and TD7, all achieved with significantly reduced computational requirements.
[ { "created": "Fri, 8 Mar 2024 23:59:38 GMT", "version": "v1" }, { "created": "Sun, 2 Jun 2024 19:40:48 GMT", "version": "v2" } ]
2024-06-04
[ [ "Soffair", "Nitsan", "" ], [ "Mannor", "Shie", "" ] ]
DDPG is hindered by the overestimation bias problem, wherein its $Q$-estimates tend to overstate the actual $Q$-values. Traditional solutions to this bias involve ensemble-based methods, which require significant computational resources, or complex log-policy-based approaches, which are difficult to understand and implement. In contrast, we propose a straightforward solution using a $Q$-target and incorporating a behavioral cloning (BC) loss penalty. This solution, acting as an uncertainty measure, can be easily implemented with minimal code and without the need for an ensemble. Our empirical findings strongly support the superiority of Conservative DDPG over DDPG across various MuJoCo and Bullet tasks. We consistently observe better performance in all evaluated tasks and even competitive or superior performance compared to TD3 and TD7, all achieved with significantly reduced computational requirements.
2102.06930
Kleanthis Avramidis
Kleanthis Avramidis, Agelos Kratimenos, Christos Garoufis, Athanasia Zlatintsi and Petros Maragos
Deep Convolutional and Recurrent Networks for Polyphonic Instrument Classification from Monophonic Raw Audio Waveforms
5 pages, 4 figures, 6 tables, to be published in the Proc. of the 46th International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021) @ Toronto, Ontario, Canada
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
Sound Event Detection and Audio Classification tasks are traditionally addressed through time-frequency representations of audio signals such as spectrograms. However, the emergence of deep neural networks as efficient feature extractors has enabled the direct use of audio signals for classification purposes. In this paper, we attempt to recognize musical instruments in polyphonic audio by only feeding their raw waveforms into deep learning models. Various recurrent and convolutional architectures incorporating residual connections are examined and parameterized in order to build end-to-end classifiers with low computational cost and only minimal preprocessing. We obtain competitive classification scores and useful instrument-wise insight through the IRMAS test set, utilizing a parallel CNN-BiGRU model with multiple residual connections, while maintaining a significantly reduced number of trainable parameters.
[ { "created": "Sat, 13 Feb 2021 13:44:46 GMT", "version": "v1" } ]
2021-02-16
[ [ "Avramidis", "Kleanthis", "" ], [ "Kratimenos", "Agelos", "" ], [ "Garoufis", "Christos", "" ], [ "Zlatintsi", "Athanasia", "" ], [ "Maragos", "Petros", "" ] ]
Sound Event Detection and Audio Classification tasks are traditionally addressed through time-frequency representations of audio signals such as spectrograms. However, the emergence of deep neural networks as efficient feature extractors has enabled the direct use of audio signals for classification purposes. In this paper, we attempt to recognize musical instruments in polyphonic audio by only feeding their raw waveforms into deep learning models. Various recurrent and convolutional architectures incorporating residual connections are examined and parameterized in order to build end-to-end classifiers with low computational cost and only minimal preprocessing. We obtain competitive classification scores and useful instrument-wise insight through the IRMAS test set, utilizing a parallel CNN-BiGRU model with multiple residual connections, while maintaining a significantly reduced number of trainable parameters.
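A much-reduced stand-in for the kind of model described: 1-D convolutions over the raw waveform, a bidirectional GRU over the resulting feature sequence, and one residual connection. Layer sizes, the mean pooling, and the single residual branch are illustrative assumptions; the paper's parallel architecture is larger.

```python
# Toy raw-waveform CNN-BiGRU classifier with a residual connection.
import torch
import torch.nn as nn

class RawCNNBiGRU(nn.Module):
    def __init__(self, n_classes=11):                 # IRMAS has 11 instruments
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv1d(1, 32, 80, stride=4),
                                   nn.BatchNorm1d(32), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv1d(32, 32, 3, padding=1),
                                   nn.BatchNorm1d(32), nn.ReLU())
        self.gru = nn.GRU(32, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, wav):                           # wav: (batch, samples)
        h = self.conv1(wav.unsqueeze(1))
        h = h + self.conv2(h)                         # residual connection
        out, _ = self.gru(h.transpose(1, 2))          # (batch, time, features)
        return self.head(out.mean(dim=1))             # logits; sigmoid for multilabel

model = RawCNNBiGRU()
print(model(torch.randn(2, 22050)).shape)             # 1 s of 22.05 kHz audio
```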
1711.09408
Jonathan Zhu
Jonathan J. H. Zhu, Hexin Chen, Tai-Quan Peng, Xiao Fan Liu and Haixing Dai
How to Measure Sessions of Mobile Device Use? Quantification, Evaluation, and Applications
Preprint of forthcoming article in Mobile Media & Communication
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research on mobile phone use often starts with the question "How much time do users spend on their phones?". The question involves an equal-length measure that captures the duration of mobile phone use but does not tackle the other temporal characteristics of user behavior, such as frequency, timing, and sequence. In this study, we propose a variable-length measure called "session" to uncover these unmeasured temporal characteristics. We use an open-source dataset to demonstrate how to quantify sessions, aggregate the sessions to higher units of analysis within and across users, evaluate the results, and apply the measure for theoretical or practical purposes.
[ { "created": "Sun, 26 Nov 2017 15:32:22 GMT", "version": "v1" } ]
2017-11-28
[ [ "Zhu", "Jonathan J. H.", "" ], [ "Chen", "Hexin", "" ], [ "Peng", "Tai-Quan", "" ], [ "Liu", "Xiao Fan", "" ], [ "Dai", "Haixing", "" ] ]
Research on mobile phone use often starts with the question "How much time do users spend on their phones?". The question involves an equal-length measure that captures the duration of mobile phone use but does not tackle the other temporal characteristics of user behavior, such as frequency, timing, and sequence. In this study, we propose a variable-length measure called "session" to uncover these unmeasured temporal characteristics. We use an open-source dataset to demonstrate how to quantify sessions, aggregate the sessions to higher units of analysis within and across users, evaluate the results, and apply the measure for theoretical or practical purposes.
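The session measure itself reduces to a simple rule: start a new session whenever the gap between consecutive events exceeds an inactivity threshold. A pandas sketch, where the 5-minute threshold is the analyst's choice rather than a value fixed by the paper:

```python
# Split event timestamps into sessions by inactivity gaps, then aggregate.
import pandas as pd

def sessionize(timestamps, gap="5min"):
    ts = pd.Series(sorted(pd.to_datetime(timestamps)))
    new_session = ts.diff() > pd.Timedelta(gap)   # True where a new session starts
    return new_session.cumsum()                   # session id per event

events = ["2017-11-26 09:00:05", "2017-11-26 09:01:10",
          "2017-11-26 12:30:00", "2017-11-26 12:31:45"]
ids = sessionize(events)
print(ids.tolist())                               # [0, 0, 1, 1]

# Frequency, timing, and duration then follow from per-session aggregation:
df = pd.DataFrame({"t": pd.to_datetime(events), "session": ids})
print(df.groupby("session")["t"].agg(["min", "max", "count"]))
```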
2110.09234
Martha Barnard
Martha Barnard (1), Radhika Iyer (1 and 2), Sara Y. Del Valle (1), Ashlynn R. Daughton (1) ((1) A-1 Information Systems and Modeling, Los Alamos National Lab, Los Alamos, NM, USA, (2) Department of Political Science and Department of Computing, Data Science, and Society, University of California, Berkeley, Berkeley, CA, USA)
Impact of COVID-19 Policies and Misinformation on Social Unrest
21 pages, 9 figures
null
null
LA-UR-21-29745
cs.CY cs.LG stat.AP
http://creativecommons.org/licenses/by-nc-sa/4.0/
The novel coronavirus disease (COVID-19) pandemic has impacted every corner of earth, disrupting governments and leading to socioeconomic instability. This crisis has prompted questions surrounding how different sectors of society interact and influence each other during times of change and stress. Given the unprecedented economic and societal impacts of this pandemic, many new data sources have become available, allowing us to quantitatively explore these associations. Understanding these relationships can help us better prepare for future disasters and mitigate the impacts. Here, we focus on the interplay between social unrest (protests), health outcomes, public health orders, and misinformation in eight countries of Western Europe and four regions of the United States. We created 1-3 week forecasts of both a binary protest metric for identifying times of high protest activity and the overall protest counts over time. We found that for all regions, except Belgium, at least one feature from our various data streams was predictive of protests. However, the accuracy of the protest forecasts varied by country, that is, for roughly half of the countries analyzed, our forecasts outperform a na\"ive model. These mixed results demonstrate the potential of diverse data streams to predict a topic as volatile as protests as well as the difficulties of predicting a situation that is as rapidly evolving as a pandemic.
[ { "created": "Thu, 7 Oct 2021 16:05:10 GMT", "version": "v1" } ]
2021-10-19
[ [ "Barnard", "Martha", "", "1 and 2" ], [ "Iyer", "Radhika", "", "1 and 2" ], [ "Del Valle", "Sara Y.", "" ], [ "Daughton", "Ashlynn R.", "" ] ]
The novel coronavirus disease (COVID-19) pandemic has impacted every corner of earth, disrupting governments and leading to socioeconomic instability. This crisis has prompted questions surrounding how different sectors of society interact and influence each other during times of change and stress. Given the unprecedented economic and societal impacts of this pandemic, many new data sources have become available, allowing us to quantitatively explore these associations. Understanding these relationships can help us better prepare for future disasters and mitigate the impacts. Here, we focus on the interplay between social unrest (protests), health outcomes, public health orders, and misinformation in eight countries of Western Europe and four regions of the United States. We created 1-3 week forecasts of both a binary protest metric for identifying times of high protest activity and the overall protest counts over time. We found that for all regions, except Belgium, at least one feature from our various data streams was predictive of protests. However, the accuracy of the protest forecasts varied by country, that is, for roughly half of the countries analyzed, our forecasts outperform a na\"ive model. These mixed results demonstrate the potential of diverse data streams to predict a topic as volatile as protests as well as the difficulties of predicting a situation that is as rapidly evolving as a pandemic.
2102.04317
Shuquan Ye
Shuquan Ye, Dongdong Chen, Songfang Han, Ziyu Wan, Jing Liao
Meta-PU: An Arbitrary-Scale Upsampling Network for Point Cloud
To appear at TVCG
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point cloud upsampling is vital for the quality of the mesh in three-dimensional reconstruction. Recent research on point cloud upsampling has achieved great success due to the development of deep learning. However, the existing methods regard point cloud upsampling of different scale factors as independent tasks. Thus, the methods need to train a specific model for each scale factor, which is both inefficient and impractical for storage and computation in real applications. To address this limitation, in this work, we propose a novel method called "Meta-PU", the first to support point cloud upsampling of arbitrary scale factors with a single model. In the Meta-PU method, besides the backbone network consisting of residual graph convolution (RGC) blocks, a meta-subnetwork is learned to adjust the weights of the RGC blocks dynamically, and a farthest sampling block is adopted to sample different numbers of points. Together, these two blocks enable our Meta-PU to continuously upsample the point cloud with arbitrary scale factors by using only a single model. In addition, the experiments reveal that training on multiple scales simultaneously is beneficial to each other. Thus, Meta-PU even outperforms the existing methods trained for a specific scale factor only.
[ { "created": "Mon, 8 Feb 2021 16:21:48 GMT", "version": "v1" } ]
2021-02-09
[ [ "Ye", "Shuquan", "" ], [ "Chen", "Dongdong", "" ], [ "Han", "Songfang", "" ], [ "Wan", "Ziyu", "" ], [ "Liao", "Jing", "" ] ]
Point cloud upsampling is vital for the quality of the mesh in three-dimensional reconstruction. Recent research on point cloud upsampling has achieved great success due to the development of deep learning. However, the existing methods regard point cloud upsampling of different scale factors as independent tasks. Thus, the methods need to train a specific model for each scale factor, which is both inefficient and impractical for storage and computation in real applications. To address this limitation, in this work, we propose a novel method called "Meta-PU", the first to support point cloud upsampling of arbitrary scale factors with a single model. In the Meta-PU method, besides the backbone network consisting of residual graph convolution (RGC) blocks, a meta-subnetwork is learned to adjust the weights of the RGC blocks dynamically, and a farthest sampling block is adopted to sample different numbers of points. Together, these two blocks enable our Meta-PU to continuously upsample the point cloud with arbitrary scale factors by using only a single model. In addition, the experiments reveal that training on multiple scales simultaneously is beneficial to each other. Thus, Meta-PU even outperforms the existing methods trained for a specific scale factor only.
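The farthest sampling block mentioned above is a standard primitive; a plain NumPy sketch of farthest point sampling (O(k·n), arbitrary seed point) follows for reference.

```python
# Farthest point sampling: greedily pick points maximizing the minimum
# distance to the already-chosen set.
import numpy as np

def farthest_point_sampling(points: np.ndarray, k: int) -> np.ndarray:
    chosen = [0]                                # arbitrary (or random) seed point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        idx = int(dist.argmax())                # farthest from the chosen set
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return points[chosen]

cloud = np.random.rand(4096, 3)
print(farthest_point_sampling(cloud, 1024).shape)   # (1024, 3)
```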
1607.04557
Alfonso Cevallos
Alfonso Cevallos, Friedrich Eisenbrand, Rico Zenklusen
Local Search for Max-Sum Diversification
null
null
null
null
cs.DS cs.CG cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide simple and fast polynomial time approximation schemes (PTASs) for several variants of the max-sum diversification problem which, in its most basic form, is as follows: Given n points p_1,...,p_n in R^d and an integer k, select k points such that the average Euclidean distance between these points is maximized. This problem commonly appears in information retrieval and web-search in order to select a diverse set of points from the input. In this context, it has recently received a lot of attention. We present new techniques to analyze natural local search algorithms. This leads to a (1-O(1/k))-approximation for distances of negative type, even subject to any matroid constraint of rank k, in time O(n k^2 log k), when assuming that distance evaluations and calls to the independence oracle are constant time. Negative type distances include as special cases Euclidean distances and many further natural distances. Our result easily transforms into a PTAS and improves on the only previously known PTAS for this setting, which relies on convex optimization techniques in an n-dimensional space and is impractical for large data sets. In contrast, our procedure has an (optimal) linear dependence on n. Using generalized exchange properties of matroid intersection, we show that a PTAS can be obtained for matroid intersection constraints as well. Moreover, our techniques, being based on local search, are conceptually simple and allow for various extensions. In particular, we get asymptotically optimal O(1)-approximations when combining the classic dispersion function with a monotone submodular objective, which is a very common class of functions to measure diversity and relevance. This result leverages recent advances on local search techniques based on proxy functions to obtain optimal approximations for monotone submodular function maximization subject to a matroid constraint.
[ { "created": "Fri, 15 Jul 2016 15:38:02 GMT", "version": "v1" } ]
2016-07-18
[ [ "Cevallos", "Alfonso", "" ], [ "Eisenbrand", "Friedrich", "" ], [ "Zenklusen", "Rico", "" ] ]
We provide simple and fast polynomial time approximation schemes (PTASs) for several variants of the max-sum diversification problem which, in its most basic form, is as follows: Given n points p_1,...,p_n in R^d and an integer k, select k points such that the average Euclidean distance between these points is maximized. This problem commonly appears in information retrieval and web-search in order to select a diverse set of points from the input. In this context, it has recently received a lot of attention. We present new techniques to analyze natural local search algorithms. This leads to a (1-O(1/k))-approximation for distances of negative type, even subject to any matroid constraint of rank k, in time O(n k^2 log k), when assuming that distance evaluations and calls to the independence oracle are constant time. Negative type distances include as special cases Euclidean distances and many further natural distances. Our result easily transforms into a PTAS and improves on the only previously known PTAS for this setting, which relies on convex optimization techniques in an n-dimensional space and is impractical for large data sets. In contrast, our procedure has an (optimal) linear dependence on n. Using generalized exchange properties of matroid intersection, we show that a PTAS can be obtained for matroid intersection constraints as well. Moreover, our techniques, being based on local search, are conceptually simple and allow for various extensions. In particular, we get asymptotically optimal O(1)-approximations when combining the classic dispersion function with a monotone submodular objective, which is a very common class of functions to measure diversity and relevance. This result leverages recent advances on local search techniques based on proxy functions to obtain optimal approximations for monotone submodular function maximization subject to a matroid constraint.
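A toy version of the basic local search for the unconstrained Euclidean case (select k points maximizing average pairwise distance) can be sketched as follows; the paper's analysis additionally covers negative-type distances and matroid constraints, which this sketch omits:

```python
import numpy as np

def avg_pairwise_dist(points, idx):
    pts = points[list(idx)]
    diff = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    k = len(pts)
    return d.sum() / (k * (k - 1))            # mean over ordered pairs

def local_search_diversify(points, k, seed=0):
    """1-swap local search: exchange a chosen point for an outside one
    whenever the average pairwise distance strictly improves."""
    rng = np.random.default_rng(seed)
    n = len(points)
    S = set(rng.choice(n, size=k, replace=False).tolist())
    improved = True
    while improved:
        improved = False
        best = avg_pairwise_dist(points, S)
        for out in set(range(n)) - S:
            for inn in list(S):
                T = (S - {inn}) | {out}
                val = avg_pairwise_dist(points, T)
                if val > best + 1e-12:        # strict improvement -> termination
                    S, best, improved = T, val, True
                    break
            if improved:
                break
    return sorted(S)

points = np.random.rand(60, 2)
print(local_search_diversify(points, k=5))
```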
1706.06322
Mohamd Sultan
Mohamad T. Sultan and Salim M. Zaki
Evaluation of energy consumption of reactive and proactive routing protocols in MANET
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile Ad hoc Network (MANET) is a distributed, infrastructure-less and decentralized network. A routing protocol in MANET is used to find routes between mobile nodes to facilitate communication within the network. Numerous routing protocols have been proposed for MANET. These routing protocols are designed to adaptively accommodate dynamic, unpredictable changes in the network's topology. The mobile nodes in MANET are often powered by limited batteries, and network lifetime relies heavily on the energy consumption of nodes. Consequently, the failure of a mobile node can lead to network partitioning. In this paper we analyse, evaluate and measure the energy efficiency of three prominent MANET routing protocols, namely DSR, AODV and OLSR, in addition to modified protocols. These protocols follow the reactive and proactive routing schemes. A discussion and comparison highlighting their particular merits and drawbacks are also presented. The evaluation study and simulations are performed using NS-2 and its accompanying tools for analysing and investigating the results.
[ { "created": "Tue, 20 Jun 2017 08:56:12 GMT", "version": "v1" } ]
2017-06-21
[ [ "Sultan", "Mohamad T.", "" ], [ "Zaki", "Salim M.", "" ] ]
Mobile Ad hoc Network (MANET) is a distributed, infrastructure-less and decentralized network. A routing protocol in MANET is used to find routes between mobile nodes to facilitate communication within the network. Numerous routing protocols have been proposed for MANET. These routing protocols are designed to adaptively accommodate dynamic, unpredictable changes in the network's topology. The mobile nodes in MANET are often powered by limited batteries, and network lifetime relies heavily on the energy consumption of nodes. Consequently, the failure of a mobile node can lead to network partitioning. In this paper we analyse, evaluate and measure the energy efficiency of three prominent MANET routing protocols, namely DSR, AODV and OLSR, in addition to modified protocols. These protocols follow the reactive and proactive routing schemes. A discussion and comparison highlighting their particular merits and drawbacks are also presented. The evaluation study and simulations are performed using NS-2 and its accompanying tools for analysing and investigating the results.
1606.05839
EPTCS
Olivier Danvy (University of Aarhus), Ugo de'Liguoro (Universit\`a di Torino)
Proceedings of the Workshop on Continuations
null
EPTCS 212, 2016
10.4204/EPTCS.212
null
cs.PL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The notion of continuation is ubiquitous in many different areas of computer science, including systems programming, programming languages, algorithmics, semantics, logic, and constructive mathematics. In fact, the concept of continuation nicely realizes sophisticated control mechanisms, which are widely used in a variety of applications. Since we cannot escape control features, it becomes a challenge to provide them with sound reasoning principles. Indeed there is much research activity on understanding, representing, and reasoning about elaborate non-local control structures, in particular in declarative programming languages such as functional and logic languages. The proceedings of the Workshop on Continuations 2015, held in London in April 2015, illustrate some of the aforementioned topics and will hopefully inspire further research on the subject.
[ { "created": "Sun, 19 Jun 2016 07:25:03 GMT", "version": "v1" } ]
2016-06-21
[ [ "Danvy", "Olivier", "", "University of Aarhus" ], [ "de'Liguoro", "Ugo", "", "Università di\n Torino" ] ]
The notion of continuation is ubiquitous in many different areas of computer science, including systems programming, programming languages, algorithmics, semantics, logic, and constructive mathematics. In fact, the concept of continuation nicely realizes sophisticated control mechanisms, which are widely used in a variety of applications. Since we cannot escape control features, it becomes a challenge to provide them with sound reasoning principles. Indeed there is much research activity on understanding, representing, and reasoning about elaborate non-local control structures, in particular in declarative programming languages such as functional and logic languages. The proceedings of the Workshop on Continuations 2015, held in London in April 2015, illustrate some of the aforementioned topics and will hopefully inspire further research on the subject.
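For readers new to the topic, a two-function continuation-passing example (an illustrative sketch, not drawn from the proceedings) makes the idea concrete: the "rest of the computation" is passed around as an explicit function:

```python
# Direct style: (2 + 3) ** 2
# CPS: every function takes an extra argument k, the continuation
def add_cps(a, b, k):
    return k(a + b)

def square_cps(x, k):
    return k(x * x)

# The rest of the computation is reified as a lambda
add_cps(2, 3, lambda s: square_cps(s, print))  # prints 25
```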
2001.00784
Dong Liu
Dong Liu, Chengjian Sun, Chenyang Yang, Lajos Hanzo
Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning
To appear in IEEE Network Magazine
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems subject to specific constraints, which can be formulated as variable or functional optimization. If the objective and constraint functions of a variable optimization problem can be derived, standard numerical algorithms can be applied for finding the optimal solution, which, however, incur high computational cost when the dimension of the variable is high. To reduce the on-line computational complexity, learning the optimal solution as a function of the environment's status by deep neural networks (DNNs) is an effective approach. DNNs can be trained under the supervision of optimal solutions; this, however, is not applicable to scenarios without models or to functional optimization where the optimal solutions are hard to obtain. If the objective and constraint functions are unavailable, reinforcement learning can be applied to find the solution of a functional optimization problem, which is, however, not tailored to optimization problems in wireless networks. In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems without the supervision of the optimal solutions. When the mathematical model of the environment is completely known and the distribution of the environment's status is known or unknown, we can invoke an unsupervised learning algorithm. When the mathematical model of the environment is incomplete, we introduce reinforced-unsupervised learning algorithms that learn the model by interacting with the environment. Our simulation results confirm the applicability of these learning frameworks by taking a user association problem as an example.
[ { "created": "Fri, 3 Jan 2020 11:01:52 GMT", "version": "v1" } ]
2020-01-06
[ [ "Liu", "Dong", "" ], [ "Sun", "Chengjian", "" ], [ "Yang", "Chenyang", "" ], [ "Hanzo", "Lajos", "" ] ]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems subject to specific constraints, which can be formulated as variable or functional optimization. If the objective and constraint functions of a variable optimization problem can be derived, standard numerical algorithms can be applied for finding the optimal solution, which, however, incur high computational cost when the dimension of the variable is high. To reduce the on-line computational complexity, learning the optimal solution as a function of the environment's status by deep neural networks (DNNs) is an effective approach. DNNs can be trained under the supervision of optimal solutions; this, however, is not applicable to scenarios without models or to functional optimization where the optimal solutions are hard to obtain. If the objective and constraint functions are unavailable, reinforcement learning can be applied to find the solution of a functional optimization problem, which is, however, not tailored to optimization problems in wireless networks. In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems without the supervision of the optimal solutions. When the mathematical model of the environment is completely known and the distribution of the environment's status is known or unknown, we can invoke an unsupervised learning algorithm. When the mathematical model of the environment is incomplete, we introduce reinforced-unsupervised learning algorithms that learn the model by interacting with the environment. Our simulation results confirm the applicability of these learning frameworks by taking a user association problem as an example.
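A minimal sketch of the unsupervised flavor on a made-up power-allocation toy: the DNN is trained to maximize the objective directly, with the constraint folded in as a penalty. The network shape, demand model, and penalty weight are all assumptions for illustration:

```python
import torch
import torch.nn as nn

# Toy: map channel state h to power allocation p maximizing sum-rate,
# subject to sum(p) <= P_max, with no labeled optimal solutions.
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 4), nn.Softplus())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
P_max, lam = 4.0, 10.0                        # budget and penalty weight (assumed)

for step in range(2000):
    h = torch.rand(128, 4)                    # sample environment states
    p = net(h)
    rate = torch.log2(1 + h * p).sum(dim=1)   # per-sample objective
    violation = torch.relu(p.sum(dim=1) - P_max)
    loss = (-rate + lam * violation).mean()   # unsupervised: -objective + penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
```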
2105.06807
Ruoxi Chen
Jinyin Chen, Ruoxi Chen, Haibin Zheng, Zhaoyan Ming, Wenrong Jiang and Chen Cui
Salient Feature Extractor for Adversarial Defense on Deep Neural Networks
null
null
null
null
cs.CV cs.AI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have witnessed unprecedented success achieved by deep learning models in the field of computer vision. However, their vulnerability to carefully crafted adversarial examples has also attracted increasing attention from researchers. Motivated by the observation that adversarial examples are due to non-robust features that models learn from the original dataset, we propose the concepts of salient feature (SF) and trivial feature (TF). The former represents the class-related feature, while the latter is usually adopted to mislead the model. We extract these two features with a coupled generative adversarial network model and put forward a novel detection and defense method named salient feature extractor (SFE) to defend against adversarial attacks. Concretely, detection is realized by separating and comparing the difference between the SF and TF of the input. At the same time, correct labels are obtained by re-identifying the SF to reach the purpose of defense. Extensive experiments are carried out on MNIST, CIFAR-10, and ImageNet datasets, where SFE shows state-of-the-art results in effectiveness and efficiency compared with baselines. Furthermore, we provide an interpretable understanding of the defense and detection process.
[ { "created": "Fri, 14 May 2021 12:56:06 GMT", "version": "v1" } ]
2021-05-17
[ [ "Chen", "Jinyin", "" ], [ "Chen", "Ruoxi", "" ], [ "Zheng", "Haibin", "" ], [ "Ming", "Zhaoyan", "" ], [ "Jiang", "Wenrong", "" ], [ "Cui", "Chen", "" ] ]
Recent years have witnessed unprecedented success achieved by deep learning models in the field of computer vision. However, their vulnerability to carefully crafted adversarial examples has also attracted increasing attention from researchers. Motivated by the observation that adversarial examples are due to non-robust features that models learn from the original dataset, we propose the concepts of salient feature (SF) and trivial feature (TF). The former represents the class-related feature, while the latter is usually adopted to mislead the model. We extract these two features with a coupled generative adversarial network model and put forward a novel detection and defense method named salient feature extractor (SFE) to defend against adversarial attacks. Concretely, detection is realized by separating and comparing the difference between the SF and TF of the input. At the same time, correct labels are obtained by re-identifying the SF to reach the purpose of defense. Extensive experiments are carried out on MNIST, CIFAR-10, and ImageNet datasets, where SFE shows state-of-the-art results in effectiveness and efficiency compared with baselines. Furthermore, we provide an interpretable understanding of the defense and detection process.
1407.1429
Mohammad Nassiry
Mohammad Nassiry, Muriati Mukhtar
Business types classification via e-commerce stage model in oil industry in Iran
null
null
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since the strategies and plans for e-commerce development are different for different industries, and since the oil industry is one of the most important industries in Iran, the scope of this research is confined to that of the oil industry in Iran. The main aim of this study is to identify and classify the features of the e-commerce development stages based on the different business types present in companies in the oil industry in Iran. In order to achieve both of these objectives, a questionnaire was developed and administered online. The questionnaire was distributed to forty representatives working in different companies. The collected data were classified and sorted, and the priority e-commerce features were displayed as triangles for each business type. Furthermore, the experts were asked to indicate the features which they implemented in their companies in order to identify the most used features in each stage. The results of this study give insight into the practice of e-commerce for Iranian oil companies and can be used to strategize future directions for the industry in terms of e-commerce.
[ { "created": "Sat, 5 Jul 2014 19:33:10 GMT", "version": "v1" } ]
2014-07-08
[ [ "Nassiry", "Mohammad", "" ], [ "Mukhtar", "Muriati", "" ] ]
Since the strategies and plans for e-commerce development are different for different industries, and since the oil industry is one of the most important industries in Iran, the scope of this research is confined to that of the oil industry in Iran. The main aim of this study is to identify and classify the features of the e-commerce development stages based on the different business types present in companies in the oil industry in Iran. In order to achieve both of these objectives, a questionnaire was developed and administered online. The questionnaire was distributed to forty representatives working in different companies. The collected data were classified and sorted, and the priority e-commerce features were displayed as triangles for each business type. Furthermore, the experts were asked to indicate the features which they implemented in their companies in order to identify the most used features in each stage. The results of this study give insight into the practice of e-commerce for Iranian oil companies and can be used to strategize future directions for the industry in terms of e-commerce.
1701.03305
Masahito Hayashi
Ryo Yaguchi and Masahito Hayashi
Finite-Length Bounds for Joint Source-Channel Coding with Markovian Source and Additive Channel Noise to Achieve Large and Moderate Deviation Bounds
This paper and arXiv:1701.03290 address joint source-channel coding with a Markovian source. While arXiv:1701.03290 discusses the second-order analysis, this paper discusses finite-length bounds as well as large and moderate deviation bounds. Hence, there is no overlap between these two papers
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We derive novel upper and lower finite-length bounds of the error probability in joint source-channel coding when the source obeys an ergodic Markov process and the channel is a Markovian additive channel or a Markovian conditional additive channel. These bounds are tight in the large and moderate deviation regimes.
[ { "created": "Thu, 12 Jan 2017 11:09:57 GMT", "version": "v1" }, { "created": "Tue, 2 May 2017 03:39:26 GMT", "version": "v2" } ]
2017-05-03
[ [ "Yaguchi", "Ryo", "" ], [ "Hayashi", "Masahito", "" ] ]
We derive novel upper and lower finite-length bounds of the error probability in joint source-channel coding when the source obeys an ergodic Markov process and the channel is a Markovian additive channel or a Markovian conditional additive channel. These bounds are tight in the large and moderate deviation regimes.
2402.11724
Jianling Wang
Jianling Wang, Haokai Lu, James Caverlee, Ed Chi and Minmin Chen
Large Language Models as Data Augmenters for Cold-Start Item Recommendation
null
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
The reasoning and generalization capabilities of LLMs can help us better understand user preferences and item characteristics, offering exciting prospects to enhance recommendation systems. Though effective when user-item interactions are abundant, conventional recommendation systems struggle to recommend cold-start items without historical interactions. To address this, we propose utilizing LLMs as data augmenters to bridge the knowledge gap on cold-start items during training. We employ LLMs to infer user preferences for cold-start items based on textual descriptions of user historical behaviors and new item descriptions. The augmented training signals are then incorporated into learning the downstream recommendation models through an auxiliary pairwise loss. Through experiments on public Amazon datasets, we demonstrate that LLMs can effectively augment the training signals for cold-start items, leading to significant improvements in cold-start item recommendation for various recommendation models.
[ { "created": "Sun, 18 Feb 2024 22:29:04 GMT", "version": "v1" } ]
2024-02-20
[ [ "Wang", "Jianling", "" ], [ "Lu", "Haokai", "" ], [ "Caverlee", "James", "" ], [ "Chi", "Ed", "" ], [ "Chen", "Minmin", "" ] ]
The reasoning and generalization capabilities of LLMs can help us better understand user preferences and item characteristics, offering exciting prospects to enhance recommendation systems. Though effective when user-item interactions are abundant, conventional recommendation systems struggle to recommend cold-start items without historical interactions. To address this, we propose utilizing LLMs as data augmenters to bridge the knowledge gap on cold-start items during training. We employ LLMs to infer user preferences for cold-start items based on textual descriptions of user historical behaviors and new item descriptions. The augmented training signals are then incorporated into learning the downstream recommendation models through an auxiliary pairwise loss. Through experiments on public Amazon datasets, we demonstrate that LLMs can effectively augment the training signals for cold-start items, leading to significant improvements in cold-start item recommendation for various recommendation models.
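The abstract names only "an auxiliary pairwise loss"; a BPR-style form is one common choice and is sketched below under that assumption, with all tensor names hypothetical:

```python
import torch
import torch.nn.functional as F

def augmented_pairwise_loss(user_emb, pos_item_emb, neg_item_emb):
    """BPR-style auxiliary loss on LLM-augmented (user, cold item) pairs:
    the LLM-inferred preferred cold item should score above a sampled negative."""
    pos = (user_emb * pos_item_emb).sum(-1)
    neg = (user_emb * neg_item_emb).sum(-1)
    return -F.logsigmoid(pos - neg).mean()

# total loss = main recommendation loss + alpha * augmented_pairwise_loss(...)
# where alpha is a weighting hyperparameter (assumed)
```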
1904.09763
Seungjun Jung
Seungjun Jung, Muhammad Abul Hasan and Changick Kim
Water-Filling: An Efficient Algorithm for Digitized Document Shadow Removal
Accepted at Asian Conference on Computer Vision (2018)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel algorithm to rectify the illumination of digitized documents by eliminating shading artifacts. Firstly, a topographic surface of an input digitized document is created using the luminance value of each pixel. Then the shading artifact on the document is estimated by simulating an immersion process. The simulation of the immersion process is modeled using a novel diffusion equation with an iterative update rule. After estimating the shading artifacts, the digitized document is reconstructed using the Lambertian surface model. In order to evaluate the performance of the proposed algorithm, we conduct rigorous experiments on a set of digitized documents generated using smartphones under challenging lighting conditions. According to the experimental results, the proposed method produces promising illumination correction results and outperforms the state-of-the-art methods.
[ { "created": "Mon, 22 Apr 2019 08:01:27 GMT", "version": "v1" }, { "created": "Thu, 2 May 2019 18:44:59 GMT", "version": "v2" } ]
2019-05-06
[ [ "Jung", "Seungjun", "" ], [ "Hasan", "Muhammad Abul", "" ], [ "Kim", "Changick", "" ] ]
In this paper, we propose a novel algorithm to rectify the illumination of digitized documents by eliminating shading artifacts. Firstly, a topographic surface of an input digitized document is created using the luminance value of each pixel. Then the shading artifact on the document is estimated by simulating an immersion process. The simulation of the immersion process is modeled using a novel diffusion equation with an iterative update rule. After estimating the shading artifacts, the digitized document is reconstructed using the Lambertian surface model. In order to evaluate the performance of the proposed algorithm, we conduct rigorous experiments on a set of digitized documents generated using smartphones under challenging lighting conditions. According to the experimental results, the proposed method produces promising illumination correction results and outperforms the state-of-the-art methods.
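To convey the general shape of such an immersion-style iteration (the paper's actual update rule is not reproduced here), a toy diffusion step over the luminance topography might look like this:

```python
import numpy as np

def estimate_shading(lum, n_iter=500, dt=0.2):
    """Toy immersion: diffuse a 'water surface' over the luminance
    topography, never letting it sink below the terrain."""
    w = lum.astype(float).copy()
    for _ in range(n_iter):
        nb = (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
              np.roll(w, 1, 1) + np.roll(w, -1, 1)) / 4.0   # 4-neighbour mean
        w = np.maximum(lum, w + dt * (nb - w))               # diffusion step
    return w

# Lambertian-style correction: reflectance ~ luminance / shading
# corrected = lum / np.maximum(estimate_shading(lum), 1e-6)
```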
1605.07515
Michael Roth
Michael Roth, Mirella Lapata
Neural Semantic Role Labeling with Dependency Path Embeddings
Camera-ready ACL paper
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a novel model for semantic role labeling that makes use of neural sequence modeling techniques. Our approach is motivated by the observation that complex syntactic structures and related phenomena, such as nested subordinations and nominal predicates, are not handled well by existing models. Our model treats such instances as sub-sequences of lexicalized dependency paths and learns suitable embedding representations. We experimentally demonstrate that such embeddings can improve results over previous state-of-the-art semantic role labelers, and showcase qualitative improvements obtained by our method.
[ { "created": "Tue, 24 May 2016 15:54:48 GMT", "version": "v1" }, { "created": "Mon, 18 Jul 2016 09:08:51 GMT", "version": "v2" } ]
2016-07-19
[ [ "Roth", "Michael", "" ], [ "Lapata", "Mirella", "" ] ]
This paper introduces a novel model for semantic role labeling that makes use of neural sequence modeling techniques. Our approach is motivated by the observation that complex syntactic structures and related phenomena, such as nested subordinations and nominal predicates, are not handled well by existing models. Our model treats such instances as sub-sequences of lexicalized dependency paths and learns suitable embedding representations. We experimentally demonstrate that such embeddings can improve results over previous state-of-the-art semantic role labelers, and showcase qualitative improvements obtained by our method.
1707.00513
Chao Zhang
Chao Zhang, Vineeth Varma, Samson Lasaulce, Raphael Visoz
Interference Coordination via Power Domain Channel Estimation
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel technique is proposed which enables each transmitter to acquire global channel state information (CSI) from the sole knowledge of individual received signal power measurements, which makes dedicated feedback or inter-transmitter signaling channels unnecessary. To make this possible, we resort to a completely new technique whose key idea is to exploit the transmit power levels as symbols to embed information and the observed interference as a communication channel the transmitters can use to exchange coordination information. Although the technique allows any kind of low-rate information to be exchanged among the transmitters, the focus here is on exchanging local CSI. The proposed procedure also comprises a phase which allows local CSI to be estimated. Once an estimate of global CSI is acquired by the transmitters, it can be used to optimize any utility function which depends on it. While algorithms which use the same type of measurements, such as the iterative water-filling algorithm (IWFA), implement the sequential best-response dynamics (BRD) applied to individual utilities, here, thanks to the availability of global CSI, the BRD can be applied to the sum-utility. Extensive numerical results show that significant gains can be obtained while requiring no additional online signaling.
[ { "created": "Wed, 14 Jun 2017 08:04:12 GMT", "version": "v1" } ]
2017-07-04
[ [ "Zhang", "Chao", "" ], [ "Varma", "Vineeth", "" ], [ "Lasaulce", "Samson", "" ], [ "Visoz", "Raphael", "" ] ]
A novel technique is proposed which enables each transmitter to acquire global channel state information (CSI) from the sole knowledge of individual received signal power measurements, which makes dedicated feedback or inter-transmitter signaling channels unnecessary. To make this possible, we resort to a completely new technique whose key idea is to exploit the transmit power levels as symbols to embed information and the observed interference as a communication channel the transmitters can use to exchange coordination information. Although the technique allows any kind of low-rate information to be exchanged among the transmitters, the focus here is on exchanging local CSI. The proposed procedure also comprises a phase which allows local CSI to be estimated. Once an estimate of global CSI is acquired by the transmitters, it can be used to optimize any utility function which depends on it. While algorithms which use the same type of measurements, such as the iterative water-filling algorithm (IWFA), implement the sequential best-response dynamics (BRD) applied to individual utilities, here, thanks to the availability of global CSI, the BRD can be applied to the sum-utility. Extensive numerical results show that significant gains can be obtained while requiring no additional online signaling.
2306.01310
Jaeseung Heo
Jaeseung Heo, Seungbeom Lee, Sungsoo Ahn, Dongwoo Kim
EPIC: Graph Augmentation with Edit Path Interpolation via Learnable Cost
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Data augmentation plays a critical role in improving model performance across various domains, but it becomes challenging with graph data due to their complex and irregular structure. To address this issue, we propose EPIC (Edit Path Interpolation via learnable Cost), a novel interpolation-based method for augmenting graph datasets. To interpolate between two graphs lying in an irregular domain, EPIC leverages the concept of graph edit distance, constructing an edit path that represents the transformation process between two graphs via edit operations. Moreover, our method introduces a context-sensitive cost model that accounts for the importance of specific edit operations formulated through a learning framework. This allows for a more nuanced transformation process, where the edit distance is not merely count-based but reflects meaningful graph attributes. With randomly sampled graphs from the edit path, we enrich the training set to enhance the generalization capability of classification models. Experimental evaluations across several benchmark datasets demonstrate that our approach outperforms existing augmentation techniques in many tasks.
[ { "created": "Fri, 2 Jun 2023 07:19:07 GMT", "version": "v1" }, { "created": "Tue, 4 Jun 2024 05:54:38 GMT", "version": "v2" } ]
2024-06-05
[ [ "Heo", "Jaeseung", "" ], [ "Lee", "Seungbeom", "" ], [ "Ahn", "Sungsoo", "" ], [ "Kim", "Dongwoo", "" ] ]
Data augmentation plays a critical role in improving model performance across various domains, but it becomes challenging with graph data due to their complex and irregular structure. To address this issue, we propose EPIC (Edit Path Interpolation via learnable Cost), a novel interpolation-based method for augmenting graph datasets. To interpolate between two graphs lying in an irregular domain, EPIC leverages the concept of graph edit distance, constructing an edit path that represents the transformation process between two graphs via edit operations. Moreover, our method introduces a context-sensitive cost model that accounts for the importance of specific edit operations formulated through a learning framework. This allows for a more nuanced transformation process, where the edit distance is not merely count-based but reflects meaningful graph attributes. With randomly sampled graphs from the edit path, we enrich the training set to enhance the generalization capability of classification models. Experimental evaluations across several benchmark datasets demonstrate that our approach outperforms existing augmentation techniques in many tasks.
2405.06001
Yushi Huang
Ruihao Gong, Yang Yong, Shiqiao Gu, Yushi Huang, Chentao Lv, Yunchen Zhang, Xianglong Liu, Dacheng Tao
LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit
null
null
null
null
cs.LG cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent advancements in large language models (LLMs) are propelling us toward artificial general intelligence with their remarkable emergent abilities and reasoning capabilities. However, the substantial computational and memory requirements limit their widespread adoption. Quantization, a key compression technique, can effectively mitigate these demands by compressing and accelerating LLMs, albeit with potential risks to accuracy. Numerous studies have aimed to minimize the accuracy loss associated with quantization. However, their quantization configurations vary from each other and cannot be fairly compared. In this paper, we present LLMC, a plug-and-play compression toolkit, to fairly and systematically explore the impact of quantization. LLMC integrates dozens of algorithms, models, and hardware platforms, offering high extensibility from integer to floating-point quantization, from LLMs to vision-language models (VLMs), from fixed-bit to mixed precision, and from quantization to sparsification. Powered by this versatile toolkit, our benchmark covers three key aspects: calibration data, algorithms (three strategies), and data formats, providing novel insights and detailed analyses for further research and practical guidance for users. Our toolkit is available at https://github.com/ModelTC/llmc.
[ { "created": "Thu, 9 May 2024 11:49:05 GMT", "version": "v1" }, { "created": "Sat, 20 Jul 2024 07:29:51 GMT", "version": "v2" } ]
2024-07-23
[ [ "Gong", "Ruihao", "" ], [ "Yong", "Yang", "" ], [ "Gu", "Shiqiao", "" ], [ "Huang", "Yushi", "" ], [ "Lv", "Chentao", "" ], [ "Zhang", "Yunchen", "" ], [ "Liu", "Xianglong", "" ], [ "Tao", "Dacheng", "" ] ]
Recent advancements in large language models (LLMs) are propelling us toward artificial general intelligence with their remarkable emergent abilities and reasoning capabilities. However, the substantial computational and memory requirements limit their widespread adoption. Quantization, a key compression technique, can effectively mitigate these demands by compressing and accelerating LLMs, albeit with potential risks to accuracy. Numerous studies have aimed to minimize the accuracy loss associated with quantization. However, their quantization configurations vary from each other and cannot be fairly compared. In this paper, we present LLMC, a plug-and-play compression toolkit, to fairly and systematically explore the impact of quantization. LLMC integrates dozens of algorithms, models, and hardware platforms, offering high extensibility from integer to floating-point quantization, from LLMs to vision-language models (VLMs), from fixed-bit to mixed precision, and from quantization to sparsification. Powered by this versatile toolkit, our benchmark covers three key aspects: calibration data, algorithms (three strategies), and data formats, providing novel insights and detailed analyses for further research and practical guidance for users. Our toolkit is available at https://github.com/ModelTC/llmc.
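For orientation, the simplest scheme such a toolkit benchmarks is symmetric per-tensor integer quantization; a minimal sketch (not LLMC's API, which is not reproduced here):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization with a single scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 128).astype(np.float32)   # stand-in weight matrix
q, s = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, s)).max())
```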
2110.07888
Xingcheng Fu
Xingcheng Fu, Jianxin Li, Jia Wu, Qingyun Sun, Cheng Ji, Senzhang Wang, Jiajun Tan, Hao Peng and Philip S. Yu
ACE-HGNN: Adaptive Curvature Exploration Hyperbolic Graph Neural Network
null
null
10.1109/ICDM51629.2021.00021
null
cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Neural Networks (GNNs) have been widely studied in various graph data mining tasks. Most existing GNNs embed graph data into Euclidean space and thus are less effective at capturing the ubiquitous hierarchical structures in real-world networks. Hyperbolic Graph Neural Networks (HGNNs) extend GNNs to hyperbolic space and thus are more effective at capturing the hierarchical structures of graphs in node representation learning. In hyperbolic geometry, the graph hierarchical structure can be reflected by the curvatures of the hyperbolic space, and different curvatures can model different hierarchical structures of a graph. However, most existing HGNNs manually set the curvature to a fixed value for simplicity, which achieves suboptimal graph learning performance due to the complex and diverse hierarchical structures of graphs. To resolve this problem, we propose an Adaptive Curvature Exploration Hyperbolic Graph Neural Network named ACE-HGNN to adaptively learn the optimal curvature according to the input graph and downstream tasks. Specifically, ACE-HGNN exploits a multi-agent reinforcement learning framework and contains two agents, ACE-Agent and HGNN-Agent, for learning the curvature and node representations, respectively. The two agents are updated collaboratively by a Nash Q-learning algorithm, seeking the optimal hyperbolic space indexed by the curvature. Extensive experiments on multiple real-world graph datasets demonstrate a significant and consistent improvement in model quality, with competitive performance and good generalization ability.
[ { "created": "Fri, 15 Oct 2021 07:18:57 GMT", "version": "v1" } ]
2022-03-04
[ [ "Fu", "Xingcheng", "" ], [ "Li", "Jianxin", "" ], [ "Wu", "Jia", "" ], [ "Sun", "Qingyun", "" ], [ "Ji", "Cheng", "" ], [ "Wang", "Senzhang", "" ], [ "Tan", "Jiajun", "" ], [ "Peng", "Hao", "" ], [ "Yu", "Philip S.", "" ] ]
Graph Neural Networks (GNNs) have been widely studied in various graph data mining tasks. Most existing GNNs embed graph data into Euclidean space and thus are less effective at capturing the ubiquitous hierarchical structures in real-world networks. Hyperbolic Graph Neural Networks (HGNNs) extend GNNs to hyperbolic space and thus are more effective at capturing the hierarchical structures of graphs in node representation learning. In hyperbolic geometry, the graph hierarchical structure can be reflected by the curvatures of the hyperbolic space, and different curvatures can model different hierarchical structures of a graph. However, most existing HGNNs manually set the curvature to a fixed value for simplicity, which achieves suboptimal graph learning performance due to the complex and diverse hierarchical structures of graphs. To resolve this problem, we propose an Adaptive Curvature Exploration Hyperbolic Graph Neural Network named ACE-HGNN to adaptively learn the optimal curvature according to the input graph and downstream tasks. Specifically, ACE-HGNN exploits a multi-agent reinforcement learning framework and contains two agents, ACE-Agent and HGNN-Agent, for learning the curvature and node representations, respectively. The two agents are updated collaboratively by a Nash Q-learning algorithm, seeking the optimal hyperbolic space indexed by the curvature. Extensive experiments on multiple real-world graph datasets demonstrate a significant and consistent improvement in model quality, with competitive performance and good generalization ability.
2101.06829
Tianxing He
Tianxing He, Bryan McCann, Caiming Xiong, Ehsan Hosseini-Asl
Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models
null
EACL 2021
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we explore joint energy-based model (EBM) training during the finetuning of pretrained text encoders (e.g., RoBERTa) for natural language understanding (NLU) tasks. Our experiments show that EBM training can help the model reach better calibration that is competitive with strong baselines, with little or no loss in accuracy. We discuss three variants of energy functions (namely scalar, hidden, and sharp-hidden) that can be defined on top of a text encoder, and compare them in experiments. Due to the discreteness of text data, we adopt noise contrastive estimation (NCE) to train the energy-based model. To make NCE training more effective, we train an auto-regressive noise model with the masked language model (MLM) objective.
[ { "created": "Mon, 18 Jan 2021 01:41:31 GMT", "version": "v1" }, { "created": "Fri, 19 Feb 2021 18:36:31 GMT", "version": "v2" } ]
2021-02-22
[ [ "He", "Tianxing", "" ], [ "McCann", "Bryan", "" ], [ "Xiong", "Caiming", "" ], [ "Hosseini-Asl", "Ehsan", "" ] ]
In this work, we explore joint energy-based model (EBM) training during the finetuning of pretrained text encoders (e.g., RoBERTa) for natural language understanding (NLU) tasks. Our experiments show that EBM training can help the model reach better calibration that is competitive with strong baselines, with little or no loss in accuracy. We discuss three variants of energy functions (namely scalar, hidden, and sharp-hidden) that can be defined on top of a text encoder, and compare them in experiments. Due to the discreteness of text data, we adopt noise contrastive estimation (NCE) to train the energy-based model. To make NCE training more effective, we train an auto-regressive noise model with the masked language model (MLM) objective.
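The NCE objective reduces to a binary classification between real text and noise-model samples; below is a hedged sketch of that loss (a standard NCE form with illustrative tensor names; the paper's exact parameterization may differ):

```python
import torch
import torch.nn.functional as F

def nce_loss(energy_data, energy_noise, log_pn_data, log_pn_noise):
    """Binary NCE: the log-odds that x is real is -E(x) - log p_noise(x)
    (the partition function is absorbed into the energy)."""
    logit_data = -energy_data - log_pn_data      # should classify as real (1)
    logit_noise = -energy_noise - log_pn_noise   # should classify as noise (0)
    return (F.binary_cross_entropy_with_logits(logit_data, torch.ones_like(logit_data))
            + F.binary_cross_entropy_with_logits(logit_noise, torch.zeros_like(logit_noise)))
```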
2201.01182
Kshitija Taywade
Kshitija Taywade, Brent Harrison, Adib Bagh
Modelling Cournot Games as Multi-agent Multi-armed Bandits
12 pages. arXiv admin note: text overlap with arXiv:2201.00486
null
null
null
cs.GT cs.AI cs.LG cs.MA econ.EM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the use of a multi-agent multi-armed bandit (MA-MAB) setting for modeling repeated Cournot oligopoly games, where the firms acting as agents choose from a set of arms representing production quantities (discrete values). Agents interact with separate and independent bandit problems. In this formulation, each agent makes sequential choices among arms to maximize its own reward. Agents do not have any information about the environment; they can only see their own rewards after taking an action. However, the market demand is a stationary function of total industry output, and random entry into or exit from the market is not allowed. Given these assumptions, we find that an $\epsilon$-greedy approach offers a more viable learning mechanism than other traditional MAB approaches, as it does not require any additional knowledge of the system to operate. We also propose two novel approaches that take advantage of the ordered action space: $\epsilon$-greedy+HL and $\epsilon$-greedy+EL. These new approaches help firms focus on more profitable actions by eliminating less profitable choices and are hence designed to optimize the exploration. We use computer simulations to study the emergence of various equilibria in the outcomes and perform an empirical analysis of joint cumulative regrets.
[ { "created": "Sat, 1 Jan 2022 22:02:47 GMT", "version": "v1" } ]
2022-01-05
[ [ "Taywade", "Kshitija", "" ], [ "Harrison", "Brent", "" ], [ "Bagh", "Adib", "" ] ]
We investigate the use of a multi-agent multi-armed bandit (MA-MAB) setting for modeling repeated Cournot oligopoly games, where the firms acting as agents choose from a set of arms representing production quantities (discrete values). Agents interact with separate and independent bandit problems. In this formulation, each agent makes sequential choices among arms to maximize its own reward. Agents do not have any information about the environment; they can only see their own rewards after taking an action. However, the market demand is a stationary function of total industry output, and random entry into or exit from the market is not allowed. Given these assumptions, we find that an $\epsilon$-greedy approach offers a more viable learning mechanism than other traditional MAB approaches, as it does not require any additional knowledge of the system to operate. We also propose two novel approaches that take advantage of the ordered action space: $\epsilon$-greedy+HL and $\epsilon$-greedy+EL. These new approaches help firms focus on more profitable actions by eliminating less profitable choices and are hence designed to optimize the exploration. We use computer simulations to study the emergence of various equilibria in the outcomes and perform an empirical analysis of joint cumulative regrets.
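A minimal simulation of the plain $\epsilon$-greedy variant on a two-firm Cournot game; the linear inverse demand, cost, and quantity grid are illustrative assumptions:

```python
import numpy as np

a, b, cost = 100.0, 1.0, 10.0            # inverse demand P = a - b*(q1+q2)
arms = np.arange(0, 50, 5)               # candidate production quantities
n_arms, eps, T = len(arms), 0.1, 20000
Q = np.zeros((2, n_arms))                # per-firm estimated reward per arm
N = np.zeros((2, n_arms))

rng = np.random.default_rng(0)
for t in range(T):
    picks = [int(rng.integers(n_arms)) if rng.random() < eps
             else int(np.argmax(Q[i])) for i in range(2)]
    price = max(a - b * arms[picks].sum(), 0.0)
    for i, k in enumerate(picks):        # each firm observes only its own profit
        reward = (price - cost) * arms[k]
        N[i, k] += 1
        Q[i, k] += (reward - Q[i, k]) / N[i, k]

print("learned quantities:", [arms[int(np.argmax(Q[i]))] for i in range(2)])
```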
2407.13071
Vatsal Vinay Parikh
Vatsal Vinay Parikh
Analysing the Public Discourse around OpenAI's Text-To-Video Model 'Sora' using Topic Modeling
null
null
null
null
cs.CY cs.CL cs.IR cs.LG cs.SI
http://creativecommons.org/publicdomain/zero/1.0/
The recent introduction of OpenAI's text-to-video model Sora has sparked widespread public discourse across online communities. This study aims to uncover the dominant themes and narratives surrounding Sora by conducting topic modeling analysis on a corpus of 1,827 Reddit comments from five relevant subreddits (r/OpenAI, r/technology, r/singularity, r/vfx, and r/ChatGPT). The comments were collected over a two-month period following Sora's announcement in February 2024. After preprocessing the data, Latent Dirichlet Allocation (LDA) was employed to extract four key topics: 1) AI Impact and Trends in Sora Discussions, 2) Public Opinion and Concerns about Sora, 3) Artistic Expression and Video Creation with Sora, and 4) Sora's Applications in Media and Entertainment. Visualizations including word clouds, bar charts, and t-SNE clustering provided insights into the importance of topic keywords and the distribution of comments across topics. The results highlight prominent narratives around Sora's potential impact on industries and employment, public sentiment and ethical concerns, creative applications, and use cases in the media and entertainment sectors. While limited to Reddit data within a specific timeframe, this study offers a framework for understanding public perceptions of emerging generative AI technologies through online discourse analysis.
[ { "created": "Thu, 30 May 2024 01:55:30 GMT", "version": "v1" } ]
2024-07-19
[ [ "Parikh", "Vatsal Vinay", "" ] ]
The recent introduction of OpenAI's text-to-video model Sora has sparked widespread public discourse across online communities. This study aims to uncover the dominant themes and narratives surrounding Sora by conducting topic modeling analysis on a corpus of 1,827 Reddit comments from five relevant subreddits (r/OpenAI, r/technology, r/singularity, r/vfx, and r/ChatGPT). The comments were collected over a two-month period following Sora's announcement in February 2024. After preprocessing the data, Latent Dirichlet Allocation (LDA) was employed to extract four key topics: 1) AI Impact and Trends in Sora Discussions, 2) Public Opinion and Concerns about Sora, 3) Artistic Expression and Video Creation with Sora, and 4) Sora's Applications in Media and Entertainment. Visualizations including word clouds, bar charts, and t-SNE clustering provided insights into the importance of topic keywords and the distribution of comments across topics. The results highlight prominent narratives around Sora's potential impact on industries and employment, public sentiment and ethical concerns, creative applications, and use cases in the media and entertainment sectors. While limited to Reddit data within a specific timeframe, this study offers a framework for understanding public perceptions of emerging generative AI technologies through online discourse analysis.
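The LDA pipeline the study describes can be reproduced in a few lines with scikit-learn; the comments below are placeholders, not the actual Reddit corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [                             # stand-ins for the 1,827 Reddit comments
    "sora will change vfx and video production forever",
    "worried about jobs in animation after this announcement",
    "the physics in these generated videos looks unreal",
    "openai should release sora to the public for creative work",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(comments)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```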
1904.13279
Tim Pfeifer
Tim Pfeifer and Peter Protzel
Incrementally Learned Mixture Models for GNSS Localization
8 pages, 5 figures, published in proceedings of IEEE Intelligent Vehicles Symposium (IV) 2019
null
10.1109/IVS.2019.8813847
null
cs.RO eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
GNSS localization is an important part of today's autonomous systems, although it suffers from non-Gaussian errors caused by non-line-of-sight effects. Recent methods are able to mitigate these effects by including the corresponding distributions in the sensor fusion algorithm. However, these approaches require prior knowledge about the sensor's distribution, which is often not available. We introduce a novel sensor fusion algorithm based on variational Bayesian inference, that is able to approximate the true distribution with a Gaussian mixture model and to learn its parametrization online. The proposed Incremental Variational Mixture algorithm automatically adapts the number of mixture components to the complexity of the measurement's error distribution. We compare the proposed algorithm against current state-of-the-art approaches using a collection of open access real world datasets and demonstrate its superior localization accuracy.
[ { "created": "Tue, 30 Apr 2019 14:39:00 GMT", "version": "v1" }, { "created": "Thu, 19 Mar 2020 11:27:11 GMT", "version": "v2" } ]
2020-03-20
[ [ "Pfeifer", "Tim", "" ], [ "Protzel", "Peter", "" ] ]
GNSS localization is an important part of today's autonomous systems, although it suffers from non-Gaussian errors caused by non-line-of-sight effects. Recent methods are able to mitigate these effects by including the corresponding distributions in the sensor fusion algorithm. However, these approaches require prior knowledge about the sensor's distribution, which is often not available. We introduce a novel sensor fusion algorithm based on variational Bayesian inference, that is able to approximate the true distribution with a Gaussian mixture model and to learn its parametrization online. The proposed Incremental Variational Mixture algorithm automatically adapts the number of mixture components to the complexity of the measurement's error distribution. We compare the proposed algorithm against current state-of-the-art approaches using a collection of open access real world datasets and demonstrate its superior localization accuracy.
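scikit-learn's Dirichlet-process mixture is a convenient batch stand-in for the paper's incremental learner: it likewise prunes unneeded mixture components automatically. A sketch on synthetic pseudorange-style residuals (all numbers assumed):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Gaussian core plus a heavy non-line-of-sight tail
residuals = np.concatenate([rng.normal(0, 2, 900),
                            rng.normal(25, 10, 100)]).reshape(-1, 1)

gmm = BayesianGaussianMixture(
    n_components=8,                                   # upper bound on components
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500, random_state=0).fit(residuals)

print(np.round(gmm.weights_, 3))   # unneeded components shrink toward zero
```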
2011.01584
Li-Yang Tan
Guy Blanc, Neha Gupta, Jane Lange, Li-Yang Tan
Estimating decision tree learnability with polylogarithmic sample complexity
25 pages, to appear in NeurIPS 2020
null
null
null
cs.LG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that top-down decision tree learning heuristics are amenable to highly efficient learnability estimation: for monotone target functions, the error of the decision tree hypothesis constructed by these heuristics can be estimated with polylogarithmically many labeled examples, exponentially fewer than the number necessary to run these heuristics, and indeed, exponentially fewer than the information-theoretic minimum required to learn a good decision tree. This adds to a small but growing list of fundamental learning algorithms that have been shown to be amenable to learnability estimation. En route to this result, we design and analyze sample-efficient minibatch versions of top-down decision tree learning heuristics and show that they achieve the same provable guarantees as the full-batch versions. We further give "active local" versions of these heuristics: given a test point $x^\star$, we show how the label $T(x^\star)$ of the decision tree hypothesis $T$ can be computed with polylogarithmically many labeled examples, exponentially fewer than the number necessary to learn $T$.
[ { "created": "Tue, 3 Nov 2020 09:26:27 GMT", "version": "v1" } ]
2020-11-04
[ [ "Blanc", "Guy", "" ], [ "Gupta", "Neha", "" ], [ "Lange", "Jane", "" ], [ "Tan", "Li-Yang", "" ] ]
We show that top-down decision tree learning heuristics are amenable to highly efficient learnability estimation: for monotone target functions, the error of the decision tree hypothesis constructed by these heuristics can be estimated with polylogarithmically many labeled examples, exponentially fewer than the number necessary to run these heuristics, and indeed, exponentially fewer than the information-theoretic minimum required to learn a good decision tree. This adds to a small but growing list of fundamental learning algorithms that have been shown to be amenable to learnability estimation. En route to this result, we design and analyze sample-efficient minibatch versions of top-down decision tree learning heuristics and show that they achieve the same provable guarantees as the full-batch versions. We further give "active local" versions of these heuristics: given a test point $x^\star$, we show how the label $T(x^\star)$ of the decision tree hypothesis $T$ can be computed with polylogarithmically many labeled examples, exponentially fewer than the number necessary to learn $T$.
2009.06560
Lily Xu
Lily Xu, Elizabeth Bondi, Fei Fang, Andrew Perrault, Kai Wang, Milind Tambe
Dual-Mandate Patrols: Multi-Armed Bandits for Green Security
Published at AAAI 2021. 9 pages (paper and references), 3 page appendix. 6 figures and 1 table
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conservation efforts in green security domains to protect wildlife and forests are constrained by the limited availability of defenders (i.e., patrollers), who must patrol vast areas to protect from attackers (e.g., poachers or illegal loggers). Defenders must choose how much time to spend in each region of the protected area, balancing exploration of infrequently visited regions and exploitation of known hotspots. We formulate the problem as a stochastic multi-armed bandit, where each action represents a patrol strategy, enabling us to guarantee the rate of convergence of the patrolling policy. However, a naive bandit approach would compromise short-term performance for long-term optimality, resulting in animals poached and forests destroyed. To speed up performance, we leverage smoothness in the reward function and decomposability of actions. We show a synergy between Lipschitz-continuity and decomposition as each aids the convergence of the other. In doing so, we bridge the gap between combinatorial and Lipschitz bandits, presenting a no-regret approach that tightens existing guarantees while optimizing for short-term performance. We demonstrate that our algorithm, LIZARD, improves performance on real-world poaching data from Cambodia.
[ { "created": "Mon, 14 Sep 2020 16:40:44 GMT", "version": "v1" }, { "created": "Tue, 15 Dec 2020 05:35:48 GMT", "version": "v2" }, { "created": "Fri, 26 Apr 2024 13:51:17 GMT", "version": "v3" } ]
2024-04-29
[ [ "Xu", "Lily", "" ], [ "Bondi", "Elizabeth", "" ], [ "Fang", "Fei", "" ], [ "Perrault", "Andrew", "" ], [ "Wang", "Kai", "" ], [ "Tambe", "Milind", "" ] ]
Conservation efforts in green security domains to protect wildlife and forests are constrained by the limited availability of defenders (i.e., patrollers), who must patrol vast areas to protect from attackers (e.g., poachers or illegal loggers). Defenders must choose how much time to spend in each region of the protected area, balancing exploration of infrequently visited regions and exploitation of known hotspots. We formulate the problem as a stochastic multi-armed bandit, where each action represents a patrol strategy, enabling us to guarantee the rate of convergence of the patrolling policy. However, a naive bandit approach would compromise short-term performance for long-term optimality, resulting in animals poached and forests destroyed. To speed up performance, we leverage smoothness in the reward function and decomposability of actions. We show a synergy between Lipschitz-continuity and decomposition as each aids the convergence of the other. In doing so, we bridge the gap between combinatorial and Lipschitz bandits, presenting a no-regret approach that tightens existing guarantees while optimizing for short-term performance. We demonstrate that our algorithm, LIZARD, improves performance on real-world poaching data from Cambodia.
2212.00222
Madelyn Shapiro
Emilie Purvine, Davis Brown, Brett Jefferson, Cliff Joslyn, Brenda Praggastis, Archit Rathore, Madelyn Shapiro, Bei Wang, Youjia Zhou
Experimental Observations of the Topology of Convolutional Neural Network Activations
Accepted at AAAI 2023. This version includes supplementary material
null
null
null
cs.LG cs.CG
http://creativecommons.org/licenses/by/4.0/
Topological data analysis (TDA) is a branch of computational mathematics, bridging algebraic topology and data science, that provides compact, noise-robust representations of complex structures. Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture, resulting in high-dimensional, difficult-to-interpret internal representations of input data. As DNNs become more ubiquitous across multiple sectors of our society, there is increasing recognition that mathematical methods are needed to aid analysts, researchers, and practitioners in understanding and interpreting how these models' internal representations relate to the final classification. In this paper, we apply cutting-edge techniques from TDA with the goal of gaining insight into the interpretability of convolutional neural networks used for image classification. We use two common TDA approaches to explore several methods for modeling hidden-layer activations as high-dimensional point clouds, and provide experimental evidence that these point clouds capture valuable structural information about the model's process. First, we demonstrate that a distance metric based on persistent homology can be used to quantify meaningful differences between layers, and we discuss these distances in the broader context of existing representational similarity metrics for neural network interpretability. Second, we show that a mapper graph can provide semantic insight into how these models organize hierarchical class knowledge at each layer. These observations demonstrate that TDA is a useful tool to help deep learning practitioners unlock the hidden structures of their models.
[ { "created": "Thu, 1 Dec 2022 02:05:44 GMT", "version": "v1" } ]
2022-12-02
[ [ "Purvine", "Emilie", "" ], [ "Brown", "Davis", "" ], [ "Jefferson", "Brett", "" ], [ "Joslyn", "Cliff", "" ], [ "Praggastis", "Brenda", "" ], [ "Rathore", "Archit", "" ], [ "Shapiro", "Madelyn", "" ], [ "Wang", "Bei", "" ], [ "Zhou", "Youjia", "" ] ]
Topological data analysis (TDA) is a branch of computational mathematics, bridging algebraic topology and data science, that provides compact, noise-robust representations of complex structures. Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture, resulting in high-dimensional, difficult-to-interpret internal representations of input data. As DNNs become more ubiquitous across multiple sectors of our society, there is increasing recognition that mathematical methods are needed to aid analysts, researchers, and practitioners in understanding and interpreting how these models' internal representations relate to the final classification. In this paper, we apply cutting-edge techniques from TDA with the goal of gaining insight into the interpretability of convolutional neural networks used for image classification. We use two common TDA approaches to explore several methods for modeling hidden-layer activations as high-dimensional point clouds, and provide experimental evidence that these point clouds capture valuable structural information about the model's process. First, we demonstrate that a distance metric based on persistent homology can be used to quantify meaningful differences between layers, and we discuss these distances in the broader context of existing representational similarity metrics for neural network interpretability. Second, we show that a mapper graph can provide semantic insight into how these models organize hierarchical class knowledge at each layer. These observations demonstrate that TDA is a useful tool to help deep learning practitioners unlock the hidden structures of their models.
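The first approach (a persistence-based distance between layers) can be approximated with off-the-shelf TDA libraries; a sketch under the assumption that the `ripser` and `persim` packages are installed, noting the paper's precise metric may differ:

```python
import numpy as np
from ripser import ripser            # pip install ripser persim (assumed)
from persim import bottleneck

def layer_distance(act_a, act_b, n_pts=256, seed=0):
    """Bottleneck distance between H1 persistence diagrams of two
    activation point clouds (rows = samples, cols = hidden units).
    Assumes each activation matrix has at least n_pts rows."""
    rng = np.random.default_rng(seed)
    a = act_a[rng.choice(len(act_a), n_pts, replace=False)]
    b = act_b[rng.choice(len(act_b), n_pts, replace=False)]
    dgm_a = ripser(a)["dgms"][1]     # H1 diagram of layer A
    dgm_b = ripser(b)["dgms"][1]     # H1 diagram of layer B
    return bottleneck(dgm_a, dgm_b)
```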
1404.3660
Ren\'e van Bevern
Ren\'e van Bevern, Sepp Hartung, Andr\'e Nichterlein, Manuel Sorge
Constant-factor approximations for Capacitated Arc Routing without triangle inequality
null
Operations Research Letters 42(4):290--292, 2014
10.1016/j.orl.2014.05.002
null
cs.DS cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given an undirected graph with edge costs and edge demands, the Capacitated Arc Routing problem (CARP) asks for minimum-cost routes for equal-capacity vehicles so as to satisfy all demands. Constant-factor polynomial-time approximation algorithms were proposed for CARP with triangle inequality, while CARP was claimed to be NP-hard to approximate within any constant factor in general. Correcting this claim, we show that any factor {\alpha} approximation for CARP with triangle inequality yields a factor {\alpha} approximation for the general CARP.
[ { "created": "Mon, 14 Apr 2014 17:28:19 GMT", "version": "v1" } ]
2014-07-15
[ [ "van Bevern", "René", "" ], [ "Hartung", "Sepp", "" ], [ "Nichterlein", "André", "" ], [ "Sorge", "Manuel", "" ] ]
Given an undirected graph with edge costs and edge demands, the Capacitated Arc Routing problem (CARP) asks for minimum-cost routes for equal-capacity vehicles so as to satisfy all demands. Constant-factor polynomial-time approximation algorithms were proposed for CARP with triangle inequality, while CARP was claimed to be NP-hard to approximate within any constant factor in general. Correcting this claim, we show that any factor {\alpha} approximation for CARP with triangle inequality yields a factor {\alpha} approximation for the general CARP.
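The reduction the abstract refers to can be read as a metric-closure argument: replacing every edge cost by the corresponding shortest-path cost yields an instance that satisfies the triangle inequality, a factor-{\alpha} algorithm is run there, and each deadheading traversal is expanded back into a shortest path of the same cost. A hedged helper for the closure step (graph representation and names are illustrative):

```python
# Hedged sketch: metric closure of an undirected graph via Floyd-Warshall,
# so that the resulting cost matrix satisfies the triangle inequality.
import itertools

def metric_closure(n, cost):
    """n vertices labeled 0..n-1; cost is a dict {(u, v): c} of edge costs.
    Returns the all-pairs shortest-path matrix d (d[i][j] = dist(i, j))."""
    INF = float('inf')
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), c in cost.items():
        d[u][v] = min(d[u][v], c)
        d[v][u] = min(d[v][u], c)
    for k, i, j in itertools.product(range(n), repeat=3):  # k outermost
        if d[i][k] + d[k][j] < d[i][j]:
            d[i][j] = d[i][k] + d[k][j]
    return d  # triangle inequality holds by construction
```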
2105.01747
Pradeep Kr. Banerjee
Pradeep Kr. Banerjee, Guido Mont\'ufar
Information Complexity and Generalization Bounds
To appear in 2021 IEEE International Symposium on Information Theory (ISIT); 23 pages
null
10.1109/ISIT45174.2021.9517960
null
cs.LG cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
We present a unifying picture of PAC-Bayesian and mutual information-based upper bounds on the generalization error of randomized learning algorithms. As we show, Tong Zhang's information exponential inequality (IEI) gives a general recipe for constructing bounds of both flavors. We show that several important results in the literature can be obtained as simple corollaries of the IEI under different assumptions on the loss function. Moreover, we obtain new bounds for data-dependent priors and unbounded loss functions. Optimizing the bounds gives rise to variants of the Gibbs algorithm, for which we discuss two practical examples for learning with neural networks, namely, Entropy- and PAC-Bayes-SGD. Further, we use an Occam's factor argument to show a PAC-Bayesian bound that incorporates second-order curvature information of the training loss.
[ { "created": "Tue, 4 May 2021 20:37:57 GMT", "version": "v1" }, { "created": "Sun, 24 Oct 2021 02:02:45 GMT", "version": "v2" } ]
2021-10-26
[ [ "Banerjee", "Pradeep Kr.", "" ], [ "Montúfar", "Guido", "" ] ]
We present a unifying picture of PAC-Bayesian and mutual information-based upper bounds on the generalization error of randomized learning algorithms. As we show, Tong Zhang's information exponential inequality (IEI) gives a general recipe for constructing bounds of both flavors. We show that several important results in the literature can be obtained as simple corollaries of the IEI under different assumptions on the loss function. Moreover, we obtain new bounds for data-dependent priors and unbounded loss functions. Optimizing the bounds gives rise to variants of the Gibbs algorithm, for which we discuss two practical examples for learning with neural networks, namely, Entropy- and PAC-Bayes-SGD. Further, we use an Occam's factor argument to show a PAC-Bayesian bound that incorporates second-order curvature information of the training loss.
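For flavor, one classical corollary in this family of PAC-Bayesian bounds (a standard textbook form, stated here for context rather than as the paper's result): for losses in $[0,1]$, a data-independent prior $\pi$, and an i.i.d. sample $S$ of size $n$, with probability at least $1-\delta$, simultaneously for all posteriors $\rho$,

```latex
\[
  \mathbb{E}_{\theta \sim \rho}\!\left[L(\theta)\right]
  \;\le\;
  \mathbb{E}_{\theta \sim \rho}\!\left[\hat{L}_S(\theta)\right]
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}} .
\]
```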
2310.15928
Claire Chen
Carlota Par\'es Morlans, Claire Chen, Yijia Weng, Michelle Yi, Yuying Huang, Nick Heppert, Linqi Zhou, Leonidas Guibas, Jeannette Bohg
AO-Grasp: Articulated Object Grasp Generation
Project website: https://stanford-iprl-lab.github.io/ao-grasp
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
We introduce AO-Grasp, a grasp proposal method that generates 6 DoF grasps that enable robots to interact with articulated objects, such as opening and closing cabinets and appliances. AO-Grasp consists of two main contributions: the AO-Grasp Model and the AO-Grasp Dataset. Given a segmented partial point cloud of a single articulated object, the AO-Grasp Model predicts the best grasp points on the object with an Actionable Grasp Point Predictor. Then, it finds corresponding grasp orientations for each of these points, resulting in stable and actionable grasp proposals. We train the AO-Grasp Model on our new AO-Grasp Dataset, which contains 78K actionable parallel-jaw grasps on synthetic articulated objects. In simulation, AO-Grasp achieves a 45.0% grasp success rate, whereas the highest performing baseline achieves a 35.0% success rate. Additionally, we evaluate AO-Grasp on 120 real-world scenes of objects with varied geometries, articulation axes, and joint states, where AO-Grasp produces successful grasps on 67.5% of scenes, while the baseline only produces successful grasps on 33.3% of scenes. To the best of our knowledge, AO-Grasp is the first method for generating 6 DoF grasps on articulated objects directly from partial point clouds without requiring part detection or hand-designed grasp heuristics. Project website: https://stanford-iprl-lab.github.io/ao-grasp
[ { "created": "Tue, 24 Oct 2023 15:26:57 GMT", "version": "v1" }, { "created": "Mon, 18 Mar 2024 17:36:33 GMT", "version": "v2" } ]
2024-03-19
[ [ "Morlans", "Carlota Parés", "" ], [ "Chen", "Claire", "" ], [ "Weng", "Yijia", "" ], [ "Yi", "Michelle", "" ], [ "Huang", "Yuying", "" ], [ "Heppert", "Nick", "" ], [ "Zhou", "Linqi", "" ], [ "Guibas", "Leonidas", "" ], [ "Bohg", "Jeannette", "" ] ]
We introduce AO-Grasp, a grasp proposal method that generates 6 DoF grasps that enable robots to interact with articulated objects, such as opening and closing cabinets and appliances. AO-Grasp consists of two main contributions: the AO-Grasp Model and the AO-Grasp Dataset. Given a segmented partial point cloud of a single articulated object, the AO-Grasp Model predicts the best grasp points on the object with an Actionable Grasp Point Predictor. Then, it finds corresponding grasp orientations for each of these points, resulting in stable and actionable grasp proposals. We train the AO-Grasp Model on our new AO-Grasp Dataset, which contains 78K actionable parallel-jaw grasps on synthetic articulated objects. In simulation, AO-Grasp achieves a 45.0% grasp success rate, whereas the highest performing baseline achieves a 35.0% success rate. Additionally, we evaluate AO-Grasp on 120 real-world scenes of objects with varied geometries, articulation axes, and joint states, where AO-Grasp produces successful grasps on 67.5% of scenes, while the baseline only produces successful grasps on 33.3% of scenes. To the best of our knowledge, AO-Grasp is the first method for generating 6 DoF grasps on articulated objects directly from partial point clouds without requiring part detection or hand-designed grasp heuristics. Project website: https://stanford-iprl-lab.github.io/ao-grasp
2406.17070
Dimitris Chytas
Dimitris Chytas, Nithin Raveendran, Bane Vasi\'c
Collective Bit Flipping-Based Decoding of Quantum LDPC Codes
13 pages, 12 figures
null
null
null
cs.IT math.IT quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantum low-density parity-check (QLDPC) codes have been proven to achieve higher minimum distances at higher code rates than surface codes. However, this family of codes suffers from poor performance under iterative decoding and imposes stringent latency requirements, especially when the variable degree is low. In this work, we improve both the error correction performance and decoding latency of variable degree-3 (dv-3) QLDPC codes under iterative decoding. Firstly, we perform a detailed analysis of the structure of a well-known family of QLDPC codes, i.e., hypergraph product-based codes. Then, we propose a decoding approach that stems from the knowledge of harmful configurations apparent in these codes. Our decoding scheme is based on applying a modified version of bit flipping (BF) decoding, namely two-bit bit flipping (TBF) decoding, which adds more degrees of freedom to BF decoding. The granularity offered by TBF decoding helps us design sets of decoders that operate in parallel and can collectively decode error patterns appearing in harmful configurations of the code, thus addressing both the latency and performance requirements. Finally, simulation results demonstrate that the proposed decoding scheme surpasses other iterative decoding approaches for various dv-3 QLDPC codes.
[ { "created": "Mon, 24 Jun 2024 18:51:48 GMT", "version": "v1" } ]
2024-06-26
[ [ "Chytas", "Dimitris", "" ], [ "Raveendran", "Nithin", "" ], [ "Vasić", "Bane", "" ] ]
Quantum low-density parity-check (QLDPC) codes have been proven to achieve higher minimum distances at higher code rates than surface codes. However, this family of codes suffers from poor performance under iterative decoding and imposes stringent latency requirements, especially when the variable degree is low. In this work, we improve both the error correction performance and decoding latency of variable degree-3 (dv-3) QLDPC codes under iterative decoding. Firstly, we perform a detailed analysis of the structure of a well-known family of QLDPC codes, i.e., hypergraph product-based codes. Then, we propose a decoding approach that stems from the knowledge of harmful configurations apparent in these codes. Our decoding scheme is based on applying a modified version of bit flipping (BF) decoding, namely two-bit bit flipping (TBF) decoding, which adds more degrees of freedom to BF decoding. The granularity offered by TBF decoding helps us design sets of decoders that operate in parallel and can collectively decode error patterns appearing in harmful configurations of the code, thus addressing both the latency and performance requirements. Finally, simulation results demonstrate that the proposed decoding scheme surpasses other iterative decoding approaches for various dv-3 QLDPC codes.
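As background for the TBF variant described above, the plain parallel bit-flipping (BF) rule over GF(2) can be sketched as follows; the flip criterion and iteration cap are illustrative, and the paper's two-bit extension adds extra per-bit state on top of this.

```python
# Hedged sketch of classical parallel bit flipping (Gallager-style),
# used here only to illustrate the BF family the abstract builds on.
import numpy as np

def bit_flip_decode(H, syndrome, max_iters=50):
    """H: (m, n) binary parity-check matrix (int array); syndrome: length-m
    binary vector. Returns an error estimate e with H e = syndrome (mod 2),
    or None if decoding fails within max_iters."""
    m, n = H.shape
    e = np.zeros(n, dtype=int)
    s = syndrome.copy()
    for _ in range(max_iters):
        if not s.any():
            return e                       # residual syndrome cleared
        votes = H.T @ s                    # unsatisfied checks per bit
        e[votes == votes.max()] ^= 1       # flip all worst offenders
        s = (H @ e + syndrome) % 2         # recompute residual syndrome
    return None
```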
2104.10845
Li Zhang
Yuxuan Chen, Li Zhang, Shijian Li, Gang Pan
Optimize Neural Fictitious Self-Play in Regret Minimization Thinking
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Optimizing deep learning algorithms to approach a Nash Equilibrium remains a significant problem in imperfect-information games, e.g., StarCraft and poker. Neural Fictitious Self-Play (NFSP) has provided an effective way to learn an approximate Nash Equilibrium in imperfect-information games without prior domain knowledge. However, the optimality gap was left as an open optimization problem in NFSP, and by addressing it the performance of NFSP can be improved. In this study, focusing on the optimality gap of NFSP, we propose a new method that replaces NFSP's best-response computation with the regret matching method. The new algorithm makes the optimality gap converge to zero as it iterates, and thus converges faster than the original NFSP. We conduct experiments on three typical perfect-information and imperfect-information game environments in OpenSpiel, all of which show that our new algorithm performs better than the original NFSP.
[ { "created": "Thu, 22 Apr 2021 03:24:23 GMT", "version": "v1" } ]
2021-04-23
[ [ "Chen", "Yuxuan", "" ], [ "Zhang", "Li", "" ], [ "Li", "Shijian", "" ], [ "Pan", "Gang", "" ] ]
Optimizing deep learning algorithms to approach a Nash Equilibrium remains a significant problem in imperfect-information games, e.g., StarCraft and poker. Neural Fictitious Self-Play (NFSP) has provided an effective way to learn an approximate Nash Equilibrium in imperfect-information games without prior domain knowledge. However, the optimality gap was left as an open optimization problem in NFSP, and by addressing it the performance of NFSP can be improved. In this study, focusing on the optimality gap of NFSP, we propose a new method that replaces NFSP's best-response computation with the regret matching method. The new algorithm makes the optimality gap converge to zero as it iterates, and thus converges faster than the original NFSP. We conduct experiments on three typical perfect-information and imperfect-information game environments in OpenSpiel, all of which show that our new algorithm performs better than the original NFSP.
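The regret matching rule the abstract refers to is the standard one: play each action with probability proportional to its positive cumulative regret, falling back to uniform when no regret is positive. A minimal sketch (not the authors' full NFSP integration):

```python
# Hedged sketch of regret matching (Hart & Mas-Colell style update rule).
import numpy as np

def regret_matching_strategy(cum_regrets):
    """Map cumulative regrets (one per action) to a mixed strategy."""
    pos = np.maximum(cum_regrets, 0.0)
    total = pos.sum()
    if total > 0.0:
        return pos / total
    return np.full(len(cum_regrets), 1.0 / len(cum_regrets))  # uniform

def update_regrets(cum_regrets, action_utils, played_action):
    """Accumulate regret for not having played each alternative action."""
    return cum_regrets + (action_utils - action_utils[played_action])
```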
2405.08709
Amirreza Zamani
Amirreza Zamani, Sajad Daei, Tobias J. Oechtering, Mikael Skoglund
Multi-Task Private Semantic Communication
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a multi-task private semantic communication problem, in which an encoder has access to an information source arbitrarily correlated with some latent private data. A user has $L$ tasks with priorities. The encoder designs a message to be revealed, called the semantic of the information source. Due to the privacy constraints, the semantic cannot be disclosed directly, so the encoder adds noise to produce the disclosed data. The goal is to design the disclosed data that maximizes the weighted sum of the utilities achieved by the user while satisfying a privacy constraint on the private data. In this work, we first consider a single-task scenario and design the added noise utilizing various methods, including extended versions of the Functional Representation Lemma and Strong Functional Representation Lemma as well as a separation technique. We then study the multi-task scenario and derive a simple design of the source semantics. We show that in the multi-task scenario the main problem can be divided into multiple parallel single-task problems.
[ { "created": "Tue, 14 May 2024 15:50:49 GMT", "version": "v1" } ]
2024-05-15
[ [ "Zamani", "Amirreza", "" ], [ "Daei", "Sajad", "" ], [ "Oechtering", "Tobias J.", "" ], [ "Skoglund", "Mikael", "" ] ]
We study a multi-task private semantic communication problem, in which an encoder has access to an information source arbitrarily correlated with some latent private data. A user has $L$ tasks with priorities. The encoder designs a message to be revealed, called the semantic of the information source. Due to the privacy constraints, the semantic cannot be disclosed directly, so the encoder adds noise to produce the disclosed data. The goal is to design the disclosed data that maximizes the weighted sum of the utilities achieved by the user while satisfying a privacy constraint on the private data. In this work, we first consider a single-task scenario and design the added noise utilizing various methods, including extended versions of the Functional Representation Lemma and Strong Functional Representation Lemma as well as a separation technique. We then study the multi-task scenario and derive a simple design of the source semantics. We show that in the multi-task scenario the main problem can be divided into multiple parallel single-task problems.
2402.08897
Adam Seewald
Adam Seewald, Marvin Chanc\'an, Connor M. McCann, Seonghoon Noh, Omeed Fallahi, Hector Castillo, Ian Abraham, Aaron M. Dollar
RB5 Low-Cost Explorer: Implementing Autonomous Long-Term Exploration on Low-Cost Robotic Hardware
7 pages, 5 figures, ICRA'24
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
This systems paper presents the implementation and design of RB5, a wheeled robot for autonomous long-term exploration with fewer and cheaper sensors. Requiring just an RGB-D camera and low-power computing hardware, the system consists of an experimental platform with rocker-bogie suspension. It operates in unknown and GPS-denied environments and on indoor and outdoor terrains. The exploration consists of a methodology that extends frontier- and sampling-based exploration with a path-following vector field and a state-of-the-art SLAM algorithm. The methodology allows the robot to explore its surroundings at lower update frequencies, enabling the use of lower-performing and lower-cost hardware while still retaining good autonomous performance. The approach further consists of a methodology to interact with a remotely located human operator based on an inexpensive long-range and low-power communication technology from the internet-of-things domain (i.e., LoRa) and a customized communication protocol. The results and the feasibility analysis show the possible applications and limitations of the approach.
[ { "created": "Wed, 14 Feb 2024 02:07:04 GMT", "version": "v1" } ]
2024-02-15
[ [ "Seewald", "Adam", "" ], [ "Chancán", "Marvin", "" ], [ "McCann", "Connor M.", "" ], [ "Noh", "Seonghoon", "" ], [ "Fallahi", "Omeed", "" ], [ "Castillo", "Hector", "" ], [ "Abraham", "Ian", "" ], [ "Dollar", "Aaron M.", "" ] ]
This systems paper presents the implementation and design of RB5, a wheeled robot for autonomous long-term exploration with fewer and cheaper sensors. Requiring just an RGB-D camera and low-power computing hardware, the system consists of an experimental platform with rocker-bogie suspension. It operates in unknown and GPS-denied environments and on indoor and outdoor terrains. The exploration consists of a methodology that extends frontier- and sampling-based exploration with a path-following vector field and a state-of-the-art SLAM algorithm. The methodology allows the robot to explore its surroundings at lower update frequencies, enabling the use of lower-performing and lower-cost hardware while still retaining good autonomous performance. The approach further consists of a methodology to interact with a remotely located human operator based on an inexpensive long-range and low-power communication technology from the internet-of-things domain (i.e., LoRa) and a customized communication protocol. The results and the feasibility analysis show the possible applications and limitations of the approach.
2312.07553
Joon Hyun Jeong
Joonhyun Jeong
Hijacking Context in Large Multi-modal Models
Technical Report. Preprint
ICLR 2024 Workshop on Reliable and Responsible Foundation Models
null
null
cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Recently, Large Multi-modal Models (LMMs) have demonstrated their ability to understand the visual contents of images given the instructions regarding the images. Built upon the Large Language Models (LLMs), LMMs also inherit their abilities and characteristics such as in-context learning where a coherent sequence of images and texts are given as the input prompt. However, we identify a new limitation of off-the-shelf LMMs where a small fraction of incoherent images or text descriptions mislead LMMs to only generate biased output about the hijacked context, not the originally intended context. To address this, we propose a pre-filtering method that removes irrelevant contexts via GPT-4V, based on its robustness towards distribution shift within the contexts. We further investigate whether replacing the hijacked visual and textual contexts with the correlated ones via GPT-4V and text-to-image models can help yield coherent responses.
[ { "created": "Thu, 7 Dec 2023 11:23:29 GMT", "version": "v1" }, { "created": "Mon, 13 May 2024 10:42:05 GMT", "version": "v2" } ]
2024-05-14
[ [ "Jeong", "Joonhyun", "" ] ]
Recently, Large Multi-modal Models (LMMs) have demonstrated their ability to understand the visual contents of images given the instructions regarding the images. Built upon the Large Language Models (LLMs), LMMs also inherit their abilities and characteristics such as in-context learning where a coherent sequence of images and texts are given as the input prompt. However, we identify a new limitation of off-the-shelf LMMs where a small fraction of incoherent images or text descriptions mislead LMMs to only generate biased output about the hijacked context, not the originally intended context. To address this, we propose a pre-filtering method that removes irrelevant contexts via GPT-4V, based on its robustness towards distribution shift within the contexts. We further investigate whether replacing the hijacked visual and textual contexts with the correlated ones via GPT-4V and text-to-image models can help yield coherent responses.
2405.13365
Zavareh Bozorgasl
Zavareh Bozorgasl and Hao Chen
Clipped Uniform Quantizers for Communication-Efficient Federated Learning
Work in progress
null
null
null
cs.LG cs.MA eess.SP
http://creativecommons.org/licenses/by/4.0/
This paper introduces an approach to employ clipped uniform quantization in federated learning settings, aiming to enhance model efficiency by reducing communication overhead without compromising accuracy. By employing optimal clipping thresholds and adaptive quantization schemes, our method significantly curtails the bit requirements for model weight transmissions between clients and the server. We explore the implications of symmetric clipping and uniform quantization on model performance, highlighting the utility of stochastic quantization to mitigate quantization artifacts and improve model robustness. Through extensive simulations on the MNIST dataset, our results demonstrate that the proposed method achieves near full-precision performance while ensuring substantial communication savings. Specifically, our approach facilitates efficient weight averaging based on quantization errors, effectively balancing the trade-off between communication efficiency and model accuracy. The comparative analysis with conventional quantization methods further confirms the superiority of our technique.
[ { "created": "Wed, 22 May 2024 05:48:25 GMT", "version": "v1" } ]
2024-05-24
[ [ "Bozorgasl", "Zavareh", "" ], [ "Chen", "Hao", "" ] ]
This paper introduces an approach to employ clipped uniform quantization in federated learning settings, aiming to enhance model efficiency by reducing communication overhead without compromising accuracy. By employing optimal clipping thresholds and adaptive quantization schemes, our method significantly curtails the bit requirements for model weight transmissions between clients and the server. We explore the implications of symmetric clipping and uniform quantization on model performance, highlighting the utility of stochastic quantization to mitigate quantization artifacts and improve model robustness. Through extensive simulations on the MNIST dataset, our results demonstrate that the proposed method achieves near full-precision performance while ensuring substantial communication savings. Specifically, our approach facilitates efficient weight averaging based on quantization errors, effectively balancing the trade-off between communication efficiency and model accuracy. The comparative analysis with conventional quantization methods further confirms the superiority of our technique.
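A hedged sketch of a clipped, stochastic uniform quantizer of the kind the abstract describes; the clipping threshold, bit-width, and stochastic-rounding scheme below are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch: clip weights to [-c, c], then quantize to 2**bits uniform
# levels with stochastic (unbiased) rounding before transmission.
import numpy as np

def clipped_uniform_quantize(w, c, bits, rng):
    levels = 2 ** bits - 1              # number of quantization steps
    step = 2.0 * c / levels
    clipped = np.clip(w, -c, c)
    scaled = (clipped + c) / step       # position in [0, levels]
    low = np.floor(scaled)
    prob_up = scaled - low              # round up with this probability
    q = low + (rng.random(w.shape) < prob_up)
    return q * step - c                 # dequantized weights

# Usage sketch:
# rng = np.random.default_rng(0)
# w_q = clipped_uniform_quantize(weights, c=0.1, bits=4, rng=rng)
```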
2303.01295
Antonio Guerriero
Antonio Guerriero, Roberto Pietrantuono, Stefano Russo
Iterative Assessment and Improvement of DNN Operational Accuracy
Paper accepted at 45th International Conference on Software Engineering (ICSE'23 NIER), May 2023
null
10.1109/ICSE-NIER58687.2023.00014
null
cs.LG cs.AI cs.CV cs.SE
http://creativecommons.org/licenses/by/4.0/
Deep Neural Networks (DNN) are nowadays largely adopted in many application domains thanks to their human-like, or even superhuman, performance in specific tasks. However, due to unpredictable/unconsidered operating conditions, unexpected failures show up in the field, making the performance of a DNN in operation very different from the one estimated prior to release. In the life cycle of DNN systems, the assessment of accuracy is typically addressed in two ways: offline, via sampling of operational inputs, or online, via pseudo-oracles. The former is considered more expensive due to the need for manual labeling of the sampled inputs. The latter is automatic but less accurate. We believe that emerging iterative industrial-strength life cycle models for Machine Learning systems, like MLOps, offer the possibility to leverage inputs observed in operation not only to provide faithful estimates of a DNN's accuracy, but also to improve it through remodeling/retraining actions. We propose DAIC (DNN Assessment and Improvement Cycle), an approach which combines ''low-cost'' online pseudo-oracles and ''high-cost'' offline sampling techniques to estimate and improve the operational accuracy of a DNN in the iterations of its life cycle. Preliminary results show the benefits of combining the two approaches and integrating them in the DNN life cycle.
[ { "created": "Thu, 2 Mar 2023 14:21:54 GMT", "version": "v1" } ]
2024-03-27
[ [ "Guerriero", "Antonio", "" ], [ "Pietrantuono", "Roberto", "" ], [ "Russo", "Stefano", "" ] ]
Deep Neural Networks (DNN) are nowadays largely adopted in many application domains thanks to their human-like, or even superhuman, performance in specific tasks. However, due to unpredictable/unconsidered operating conditions, unexpected failures show up in the field, making the performance of a DNN in operation very different from the one estimated prior to release. In the life cycle of DNN systems, the assessment of accuracy is typically addressed in two ways: offline, via sampling of operational inputs, or online, via pseudo-oracles. The former is considered more expensive due to the need for manual labeling of the sampled inputs. The latter is automatic but less accurate. We believe that emerging iterative industrial-strength life cycle models for Machine Learning systems, like MLOps, offer the possibility to leverage inputs observed in operation not only to provide faithful estimates of a DNN's accuracy, but also to improve it through remodeling/retraining actions. We propose DAIC (DNN Assessment and Improvement Cycle), an approach which combines ''low-cost'' online pseudo-oracles and ''high-cost'' offline sampling techniques to estimate and improve the operational accuracy of a DNN in the iterations of its life cycle. Preliminary results show the benefits of combining the two approaches and integrating them in the DNN life cycle.
1406.5988
Luca Sanguinetti
Luca Sanguinetti, Aris L. Moustakas, Emil Bjornson, and Merouane Debbah
Large System Analysis of the Energy Consumption Distribution in Multi-User MIMO Systems with Mobility
8 figures, 2 tables, to appear on IEEE Transactions on Wireless Communications
null
10.1109/TWC.2014.2372761
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we consider the downlink of a single-cell multi-user MIMO system in which the base station (BS) makes use of $N$ antennas to communicate with $K$ single-antenna user equipments (UEs). The UEs move around in the cell according to a random walk mobility model. We aim at determining the energy consumption distribution when different linear precoding techniques are used at the BS to guarantee target rates within a finite time interval $T$. The analysis is conducted in the asymptotic regime where $N$ and $K$ grow large with fixed ratio under the assumption of perfect channel state information (CSI). Both recent and standard results from large system analysis are used to provide concise formulae for the asymptotic transmit powers and beamforming vectors for all considered schemes. These results are eventually used to provide a deterministic approximation of the energy consumption and to study its fluctuations around this value in the form of a central limit theorem. Closed-form expressions for the asymptotic means and variances are given. Numerical results are used to validate the accuracy of the theoretical analysis and to make comparisons. We show how the results can be used to approximate the probability that a battery-powered BS runs out of energy and also to design the cell radius for minimizing the energy consumption per unit area. The imperfect CSI case is also briefly considered.
[ { "created": "Mon, 23 Jun 2014 17:18:15 GMT", "version": "v1" }, { "created": "Mon, 5 Jan 2015 08:31:45 GMT", "version": "v2" } ]
2016-11-18
[ [ "Sanguinetti", "Luca", "" ], [ "Moustakas", "Aris L.", "" ], [ "Bjornson", "Emil", "" ], [ "Debbah", "Merouane", "" ] ]
In this work, we consider the downlink of a single-cell multi-user MIMO system in which the base station (BS) makes use of $N$ antennas to communicate with $K$ single-antenna user equipments (UEs). The UEs move around in the cell according to a random walk mobility model. We aim at determining the energy consumption distribution when different linear precoding techniques are used at the BS to guarantee target rates within a finite time interval $T$. The analysis is conducted in the asymptotic regime where $N$ and $K$ grow large with fixed ratio under the assumption of perfect channel state information (CSI). Both recent and standard results from large system analysis are used to provide concise formulae for the asymptotic transmit powers and beamforming vectors for all considered schemes. These results are eventually used to provide a deterministic approximation of the energy consumption and to study its fluctuations around this value in the form of a central limit theorem. Closed-form expressions for the asymptotic means and variances are given. Numerical results are used to validate the accuracy of the theoretical analysis and to make comparisons. We show how the results can be used to approximate the probability that a battery-powered BS runs out of energy and also to design the cell radius for minimizing the energy consumption per unit area. The imperfect CSI case is also briefly considered.
1911.00616
Eduardo Soares Mr
Eduardo Soares, Plamen Angelov
Novelty Detection and Learning from Extremely Weak Supervision
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we offer a method and algorithm that make possible fully autonomous (unsupervised) detection of new classes and learning following a very parsimonious training priming (only a few labeled data samples). Moreover, new unknown classes may appear at a later stage, and the proposed xClass method and algorithm are able to successfully discover this and learn from the data autonomously. Furthermore, the features (inputs to the classifier) are automatically sub-selected by the algorithm based on the accumulated data density per feature per class. As a result, a highly efficient, lean, human-understandable, autonomously self-learning model (which only needs an extremely parsimonious priming) emerges from the data. To validate our proposal, we tested it on two challenging problems: the imbalanced Caltech-101 data set and the iRoads dataset. Not only did we achieve higher precision but, more significantly, we used only a single class beforehand (while other methods used all the available classes), and we generated interpretable models with a smaller number of features, through extremely weak and weak supervision.
[ { "created": "Fri, 1 Nov 2019 23:51:08 GMT", "version": "v1" } ]
2019-11-05
[ [ "Soares", "Eduardo", "" ], [ "Angelov", "Plamen", "" ] ]
In this paper we offer a method and algorithm that make possible fully autonomous (unsupervised) detection of new classes and learning following a very parsimonious training priming (only a few labeled data samples). Moreover, new unknown classes may appear at a later stage, and the proposed xClass method and algorithm are able to successfully discover this and learn from the data autonomously. Furthermore, the features (inputs to the classifier) are automatically sub-selected by the algorithm based on the accumulated data density per feature per class. As a result, a highly efficient, lean, human-understandable, autonomously self-learning model (which only needs an extremely parsimonious priming) emerges from the data. To validate our proposal, we tested it on two challenging problems: the imbalanced Caltech-101 data set and the iRoads dataset. Not only did we achieve higher precision but, more significantly, we used only a single class beforehand (while other methods used all the available classes), and we generated interpretable models with a smaller number of features, through extremely weak and weak supervision.
2203.11903
Chace Lee
Chace Lee, Angelica Willis, Christina Chen, Marcin Sieniek, Akib Uddin, Jonny Wong, Rory Pilgrim, Katherine Chou, Daniel Tse, Shravya Shetty, Ryan G. Gomes
Enabling faster and more reliable sonographic assessment of gestational age through machine learning
null
null
null
null
cs.LG cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fetal ultrasounds are an essential part of prenatal care and can be used to estimate gestational age (GA). Accurate GA assessment is important for providing appropriate prenatal care throughout pregnancy and identifying complications such as fetal growth disorders. Since the derivation of GA from manual fetal biometry measurements (head, abdomen, femur) is operator-dependent and time-consuming, there have been a number of research efforts focused on using artificial intelligence (AI) models to estimate GA using standard biometry images, but there is still room to improve the accuracy and reliability of these AI systems for widescale adoption. To improve GA estimates, without significant change to provider workflows, we leverage AI to interpret standard plane ultrasound images as well as 'fly-to' ultrasound videos, which are 5-10s videos automatically recorded as part of the standard of care before the still image is captured. We developed and validated three AI models: an image model using standard plane images, a video model using fly-to videos, and an ensemble model (combining both image and video). All three were statistically superior to standard fetal biometry-based GA estimates derived by expert sonographers; the ensemble model had the lowest mean absolute error (MAE) compared to the clinical standard fetal biometry (mean difference: -1.51 $\pm$ 3.96 days, 95% CI [-1.9, -1.1]) on a test set that consisted of 404 participants. We showed that our models outperform standard biometry by a more substantial margin on fetuses that were small for GA. Our AI models have the potential to empower trained operators to estimate GA with higher accuracy while reducing the amount of time required and user variability in measurement acquisition.
[ { "created": "Tue, 22 Mar 2022 17:15:56 GMT", "version": "v1" } ]
2022-03-23
[ [ "Lee", "Chace", "" ], [ "Willis", "Angelica", "" ], [ "Chen", "Christina", "" ], [ "Sieniek", "Marcin", "" ], [ "Uddin", "Akib", "" ], [ "Wong", "Jonny", "" ], [ "Pilgrim", "Rory", "" ], [ "Chou", "Katherine", "" ], [ "Tse", "Daniel", "" ], [ "Shetty", "Shravya", "" ], [ "Gomes", "Ryan G.", "" ] ]
Fetal ultrasounds are an essential part of prenatal care and can be used to estimate gestational age (GA). Accurate GA assessment is important for providing appropriate prenatal care throughout pregnancy and identifying complications such as fetal growth disorders. Since the derivation of GA from manual fetal biometry measurements (head, abdomen, femur) is operator-dependent and time-consuming, there have been a number of research efforts focused on using artificial intelligence (AI) models to estimate GA using standard biometry images, but there is still room to improve the accuracy and reliability of these AI systems for widescale adoption. To improve GA estimates, without significant change to provider workflows, we leverage AI to interpret standard plane ultrasound images as well as 'fly-to' ultrasound videos, which are 5-10s videos automatically recorded as part of the standard of care before the still image is captured. We developed and validated three AI models: an image model using standard plane images, a video model using fly-to videos, and an ensemble model (combining both image and video). All three were statistically superior to standard fetal biometry-based GA estimates derived by expert sonographers; the ensemble model had the lowest mean absolute error (MAE) compared to the clinical standard fetal biometry (mean difference: -1.51 $\pm$ 3.96 days, 95% CI [-1.9, -1.1]) on a test set that consisted of 404 participants. We showed that our models outperform standard biometry by a more substantial margin on fetuses that were small for GA. Our AI models have the potential to empower trained operators to estimate GA with higher accuracy while reducing the amount of time required and user variability in measurement acquisition.
2210.00891
Enzo Tartaglione
Enzo Tartaglione
Information Removal at the bottleneck in Deep Neural Networks
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning models are nowadays broadly deployed to solve an incredibly large variety of tasks. Commonly, leveraging the availability of "big data", deep neural networks are trained as black boxes, minimizing an objective function at the output. This, however, does not allow control over the propagation of some specific features through the model, like gender or race, when solving an uncorrelated task. This raises issues both in the privacy domain (considering the propagation of unwanted information) and of bias (considering that these features are potentially used to solve the given task). In this work we propose IRENE, a method to achieve information removal at the bottleneck of deep neural networks, which explicitly minimizes the estimated mutual information between the features to be kept ``private'' and the target. Experiments on a synthetic dataset and on CelebA validate the effectiveness of the proposed approach, and open the road towards the development of approaches guaranteeing information removal in deep neural networks.
[ { "created": "Fri, 30 Sep 2022 14:20:21 GMT", "version": "v1" } ]
2022-10-04
[ [ "Tartaglione", "Enzo", "" ] ]
Deep learning models are nowadays broadly deployed to solve an incredibly large variety of tasks. Commonly, leveraging the availability of "big data", deep neural networks are trained as black boxes, minimizing an objective function at the output. This, however, does not allow control over the propagation of some specific features through the model, like gender or race, when solving an uncorrelated task. This raises issues both in the privacy domain (considering the propagation of unwanted information) and of bias (considering that these features are potentially used to solve the given task). In this work we propose IRENE, a method to achieve information removal at the bottleneck of deep neural networks, which explicitly minimizes the estimated mutual information between the features to be kept ``private'' and the target. Experiments on a synthetic dataset and on CelebA validate the effectiveness of the proposed approach, and open the road towards the development of approaches guaranteeing information removal in deep neural networks.
1004.3566
Vishal Goyal
G. Murugesan, C. Chellappan
An Economic-based Resource Management and Scheduling for Grid Computing Applications
International Journal of Computer Science Issues online at http://ijcsi.org/articles/An-Economic-based-Resource-Management-and-Scheduling-for-Grid-Computing-Applications.php
IJCSI, Volume 7, Issue 2, March 2010
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Resource management and scheduling play a crucial role in achieving high utilization of resources in grid computing environments. Due to the heterogeneity of resources, scheduling an application is a significantly complicated and challenging task in a grid system. Most research in this area has mainly focused on improving the performance of the grid system. Some allocation models have been proposed based on divisible load theory, with different types of workloads and a single originating processor. In this paper we introduce a new resource allocation model, cast as an economic model, with multiple load-originating processors. Solutions for an optimal allocation of load fractions to nodes are obtained via a linear programming approach so as to minimize the cost to grid users. It is found that the resource allocation model can efficiently and effectively allocate workloads to proper resources. Experimental results showed that the proposed model obtained better solutions in terms of cost and time.
[ { "created": "Tue, 20 Apr 2010 20:32:31 GMT", "version": "v1" } ]
2010-04-22
[ [ "Murugesan", "G.", "" ], [ "Chellappan", "C.", "" ] ]
Resource management and scheduling play a crucial role in achieving high utilization of resources in grid computing environments. Due to the heterogeneity of resources, scheduling an application is a significantly complicated and challenging task in a grid system. Most research in this area has mainly focused on improving the performance of the grid system. Some allocation models have been proposed based on divisible load theory, with different types of workloads and a single originating processor. In this paper we introduce a new resource allocation model, cast as an economic model, with multiple load-originating processors. Solutions for an optimal allocation of load fractions to nodes are obtained via a linear programming approach so as to minimize the cost to grid users. It is found that the resource allocation model can efficiently and effectively allocate workloads to proper resources. Experimental results showed that the proposed model obtained better solutions in terms of cost and time.
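The linear-programming flavor of the allocation can be illustrated on a toy instance (the cost vector, node capacities, and total load below are made up for illustration), e.g., with scipy.optimize.linprog:

```python
# Hedged sketch: allocate fractions of a divisible load to nodes at
# minimum cost, subject to per-node capacity limits.
import numpy as np
from scipy.optimize import linprog

cost = np.array([3.0, 5.0, 4.0])        # cost per unit load on each node
capacity = np.array([0.5, 0.8, 0.6])    # max fraction each node can take
total_load = 1.0                        # the whole workload must be placed

# minimize cost @ x  subject to  sum(x) == total_load, 0 <= x_i <= cap_i
res = linprog(c=cost,
              A_eq=np.ones((1, len(cost))), b_eq=[total_load],
              bounds=[(0.0, cap) for cap in capacity])
print(res.x, res.fun)  # cheapest nodes are filled first
```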
1912.12220
Do\u{g}analp Ergen\c{c}
Do\u{g}analp Ergen\c{c} and Ertan Onur
On Network Traffic Forecasting using Autoregressive Models
null
null
null
null
cs.NI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Various statistical analysis methods have been studied for years to extract accurate trends of network traffic and predict the future load, mainly to allocate required resources. Besides, many stochastic modeling techniques have been offered to represent fundamental characteristics of different types of network traffic. In this study, we analyze autoregressive traffic forecasting techniques considering their popularity and wide use in the domain. In comparison to similar works, we present important traffic characteristics and discussions from the literature to create self-consistent guidance along with the survey. Then, we examine the techniques in the literature, revealing which network characteristics they can capture, and offer a characteristic-based framework. Most importantly, we aim to fill the gap between the statistical analysis of those methods and their relevance to networking by discussing significant aspects and requirements for accurate forecasting from a network-telemetric perspective.
[ { "created": "Fri, 27 Dec 2019 16:26:25 GMT", "version": "v1" } ]
2019-12-30
[ [ "Ergenç", "Doğanalp", "" ], [ "Onur", "Ertan", "" ] ]
Various statistical analysis methods have been studied for years to extract accurate trends of network traffic and predict the future load, mainly to allocate required resources. Besides, many stochastic modeling techniques have been offered to represent fundamental characteristics of different types of network traffic. In this study, we analyze autoregressive traffic forecasting techniques considering their popularity and wide use in the domain. In comparison to similar works, we present important traffic characteristics and discussions from the literature to create self-consistent guidance along with the survey. Then, we examine the techniques in the literature, revealing which network characteristics they can capture, and offer a characteristic-based framework. Most importantly, we aim to fill the gap between the statistical analysis of those methods and their relevance to networking by discussing significant aspects and requirements for accurate forecasting from a network-telemetric perspective.
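A minimal autoregressive fit-and-forecast sketch in the spirit of the surveyed techniques, using plain least squares (no seasonal or integrated terms; the lag order and usage are illustrative):

```python
# Hedged sketch: fit y_t = c + a_1 y_{t-1} + ... + a_p y_{t-p} and roll
# the model forward to forecast future traffic load.
import numpy as np

def fit_ar(y, p):
    """Least-squares AR(p) fit; returns [c, a_1, ..., a_p]."""
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - k:len(y) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def forecast(y, coef, steps):
    """Iterated one-step-ahead forecasts from the fitted AR model."""
    p = len(coef) - 1
    hist, out = list(y[-p:]), []
    for _ in range(steps):
        nxt = coef[0] + sum(coef[k] * hist[-k] for k in range(1, p + 1))
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

# Usage sketch: coef = fit_ar(traffic, p=5); yhat = forecast(traffic, coef, 12)
```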
2003.13165
Balakumar Sundaralingam
Balakumar Sundaralingam and Tucker Hermans
In-Hand Object-Dynamics Inference using Tactile Fingertips
Accepted at IEEE Transactions on Robotics (T-RO). Website: https://sites.google.com/view/tactile-obj-dynamics
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Having the ability to estimate an object's properties through interaction will enable robots to manipulate novel objects. An object's dynamics, specifically its friction and inertial parameters, have so far been estimated only in a lab environment with precise and often external sensing. Could we infer an object's dynamics in the wild with only the robot's sensors? In this paper, we explore the estimation of the dynamics of a grasped object in motion, with tactile force sensing at multiple fingertips. Our estimation approach does not rely on torque sensing to estimate the dynamics. To estimate friction, we develop a control scheme to actively interact with the object until slip is detected. To robustly perform the inertial estimation, we set up a factor graph that fuses all our sensor measurements on physically consistent manifolds and perform inference. We show that tactile fingertips enable in-hand dynamics estimation of low-mass objects.
[ { "created": "Mon, 30 Mar 2020 00:12:11 GMT", "version": "v1" }, { "created": "Tue, 19 Jan 2021 04:37:38 GMT", "version": "v2" } ]
2021-01-20
[ [ "Sundaralingam", "Balakumar", "" ], [ "Hermans", "Tucker", "" ] ]
Having the ability to estimate an object's properties through interaction will enable robots to manipulate novel objects. An object's dynamics, specifically its friction and inertial parameters, have so far been estimated only in a lab environment with precise and often external sensing. Could we infer an object's dynamics in the wild with only the robot's sensors? In this paper, we explore the estimation of the dynamics of a grasped object in motion, with tactile force sensing at multiple fingertips. Our estimation approach does not rely on torque sensing to estimate the dynamics. To estimate friction, we develop a control scheme to actively interact with the object until slip is detected. To robustly perform the inertial estimation, we set up a factor graph that fuses all our sensor measurements on physically consistent manifolds and perform inference. We show that tactile fingertips enable in-hand dynamics estimation of low-mass objects.
2302.07832
Aodong Li
Aodong Li, Chen Qiu, Marius Kloft, Padhraic Smyth, Stephan Mandt, Maja Rudolph
Deep Anomaly Detection under Labeling Budget Constraints
ICML 2023
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Selecting informative data points for expert feedback can significantly improve the performance of anomaly detection (AD) in various contexts, such as medical diagnostics or fraud detection. In this paper, we determine a set of theoretical conditions under which anomaly scores generalize from labeled queries to unlabeled data. Motivated by these results, we propose a data labeling strategy with optimal data coverage under labeling budget constraints. In addition, we propose a new learning framework for semi-supervised AD. Extensive experiments on image, tabular, and video data sets show that our approach results in state-of-the-art semi-supervised AD performance under labeling budget constraints.
[ { "created": "Wed, 15 Feb 2023 18:18:35 GMT", "version": "v1" }, { "created": "Tue, 4 Jul 2023 18:33:10 GMT", "version": "v2" } ]
2023-07-06
[ [ "Li", "Aodong", "" ], [ "Qiu", "Chen", "" ], [ "Kloft", "Marius", "" ], [ "Smyth", "Padhraic", "" ], [ "Mandt", "Stephan", "" ], [ "Rudolph", "Maja", "" ] ]
Selecting informative data points for expert feedback can significantly improve the performance of anomaly detection (AD) in various contexts, such as medical diagnostics or fraud detection. In this paper, we determine a set of theoretical conditions under which anomaly scores generalize from labeled queries to unlabeled data. Motivated by these results, we propose a data labeling strategy with optimal data coverage under labeling budget constraints. In addition, we propose a new learning framework for semi-supervised AD. Extensive experiments on image, tabular, and video data sets show that our approach results in state-of-the-art semi-supervised AD performance under labeling budget constraints.
1706.02337
Xiao Yang
Xiao Yang, Ersin Yumer, Paul Asente, Mike Kraley, Daniel Kifer, C. Lee Giles
Learning to Extract Semantic Structure from Documents Using Multimodal Fully Convolutional Neural Network
CVPR 2017 Spotlight
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an end-to-end, multimodal, fully convolutional network for extracting semantic structures from document images. We consider document semantic structure extraction as a pixel-wise segmentation task, and propose a unified model that classifies pixels based not only on their visual appearance, as in the traditional page segmentation task, but also on the content of underlying text. Moreover, we propose an efficient synthetic document generation process that we use to generate pretraining data for our network. Once the network is trained on a large set of synthetic documents, we fine-tune the network on unlabeled real documents using a semi-supervised approach. We systematically study the optimum network architecture and show that both our multimodal approach and the synthetic data pretraining significantly boost the performance.
[ { "created": "Wed, 7 Jun 2017 18:51:31 GMT", "version": "v1" } ]
2017-06-09
[ [ "Yang", "Xiao", "" ], [ "Yumer", "Ersin", "" ], [ "Asente", "Paul", "" ], [ "Kraley", "Mike", "" ], [ "Kifer", "Daniel", "" ], [ "Giles", "C. Lee", "" ] ]
We present an end-to-end, multimodal, fully convolutional network for extracting semantic structures from document images. We consider document semantic structure extraction as a pixel-wise segmentation task, and propose a unified model that classifies pixels based not only on their visual appearance, as in the traditional page segmentation task, but also on the content of underlying text. Moreover, we propose an efficient synthetic document generation process that we use to generate pretraining data for our network. Once the network is trained on a large set of synthetic documents, we fine-tune the network on unlabeled real documents using a semi-supervised approach. We systematically study the optimum network architecture and show that both our multimodal approach and the synthetic data pretraining significantly boost the performance.
2403.02308
Yuchen Duan
Yuchen Duan, Weiyun Wang, Zhe Chen, Xizhou Zhu, Lewei Lu, Tong Lu, Yu Qiao, Hongsheng Li, Jifeng Dai, Wenhai Wang
Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transformers have revolutionized computer vision and natural language processing, but their high computational complexity limits their application in high-resolution image processing and long-context analysis. This paper introduces Vision-RWKV (VRWKV), a model adapted from the RWKV model used in the NLP field with necessary modifications for vision tasks. Similar to the Vision Transformer (ViT), our model is designed to efficiently handle sparse inputs and demonstrate robust global processing capabilities, while also scaling up effectively, accommodating both large-scale parameters and extensive datasets. Its distinctive advantage lies in its reduced spatial aggregation complexity, which renders it exceptionally adept at processing high-resolution images seamlessly, eliminating the necessity for windowing operations. Our evaluations demonstrate that VRWKV surpasses ViT's performance in image classification and has significantly faster speeds and lower memory usage processing high-resolution inputs. In dense prediction tasks, it outperforms window-based models, maintaining comparable speeds. These results highlight VRWKV's potential as a more efficient alternative for visual perception tasks. Code is released at \url{https://github.com/OpenGVLab/Vision-RWKV}.
[ { "created": "Mon, 4 Mar 2024 18:46:20 GMT", "version": "v1" }, { "created": "Thu, 7 Mar 2024 15:43:08 GMT", "version": "v2" } ]
2024-03-08
[ [ "Duan", "Yuchen", "" ], [ "Wang", "Weiyun", "" ], [ "Chen", "Zhe", "" ], [ "Zhu", "Xizhou", "" ], [ "Lu", "Lewei", "" ], [ "Lu", "Tong", "" ], [ "Qiao", "Yu", "" ], [ "Li", "Hongsheng", "" ], [ "Dai", "Jifeng", "" ], [ "Wang", "Wenhai", "" ] ]
Transformers have revolutionized computer vision and natural language processing, but their high computational complexity limits their application in high-resolution image processing and long-context analysis. This paper introduces Vision-RWKV (VRWKV), a model adapted from the RWKV model used in the NLP field with necessary modifications for vision tasks. Similar to the Vision Transformer (ViT), our model is designed to efficiently handle sparse inputs and demonstrate robust global processing capabilities, while also scaling up effectively, accommodating both large-scale parameters and extensive datasets. Its distinctive advantage lies in its reduced spatial aggregation complexity, which renders it exceptionally adept at processing high-resolution images seamlessly, eliminating the necessity for windowing operations. Our evaluations demonstrate that VRWKV surpasses ViT's performance in image classification and has significantly faster speeds and lower memory usage processing high-resolution inputs. In dense prediction tasks, it outperforms window-based models, maintaining comparable speeds. These results highlight VRWKV's potential as a more efficient alternative for visual perception tasks. Code is released at \url{https://github.com/OpenGVLab/Vision-RWKV}.
2407.00024
Lang He Ph.D
Lang He, Kai Chen, Junnan Zhao, Yimeng Wang, Ercheng Pei, Haifeng Chen, Jiewei Jiang, Shiqing Zhang, Jie Zhang, Zhongmin Wang, Tao He, Prayag Tiwari
LMVD: A Large-Scale Multimodal Vlog Dataset for Depression Detection in the Wild
null
null
null
null
cs.CV cs.AI cs.MM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Depression can significantly impact many aspects of an individual's life, including their personal and social functioning, academic and work performance, and overall quality of life. Many researchers within the field of affective computing are adopting deep learning technology to explore potential patterns related to the detection of depression. However, because of subjects' privacy protection concerns, data in this area are still scarce, presenting a challenge for the deep discriminative models used in detecting depression. To navigate these obstacles, a large-scale multimodal vlog dataset (LMVD) for depression recognition in the wild is built. LMVD has 1823 samples, comprising 214 hours of video from 1475 participants, captured from four multimedia platforms (Sina Weibo, Bilibili, TikTok, and YouTube). A novel architecture termed MDDformer is proposed to learn the non-verbal behaviors of individuals. Extensive validations are performed on the LMVD dataset, demonstrating superior performance for depression detection. We anticipate that LMVD will contribute a valuable function to the depression detection community. The data and code will be released at the link: https://github.com/helang818/LMVD/.
[ { "created": "Thu, 9 May 2024 01:27:10 GMT", "version": "v1" } ]
2024-07-02
[ [ "He", "Lang", "" ], [ "Chen", "Kai", "" ], [ "Zhao", "Junnan", "" ], [ "Wang", "Yimeng", "" ], [ "Pei", "Ercheng", "" ], [ "Chen", "Haifeng", "" ], [ "Jiang", "Jiewei", "" ], [ "Zhang", "Shiqing", "" ], [ "Zhang", "Jie", "" ], [ "Wang", "Zhongmin", "" ], [ "He", "Tao", "" ], [ "Tiwari", "Prayag", "" ] ]
Depression can significantly impact many aspects of an individual's life, including their personal and social functioning, academic and work performance, and overall quality of life. Many researchers within the field of affective computing are adopting deep learning technology to explore potential patterns related to the detection of depression. However, because of subjects' privacy protection concerns, data in this area are still scarce, presenting a challenge for the deep discriminative models used in detecting depression. To navigate these obstacles, a large-scale multimodal vlog dataset (LMVD) for depression recognition in the wild is built. LMVD has 1823 samples, comprising 214 hours of video from 1475 participants, captured from four multimedia platforms (Sina Weibo, Bilibili, TikTok, and YouTube). A novel architecture termed MDDformer is proposed to learn the non-verbal behaviors of individuals. Extensive validations are performed on the LMVD dataset, demonstrating superior performance for depression detection. We anticipate that LMVD will contribute a valuable function to the depression detection community. The data and code will be released at the link: https://github.com/helang818/LMVD/.
1812.02615
Muhammad Junaid Farooq
Jin Shang and Muhammad Junaid Farooq and Quanyan Zhu
Real-Time Transmission Mechanism Design for Wireless IoT Sensors with Energy Harvesting under Power Saving Mode
null
null
null
null
cs.SY eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Internet of things (IoT) comprises wireless sensors and actuators connected via access points to the Internet. Often, the sensing devices are remotely deployed with limited battery power and are equipped with energy harvesting equipment. These devices transmit real-time data to the base station (BS), which is used in applications such as anomaly detection. Under sufficient power availability, wireless transmissions from sensors can be scheduled at regular time intervals to maintain real-time data acquisition. However, once the battery is significantly depleted, the devices enter power saving mode and need to be more selective in transmitting information to the BS. Transmitting a particular piece of sensed data consumes power, while discarding it may result in loss of utility at the BS. The goal is to design an optimal dynamic policy that enables the device to decide whether to transmit or discard a piece of sensed data, particularly under power saving mode. This enables the sensor to prolong its operation while causing minimum loss of utility to the application. We develop an analytical framework to capture the utility of the IoT sensor transmissions and leverage a dynamic programming based approach to derive an optimal real-time transmission policy based on the statistics of information arrival, the likelihood of harvested energy, and the designed lifetime of the sensors. Numerical results show that if the statistics of future data valuation are accurately predicted, there is a significant increase in the utility obtained at the BS as well as in the battery lifetime.
[ { "created": "Thu, 6 Dec 2018 15:46:04 GMT", "version": "v1" }, { "created": "Wed, 30 Jan 2019 17:02:29 GMT", "version": "v2" }, { "created": "Mon, 8 Apr 2019 17:28:31 GMT", "version": "v3" } ]
2019-04-09
[ [ "Shang", "Jin", "" ], [ "Farooq", "Muhammad Junaid", "" ], [ "Zhu", "Quanyan", "" ] ]
The Internet of things (IoT) comprises wireless sensors and actuators connected via access points to the Internet. Often, the sensing devices are remotely deployed with limited battery power and are equipped with energy harvesting equipment. These devices transmit real-time data to the base station (BS), which is used in applications such as anomaly detection. Under sufficient power availability, wireless transmissions from sensors can be scheduled at regular time intervals to maintain real-time data acquisition. However, once the battery is significantly depleted, the devices enter power saving mode and need to be more selective in transmitting information to the BS. Transmitting a particular piece of sensed data consumes power, while discarding it may result in loss of utility at the BS. The goal is to design an optimal dynamic policy that enables the device to decide whether to transmit or discard a piece of sensed data, particularly under power saving mode. This enables the sensor to prolong its operation while causing minimum loss of utility to the application. We develop an analytical framework to capture the utility of the IoT sensor transmissions and leverage a dynamic programming based approach to derive an optimal real-time transmission policy based on the statistics of information arrival, the likelihood of harvested energy, and the designed lifetime of the sensors. Numerical results show that if the statistics of future data valuation are accurately predicted, there is a significant increase in the utility obtained at the BS as well as in the battery lifetime.
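To make the dynamic-programming formulation concrete, a minimal backward-induction sketch follows. The battery model, value distribution, and harvesting probability below are invented for illustration and are not the paper's model.

    import numpy as np

    # Toy finite-horizon model: battery levels 0..B; each slot the sensor
    # sees data of random value, transmitting costs one battery unit, and
    # one unit of energy is harvested with probability p_h.
    B, T, p_h = 10, 50, 0.3
    values = np.array([0.0, 1.0, 2.0])       # possible data valuations
    probs  = np.array([0.5, 0.3, 0.2])

    V = np.zeros((T + 1, B + 1))             # V[t, b] = optimal expected utility
    policy = np.zeros((T, B + 1, len(values)), dtype=bool)

    def cont(V_next, b):                     # continuation value after harvesting
        return p_h * V_next[min(b + 1, B)] + (1 - p_h) * V_next[b]

    for t in range(T - 1, -1, -1):           # backward induction over time
        for b in range(B + 1):
            exp_util = 0.0
            for i, val in enumerate(values):
                skip = cont(V[t + 1], b)
                send = val + cont(V[t + 1], b - 1) if b > 0 else -np.inf
                policy[t, b, i] = send > skip
                exp_util += probs[i] * max(send, skip)
            V[t, b] = exp_util

    # Threshold structure: with low battery, transmit only high-value data
    print(policy[0, 2])

The resulting policy is exactly the transmit-or-discard rule described above: a function of the remaining battery, time, and the observed data value.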
1803.11256
Alexander Kott
Alexander Kott
Challenges and Characteristics of Intelligent Autonomy for Internet of Battle Things in Highly Adversarial Environments
This is a version of the paper that was presented at, and will appear in the Proceedings of the 2018 Spring Symposium of AAAI, March 26-28, 2018, Palo Alto, CA
null
null
null
cs.CY cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Numerous, artificially intelligent, networked things will populate the battlefield of the future, operating in close collaboration with human warfighters, and fighting as teams in highly adversarial environments. This paper explores the characteristics, capabilities and intelligence required of such a network of intelligent things and humans - Internet of Battle Things (IOBT). It will experience unique challenges that are not yet well addressed by the current generation of AI and machine learning.
[ { "created": "Tue, 20 Mar 2018 22:15:14 GMT", "version": "v1" }, { "created": "Fri, 13 Apr 2018 19:36:14 GMT", "version": "v2" } ]
2018-04-17
[ [ "Kott", "Alexander", "" ] ]
Numerous, artificially intelligent, networked things will populate the battlefield of the future, operating in close collaboration with human warfighters, and fighting as teams in highly adversarial environments. This paper explores the characteristics, capabilities and intelligence required of such a network of intelligent things and humans - Internet of Battle Things (IOBT). It will experience unique challenges that are not yet well addressed by the current generation of AI and machine learning.
2303.09187
Zhongwei Qiu
Zhongwei Qiu, Yang Qiansheng, Jian Wang, Haocheng Feng, Junyu Han, Errui Ding, Chang Xu, Dongmei Fu, Jingdong Wang
PSVT: End-to-End Multi-person 3D Pose and Shape Estimation with Progressive Video Transformers
CVPR2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing methods for multi-person video 3D human Pose and Shape Estimation (PSE) typically adopt a two-stage strategy, which first detects human instances in each frame and then performs single-person PSE with a temporal model. However, the global spatio-temporal context among spatial instances cannot be captured. In this paper, we propose a new end-to-end multi-person 3D Pose and Shape estimation framework with a progressive Video Transformer, termed PSVT. In PSVT, a spatio-temporal encoder (STE) captures the global feature dependencies among spatial objects. Then, a spatio-temporal pose decoder (STPD) and a shape decoder (STSD) capture the global dependencies between pose queries and feature tokens, and between shape queries and feature tokens, respectively. To handle the variation of objects over time, a novel scheme of progressive decoding is used to update the pose and shape queries at each frame. Besides, we propose a novel pose-guided attention (PGA) mechanism for the shape decoder to better predict shape parameters. These two components strengthen the decoder of PSVT and improve performance. Extensive experiments on four datasets show that PSVT achieves state-of-the-art results.
[ { "created": "Thu, 16 Mar 2023 09:55:43 GMT", "version": "v1" } ]
2023-03-17
[ [ "Qiu", "Zhongwei", "" ], [ "Qiansheng", "Yang", "" ], [ "Wang", "Jian", "" ], [ "Feng", "Haocheng", "" ], [ "Han", "Junyu", "" ], [ "Ding", "Errui", "" ], [ "Xu", "Chang", "" ], [ "Fu", "Dongmei", "" ], [ "Wang", "Jingdong", "" ] ]
Existing methods for multi-person video 3D human Pose and Shape Estimation (PSE) typically adopt a two-stage strategy, which first detects human instances in each frame and then performs single-person PSE with a temporal model. However, the global spatio-temporal context among spatial instances cannot be captured. In this paper, we propose a new end-to-end multi-person 3D Pose and Shape estimation framework with a progressive Video Transformer, termed PSVT. In PSVT, a spatio-temporal encoder (STE) captures the global feature dependencies among spatial objects. Then, a spatio-temporal pose decoder (STPD) and a shape decoder (STSD) capture the global dependencies between pose queries and feature tokens, and between shape queries and feature tokens, respectively. To handle the variation of objects over time, a novel scheme of progressive decoding is used to update the pose and shape queries at each frame. Besides, we propose a novel pose-guided attention (PGA) mechanism for the shape decoder to better predict shape parameters. These two components strengthen the decoder of PSVT and improve performance. Extensive experiments on four datasets show that PSVT achieves state-of-the-art results.
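The progressive-decoding idea (carrying queries across frames and refining them against each frame's feature tokens) can be caricatured as follows; all dimensions and modules are placeholder assumptions, not the published PSVT architecture.

    import torch
    import torch.nn as nn

    class ProgressiveDecoderSketch(nn.Module):
        """Toy progressive decoder: queries refined at frame t seed frame t+1."""
        def __init__(self, dim=256, n_queries=10, n_heads=8):
            super().__init__()
            self.queries = nn.Parameter(torch.randn(n_queries, dim))
            self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
            self.update = nn.Linear(dim, dim)

        def forward(self, frame_tokens):            # (T, N_tokens, dim)
            q = self.queries.unsqueeze(0)           # (1, n_queries, dim)
            outputs = []
            for tokens in frame_tokens:             # iterate over frames
                tokens = tokens.unsqueeze(0)        # (1, N_tokens, dim)
                attended, _ = self.attn(q, tokens, tokens)
                q = q + self.update(attended)       # progressive query update
                outputs.append(q.squeeze(0))
            return torch.stack(outputs)             # per-frame refined queries

    dec = ProgressiveDecoderSketch()
    out = dec(torch.randn(4, 196, 256))             # 4 frames, 196 tokens each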
2405.03251
Zhenmei Shi
Jiuxiang Gu, Chenyang Li, Yingyu Liang, Zhenmei Shi, Zhao Song
Exploring the Frontiers of Softmax: Provable Optimization, Applications in Diffusion Model, and Beyond
53 pages
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The softmax activation function plays a crucial role in the success of large language models (LLMs), particularly in the self-attention mechanism of the widely adopted Transformer architecture. However, the underlying learning dynamics that contribute to the effectiveness of softmax remain largely unexplored. As a step towards better understanding, this paper provides a theoretical study of the optimization and generalization properties of two-layer softmax neural networks, providing theoretical insights into their superior performance compared with other activation functions, such as ReLU and the exponential function. Leveraging the Neural Tangent Kernel (NTK) framework, our analysis reveals that the normalization effect of the softmax function leads to a good perturbation property of the induced NTK matrix, resulting in a good convex region of the loss landscape. Consequently, softmax neural networks can learn the target function in the over-parametrization regime. To demonstrate the broad applicability of our theoretical findings, we apply them to the task of learning score estimation functions in diffusion models, a promising approach for generative modeling. Our analysis shows that gradient-based algorithms can learn the score function with provable accuracy. Our work provides a deeper understanding of the effectiveness of softmax neural networks and their potential in various domains, paving the way for further advancements in natural language processing and beyond.
[ { "created": "Mon, 6 May 2024 08:15:29 GMT", "version": "v1" } ]
2024-05-07
[ [ "Gu", "Jiuxiang", "" ], [ "Li", "Chenyang", "" ], [ "Liang", "Yingyu", "" ], [ "Shi", "Zhenmei", "" ], [ "Song", "Zhao", "" ] ]
The softmax activation function plays a crucial role in the success of large language models (LLMs), particularly in the self-attention mechanism of the widely adopted Transformer architecture. However, the underlying learning dynamics that contribute to the effectiveness of softmax remain largely unexplored. As a step towards better understanding, this paper provides a theoretical study of the optimization and generalization properties of two-layer softmax neural networks, providing theoretical insights into their superior performance compared with other activation functions, such as ReLU and the exponential function. Leveraging the Neural Tangent Kernel (NTK) framework, our analysis reveals that the normalization effect of the softmax function leads to a good perturbation property of the induced NTK matrix, resulting in a good convex region of the loss landscape. Consequently, softmax neural networks can learn the target function in the over-parametrization regime. To demonstrate the broad applicability of our theoretical findings, we apply them to the task of learning score estimation functions in diffusion models, a promising approach for generative modeling. Our analysis shows that gradient-based algorithms can learn the score function with provable accuracy. Our work provides a deeper understanding of the effectiveness of softmax neural networks and their potential in various domains, paving the way for further advancements in natural language processing and beyond.
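For reference, a generic form of the objects under study, written in our own notation rather than necessarily the paper's: a two-layer softmax network and the NTK it induces,

    \[
    f_\theta(x) = a^\top \operatorname{softmax}(W x), \qquad
    K(x, x') = \left\langle \nabla_\theta f_\theta(x),\, \nabla_\theta f_\theta(x') \right\rangle \Big|_{\theta = \theta_0}.
    \]

The "normalization effect" referred to above is that the softmax output always sums to one, which bounds how much any single coordinate can perturb the kernel entries.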
2402.10102
Irina Ar\'evalo
Jose L. Salmeron and Irina Ar\'evalo
A privacy-preserving, distributed and cooperative FCM-based learning approach for Cancer Research
Rough Sets: International Joint Conference, IJCRS 2020
null
null
null
cs.AI cs.DC
http://creativecommons.org/licenses/by/4.0/
Distributed Artificial Intelligence is attracting growing interest. In this paper, the authors introduce an innovative methodology for distributed learning of Particle Swarm Optimization-based Fuzzy Cognitive Maps in a privacy-preserving way. The authors design a training scheme for collaborative FCM learning that offers data privacy in compliance with current regulations. The method is applied to a cancer detection problem, showing that the federated learning process improves the model's performance and obtains results similar to those reported in the literature.
[ { "created": "Thu, 15 Feb 2024 16:56:25 GMT", "version": "v1" } ]
2024-02-16
[ [ "Salmeron", "Jose L.", "" ], [ "Arévalo", "Irina", "" ] ]
Distributed Artificial Intelligence is attracting growing interest. In this paper, the authors introduce an innovative methodology for distributed learning of Particle Swarm Optimization-based Fuzzy Cognitive Maps in a privacy-preserving way. The authors design a training scheme for collaborative FCM learning that offers data privacy in compliance with current regulations. The method is applied to a cancer detection problem, showing that the federated learning process improves the model's performance and obtains results similar to those reported in the literature.
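The privacy-preserving ingredient, in which clients train locally and share only model parameters for a coordinator to average, can be sketched generically. This is plain federated averaging over FCM weight matrices with a placeholder local update; the authors' actual protocol learns the FCM with Particle Swarm Optimization.

    import numpy as np

    def local_update(W, X, lr=0.1, epochs=5):
        """Placeholder local training of an FCM weight matrix W on private
        data X; a real client would run PSO against its own dataset."""
        for _ in range(epochs):
            # toy nudge toward observed concept co-activation statistics
            W = W + lr * (X.T @ X / len(X) - W)
            np.fill_diagonal(W, 0.0)            # FCMs have no self-loops
        return W

    def federated_round(W_global, client_data):
        # each client trains locally; only weight matrices leave the client
        locals_ = [local_update(W_global.copy(), X) for X in client_data]
        return np.mean(locals_, axis=0)         # coordinator averages

    n_concepts = 6
    W = np.zeros((n_concepts, n_concepts))
    clients = [np.random.rand(50, n_concepts) for _ in range(4)]
    for _ in range(10):
        W = federated_round(W, clients)

The raw patient data never leaves a client; only the aggregated weight matrix circulates, which is the privacy property the abstract emphasizes.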
2003.04470
Vuong M. Ngo
V.M. Ngo, N.A. Le-Khac, and M.T. Kechadi
Data Warehouse and Decision Support on Integrated Crop Big Data
13 pages, 11 figures. arXiv admin note: text overlap with arXiv:1905.12411
International Journal of Business Process Integration and Management 2020 Vol.10 No.1
10.1504/IJBPIM.2020.113115
null
cs.DB cs.DC cs.LG cs.PF
http://creativecommons.org/licenses/by/4.0/
In recent years, precision agriculture has become very popular. The introduction of modern information and communication technologies for collecting and processing agricultural data has revolutionised agricultural practices. This trend started long ago (in the early 20th century) and is driven by the low cost of collecting data about everything: from information on fields, such as seed, soil, fertiliser, and pests, to weather data and drone and satellite images. In particular, agricultural data mining today is considered a Big Data application in terms of volume, variety, velocity, and veracity. This leads to challenges in processing vast amounts of complex and diverse information to extract useful knowledge for farmers, agronomists, and other businesses. Such processing is a key foundation for establishing a crop intelligence platform that enables efficient resource management and high-quality agronomic decision making and recommendations. In this paper, we design and implement a continental-level agricultural data warehouse (ADW). ADW is characterised by its (1) flexible schema; (2) data integration from real multi-source agricultural datasets; (3) data science and business intelligence support; (4) high performance; (5) high storage capacity; (6) security; (7) governance and monitoring; (8) consistency, availability, and partition tolerance; and (9) cloud compatibility. We also evaluate the performance of ADW and present some complex queries that extract and return the necessary knowledge about crop management.
[ { "created": "Tue, 10 Mar 2020 00:10:22 GMT", "version": "v1" }, { "created": "Mon, 12 Apr 2021 08:45:11 GMT", "version": "v2" } ]
2021-04-13
[ [ "Ngo", "V. M.", "" ], [ "Le-Khac", "N. A.", "" ], [ "Kechadi", "M. T.", "" ] ]
In recent years, precision agriculture has become very popular. The introduction of modern information and communication technologies for collecting and processing agricultural data has revolutionised agricultural practices. This trend started long ago (in the early 20th century) and is driven by the low cost of collecting data about everything: from information on fields, such as seed, soil, fertiliser, and pests, to weather data and drone and satellite images. In particular, agricultural data mining today is considered a Big Data application in terms of volume, variety, velocity, and veracity. This leads to challenges in processing vast amounts of complex and diverse information to extract useful knowledge for farmers, agronomists, and other businesses. Such processing is a key foundation for establishing a crop intelligence platform that enables efficient resource management and high-quality agronomic decision making and recommendations. In this paper, we design and implement a continental-level agricultural data warehouse (ADW). ADW is characterised by its (1) flexible schema; (2) data integration from real multi-source agricultural datasets; (3) data science and business intelligence support; (4) high performance; (5) high storage capacity; (6) security; (7) governance and monitoring; (8) consistency, availability, and partition tolerance; and (9) cloud compatibility. We also evaluate the performance of ADW and present some complex queries that extract and return the necessary knowledge about crop management.
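The kind of decision-support query such a warehouse serves can be illustrated on a toy fact table; all column names and numbers below are invented and are not the ADW schema.

    import pandas as pd

    # Toy crop fact table: one row per region, crop, and season.
    facts = pd.DataFrame({
        "region":     ["IE-South", "IE-South", "FR-North", "FR-North"],
        "crop":       ["barley", "wheat", "barley", "wheat"],
        "season":     [2019, 2019, 2019, 2019],
        "yield_t_ha": [7.1, 8.3, 6.4, 7.9],
        "n_kg_ha":    [140, 180, 130, 175],   # nitrogen applied
    })

    # Yield per unit of nitrogen by region and crop: a simple
    # resource-efficiency view a warehouse query might serve.
    report = (facts.assign(t_per_kg_n=facts.yield_t_ha / facts.n_kg_ha)
                   .groupby(["region", "crop"])["t_per_kg_n"]
                   .mean()
                   .sort_values(ascending=False))
    print(report)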
1701.05013
Veronika Cheplygina
Veronika Cheplygina, Isabel Pino Pe\~na, Jesper Holst Pedersen, David A. Lynch, Lauge S{\o}rensen, Marleen de Bruijne
Transfer learning for multi-center classification of chronic obstructive pulmonary disease
Accepted at Journal of Biomedical and Health Informatics
null
10.1109/JBHI.2017.2769800
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chronic obstructive pulmonary disease (COPD) is a lung disease which can be quantified using chest computed tomography (CT) scans. Recent studies have shown that COPD can be automatically diagnosed using weakly supervised learning of intensity and texture distributions. However, until now such classifiers have only been evaluated on scans from a single domain, and it is unclear whether they would generalize across domains, such as different scanners or scanning protocols. To address this problem, we investigate classification of COPD in a multi-center dataset with a total of 803 scans from three different centers and four different scanners, with heterogeneous subject distributions. Our method is based on Gaussian texture features and a weighted logistic classifier, which increases the weights of samples similar to the test data. We show that Gaussian texture features outperform intensity features previously used in multi-center classification tasks. We also show that a weighting strategy based on a classifier that is trained to discriminate between scans from different domains can further improve the results. To encourage further research into transfer learning methods for classification of COPD, upon acceptance of the paper we will release two feature datasets used in this study on http://bigr.nl/research/projects/copd
[ { "created": "Wed, 18 Jan 2017 11:13:01 GMT", "version": "v1" }, { "created": "Thu, 23 Nov 2017 14:10:34 GMT", "version": "v2" } ]
2017-11-27
[ [ "Cheplygina", "Veronika", "" ], [ "Peña", "Isabel Pino", "" ], [ "Pedersen", "Jesper Holst", "" ], [ "Lynch", "David A.", "" ], [ "Sørensen", "Lauge", "" ], [ "de Bruijne", "Marleen", "" ] ]
Chronic obstructive pulmonary disease (COPD) is a lung disease which can be quantified using chest computed tomography (CT) scans. Recent studies have shown that COPD can be automatically diagnosed using weakly supervised learning of intensity and texture distributions. However, until now such classifiers have only been evaluated on scans from a single domain, and it is unclear whether they would generalize across domains, such as different scanners or scanning protocols. To address this problem, we investigate classification of COPD in a multi-center dataset with a total of 803 scans from three different centers and four different scanners, with heterogeneous subject distributions. Our method is based on Gaussian texture features and a weighted logistic classifier, which increases the weights of samples similar to the test data. We show that Gaussian texture features outperform intensity features previously used in multi-center classification tasks. We also show that a weighting strategy based on a classifier that is trained to discriminate between scans from different domains can further improve the results. To encourage further research into transfer learning methods for classification of COPD, upon acceptance of the paper we will release two feature datasets used in this study on http://bigr.nl/research/projects/copd
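The described weighting strategy, which upweights training scans that resemble the test domain using a classifier trained to tell domains apart, can be sketched with scikit-learn; feature extraction is stubbed out here with random data, and the weighting formula is the standard importance-weighting trick rather than necessarily the paper's exact variant.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def domain_weights(X_train, X_test):
        """Weight each training sample by the estimated odds that it
        came from the test domain."""
        X = np.vstack([X_train, X_test])
        d = np.r_[np.zeros(len(X_train)), np.ones(len(X_test))]
        clf = LogisticRegression(max_iter=1000).fit(X, d)
        p_test = clf.predict_proba(X_train)[:, 1]
        return p_test / np.clip(1.0 - p_test, 1e-6, None)

    rng = np.random.default_rng(0)
    X_tr, y_tr = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)
    X_te = rng.normal(loc=0.3, size=(100, 16))   # shifted "other scanner"

    w = domain_weights(X_tr, X_te)
    copd_clf = LogisticRegression(max_iter=1000)
    copd_clf.fit(X_tr, y_tr, sample_weight=w)    # weighted COPD classifier

Training samples that the domain classifier finds indistinguishable from test scans receive weights near one, while samples unlike the test domain are downweighted.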
2212.11122
Parviz Ali
Parviz Ali
Diamond Abrasive Electroplated Surface Anomaly Detection using Convolutional Neural Networks for Industrial Quality Inspection
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Electroplated diamond abrasive tools require a nickel coating on a metal surface for abrasive bonding and part functionality. The electroplated nickel-coated abrasive tool is expected to deliver high-quality part performance through a nickel coating thickness between 50% and 60% of the abrasive median diameter, uniformity of the nickel layer, abrasive distribution over the electroplated surface, and a bright gloss. Electroplating parameters are set accordingly for this purpose. Industrial quality inspection of these abrasive electroplated parts for defects with optical inspection instruments is extremely challenging due to the diamond's light refraction and dispersion and the reflective bright nickel surface. This challenge requires parts to be quality-inspected manually with an eye loupe, which is subjective and costly. In this study, we use a Convolutional Neural Network (CNN) model in the production line to detect anomalies in abrasive electroplated parts, allowing parts in bad condition to be fixed or eliminated from the production chain and ultimately reducing manual quality inspection costs. We used 744 samples to train our model. Our model successfully identified over 99% of the parts with an anomaly. Keywords: Artificial Intelligence, Anomaly Detection, Industrial Quality Inspection, Electroplating, Diamond Abrasive Tool
[ { "created": "Sun, 11 Dec 2022 20:14:18 GMT", "version": "v1" } ]
2022-12-22
[ [ "Ali", "Parviz", "" ] ]
Electroplated diamond abrasive tools require a nickel coating on a metal surface for abrasive bonding and part functionality. The electroplated nickel-coated abrasive tool is expected to deliver high-quality part performance through a nickel coating thickness between 50% and 60% of the abrasive median diameter, uniformity of the nickel layer, abrasive distribution over the electroplated surface, and a bright gloss. Electroplating parameters are set accordingly for this purpose. Industrial quality inspection of these abrasive electroplated parts for defects with optical inspection instruments is extremely challenging due to the diamond's light refraction and dispersion and the reflective bright nickel surface. This challenge requires parts to be quality-inspected manually with an eye loupe, which is subjective and costly. In this study, we use a Convolutional Neural Network (CNN) model in the production line to detect anomalies in abrasive electroplated parts, allowing parts in bad condition to be fixed or eliminated from the production chain and ultimately reducing manual quality inspection costs. We used 744 samples to train our model. Our model successfully identified over 99% of the parts with an anomaly. Keywords: Artificial Intelligence, Anomaly Detection, Industrial Quality Inspection, Electroplating, Diamond Abrasive Tool
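A minimal binary-classification CNN of the general kind described is sketched below; the layer sizes and input resolution are illustrative assumptions, since the abstract does not specify the architecture.

    import torch
    import torch.nn as nn

    class PlatingInspector(nn.Module):
        """Small CNN for normal-vs-anomalous electroplated-part images."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, 2)        # {normal, anomaly}

        def forward(self, x):                   # x: (B, 3, H, W)
            return self.head(self.features(x).flatten(1))

    model = PlatingInspector()
    logits = model(torch.randn(8, 3, 224, 224))
    loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))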
1307.0214
Thijs Laarhoven
Thijs Laarhoven
Dynamic Traitor Tracing Schemes, Revisited
7 pages, 1 figure (6 subfigures), 1 table
IEEE Workshop on Information Forensics and Security (WIFS), pp. 191-196, 2013
10.1109/WIFS.2013.6707817
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We revisit recent results from the area of collusion-resistant traitor tracing, and show how they can be combined and improved to obtain more efficient dynamic traitor tracing schemes. In particular, we show how the dynamic Tardos scheme of Laarhoven et al. can be combined with the optimized score functions of Oosterwijk et al. to trace coalitions much faster. If the attack strategy is known, in many cases the order of the code length goes down from quadratic to linear in the number of colluders, while if the attack is not known, we show how the interleaving defense may be used to catch all colluders about twice as fast as in the dynamic Tardos scheme. Some of these results also apply to the static traitor tracing setting where the attack strategy is known in advance, and to group testing.
[ { "created": "Sun, 30 Jun 2013 15:55:11 GMT", "version": "v1" } ]
2016-11-17
[ [ "Laarhoven", "Thijs", "" ] ]
We revisit recent results from the area of collusion-resistant traitor tracing, and show how they can be combined and improved to obtain more efficient dynamic traitor tracing schemes. In particular, we show how the dynamic Tardos scheme of Laarhoven et al. can be combined with the optimized score functions of Oosterwijk et al. to trace coalitions much faster. If the attack strategy is known, in many cases the order of the code length goes down from quadratic to linear in the number of colluders, while if the attack is not known, we show how the interleaving defense may be used to catch all colluders about twice as fast as in the dynamic Tardos scheme. Some of these results also apply to the static traitor tracing setting where the attack strategy is known in advance, and to group testing.
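The score-based tracing these schemes build on can be illustrated with the classic symmetric Tardos score; the cut-off constants and the dynamic, per-round decisions of the schemes discussed are omitted, and the bias density here is only Tardos-like.

    import numpy as np

    rng = np.random.default_rng(1)
    n_users, code_len = 100, 2000

    # Per-column biases drawn from an arcsine-like density, as in Tardos codes
    p = np.sin(rng.uniform(0.05, np.pi / 2 - 0.05, code_len)) ** 2
    X = (rng.random((n_users, code_len)) < p).astype(int)   # user codewords

    pirates = [3, 7, 42]
    y = X[pirates].max(axis=0)      # "all-1" collusion attack per column

    # Symmetric Tardos score: reward agreement with y, penalize disagreement
    a = np.sqrt((1 - p) / p)
    b = np.sqrt(p / (1 - p))
    S = np.where(y == 1,
                 np.where(X == 1, a, -b),
                 np.where(X == 0, b, -a)).sum(axis=1)

    suspects = np.argsort(S)[-3:]   # colluders should score highest
    print(sorted(suspects), "vs true", sorted(pirates))

In the dynamic setting the same kind of score is accumulated round by round, and a user is disconnected as soon as their score crosses a threshold, which is what allows coalitions to be traced faster than with a fixed-length code.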
1509.00721
Andrey Shchurov
Andrey A. Shchurov
A Multilayer Model of Computer Networks
5 pages, 4 figures. ISSN:2231-2803
International Journal of Computer Trends and Technology (IJCTT) V26(1):12-16, August 2015
10.14445/22312803/IJCTT-V26P103
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The fundamental concept of applying the system methodology to network analysis states that network architecture should take into account the services and applications which the network provides and supports. This work introduces a formal model of computer networks based on hierarchical multilayer networks. In turn, individual layers are represented as multiplex networks. The concept of layered networks provides the conditions for top-down consistency of the model. Finally, we determine the necessary set of layers for representing network architecture with regard to applying the system methodology to network analysis.
[ { "created": "Wed, 2 Sep 2015 14:34:53 GMT", "version": "v1" } ]
2015-09-03
[ [ "Shchurov", "Andrey A.", "" ] ]
The fundamental concept of applying the system methodology to network analysis states that network architecture should take into account the services and applications which the network provides and supports. This work introduces a formal model of computer networks based on hierarchical multilayer networks. In turn, individual layers are represented as multiplex networks. The concept of layered networks provides the conditions for top-down consistency of the model. Finally, we determine the necessary set of layers for representing network architecture with regard to applying the system methodology to network analysis.
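One way to encode the hierarchical-multilayer idea as a data structure is a mapping from architectural layers to multiplex networks, with a top-down consistency check between layers. This is an illustrative encoding, not the paper's formalism; names and aspects are invented.

    import networkx as nx

    # Each architectural layer is a multiplex network: several edge
    # "aspects" (e.g. cabling vs. logical flows) over a shared node set.
    def make_layer(nodes, aspects):
        layer = {}
        for name, edges in aspects.items():
            g = nx.Graph()
            g.add_nodes_from(nodes)
            g.add_edges_from(edges)
            layer[name] = g
        return layer

    nodes = ["h1", "h2", "sw1", "srv1"]
    network_model = {
        "physical":    make_layer(nodes, {"cable": [("h1", "sw1"), ("h2", "sw1"), ("sw1", "srv1")]}),
        "application": make_layer(nodes, {"http":  [("h1", "srv1"), ("h2", "srv1")]}),
    }

    # Top-down consistency: every application flow must be supported by
    # a physical path between the same endpoints.
    phys = network_model["physical"]["cable"]
    for u, v in network_model["application"]["http"].edges:
        assert nx.has_path(phys, u, v), f"unsupported flow {u}-{v}"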
1811.00753
Kartik Ahuja
Kartik Ahuja, Mihaela van der Schaar
Risk-Stratify: Confident Stratification Of Patients Based On Risk
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A clinician desires to use a risk-stratification method that achieves confident risk-stratification - the risk estimates of the different patients reflect the true risks with a high probability. This allows the clinician to use these risks to make accurate predictions about prognosis and decisions about screening and treatments for the current patient. We develop Risk-stratify - a two-phase algorithm that is designed to achieve confident risk-stratification. In the first phase, we grow a tree to partition the covariate space. Each node in the tree is split using statistical tests that determine whether the risks of the child nodes are different. The choice of statistical test depends on whether the data is censored (log-rank test) or not (U-test). The leaves of the tree form a partition. The risk distribution of patients in a leaf differs from that of its sibling leaf, but not necessarily from the rest of the leaves. Therefore, some leaves that have similar underlying risks are incorrectly specified to have different risks. In the second phase, we develop a novel recursive graph decomposition approach to address this problem. We merge the leaves of the tree that have similar risks to form new leaves that constitute the final output. We apply Risk-stratify to a cohort of patients (with no history of cardiovascular disease) from UK Biobank and assess their risk for cardiovascular disease. Risk-stratify significantly improves risk-stratification, i.e., a lower fraction of the groups have over- or under-estimated risks (measured in terms of false discovery rate; 33% reduction) in comparison to state-of-the-art methods for cardiovascular prediction (random forests, Cox model, etc.). We find that the Cox model significantly overestimates the risk of 21,621 patients out of 216,211 patients. Risk-stratify can accurately categorize 2,987 of these 21,621 patients as low-risk individuals.
[ { "created": "Fri, 2 Nov 2018 06:30:52 GMT", "version": "v1" } ]
2018-11-05
[ [ "Ahuja", "Kartik", "" ], [ "van der Schaar", "Mihaela", "" ] ]
A clinician desires to use a risk-stratification method that achieves confident risk-stratification - the risk estimates of the different patients reflect the true risks with a high probability. This allows the clinician to use these risks to make accurate predictions about prognosis and decisions about screening and treatments for the current patient. We develop Risk-stratify - a two-phase algorithm that is designed to achieve confident risk-stratification. In the first phase, we grow a tree to partition the covariate space. Each node in the tree is split using statistical tests that determine whether the risks of the child nodes are different. The choice of statistical test depends on whether the data is censored (log-rank test) or not (U-test). The leaves of the tree form a partition. The risk distribution of patients in a leaf differs from that of its sibling leaf, but not necessarily from the rest of the leaves. Therefore, some leaves that have similar underlying risks are incorrectly specified to have different risks. In the second phase, we develop a novel recursive graph decomposition approach to address this problem. We merge the leaves of the tree that have similar risks to form new leaves that constitute the final output. We apply Risk-stratify to a cohort of patients (with no history of cardiovascular disease) from UK Biobank and assess their risk for cardiovascular disease. Risk-stratify significantly improves risk-stratification, i.e., a lower fraction of the groups have over- or under-estimated risks (measured in terms of false discovery rate; 33% reduction) in comparison to state-of-the-art methods for cardiovascular prediction (random forests, Cox model, etc.). We find that the Cox model significantly overestimates the risk of 21,621 patients out of 216,211 patients. Risk-stratify can accurately categorize 2,987 of these 21,621 patients as low-risk individuals.
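The first-phase splitting rule, which accepts a split only when a statistical test finds the children's risk distributions different, can be sketched for the uncensored case with a U-test; the significance threshold and the censored (log-rank) branch are placeholders.

    import numpy as np
    from scipy.stats import mannwhitneyu

    def should_split(outcomes_left, outcomes_right, alpha=0.05):
        """Accept a candidate split only if the two children's outcome
        distributions differ significantly (uncensored case: U-test)."""
        _, p = mannwhitneyu(outcomes_left, outcomes_right)
        return p < alpha

    rng = np.random.default_rng(0)
    low_risk  = rng.binomial(1, 0.05, 400)    # toy event indicators
    high_risk = rng.binomial(1, 0.20, 400)
    print(should_split(low_risk, high_risk))  # True for a genuine split

The second phase then merges leaves whose distributions a test cannot distinguish, undoing the spurious distinctions the greedy tree introduces.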
1812.03615
Isuru Godage
Jiahao Deng, Brandon H. Meng, Iyad Kanj, Isuru S. Godage
Near-optimal Smooth Path Planning for Multisection Continuum Arms
Submitted to 2019 IEEE International Conference on Soft Robotics (RoboSoft 2019)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the path planning problem for continuum-arm robots, in which we are given a start point and an end point and need to compute a path for the tip of the continuum arm between the two points. We consider both cases where obstacles are present and where they are not. We demonstrate how to leverage the continuum arm's features to introduce a new model that enables a path planning approach based on the configurations graph, for a continuum arm consisting of three sections, each with three muscle actuators. The algorithm we apply to the configurations graph allows us to exploit parallelism in the computation to obtain an efficient implementation. We conducted extensive tests, and the obtained results show the completeness of the proposed algorithm under the considered discretizations, both when obstacles are present and when they are not. We compared our approach to the standard inverse kinematics approach. While the inverse kinematics approach is much faster when successful, our algorithm always succeeds in finding a path or reporting that no path exists, compared to a roughly 70% success rate of the inverse kinematics approach (when a path exists).
[ { "created": "Mon, 10 Dec 2018 04:00:27 GMT", "version": "v1" } ]
2018-12-11
[ [ "Deng", "Jiahao", "" ], [ "Meng", "Brandon H.", "" ], [ "Kanj", "Iyad", "" ], [ "Godage", "Isuru S.", "" ] ]
We study the path planning problem for continuum-arm robots, in which we are given a start point and an end point and need to compute a path for the tip of the continuum arm between the two points. We consider both cases where obstacles are present and where they are not. We demonstrate how to leverage the continuum arm's features to introduce a new model that enables a path planning approach based on the configurations graph, for a continuum arm consisting of three sections, each with three muscle actuators. The algorithm we apply to the configurations graph allows us to exploit parallelism in the computation to obtain an efficient implementation. We conducted extensive tests, and the obtained results show the completeness of the proposed algorithm under the considered discretizations, both when obstacles are present and when they are not. We compared our approach to the standard inverse kinematics approach. While the inverse kinematics approach is much faster when successful, our algorithm always succeeds in finding a path or reporting that no path exists, compared to a roughly 70% success rate of the inverse kinematics approach (when a path exists).
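Once actuator settings are discretized, planning reduces to search on the configurations graph; a generic breadth-first sketch follows, with a toy discretization and adjacency rule standing in for the paper's kinematic model (the real arm has nine actuators and an obstacle-collision test).

    from collections import deque

    LEVELS = range(4)                # discretized actuation per muscle
    N_ACT = 4                        # keep the toy space small

    def neighbors(cfg):
        """Configurations reachable by moving one actuator one level."""
        for i in range(N_ACT):
            for d in (-1, 1):
                v = cfg[i] + d
                if v in LEVELS:
                    yield cfg[:i] + (v,) + cfg[i + 1:]

    def plan(start, goal, blocked=frozenset()):
        """BFS over the configurations graph; returns a path or None."""
        prev, frontier = {start: None}, deque([start])
        while frontier:
            cfg = frontier.popleft()
            if cfg == goal:
                path = []
                while cfg is not None:
                    path.append(cfg)
                    cfg = prev[cfg]
                return path[::-1]
            for nxt in neighbors(cfg):
                if nxt not in prev and nxt not in blocked:
                    prev[nxt] = cfg
                    frontier.append(nxt)
        return None                  # completeness: report that no path exists

    print(plan((0, 0, 0, 0), (3, 3, 3, 3)))

The "blocked" set models configurations pruned by obstacle checks; because BFS exhausts the finite graph, the search either finds a path or correctly reports that none exists, which is the completeness property claimed above.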
1011.6030
Bernard Cousin
Shadi Jawhar (IRISA), Bernard Cousin (IRISA)
Optical Multicast Routing Under Light Splitter Constraints
null
7th International Conference on Information Technology : New Generations (ITNG 2010), Las Vegas : United States (2010)
10.1109/ITNG.2010.168
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During the past few years, we have observed the emergence of new applications that use multicast transmission. For a multicast routing algorithm to be applicable in optical networks, it must route data only to group members, optimize and maintain loop-free routes, and concentrate the routes on a subset of network links. For an all-optical switch to play the role of a branching router, it must be equipped with a light splitter. Light splitters are expensive equipment, so it would be very expensive to implement splitters on all optical switches. Optical light splitters are therefore only implemented on some optical switches. This limited availability of light splitters raises a new problem when we want to implement multicast protocols in optical networks (because usual multicast protocols assume that all nodes have branching capabilities). Another issue is the knowledge of the locations of light splitters in the optical network. Nodes in the network should be able to identify the locations of the light splitters scattered across the optical network so that they can construct multicast trees. These problems must be resolved by implementing a multicast routing protocol that takes into consideration that not all nodes can be branching nodes. As a result, a new signaling process must be implemented so that light paths can be created, spanning from the source to the group members.
[ { "created": "Sun, 28 Nov 2010 11:04:01 GMT", "version": "v1" } ]
2010-11-30
[ [ "Jawhar", "Shadi", "", "IRISA" ], [ "Cousin", "Bernard", "", "IRISA" ] ]
During the past few years, we have observed the emergence of new applications that use multicast transmission. For a multicast routing algorithm to be applicable in optical networks, it must route data only to group members, optimize and maintain loop-free routes, and concentrate the routes on a subset of network links. For an all-optical switch to play the role of a branching router, it must be equipped with a light splitter. Light splitters are expensive equipment, so it would be very expensive to implement splitters on all optical switches. Optical light splitters are therefore only implemented on some optical switches. This limited availability of light splitters raises a new problem when we want to implement multicast protocols in optical networks (because usual multicast protocols assume that all nodes have branching capabilities). Another issue is the knowledge of the locations of light splitters in the optical network. Nodes in the network should be able to identify the locations of the light splitters scattered across the optical network so that they can construct multicast trees. These problems must be resolved by implementing a multicast routing protocol that takes into consideration that not all nodes can be branching nodes. As a result, a new signaling process must be implemented so that light paths can be created, spanning from the source to the group members.
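The core constraint, that only splitter-equipped switches may branch the light tree, is easy to state as a feasibility check on a candidate multicast tree. This toy check is not the signaling protocol itself; node names are invented.

    import networkx as nx

    def is_valid_light_tree(tree, source, members, splitters):
        """A light tree is feasible if every node with more than one
        downstream branch is equipped with a light splitter."""
        if not nx.is_tree(tree):
            return False
        for node in tree.nodes:
            degree = tree.degree(node)
            # downstream branches: total degree minus the one upstream link
            branches = degree if node == source else degree - 1
            if branches > 1 and node not in splitters:
                return False
        return all(m in tree.nodes for m in members)

    t = nx.Graph([("src", "a"), ("a", "m1"), ("a", "m2")])
    print(is_valid_light_tree(t, "src", {"m1", "m2"}, splitters={"a"}))   # True
    print(is_valid_light_tree(t, "src", {"m1", "m2"}, splitters=set()))   # False

A splitter-aware routing protocol must only ever propose trees passing this check, which is why knowing the splitter locations is a prerequisite for tree construction.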
2012.04733
Jiaqi Wang
Jiaqi Wang, Kai Chen, Rui Xu, Ziwei Liu, Chen Change Loy, Dahua Lin
CARAFE++: Unified Content-Aware ReAssembly of FEatures
Technical Report. Extended journal version of the conference paper that appeared as arXiv:1905.02188
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feature reassembly, i.e. feature downsampling and upsampling, is a key operation in a number of modern convolutional network architectures, e.g., residual networks and feature pyramids. Its design is critical for dense prediction tasks such as object detection and semantic/instance segmentation. In this work, we propose unified Content-Aware ReAssembly of FEatures (CARAFE++), a universal, lightweight and highly effective operator to fulfill this goal. CARAFE++ has several appealing properties: (1) Unlike conventional methods such as pooling and interpolation that only exploit a sub-pixel neighborhood, CARAFE++ aggregates contextual information within a large receptive field. (2) Instead of using a fixed kernel for all samples (e.g. convolution and deconvolution), CARAFE++ generates adaptive kernels on-the-fly to enable instance-specific content-aware handling. (3) CARAFE++ introduces little computational overhead and can be readily integrated into modern network architectures. We conduct comprehensive evaluations on standard benchmarks in object detection, instance/semantic segmentation and image inpainting. CARAFE++ shows consistent and substantial gains across all the tasks (2.5% APbox, 2.1% APmask, 1.94% mIoU, 1.35 dB respectively) with negligible computational overhead. It shows great potential to serve as a strong building block for modern deep networks.
[ { "created": "Mon, 7 Dec 2020 07:34:57 GMT", "version": "v1" } ]
2020-12-10
[ [ "Wang", "Jiaqi", "" ], [ "Chen", "Kai", "" ], [ "Xu", "Rui", "" ], [ "Liu", "Ziwei", "" ], [ "Loy", "Chen Change", "" ], [ "Lin", "Dahua", "" ] ]
Feature reassembly, i.e. feature downsampling and upsampling, is a key operation in a number of modern convolutional network architectures, e.g., residual networks and feature pyramids. Its design is critical for dense prediction tasks such as object detection and semantic/instance segmentation. In this work, we propose unified Content-Aware ReAssembly of FEatures (CARAFE++), a universal, lightweight and highly effective operator to fulfill this goal. CARAFE++ has several appealing properties: (1) Unlike conventional methods such as pooling and interpolation that only exploit a sub-pixel neighborhood, CARAFE++ aggregates contextual information within a large receptive field. (2) Instead of using a fixed kernel for all samples (e.g. convolution and deconvolution), CARAFE++ generates adaptive kernels on-the-fly to enable instance-specific content-aware handling. (3) CARAFE++ introduces little computational overhead and can be readily integrated into modern network architectures. We conduct comprehensive evaluations on standard benchmarks in object detection, instance/semantic segmentation and image inpainting. CARAFE++ shows consistent and substantial gains across all the tasks (2.5% APbox, 2.1% APmask, 1.94% mIoU, 1.35 dB respectively) with negligible computational overhead. It shows great potential to serve as a strong building block for modern deep networks.
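The content-aware reassembly step can be sketched as: predict a kernel per output location from the content, normalize it with a softmax, and use it to reassemble the corresponding input neighborhood. Below is a minimal upsampling variant under assumed shapes; the released operator is more general (it also covers downsampling) and heavily optimized.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CarafeUpsampleSketch(nn.Module):
        def __init__(self, channels, scale=2, k_up=5):
            super().__init__()
            self.scale, self.k_up = scale, k_up
            # predict one k_up x k_up kernel per output location
            self.kernel_pred = nn.Conv2d(channels, scale * scale * k_up * k_up,
                                         kernel_size=3, padding=1)

        def forward(self, x):                        # x: (B, C, H, W)
            B, C, H, W = x.shape
            s, k = self.scale, self.k_up
            w = self.kernel_pred(x)                  # (B, s*s*k*k, H, W)
            w = F.pixel_shuffle(w, s)                # (B, k*k, sH, sW)
            w = F.softmax(w, dim=1)                  # normalized kernels
            # gather each source location's k x k neighborhood ...
            patches = F.unfold(x, k, padding=k // 2).view(B, C, k * k, H, W)
            # ... and replicate it to the matching output locations
            patches = patches.repeat_interleave(s, dim=3).repeat_interleave(s, dim=4)
            # content-aware weighted sum over the neighborhood
            return (patches * w.unsqueeze(1)).sum(dim=2)   # (B, C, sH, sW)

    up = CarafeUpsampleSketch(64)
    y = up(torch.randn(2, 64, 32, 32))               # -> (2, 64, 64, 64)

The kernels depend on the input content, unlike the fixed kernels of interpolation or deconvolution, which is property (2) in the list above.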
1903.02639
Keegan Lensink
Eldad Haber, Keegan Lensink, Eran Treister, Lars Ruthotto
IMEXnet: A Forward Stable Deep Neural Network
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep convolutional neural networks have revolutionized many machine learning and computer vision tasks; however, some remaining key challenges limit their wider use. These challenges include improving the network's robustness to perturbations of the input image and the limited ``field of view'' of convolution operators. We introduce IMEXnet, which addresses these challenges by adapting semi-implicit methods for partial differential equations. Compared to similar explicit networks, such as residual networks, our network is more stable, which has recently been shown to reduce the sensitivity to small changes in the input features and improve generalization. The addition of an implicit step connects all pixels in each channel of the image and therefore addresses the field-of-view problem, while still being comparable to standard convolutions in terms of the number of parameters and computational complexity. We also present a new dataset for semantic segmentation and demonstrate the effectiveness of our architecture using the NYU Depth dataset.
[ { "created": "Wed, 6 Mar 2019 22:33:06 GMT", "version": "v1" }, { "created": "Fri, 17 May 2019 21:45:28 GMT", "version": "v2" } ]
2019-05-21
[ [ "Haber", "Eldad", "" ], [ "Lensink", "Keegan", "" ], [ "Treister", "Eran", "" ], [ "Ruthotto", "Lars", "" ] ]
Deep convolutional neural networks have revolutionized many machine learning and computer vision tasks; however, some remaining key challenges limit their wider use. These challenges include improving the network's robustness to perturbations of the input image and the limited ``field of view'' of convolution operators. We introduce IMEXnet, which addresses these challenges by adapting semi-implicit methods for partial differential equations. Compared to similar explicit networks, such as residual networks, our network is more stable, which has recently been shown to reduce the sensitivity to small changes in the input features and improve generalization. The addition of an implicit step connects all pixels in each channel of the image and therefore addresses the field-of-view problem, while still being comparable to standard convolutions in terms of the number of parameters and computational complexity. We also present a new dataset for semantic segmentation and demonstrate the effectiveness of our architecture using the NYU Depth dataset.
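The semi-implicit idea, treating the stiff linear part implicitly and the nonlinearity explicitly, can be sketched on a 1-D feature channel; the implicit solve couples every entry of the state, mirroring the enlarged field of view. This is a numerical toy, not the network layer itself.

    import numpy as np

    def imex_step(u, f, h=0.1, nu=1.0):
        """One IMEX Euler step for du/dt = nu*L*u + f(u):
        (I - h*nu*L) u_{k+1} = u_k + h*f(u_k), with L the 1-D Laplacian.
        The implicit solve couples every entry of u at once."""
        n = len(u)
        L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # Laplacian
        A = np.eye(n) - h * nu * L
        return np.linalg.solve(A, u + h * f(u))

    u = np.random.randn(64)
    for _ in range(10):
        u = imex_step(u, f=np.tanh)    # explicit nonlinearity, implicit L

Because (I - h*nu*L) is inverted rather than applied, information from one pixel reaches all others in a single step, whereas an explicit residual step only propagates information across the convolution stencil.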
2404.14779
Cl\'ement Christophe
Cl\'ement Christophe, Praveen K Kanithi, Prateek Munjal, Tathagata Raha, Nasir Hayat, Ronnie Rajan, Ahmed Al-Mahrooqi, Avani Gupta, Muhammad Umar Salman, Gurpreet Gosal, Bhargav Kanakiya, Charles Chen, Natalia Vassilieva, Boulbaba Ben Amor, Marco AF Pimentel, Shadab Khan
Med42 -- Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches
Published at AAAI 2024 Spring Symposium - Clinical Foundation Models
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies - full-parameter fine-tuning and parameter-efficient tuning - within the context of medical Large Language Models (LLMs). We developed and refined a series of LLMs, based on the Llama-2 architecture, specifically designed to enhance medical knowledge retrieval, reasoning, and question-answering capabilities. Our experiments systematically evaluate the effectiveness of these tuning strategies across various well-known medical benchmarks. Notably, our medical LLM Med42 showed an accuracy level of 72% on the US Medical Licensing Examination (USMLE) datasets, setting a new standard in performance for openly available medical LLMs. Through this comparative analysis, we aim to identify the most effective and efficient method for fine-tuning LLMs in the medical domain, thereby contributing significantly to the advancement of AI-driven healthcare applications.
[ { "created": "Tue, 23 Apr 2024 06:36:21 GMT", "version": "v1" } ]
2024-04-24
[ [ "Christophe", "Clément", "" ], [ "Kanithi", "Praveen K", "" ], [ "Munjal", "Prateek", "" ], [ "Raha", "Tathagata", "" ], [ "Hayat", "Nasir", "" ], [ "Rajan", "Ronnie", "" ], [ "Al-Mahrooqi", "Ahmed", "" ], [ "Gupta", "Avani", "" ], [ "Salman", "Muhammad Umar", "" ], [ "Gosal", "Gurpreet", "" ], [ "Kanakiya", "Bhargav", "" ], [ "Chen", "Charles", "" ], [ "Vassilieva", "Natalia", "" ], [ "Amor", "Boulbaba Ben", "" ], [ "Pimentel", "Marco AF", "" ], [ "Khan", "Shadab", "" ] ]
This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies - full-parameter fine-tuning and parameter-efficient tuning - within the context of medical Large Language Models (LLMs). We developed and refined a series of LLMs, based on the Llama-2 architecture, specifically designed to enhance medical knowledge retrieval, reasoning, and question-answering capabilities. Our experiments systematically evaluate the effectiveness of these tuning strategies across various well-known medical benchmarks. Notably, our medical LLM Med42 showed an accuracy level of 72% on the US Medical Licensing Examination (USMLE) datasets, setting a new standard in performance for openly available medical LLMs. Through this comparative analysis, we aim to identify the most effective and efficient method for fine-tuning LLMs in the medical domain, thereby contributing significantly to the advancement of AI-driven healthcare applications.
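The parameter-efficient side of such a comparison is commonly realized with LoRA adapters; a generic Hugging Face PEFT sketch follows, with the hyperparameters and target modules as illustrative assumptions rather than the paper's settings.

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

    lora = LoraConfig(
        r=16,                                  # adapter rank (illustrative)
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],   # attention projections
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, lora)
    model.print_trainable_parameters()  # tiny fraction vs. full fine-tuning

Full-parameter fine-tuning instead trains the base model directly, updating billions of weights; the trade-off between the two is exactly what the evaluation above measures.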
1812.03304
Peiyao Shen
Peiyao Shen, Xuebo Zhang and Yongchun Fang
Real-time Acceleration-continuous Path-constrained Trajectory Planning With Built-in Tradability Between Cruise and Time-optimal Motions
12 pages, 19 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a novel real-time acceleration-continuous path-constrained trajectory planning algorithm is proposed with an appealing built-in tradability mechanism between cruise motion and time-optimal motion. Different from existing approaches, the proposed approach smooths time-optimal trajectories with bang-bang input structures to generate acceleration-continuous trajectories while preserving the completeness property. More importantly, a novel built-in tradability mechanism is proposed and embedded into the trajectory planning framework, so that the proportion of cruise motion and time-optimal motion can be flexibly adjusted by changing a user-specified functional parameter. Thus, the user can easily apply the trajectory planning algorithm to various tasks with different requirements on motion efficiency and cruise proportion. Moreover, it is shown that feasible trajectories are computed more quickly than optimal trajectories. Rigorous mathematical analysis and proofs are provided for these results. Comparative simulation and experimental results on omnidirectional wheeled mobile robots demonstrate the capability of the proposed algorithm in terms of flexible tuning between cruise and time-optimal motions, as well as higher computational efficiency.
[ { "created": "Sat, 8 Dec 2018 12:02:49 GMT", "version": "v1" } ]
2018-12-11
[ [ "Shen", "Peiyao", "" ], [ "Zhang", "Xuebo", "" ], [ "Fang", "Yongchun", "" ] ]
In this paper, a novel real-time acceleration-continuous path-constrained trajectory planning algorithm is proposed with an appealing built-in tradability mechanism between cruise motion and time-optimal motion. Different from existing approaches, the proposed approach smooths time-optimal trajectories with bang-bang input structures to generate acceleration-continuous trajectories while preserving the completeness property. More importantly, a novel built-in tradability mechanism is proposed and embedded into the trajectory planning framework, so that the proportion of cruise motion and time-optimal motion can be flexibly adjusted by changing a user-specified functional parameter. Thus, the user can easily apply the trajectory planning algorithm to various tasks with different requirements on motion efficiency and cruise proportion. Moreover, it is shown that feasible trajectories are computed more quickly than optimal trajectories. Rigorous mathematical analysis and proofs are provided for these results. Comparative simulation and experimental results on omnidirectional wheeled mobile robots demonstrate the capability of the proposed algorithm in terms of flexible tuning between cruise and time-optimal motions, as well as higher computational efficiency.
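The tradability mechanism, a single user parameter sliding the motion between a cruise profile and the time-optimal limit, can be caricatured as a pointwise blend of two velocity profiles along the path; this toy ignores the acceleration-continuity machinery of the actual algorithm, and all profiles below are invented.

    import numpy as np

    def blended_profile(v_opt, v_cruise, lam):
        """lam in [0, 1]: 0 -> pure cruise, 1 -> time-optimal.
        The blend is capped by the time-optimal limit so the path
        constraints are never violated."""
        v = lam * v_opt + (1 - lam) * np.minimum(v_cruise, v_opt)
        return np.minimum(v, v_opt)

    s = np.linspace(0, 1, 200)                   # path parameter
    v_opt = 1.5 + 0.8 * np.sin(2 * np.pi * s)    # toy time-optimal limit
    for lam in (0.0, 0.5, 1.0):
        v = blended_profile(v_opt, v_cruise=1.0, lam=lam)
        # trapezoidal estimate of traversal time: integral of ds / v(s)
        dt = np.diff(s) * 0.5 * (1 / v[:-1] + 1 / v[1:])
        print(lam, round(dt.sum(), 3))

Raising the parameter trades a steadier cruise for shorter traversal time, which is the tunability the experiments above demonstrate.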