Schema (field, type, length range):

id              stringlengths   9-10
submitter       stringlengths   1-64
authors         stringlengths   4-20.7k
title           stringlengths   4-246
comments        stringlengths   1-523
journal-ref     stringlengths   4-404
doi             stringlengths   11-153
report-no       stringlengths   2-254
categories      stringlengths   5-98
license         stringclasses   9 values
orig_abstract   stringlengths   14-3.35k
versions        listlengths     1-60
update_date     stringlengths   10-10
authors_parsed  listlengths     1-1.35k
abstract        stringlengths   11-3.34k

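The schema above describes one record per paper. As a minimal sketch of how such a record can be consumed (assuming the records are serialized as JSON Lines, one object per line, with the field names from the schema — the serialization format is an assumption, not stated in this preview), the standard library is enough:

```python
import json

# Hypothetical serialization: one JSON object per line, keyed by the
# field names in the schema above. This sample mirrors the first record.
line = json.dumps({
    "id": "2402.03989",
    "submitter": "Anton Backhaus",
    "title": "YOLOPoint Joint Keypoint and Object Detection",
    "categories": "cs.CV",
    "update_date": "2024-02-07",
    "versions": [{"created": "Tue, 6 Feb 2024 13:31:45 GMT", "version": "v1"}],
})

record = json.loads(line)

# "categories" is a single space-separated string (e.g. "cs.IT eess.SP math.IT"),
# so the primary category is the first token.
primary_category = record["categories"].split()[0]

# "versions" is a list of dicts; its length is the number of revisions,
# which the schema bounds at 1-60.
n_versions = len(record["versions"])

print(primary_category, n_versions)  # cs.CV 1
```

Null-valued fields (doi, report-no, etc.) appear as JSON `null` and deserialize to Python `None`, so downstream code should guard against missing values rather than assume every field is populated.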
id: 2402.03989
submitter: Anton Backhaus
authors: Anton Backhaus, Thorsten Luettel, Hans-Joachim Wuensche
title: YOLOPoint Joint Keypoint and Object Detection
comments: 12 pages, 5 figures
journal-ref: Proceedings of Advanced Concepts for Intelligent Vision Systems, 14124, 112-123 (2023)
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Intelligent vehicles of the future must be capable of understanding and navigating safely through their surroundings. Camera-based vehicle systems can use keypoints as well as objects as low- and high-level landmarks for GNSS-independent SLAM and visual odometry. To this end we propose YOLOPoint, a convolutional neural network model that simultaneously detects keypoints and objects in an image by combining YOLOv5 and SuperPoint to create a single forward-pass network that is both real-time capable and accurate. By using a shared backbone and a light-weight network structure, YOLOPoint is able to perform competitively on both the HPatches and KITTI benchmarks.
versions: [ { "created": "Tue, 6 Feb 2024 13:31:45 GMT", "version": "v1" } ]
update_date: 2024-02-07
authors_parsed: [ [ "Backhaus", "Anton", "" ], [ "Luettel", "Thorsten", "" ], [ "Wuensche", "Hans-Joachim", "" ] ]
abstract: Intelligent vehicles of the future must be capable of understanding and navigating safely through their surroundings. Camera-based vehicle systems can use keypoints as well as objects as low- and high-level landmarks for GNSS-independent SLAM and visual odometry. To this end we propose YOLOPoint, a convolutional neural network model that simultaneously detects keypoints and objects in an image by combining YOLOv5 and SuperPoint to create a single forward-pass network that is both real-time capable and accurate. By using a shared backbone and a light-weight network structure, YOLOPoint is able to perform competitively on both the HPatches and KITTI benchmarks.

id: 1603.02813
submitter: Emre Erturk
authors: Emre Erturk
title: Using a Cloud Based Collaboration Technology in a Systems Analysis and Design Course
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CY cs.HC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In order to effectively prepare the next generation of IT professionals and systems analysts, it is important to incorporate cloud based online collaboration tools into the coursework for developing the students' cooperative skills as well as for storing and sharing content. For these pedagogical and practical reasons, Google Drive has been used at a medium-sized institution of higher education in New Zealand during the Systems Analysis and Design course. Ongoing and successful use of any learning technology requires gathering meaningful feedback from students, and acting as a mentor during their learning journey. This study has been developed and implemented to help students enjoy the collaborative technology and to help increase their satisfaction and commitment. In order to overcome the obstacles that may prevent students from using Google Drive optimally, an initial survey has been conducted to better understand the influential factors and issues. Furthermore, this study aims at promoting various types of collaboration and sharing: seeing and learning from other students' work, receiving direct suggestions from others, and allowing others to edit documents that belong to them. Following the results of the first quantitative survey, numerous teaching strategies were formulated and implemented. A final qualitative survey was done at the end of the course for students to evaluate their project work. The results of this study also provide original practical and theoretical implications that may be of interest to other researchers, course designers, and teachers.
versions: [ { "created": "Wed, 9 Mar 2016 08:51:33 GMT", "version": "v1" } ]
update_date: 2016-03-10
authors_parsed: [ [ "Erturk", "Emre", "" ] ]
abstract: In order to effectively prepare the next generation of IT professionals and systems analysts, it is important to incorporate cloud-based online collaboration tools into the coursework, both to develop the students' cooperative skills and to store and share content. For these pedagogical and practical reasons, Google Drive was used at a medium-sized institution of higher education in New Zealand during the Systems Analysis and Design course. Ongoing and successful use of any learning technology requires gathering meaningful feedback from students and acting as a mentor during their learning journey. This study was developed and implemented to help students enjoy the collaborative technology and to help increase their satisfaction and commitment. In order to overcome the obstacles that may prevent students from using Google Drive optimally, an initial survey was conducted to better understand the influential factors and issues. Furthermore, this study aims to promote various types of collaboration and sharing: seeing and learning from other students' work, receiving direct suggestions from others, and allowing others to edit documents that belong to them. Following the results of the first quantitative survey, numerous teaching strategies were formulated and implemented. A final qualitative survey was done at the end of the course for students to evaluate their project work. The results of this study also provide original practical and theoretical implications that may be of interest to other researchers, course designers, and teachers.

id: 1905.02906
submitter: Jaehwan Lee
authors: Jaehwan Lee and Donggeon Yoo and Jung Yin Huh and Hyo-Eun Kim
title: Photometric Transformer Networks and Label Adjustment for Breast Density Prediction
comments: miccai 2019 submission
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
orig_abstract: Grading breast density is highly sensitive to normalization settings of digital mammogram as the density is tightly correlated with the distribution of pixel intensity. Also, the grade varies with readers due to uncertain grading criteria. These issues are inherent in the density assessment of digital mammography. They are problematic when designing a computer-aided prediction model for breast density and become worse if the data comes from multiple sites. In this paper, we proposed two novel deep learning techniques for breast density prediction: 1) photometric transformation which adaptively normalizes the input mammograms, and 2) label distillation which adjusts the label by using its output prediction. The photometric transformer network predicts optimal parameters for photometric transformation on the fly, learned jointly with the main prediction network. The label distillation, a type of pseudo-label techniques, is intended to mitigate the grading variation. We experimentally showed that the proposed methods are beneficial in terms of breast density prediction, resulting in significant performance improvement compared to various previous approaches.
versions: [ { "created": "Wed, 8 May 2019 04:32:34 GMT", "version": "v1" } ]
update_date: 2019-05-09
authors_parsed: [ [ "Lee", "Jaehwan", "" ], [ "Yoo", "Donggeon", "" ], [ "Huh", "Jung Yin", "" ], [ "Kim", "Hyo-Eun", "" ] ]
abstract: Grading breast density is highly sensitive to the normalization settings of digital mammograms, as density is tightly correlated with the distribution of pixel intensity. Also, the grade varies across readers due to uncertain grading criteria. These issues are inherent in the density assessment of digital mammography. They are problematic when designing a computer-aided prediction model for breast density and become worse if the data come from multiple sites. In this paper, we propose two novel deep learning techniques for breast density prediction: 1) photometric transformation, which adaptively normalizes the input mammograms, and 2) label distillation, which adjusts the label by using its output prediction. The photometric transformer network predicts optimal parameters for photometric transformation on the fly, learned jointly with the main prediction network. Label distillation, a type of pseudo-label technique, is intended to mitigate the grading variation. We experimentally show that the proposed methods are beneficial for breast density prediction, resulting in significant performance improvement compared to various previous approaches.

id: 2002.10916
submitter: Rodrigo de Lamare
authors: S. B. Pinto and R. C. de Lamare
title: Study of Coarse Quantization-Aware Block Diagonalization Algorithms for MIMO Systems with Low Resolution
comments: 3 figures, 9 pages. arXiv admin note: text overlap with arXiv:1707.00953
journal-ref: null
doi: null
report-no: null
categories: cs.IT eess.SP math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: It is known that the estimated energy consumption of digital-to analog converters (DACs) is around 30\% of the energy consumed by analog-to-digital converters (ADCs) keeping fixed the sampling rate and bit resolution. Assuming that similarly to ADC, DAC dissipation doubles with every extra bit of resolution, a decrease in two resolution bits, for instance from 4 to 2 bits, represents a 75$\% $ lower dissipation. The current limitations in sum-rates of 1-bit quantization have motivated researchers to consider extra bits in resolution to obtain higher levels of sum-rates. Following this, we devise coarse quantization-aware precoding using few bits for the broadcast channel of multiple-antenna systems based on the Bussgang theorem. In particular, we consider block diagonalization algorithms, which have not been considered in the literature so far. The sum-rates achieved by the proposed Coarse Quantization-Aware Block Diagonalization (CQA-BD) and its regularized version (CQA-RBD) are superior to those previously reported in the literature. Simulations illustrate the performance of the proposed CQA-BD and CGA-RBD algorithms against existing approaches.
versions: [ { "created": "Sun, 23 Feb 2020 01:45:50 GMT", "version": "v1" } ]
update_date: 2020-02-26
authors_parsed: [ [ "Pinto", "S. B.", "" ], [ "de Lamare", "R. C.", "" ] ]
abstract: It is known that the estimated energy consumption of digital-to-analog converters (DACs) is around 30% of that of analog-to-digital converters (ADCs) at a fixed sampling rate and bit resolution. Assuming that, as with ADCs, DAC dissipation doubles with every extra bit of resolution, a decrease of two resolution bits, for instance from 4 to 2 bits, represents a 75% lower dissipation. The current limitations in the sum-rates of 1-bit quantization have motivated researchers to consider extra bits of resolution to obtain higher sum-rates. Following this, we devise coarse quantization-aware precoding using few bits for the broadcast channel of multiple-antenna systems based on the Bussgang theorem. In particular, we consider block diagonalization algorithms, which have not been considered in the literature so far. The sum-rates achieved by the proposed Coarse Quantization-Aware Block Diagonalization (CQA-BD) algorithm and its regularized version (CQA-RBD) are superior to those previously reported in the literature. Simulations illustrate the performance of the proposed CQA-BD and CQA-RBD algorithms against existing approaches.

id: 1910.07963
submitter: Nicolas Tremblay
authors: Yusuf Y. Pilavci, Pierre-Olivier Amblard, Simon Barthelm\'e, Nicolas Tremblay
title: Smoothing graph signals via random spanning forests
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Another facet of the elegant link between random processes on graphs and Laplacian-based numerical linear algebra is uncovered: based on random spanning forests, novel Monte-Carlo estimators for graph signal smoothing are proposed. These random forests are sampled efficiently via a variant of Wilson's algorithm --in time linear in the number of edges. The theoretical variance of the proposed estimators are analyzed, and their application to several problems are considered, such as Tikhonov denoising of graph signals or semi-supervised learning for node classification on graphs.
versions: [ { "created": "Thu, 17 Oct 2019 15:11:03 GMT", "version": "v1" }, { "created": "Wed, 5 Feb 2020 15:26:33 GMT", "version": "v2" } ]
update_date: 2020-02-06
authors_parsed: [ [ "Pilavci", "Yusuf Y.", "" ], [ "Amblard", "Pierre-Olivier", "" ], [ "Barthelmé", "Simon", "" ], [ "Tremblay", "Nicolas", "" ] ]
abstract: Another facet of the elegant link between random processes on graphs and Laplacian-based numerical linear algebra is uncovered: based on random spanning forests, novel Monte-Carlo estimators for graph signal smoothing are proposed. These random forests are sampled efficiently via a variant of Wilson's algorithm, in time linear in the number of edges. The theoretical variance of the proposed estimators is analyzed, and their application to several problems is considered, such as Tikhonov denoising of graph signals or semi-supervised learning for node classification on graphs.

id: 1605.02324
submitter: Charles Jeon
authors: Charles Jeon, Arian Maleki, and Christoph Studer
title: On the Performance of Mismatched Data Detection in Large MIMO Systems
comments: Will be presented at the 2016 IEEE International Symposium on Information Theory
journal-ref: null
doi: 10.1109/ISIT.2016.7541285
report-no: null
categories: cs.IT eess.SP math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We investigate the performance of mismatched data detection in large multiple-input multiple-output (MIMO) systems, where the prior distribution of the transmit signal used in the data detector differs from the true prior. To minimize the performance loss caused by this prior mismatch, we include a tuning stage into our recently-proposed large MIMO approximate message passing (LAMA) algorithm, which allows us to develop mismatched LAMA algorithms with optimal as well as sub-optimal tuning. We show that carefully-selected priors often enable simpler and computationally more efficient algorithms compared to LAMA with the true prior while achieving near-optimal performance. A performance analysis of our algorithms for a Gaussian prior and a uniform prior within a hypercube covering the QAM constellation recovers classical and recent results on linear and non-linear MIMO data detection, respectively.
versions: [ { "created": "Sun, 8 May 2016 14:37:45 GMT", "version": "v1" }, { "created": "Wed, 22 Jun 2016 16:35:30 GMT", "version": "v2" } ]
update_date: 2018-11-12
authors_parsed: [ [ "Jeon", "Charles", "" ], [ "Maleki", "Arian", "" ], [ "Studer", "Christoph", "" ] ]
abstract: We investigate the performance of mismatched data detection in large multiple-input multiple-output (MIMO) systems, where the prior distribution of the transmit signal used in the data detector differs from the true prior. To minimize the performance loss caused by this prior mismatch, we include a tuning stage into our recently-proposed large MIMO approximate message passing (LAMA) algorithm, which allows us to develop mismatched LAMA algorithms with optimal as well as sub-optimal tuning. We show that carefully-selected priors often enable simpler and computationally more efficient algorithms compared to LAMA with the true prior while achieving near-optimal performance. A performance analysis of our algorithms for a Gaussian prior and a uniform prior within a hypercube covering the QAM constellation recovers classical and recent results on linear and non-linear MIMO data detection, respectively.

id: 2210.02231
submitter: Yue Zhu
authors: Yue Zhu, David Picard
title: Decanus to Legatus: Synthetic training for 2D-3D human pose lifting
comments: Accepted by ACCV 2022
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: 3D human pose estimation is a challenging task because of the difficulty to acquire ground-truth data outside of controlled environments. A number of further issues have been hindering progress in building a universal and robust model for this task, including domain gaps between different datasets, unseen actions between train and test datasets, various hardware settings and high cost of annotation, etc. In this paper, we propose an algorithm to generate infinite 3D synthetic human poses (Legatus) from a 3D pose distribution based on 10 initial handcrafted 3D poses (Decanus) during the training of a 2D to 3D human pose lifter neural network. Our results show that we can achieve 3D pose estimation performance comparable to methods using real data from specialized datasets but in a zero-shot setup, showing the generalization potential of our framework.
versions: [ { "created": "Wed, 5 Oct 2022 13:10:19 GMT", "version": "v1" } ]
update_date: 2022-10-06
authors_parsed: [ [ "Zhu", "Yue", "" ], [ "Picard", "David", "" ] ]
abstract: 3D human pose estimation is a challenging task because of the difficulty of acquiring ground-truth data outside of controlled environments. A number of further issues have hindered progress in building a universal and robust model for this task, including domain gaps between different datasets, actions unseen between train and test datasets, varying hardware settings, and the high cost of annotation. In this paper, we propose an algorithm to generate infinite 3D synthetic human poses (Legatus) from a 3D pose distribution based on 10 initial handcrafted 3D poses (Decanus) during the training of a 2D-to-3D human pose lifting neural network. Our results show that we can achieve 3D pose estimation performance comparable to methods using real data from specialized datasets, but in a zero-shot setup, showing the generalization potential of our framework.

id: 1807.11580
submitter: Ryo Yoshinaka
authors: Yuki Nozaki, Diptarama Hendrian, Ryo Yoshinaka, Takashi Horiyama, Ayumi Shinohara
title: Enumerating Cryptarithms Using Deterministic Finite Automata
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.FL cs.DS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: A cryptarithm is a mathematical puzzle where given an arithmetic equation written with letters rather than numerals, a player must discover an assignment of numerals on letters that makes the equation hold true. In this paper, we propose a method to construct a DFA that accepts cryptarithms that admit (unique) solutions for each base. We implemented the method and constructed a DFA for bases $k \le 7$. Those DFAs can be used as complete catalogues of cryptarithms,whose applications include enumeration of and counting the exact numbers $G_k(n)$ of cryptarithm instances with $n$ digits that admit base-$k$ solutions. Moreover, explicit formulas for $G_2(n)$ and $G_3(n)$ are given.
versions: [ { "created": "Fri, 27 Jul 2018 01:37:45 GMT", "version": "v1" } ]
update_date: 2018-08-01
authors_parsed: [ [ "Nozaki", "Yuki", "" ], [ "Hendrian", "Diptarama", "" ], [ "Yoshinaka", "Ryo", "" ], [ "Horiyama", "Takashi", "" ], [ "Shinohara", "Ayumi", "" ] ]
abstract: A cryptarithm is a mathematical puzzle where, given an arithmetic equation written with letters rather than numerals, a player must discover an assignment of numerals to letters that makes the equation hold true. In this paper, we propose a method to construct a DFA that accepts cryptarithms that admit (unique) solutions for each base. We implemented the method and constructed a DFA for bases $k \le 7$. Those DFAs can be used as complete catalogues of cryptarithms, whose applications include enumerating and counting the exact numbers $G_k(n)$ of cryptarithm instances with $n$ digits that admit base-$k$ solutions. Moreover, explicit formulas for $G_2(n)$ and $G_3(n)$ are given.

id: 1011.0640
submitter: M. Emre Celebi
authors: M. Emre Celebi, Hitoshi Iyatomi, Gerald Schaefer, William V. Stoecker
title: Lesion Border Detection in Dermoscopy Images
comments: 10 pages, 1 figure, 3 tables
journal-ref: Computerized Medical Imaging and Graphics 33 (2009) 148--153
doi: 10.1016/j.compmedimag.2008.11.002
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Background: Dermoscopy is one of the major imaging modalities used in the diagnosis of melanoma and other pigmented skin lesions. Due to the difficulty and subjectivity of human interpretation, computerized analysis of dermoscopy images has become an important research area. One of the most important steps in dermoscopy image analysis is the automated detection of lesion borders. Methods: In this article, we present a systematic overview of the recent border detection methods in the literature paying particular attention to computational issues and evaluation aspects. Conclusion: Common problems with the existing approaches include the acquisition, size, and diagnostic distribution of the test image set, the evaluation of the results, and the inadequate description of the employed methods. Border determination by dermatologists appears to depend upon higher-level knowledge, therefore it is likely that the incorporation of domain knowledge in automated methods will enable them to perform better, especially in sets of images with a variety of diagnoses.
versions: [ { "created": "Sat, 30 Oct 2010 17:17:02 GMT", "version": "v1" } ]
update_date: 2010-11-13
authors_parsed: [ [ "Celebi", "M. Emre", "" ], [ "Iyatomi", "Hitoshi", "" ], [ "Schaefer", "Gerald", "" ], [ "Stoecker", "William V.", "" ] ]
abstract: Background: Dermoscopy is one of the major imaging modalities used in the diagnosis of melanoma and other pigmented skin lesions. Due to the difficulty and subjectivity of human interpretation, computerized analysis of dermoscopy images has become an important research area. One of the most important steps in dermoscopy image analysis is the automated detection of lesion borders. Methods: In this article, we present a systematic overview of the recent border detection methods in the literature, paying particular attention to computational issues and evaluation aspects. Conclusion: Common problems with the existing approaches include the acquisition, size, and diagnostic distribution of the test image set, the evaluation of the results, and the inadequate description of the employed methods. Border determination by dermatologists appears to depend upon higher-level knowledge; therefore, it is likely that the incorporation of domain knowledge in automated methods will enable them to perform better, especially on sets of images with a variety of diagnoses.

id: 2309.16237
submitter: Jiaman Li
authors: Jiaman Li, Jiajun Wu, C. Karen Liu
title: Object Motion Guided Human Motion Synthesis
comments: SIGGRAPH Asia 2023
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Modeling human behaviors in contextual environments has a wide range of applications in character animation, embodied AI, VR/AR, and robotics. In real-world scenarios, humans frequently interact with the environment and manipulate various objects to complete daily tasks. In this work, we study the problem of full-body human motion synthesis for the manipulation of large-sized objects. We propose Object MOtion guided human MOtion synthesis (OMOMO), a conditional diffusion framework that can generate full-body manipulation behaviors from only the object motion. Since naively applying diffusion models fails to precisely enforce contact constraints between the hands and the object, OMOMO learns two separate denoising processes to first predict hand positions from object motion and subsequently synthesize full-body poses based on the predicted hand positions. By employing the hand positions as an intermediate representation between the two denoising processes, we can explicitly enforce contact constraints, resulting in more physically plausible manipulation motions. With the learned model, we develop a novel system that captures full-body human manipulation motions by simply attaching a smartphone to the object being manipulated. Through extensive experiments, we demonstrate the effectiveness of our proposed pipeline and its ability to generalize to unseen objects. Additionally, as high-quality human-object interaction datasets are scarce, we collect a large-scale dataset consisting of 3D object geometry, object motion, and human motion. Our dataset contains human-object interaction motion for 15 objects, with a total duration of approximately 10 hours.
versions: [ { "created": "Thu, 28 Sep 2023 08:22:00 GMT", "version": "v1" } ]
update_date: 2023-09-29
authors_parsed: [ [ "Li", "Jiaman", "" ], [ "Wu", "Jiajun", "" ], [ "Liu", "C. Karen", "" ] ]
abstract: Modeling human behaviors in contextual environments has a wide range of applications in character animation, embodied AI, VR/AR, and robotics. In real-world scenarios, humans frequently interact with the environment and manipulate various objects to complete daily tasks. In this work, we study the problem of full-body human motion synthesis for the manipulation of large-sized objects. We propose Object MOtion guided human MOtion synthesis (OMOMO), a conditional diffusion framework that can generate full-body manipulation behaviors from only the object motion. Since naively applying diffusion models fails to precisely enforce contact constraints between the hands and the object, OMOMO learns two separate denoising processes to first predict hand positions from object motion and subsequently synthesize full-body poses based on the predicted hand positions. By employing the hand positions as an intermediate representation between the two denoising processes, we can explicitly enforce contact constraints, resulting in more physically plausible manipulation motions. With the learned model, we develop a novel system that captures full-body human manipulation motions by simply attaching a smartphone to the object being manipulated. Through extensive experiments, we demonstrate the effectiveness of our proposed pipeline and its ability to generalize to unseen objects. Additionally, as high-quality human-object interaction datasets are scarce, we collect a large-scale dataset consisting of 3D object geometry, object motion, and human motion. Our dataset contains human-object interaction motion for 15 objects, with a total duration of approximately 10 hours.

id: 1907.02765
submitter: Zhi Wang
authors: Xueshuo Xie and Zhi Wang and Xuhang Xiao and Lei Yang and Shenwei Huang and Tao Li
title: A Pvalue-guided Anomaly Detection Approach Combining Multiple Heterogeneous Log Parser Algorithms on IIoT Systems
comments: 7 pages, 3 figures
journal-ref: null
doi: null
report-no: null
categories: cs.CR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Industrial Internet of Things (IIoT) is becoming an attack target of advanced persistent threat (APT). Currently, IIoT logs have not been effectively used for anomaly detection. In this paper, we use blockchain to prevent logs from being tampered with and propose a pvalue-guided anomaly detection approach. This approach uses statistical pvalues to combine multiple heterogeneous log parser algorithms. The weighted edit distance is selected as a score function to calculate the nonconformity score between a log and a predefined event. The pvalue is calculated based on the non-conformity scores which indicate how well a log matches an event. This approach is tested on a large number of real-world HDFS logs and IIoT logs. The experiment results show that abnormal events could be effectively recognized by our pvalue-guided approach.
versions: [ { "created": "Fri, 5 Jul 2019 10:44:37 GMT", "version": "v1" } ]
update_date: 2019-07-08
authors_parsed: [ [ "Xie", "Xueshuo", "" ], [ "Wang", "Zhi", "" ], [ "Xiao", "Xuhang", "" ], [ "Yang", "Lei", "" ], [ "Huang", "Shenwei", "" ], [ "Li", "Tao", "" ] ]
abstract: The Industrial Internet of Things (IIoT) is becoming an attack target of advanced persistent threats (APTs). Currently, IIoT logs have not been effectively used for anomaly detection. In this paper, we use blockchain to prevent logs from being tampered with and propose a p-value-guided anomaly detection approach. This approach uses statistical p-values to combine multiple heterogeneous log parser algorithms. The weighted edit distance is selected as a score function to calculate the non-conformity score between a log and a predefined event. The p-value is calculated based on the non-conformity scores, which indicate how well a log matches an event. This approach is tested on a large number of real-world HDFS logs and IIoT logs. The experimental results show that abnormal events can be effectively recognized by our p-value-guided approach.

id: 1806.04713
submitter: Hanan Aldarmaki
authors: Hanan Aldarmaki and Mona Diab
title: Evaluation of Unsupervised Compositional Representations
comments: 12 pages, 5 figures. COLING 2018
journal-ref: Proceedings of the 27th International Conference on Computational Linguistics (2018)
doi: null
report-no: null
categories: cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We evaluated various compositional models, from bag-of-words representations to compositional RNN-based models, on several extrinsic supervised and unsupervised evaluation benchmarks. Our results confirm that weighted vector averaging can outperform context-sensitive models in most benchmarks, but structural features encoded in RNN models can also be useful in certain classification tasks. We analyzed some of the evaluation datasets to identify the aspects of meaning they measure and the characteristics of the various models that explain their performance variance.
versions: [ { "created": "Tue, 12 Jun 2018 18:53:14 GMT", "version": "v1" }, { "created": "Thu, 14 Jun 2018 16:43:27 GMT", "version": "v2" } ]
update_date: 2018-11-30
authors_parsed: [ [ "Aldarmaki", "Hanan", "" ], [ "Diab", "Mona", "" ] ]
abstract: We evaluated various compositional models, from bag-of-words representations to compositional RNN-based models, on several extrinsic supervised and unsupervised evaluation benchmarks. Our results confirm that weighted vector averaging can outperform context-sensitive models in most benchmarks, but structural features encoded in RNN models can also be useful in certain classification tasks. We analyzed some of the evaluation datasets to identify the aspects of meaning they measure and the characteristics of the various models that explain their performance variance.

id: 1805.02716
submitter: Eliu Huerta
authors: E. A. Huerta, Daniel George, Zhizhen Zhao and Gabrielle Allen
title: Real-time regression analysis with deep convolutional neural networks
comments: 3 pages. Position Paper accepted to SciML2018: DOE ASCR Workshop on Scientific Machine Learning. North Bethesda, MD, United States, January 30-February 1, 2018
journal-ref: null
doi: null
report-no: null
categories: cs.LG astro-ph.IM cs.AI stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We discuss the development of novel deep learning algorithms to enable real-time regression analysis for time series data. We showcase the application of this new method with a timely case study, and then discuss the applicability of this approach to tackle similar challenges across science domains.
versions: [ { "created": "Mon, 7 May 2018 19:43:26 GMT", "version": "v1" } ]
update_date: 2018-05-09
authors_parsed: [ [ "Huerta", "E. A.", "" ], [ "George", "Daniel", "" ], [ "Zhao", "Zhizhen", "" ], [ "Allen", "Gabrielle", "" ] ]
abstract: We discuss the development of novel deep learning algorithms to enable real-time regression analysis for time series data. We showcase the application of this new method with a timely case study, and then discuss the applicability of this approach to tackle similar challenges across science domains.

2205.07314
Pradeep Gupta Dr.
Raghav Dalmia, Aryaman Sinha, Ruchi Verma, P. K. Gupta
Dynamic Ready Queue Based Process Priority Scheduling Algorithm
5 pages, 7 Figures, 5 Tables
null
null
null
cs.OS
http://creativecommons.org/licenses/by/4.0/
CPU scheduling is the reason behind the performance of multiprocessing and in time-shared operating systems. Different scheduling criteria are used to evaluate Central Processing Unit Scheduling algorithms which are based on different properties of the system. Round Robin is known to be the most recurrent pre-emptive algorithm used in an environment where processes are allotted a unit of time and multiprocessing operating systems. In this paper, a reformed variation of the Round Robin algorithm has been introduced to minimise the completion time, turnaround time, waiting time and number of context switches that results in the better performance of the system. The proposed work consists of calculation of priority on the basis of the difference between time spent in ready upto the moment and arrival time of the process, to ease up the burden on the ready queue. We have also evaluated the performance of the proposed approach on different datasets and measured the different scheduling criteria.
[ { "created": "Sun, 15 May 2022 15:38:59 GMT", "version": "v1" } ]
2022-05-17
[ [ "Dalmia", "Raghav", "" ], [ "Sinha", "Aryaman", "" ], [ "Verma", "Ruchi", "" ], [ "Gupta", "P. K.", "" ] ]
CPU scheduling is the reason behind the performance of multiprocessing and time-shared operating systems. Different scheduling criteria, based on different properties of the system, are used to evaluate Central Processing Unit scheduling algorithms. Round Robin is the most frequently used pre-emptive algorithm in time-shared and multiprocessing operating systems, where each process is allotted a unit of time. In this paper, a reformed variation of the Round Robin algorithm has been introduced to minimise the completion time, turnaround time, waiting time and number of context switches, resulting in better performance of the system. The proposed work consists of calculating priority on the basis of the difference between the time a process has spent in the ready queue up to the moment and its arrival time, to ease the burden on the ready queue. We have also evaluated the performance of the proposed approach on different datasets and measured the different scheduling criteria.
2210.15377
Miriam Ansch\"utz
Miriam Ansch\"utz, Tobias Eder, Georg Groh
Retrieving Users' Opinions on Social Media with Multimodal Aspect-Based Sentiment Analysis
8 pages, 5 figures, published at 2023 IEEE 17th International Conference on Semantic Computing (ICSC)
null
null
null
cs.IR cs.AI cs.CL cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
People post their opinions and experiences on social media, yielding rich databases of end-users' sentiments. This paper shows to what extent machine learning can analyze and structure these databases. An automated data analysis pipeline is deployed to provide insights into user-generated content for researchers in other domains. First, the domain expert can select an image and a term of interest. Then, the pipeline uses image retrieval to find all images showing similar content and applies aspect-based sentiment analysis to outline users' opinions about the selected term. As part of an interdisciplinary project between architecture and computer science researchers, an empirical study of Hamburg's Elbphilharmonie was conducted. We selected 300 thousand posts with the hashtag \enquote{\texttt{hamburg}} from the platform Flickr. Image retrieval methods generated a subset of slightly more than 1.5 thousand images displaying the Elbphilharmonie. We found that these posts mainly convey a neutral or positive sentiment towards it. With this pipeline, we suggest a new semantic computing method that offers novel insights into end-users' opinions, e.g., for architecture domain experts.
[ { "created": "Thu, 27 Oct 2022 12:38:10 GMT", "version": "v1" }, { "created": "Mon, 9 Jan 2023 07:40:32 GMT", "version": "v2" } ]
2023-01-10
[ [ "Anschütz", "Miriam", "" ], [ "Eder", "Tobias", "" ], [ "Groh", "Georg", "" ] ]
People post their opinions and experiences on social media, yielding rich databases of end-users' sentiments. This paper shows to what extent machine learning can analyze and structure these databases. An automated data analysis pipeline is deployed to provide insights into user-generated content for researchers in other domains. First, the domain expert can select an image and a term of interest. Then, the pipeline uses image retrieval to find all images showing similar content and applies aspect-based sentiment analysis to outline users' opinions about the selected term. As part of an interdisciplinary project between architecture and computer science researchers, an empirical study of Hamburg's Elbphilharmonie was conducted. We selected 300 thousand posts with the hashtag \enquote{\texttt{hamburg}} from the platform Flickr. Image retrieval methods generated a subset of slightly more than 1.5 thousand images displaying the Elbphilharmonie. We found that these posts mainly convey a neutral or positive sentiment towards it. With this pipeline, we suggest a new semantic computing method that offers novel insights into end-users' opinions, e.g., for architecture domain experts.
0811.4483
Ga\"etan Le Guelvouit
Ga\"etan Le Guelvouit, St\'ephane Pateux
Wide spread spectrum watermarking with side information and interference cancellation
12 pages, 8 figures
Proc. IS&T/SPIE Electronic Imaging, vol. 5020, Santa Clara, CA, Jan. 2003
10.1117/12.476839
null
cs.MM cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, a popular method for additive watermarking is wide spread spectrum. It consists of adding a spread signal to the host document. This signal is obtained as the sum of a set of carrier vectors, which are modulated by the bits to be embedded. To extract these embedded bits, weighted correlations between the watermarked document and the carriers are computed. Unfortunately, even without any attack, the obtained set of bits can be corrupted due to interference with the host signal (host interference) and also due to interference with the other carriers (inter-symbol interference (ISI), due to the non-orthogonality of the carriers). Some recent watermarking algorithms deal with host interference using side-informed methods, but the inter-symbol interference problem is still open. In this paper, we deal with interference cancellation methods, and we propose to consider ISI as side information and to integrate it into the host signal. This leads to a great improvement of extraction performance in terms of signal-to-noise ratio and/or watermark robustness.
[ { "created": "Fri, 28 Nov 2008 16:28:59 GMT", "version": "v1" } ]
2008-12-01
[ [ "Guelvouit", "Gaëtan Le", "" ], [ "Pateux", "Stéphane", "" ] ]
Nowadays, a popular method for additive watermarking is wide spread spectrum. It consists of adding a spread signal to the host document. This signal is obtained as the sum of a set of carrier vectors, which are modulated by the bits to be embedded. To extract these embedded bits, weighted correlations between the watermarked document and the carriers are computed. Unfortunately, even without any attack, the obtained set of bits can be corrupted due to interference with the host signal (host interference) and also due to interference with the other carriers (inter-symbol interference (ISI), due to the non-orthogonality of the carriers). Some recent watermarking algorithms deal with host interference using side-informed methods, but the inter-symbol interference problem is still open. In this paper, we deal with interference cancellation methods, and we propose to consider ISI as side information and to integrate it into the host signal. This leads to a great improvement of extraction performance in terms of signal-to-noise ratio and/or watermark robustness.
2309.15646
Devin Kuang
Xiangyu Zhang, Zongqiang Kuang, Zehao Zhang, Fan Huang, Xianfeng Tan
Cold & Warm Net: Addressing Cold-Start Users in Recommender Systems
null
null
null
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cold-start recommendation is one of the major challenges faced by recommender systems (RS). Herein, we focus on the user cold-start problem. Recently, methods utilizing side information or meta-learning have been used to model cold-start users. However, it is difficult to deploy these methods to industrial RS. There has not been much research that pays attention to the user cold-start problem in the matching stage. In this paper, we propose Cold & Warm Net based on expert models that are responsible for modeling cold-start and warm-up users respectively. A gate network is applied to incorporate the results from the two experts. Furthermore, dynamic knowledge distillation acting as a teacher selector is introduced to assist the experts in better learning user representations. Using comprehensive mutual information, features highly relevant to user behavior are selected for the bias net, which explicitly models user behavior bias. Finally, we evaluate Cold & Warm Net on public datasets in comparison to models commonly applied in the matching stage, and it outperforms the other models on all user types. The proposed model has also been deployed on an industrial short video platform and achieves a significant increase in app dwell time and user retention rate.
[ { "created": "Wed, 27 Sep 2023 13:31:43 GMT", "version": "v1" } ]
2023-09-28
[ [ "Zhang", "Xiangyu", "" ], [ "Kuang", "Zongqiang", "" ], [ "Zhang", "Zehao", "" ], [ "Huang", "Fan", "" ], [ "Tan", "Xianfeng", "" ] ]
Cold-start recommendation is one of the major challenges faced by recommender systems (RS). Herein, we focus on the user cold-start problem. Recently, methods utilizing side information or meta-learning have been used to model cold-start users. However, it is difficult to deploy these methods to industrial RS. There has not been much research that pays attention to the user cold-start problem in the matching stage. In this paper, we propose Cold & Warm Net based on expert models that are responsible for modeling cold-start and warm-up users respectively. A gate network is applied to incorporate the results from the two experts. Furthermore, dynamic knowledge distillation acting as a teacher selector is introduced to assist the experts in better learning user representations. Using comprehensive mutual information, features highly relevant to user behavior are selected for the bias net, which explicitly models user behavior bias. Finally, we evaluate Cold & Warm Net on public datasets in comparison to models commonly applied in the matching stage, and it outperforms the other models on all user types. The proposed model has also been deployed on an industrial short video platform and achieves a significant increase in app dwell time and user retention rate.
2203.07407
Manon Blanc
Manon Blanc and Kristoffer Arnsfelt Hansen
Computational Complexity of Multi-Player Evolutionarily Stable Strategies
null
null
null
null
cs.CC
http://creativecommons.org/licenses/by/4.0/
In this paper we study the computational complexity of computing an evolutionarily stable strategy (ESS) in multi-player symmetric games. For two-player games, deciding existence of an ESS is complete for \Sigma_2, the second level of the polynomial time hierarchy. We show that deciding existence of an ESS of a multi-player game is closely connected to the second level of the real polynomial time hierarchy. Namely, we show that the problem is hard for a complexity class we denote as \exists D \forall R and is a member of \exists\forall R, where the former class restricts the latter by having the existentially quantified variables be Boolean rather than real-valued. As a special case of our results it follows that deciding whether a given strategy is an ESS is complete for \forall R. A concept strongly related to ESS is that of a locally superior strategy (LSS). We extend our results about ESS and show that deciding existence of an LSS of a multi-player game is likewise hard for \exists D \forall R and a member of \exists\forall R, and as a special case that deciding whether a given strategy is an LSS is complete for \forall R.
[ { "created": "Mon, 14 Mar 2022 18:13:59 GMT", "version": "v1" } ]
2022-03-16
[ [ "Blanc", "Manon", "" ], [ "Hansen", "Kristoffer Arnsfelt", "" ] ]
In this paper we study the computational complexity of computing an evolutionarily stable strategy (ESS) in multi-player symmetric games. For two-player games, deciding existence of an ESS is complete for \Sigma_2, the second level of the polynomial time hierarchy. We show that deciding existence of an ESS of a multi-player game is closely connected to the second level of the real polynomial time hierarchy. Namely, we show that the problem is hard for a complexity class we denote as \exists D \forall R and is a member of \exists\forall R, where the former class restricts the latter by having the existentially quantified variables be Boolean rather than real-valued. As a special case of our results it follows that deciding whether a given strategy is an ESS is complete for \forall R. A concept strongly related to ESS is that of a locally superior strategy (LSS). We extend our results about ESS and show that deciding existence of an LSS of a multi-player game is likewise hard for \exists D \forall R and a member of \exists\forall R, and as a special case that deciding whether a given strategy is an LSS is complete for \forall R.
0802.3267
Amitabh Trehan
Tom Hayes and Navin Rustagi and Jared Saia and Amitabh Trehan
The Forgiving Tree: A Self-Healing Distributed Data Structure
Submitted to Principles of Distributed Computing (PODC) 2008
PODC '08: Proceedings of the twenty-seventh ACM symposium on Principles of distributed computing. 2008, pages 203--212
null
null
cs.DC cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that the following process continues for up to n rounds where n is the total number of nodes initially in the network: the adversary deletes an arbitrary node from the network, then the network responds by quickly adding a small number of new edges. We present a distributed data structure that ensures two key properties. First, the diameter of the network is never more than $O(\log \Delta)$ times its original diameter, where $\Delta$ is the maximum degree of the network initially. We note that for many peer-to-peer systems, $\Delta$ is polylogarithmic, so the diameter increase would be an $O(\log \log n)$ multiplicative factor. Second, the degree of any node never increases by more than 3 over its original degree. Our data structure is fully distributed, has O(1) latency per round and requires each node to send and receive O(1) messages per round. The data structure requires an initial setup phase that has latency equal to the diameter of the original network, and requires, with high probability, each node v to send O(log n) messages along every edge incident to v. Our approach is orthogonal and complementary to traditional topology-based approaches to defending against attack.
[ { "created": "Fri, 22 Feb 2008 08:22:33 GMT", "version": "v1" } ]
2009-02-15
[ [ "Hayes", "Tom", "" ], [ "Rustagi", "Navin", "" ], [ "Saia", "Jared", "" ], [ "Trehan", "Amitabh", "" ] ]
We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that the following process continues for up to n rounds where n is the total number of nodes initially in the network: the adversary deletes an arbitrary node from the network, then the network responds by quickly adding a small number of new edges. We present a distributed data structure that ensures two key properties. First, the diameter of the network is never more than $O(\log \Delta)$ times its original diameter, where $\Delta$ is the maximum degree of the network initially. We note that for many peer-to-peer systems, $\Delta$ is polylogarithmic, so the diameter increase would be an $O(\log \log n)$ multiplicative factor. Second, the degree of any node never increases by more than 3 over its original degree. Our data structure is fully distributed, has O(1) latency per round and requires each node to send and receive O(1) messages per round. The data structure requires an initial setup phase that has latency equal to the diameter of the original network, and requires, with high probability, each node v to send O(log n) messages along every edge incident to v. Our approach is orthogonal and complementary to traditional topology-based approaches to defending against attack.
2310.03003
Vijay Gadepally
Siddharth Samsi, Dan Zhao, Joseph McDonald, Baolin Li, Adam Michaleas, Michael Jones, William Bergeron, Jeremy Kepner, Devesh Tiwari, Vijay Gadepally
From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference
null
null
null
null
cs.CL cs.DC
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) have exploded in popularity due to their new generative capabilities that go far beyond prior state-of-the-art. These technologies are increasingly being leveraged in various domains such as law, finance, and medicine. However, these models carry significant computational challenges, especially the compute and energy costs required for inference. Inference energy costs already receive less attention than the energy costs of training LLMs -- despite how often these large models are called on to conduct inference in reality (e.g., ChatGPT). As these state-of-the-art LLMs see increasing usage and deployment in various domains, a better understanding of their resource utilization is crucial for cost-savings, scaling performance, efficient hardware usage, and optimal inference strategies. In this paper, we describe experiments conducted to study the computational and energy utilization of inference with LLMs. We benchmark and conduct a preliminary analysis of the inference performance and inference energy costs of different sizes of LLaMA -- a recent state-of-the-art LLM -- developed by Meta AI on two generations of popular GPUs (NVIDIA V100 \& A100) and two datasets (Alpaca and GSM8K) to reflect the diverse set of tasks/benchmarks for LLMs in research and practice. We present the results of multi-node, multi-GPU inference using model sharding across up to 32 GPUs. To our knowledge, our work is one of the first to study LLM inference performance from the perspective of computational and energy resources at this scale.
[ { "created": "Wed, 4 Oct 2023 17:41:59 GMT", "version": "v1" } ]
2023-10-05
[ [ "Samsi", "Siddharth", "" ], [ "Zhao", "Dan", "" ], [ "McDonald", "Joseph", "" ], [ "Li", "Baolin", "" ], [ "Michaleas", "Adam", "" ], [ "Jones", "Michael", "" ], [ "Bergeron", "William", "" ], [ "Kepner", "Jeremy", "" ], [ "Tiwari", "Devesh", "" ], [ "Gadepally", "Vijay", "" ] ]
Large language models (LLMs) have exploded in popularity due to their new generative capabilities that go far beyond prior state-of-the-art. These technologies are increasingly being leveraged in various domains such as law, finance, and medicine. However, these models carry significant computational challenges, especially the compute and energy costs required for inference. Inference energy costs already receive less attention than the energy costs of training LLMs -- despite how often these large models are called on to conduct inference in reality (e.g., ChatGPT). As these state-of-the-art LLMs see increasing usage and deployment in various domains, a better understanding of their resource utilization is crucial for cost-savings, scaling performance, efficient hardware usage, and optimal inference strategies. In this paper, we describe experiments conducted to study the computational and energy utilization of inference with LLMs. We benchmark and conduct a preliminary analysis of the inference performance and inference energy costs of different sizes of LLaMA -- a recent state-of-the-art LLM -- developed by Meta AI on two generations of popular GPUs (NVIDIA V100 \& A100) and two datasets (Alpaca and GSM8K) to reflect the diverse set of tasks/benchmarks for LLMs in research and practice. We present the results of multi-node, multi-GPU inference using model sharding across up to 32 GPUs. To our knowledge, our work is one of the first to study LLM inference performance from the perspective of computational and energy resources at this scale.
2010.05446
Bo Chen
Jing Zhang, Bo Chen, Lingxi Zhang, Xirui Ke, Haipeng Ding
Neural, Symbolic and Neural-Symbolic Reasoning on Knowledge Graphs
29 pages, AI Open Journal 2021
null
null
null
cs.AI cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge graph reasoning is the fundamental component to support machine learning applications such as information extraction, information retrieval, and recommendation. Since knowledge graphs can be viewed as discrete symbolic representations of knowledge, reasoning on knowledge graphs can naturally leverage symbolic techniques. However, symbolic reasoning is intolerant of ambiguous and noisy data. In contrast, the recent advances of deep learning promote neural reasoning on knowledge graphs, which is robust to ambiguous and noisy data but lacks interpretability compared to symbolic reasoning. Considering the advantages and disadvantages of both methodologies, recent efforts have been made to combine the two reasoning methods. In this survey, we take a thorough look at the development of symbolic, neural and hybrid reasoning on knowledge graphs. We survey two specific reasoning tasks, knowledge graph completion and question answering on knowledge graphs, and explain them in a unified reasoning framework. We also briefly discuss future directions for knowledge graph reasoning.
[ { "created": "Mon, 12 Oct 2020 04:28:57 GMT", "version": "v1" }, { "created": "Wed, 28 Oct 2020 06:47:45 GMT", "version": "v2" }, { "created": "Thu, 29 Oct 2020 04:49:30 GMT", "version": "v3" }, { "created": "Fri, 26 Mar 2021 06:46:16 GMT", "version": "v4" }, { "created": "Wed, 31 Mar 2021 02:53:48 GMT", "version": "v5" } ]
2021-04-01
[ [ "Zhang", "Jing", "" ], [ "Chen", "Bo", "" ], [ "Zhang", "Lingxi", "" ], [ "Ke", "Xirui", "" ], [ "Ding", "Haipeng", "" ] ]
Knowledge graph reasoning is the fundamental component to support machine learning applications such as information extraction, information retrieval, and recommendation. Since knowledge graphs can be viewed as discrete symbolic representations of knowledge, reasoning on knowledge graphs can naturally leverage symbolic techniques. However, symbolic reasoning is intolerant of ambiguous and noisy data. In contrast, the recent advances of deep learning promote neural reasoning on knowledge graphs, which is robust to ambiguous and noisy data but lacks interpretability compared to symbolic reasoning. Considering the advantages and disadvantages of both methodologies, recent efforts have been made to combine the two reasoning methods. In this survey, we take a thorough look at the development of symbolic, neural and hybrid reasoning on knowledge graphs. We survey two specific reasoning tasks, knowledge graph completion and question answering on knowledge graphs, and explain them in a unified reasoning framework. We also briefly discuss future directions for knowledge graph reasoning.
2108.08874
Zu Kim
Zu Kim, Andr\'e Araujo, Bingyi Cao, Cam Askew, Jack Sim, Mike Green, N'Mah Fodiatu Yilla, Tobias Weyand
Towards A Fairer Landmark Recognition Dataset
Please cite the full detailed version of the paper instead: Improving Fairness in Large-Scale Object Recognition by CrowdSourced Demographic Information arXiv:2206.01326
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We introduce a new landmark recognition dataset, which is created with a focus on fair worldwide representation. While previous work proposes to collect as many images as possible from web repositories, we instead argue that such approaches can lead to biased data. To create a more comprehensive and equitable dataset, we start by defining the fair relevance of a landmark to the world population. These relevances are estimated by combining anonymized Google Maps user contribution statistics with the contributors' demographic information. We present a stratification approach and analysis which leads to a much fairer coverage of the world, compared to existing datasets. The resulting datasets are used to evaluate computer vision models as part of the Google Landmark Recognition and Retrieval Challenges 2021.
[ { "created": "Thu, 19 Aug 2021 18:42:22 GMT", "version": "v1" }, { "created": "Mon, 6 Jun 2022 15:36:36 GMT", "version": "v2" } ]
2022-06-07
[ [ "Kim", "Zu", "" ], [ "Araujo", "André", "" ], [ "Cao", "Bingyi", "" ], [ "Askew", "Cam", "" ], [ "Sim", "Jack", "" ], [ "Green", "Mike", "" ], [ "Yilla", "N'Mah Fodiatu", "" ], [ "Weyand", "Tobias", "" ] ]
We introduce a new landmark recognition dataset, which is created with a focus on fair worldwide representation. While previous work proposes to collect as many images as possible from web repositories, we instead argue that such approaches can lead to biased data. To create a more comprehensive and equitable dataset, we start by defining the fair relevance of a landmark to the world population. These relevances are estimated by combining anonymized Google Maps user contribution statistics with the contributors' demographic information. We present a stratification approach and analysis which leads to a much fairer coverage of the world, compared to existing datasets. The resulting datasets are used to evaluate computer vision models as part of the Google Landmark Recognition and Retrieval Challenges 2021.
2306.02632
Catie Cuan
Catie Cuan, Emre Fisher, Allison Okamura, and Tom Engbersen
Music Mode: Transforming Robot Movement into Music Increases Likability and Perceived Intelligence
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As robots enter everyday spaces like offices, the sounds they create affect how they are perceived. We present Music Mode, a novel mapping between a robot's joint motions and sounds, programmed by artists and engineers to make the robot generate music as it moves. Two experiments were designed to characterize the effect of this musical augmentation on human users. In the first experiment, a robot performed three tasks while playing three different sound mappings. Results showed that participants observing the robot perceived it as more safe, animate, intelligent, anthropomorphic, and likable when playing the Music Mode Orchestra software. To test whether the results of the first experiment were due to the Music Mode algorithm, rather than music alone, we conducted a second experiment. Here the robot performed the same three tasks, while a participant observed via video, but the Orchestra music was either linked to its movement or random. Participants rated the robots as more intelligent when the music was linked to the movement. Robots using Music Mode logged approximately two hundred hours of operation while navigating, wiping tables, and sorting trash, and bystander comments made during this operating time served as an embedded case study. This paper makes both designerly and engineering contributions: (1) an interdisciplinary choreographic, musical, and coding design process to develop a real-world robot sound feature, (2) a technical implementation for movement-based sound generation, and (3) two experiments and an embedded case study of robots running this feature during daily work activities that resulted in increased likability and perceived intelligence of the robot.
[ { "created": "Mon, 5 Jun 2023 07:04:45 GMT", "version": "v1" }, { "created": "Mon, 1 Apr 2024 01:40:59 GMT", "version": "v2" } ]
2024-04-02
[ [ "Cuan", "Catie", "" ], [ "Fisher", "Emre", "" ], [ "Okamura", "Allison", "" ], [ "Engbersen", "Tom", "" ] ]
As robots enter everyday spaces like offices, the sounds they create affect how they are perceived. We present Music Mode, a novel mapping between a robot's joint motions and sounds, programmed by artists and engineers to make the robot generate music as it moves. Two experiments were designed to characterize the effect of this musical augmentation on human users. In the first experiment, a robot performed three tasks while playing three different sound mappings. Results showed that participants observing the robot perceived it as more safe, animate, intelligent, anthropomorphic, and likable when playing the Music Mode Orchestra software. To test whether the results of the first experiment were due to the Music Mode algorithm, rather than music alone, we conducted a second experiment. Here the robot performed the same three tasks, while a participant observed via video, but the Orchestra music was either linked to its movement or random. Participants rated the robots as more intelligent when the music was linked to the movement. Robots using Music Mode logged approximately two hundred hours of operation while navigating, wiping tables, and sorting trash, and bystander comments made during this operating time served as an embedded case study. This paper makes both designerly and engineering contributions: (1) an interdisciplinary choreographic, musical, and coding design process to develop a real-world robot sound feature, (2) a technical implementation for movement-based sound generation, and (3) two experiments and an embedded case study of robots running this feature during daily work activities that resulted in increased likability and perceived intelligence of the robot.
2003.08747
Laura Rieger
Laura Rieger, Lars Kai Hansen
IROF: a low resource evaluation metric for explanation methods
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The adoption of machine learning in health care hinges on the transparency of the used algorithms, necessitating explanation methods. However, despite a growing literature on explaining neural networks, no consensus has been reached on how to evaluate those explanation methods. We propose IROF, a new approach to evaluating explanation methods that circumvents the need for manual evaluation. Compared to other recent work, our approach requires several orders of magnitude fewer computational resources and no human input, making it accessible to lower-resource groups and robust to human bias.
[ { "created": "Mon, 9 Mar 2020 13:01:30 GMT", "version": "v1" } ]
2020-03-20
[ [ "Rieger", "Laura", "" ], [ "Hansen", "Lars Kai", "" ] ]
The adoption of machine learning in health care hinges on the transparency of the used algorithms, necessitating explanation methods. However, despite a growing literature on explaining neural networks, no consensus has been reached on how to evaluate those explanation methods. We propose IROF, a new approach to evaluating explanation methods that circumvents the need for manual evaluation. Compared to other recent work, our approach requires several orders of magnitude fewer computational resources and no human input, making it accessible to lower-resource groups and robust to human bias.
1704.02537
Nikhil Mande
Arkadev Chattopadhyay, Nikhil S. Mande
Dual polynomials and communication complexity of $\textsf{XOR}$ functions
null
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show a new duality between the polynomial margin complexity of $f$ and the discrepancy of the function $f \circ \textsf{XOR}$, called an $\textsf{XOR}$ function. Using this duality, we develop polynomial-based techniques for understanding the bounded error ($\textsf{BPP}$) and the weakly-unbounded error ($\textsf{PP}$) communication complexities of $\textsf{XOR}$ functions. We show the following. A weak form of an interesting conjecture of Zhang and Shi (Quantum Information and Computation, 2009) (the full conjecture has just been reported to be independently settled by Hatami and Qian (Arxiv, 2017); however, their techniques are quite different and are not known to yield many of the results we obtain here). Zhang and Shi assert that for symmetric functions $f : \{0, 1\}^n \rightarrow \{-1, 1\}$, the weakly unbounded-error complexity of $f \circ \textsf{XOR}$ is essentially characterized by the number of points $i$ in the set $\{0,1, \dots,n-2\}$ for which $D_f(i) \neq D_f(i+2)$, where $D_f$ is the predicate corresponding to $f$. The number of such points is called the odd-even degree of $f$. We show that the $\textsf{PP}$ complexity of $f \circ \textsf{XOR}$ is $\Omega(k/\log(n/k))$, where $k$ is the odd-even degree of $f$. We resolve a conjecture of a different Zhang characterizing the Threshold of Parity circuit size of symmetric functions in terms of their odd-even degree. We obtain a new proof of the exponential separation between $\textsf{PP}^{cc}$ and $\textsf{UPP}^{cc}$ via an $\textsf{XOR}$ function. We provide a characterization of the approximate spectral norm of symmetric functions, affirming a conjecture of Ada et al. (APPROX-RANDOM, 2012) which has several consequences. Additionally, we prove strong $\textsf{UPP}$ lower bounds for $f \circ \textsf{XOR}$, when $f$ is symmetric and periodic with period $O(n^{1/2-\epsilon})$, for any constant $\epsilon > 0$.
[ { "created": "Sat, 8 Apr 2017 21:27:11 GMT", "version": "v1" } ]
2017-04-11
[ [ "Chattopadhyay", "Arkadev", "" ], [ "Mande", "Nikhil S.", "" ] ]
We show a new duality between the polynomial margin complexity of $f$ and the discrepancy of the function $f \circ \textsf{XOR}$, called an $\textsf{XOR}$ function. Using this duality, we develop polynomial-based techniques for understanding the bounded error ($\textsf{BPP}$) and the weakly-unbounded error ($\textsf{PP}$) communication complexities of $\textsf{XOR}$ functions. We show the following. A weak form of an interesting conjecture of Zhang and Shi (Quantum Information and Computation, 2009) (the full conjecture has just been reported to be independently settled by Hatami and Qian (Arxiv, 2017); however, their techniques are quite different and are not known to yield many of the results we obtain here). Zhang and Shi assert that for symmetric functions $f : \{0, 1\}^n \rightarrow \{-1, 1\}$, the weakly unbounded-error complexity of $f \circ \textsf{XOR}$ is essentially characterized by the number of points $i$ in the set $\{0,1, \dots,n-2\}$ for which $D_f(i) \neq D_f(i+2)$, where $D_f$ is the predicate corresponding to $f$. The number of such points is called the odd-even degree of $f$. We show that if the odd-even degree of $f$ is $k$, then the $\textsf{PP}$ complexity of $f \circ \textsf{XOR}$ is $\Omega(k/\log(n/k))$. We resolve a conjecture of a different Zhang characterizing the Threshold of Parity circuit size of symmetric functions in terms of their odd-even degree. We obtain a new proof of the exponential separation between $\textsf{PP}^{cc}$ and $\textsf{UPP}^{cc}$ via an $\textsf{XOR}$ function. We provide a characterization of the approximate spectral norm of symmetric functions, affirming a conjecture of Ada et al. (APPROX-RANDOM, 2012) which has several consequences. Additionally, we prove strong $\textsf{UPP}$ lower bounds for $f \circ \textsf{XOR}$, when $f$ is symmetric and periodic with period $O(n^{1/2-\epsilon})$, for any constant $\epsilon > 0$.
2102.03520
Jie Mei
Jie Mei, Jenq-Neng Hwang, Suzanne Romain, Craig Rose, Braden Moore, and Kelsey Magrane
Video-based Hierarchical Species Classification for Longline Fishing Monitoring
To be published in CVAUI2020 in conjunction with ICPR2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of electronic monitoring (EM) of longline fishing is to monitor the fish catching activities on fishing vessels, either for regulatory compliance or for catch counting. Hierarchical classification based on videos allows for inexpensive and efficient fish species identification of catches from longline fishing, where fish undergo severe deformation and self-occlusion during the catching process. More importantly, the flexibility of hierarchical classification mitigates the laborious effort of human review by providing confidence scores at different hierarchical levels. Some related works either use cascaded models for hierarchical classification, make predictions per image, or predict one overlapping hierarchical data structure of the dataset in advance. However, with a known non-overlapping hierarchical data structure provided by fisheries scientists, our method enforces the hierarchical data structure and introduces an efficient training and inference strategy for video-based fisheries data. Our experiments show that the proposed method significantly outperforms the classic flat classification system, and our ablation study justifies our contributions in CNN model design, training strategy, and the video-based inference schemes for the hierarchical fish species classification task.
[ { "created": "Sat, 6 Feb 2021 06:10:52 GMT", "version": "v1" } ]
2021-02-09
[ [ "Mei", "Jie", "" ], [ "Hwang", "Jenq-Neng", "" ], [ "Romain", "Suzanne", "" ], [ "Rose", "Craig", "" ], [ "Moore", "Braden", "" ], [ "Magrane", "Kelsey", "" ] ]
The goal of electronic monitoring (EM) of longline fishing is to monitor the fish catching activities on fishing vessels, either for regulatory compliance or for catch counting. Hierarchical classification based on videos allows for inexpensive and efficient fish species identification of catches from longline fishing, where fish undergo severe deformation and self-occlusion during the catching process. More importantly, the flexibility of hierarchical classification mitigates the laborious effort of human review by providing confidence scores at different hierarchical levels. Some related works either use cascaded models for hierarchical classification, make predictions per image, or predict one overlapping hierarchical data structure of the dataset in advance. However, with a known non-overlapping hierarchical data structure provided by fisheries scientists, our method enforces the hierarchical data structure and introduces an efficient training and inference strategy for video-based fisheries data. Our experiments show that the proposed method significantly outperforms the classic flat classification system, and our ablation study justifies our contributions in CNN model design, training strategy, and the video-based inference schemes for the hierarchical fish species classification task.
2405.20527
Francesco Ronzano
Francesco Ronzano and Jay Nanavati
Towards Ontology-Enhanced Representation Learning for Large Language Models
14 pages, 1 figure
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Taking advantage of the widespread use of ontologies to organise and harmonize knowledge across several distinct domains, this paper proposes a novel approach to improve an embedding-Large Language Model (embedding-LLM) of interest by infusing the knowledge formalized by a reference ontology: ontological knowledge infusion aims at boosting the ability of the considered LLM to effectively model the knowledge domain described by the infused ontology. The linguistic information (i.e. concept synonyms and descriptions) and structural information (i.e. is-a relations) formalized by the ontology are utilized to compile a comprehensive set of concept definitions, with the assistance of a powerful generative LLM (i.e. GPT-3.5-turbo). These concept definitions are then employed to fine-tune the target embedding-LLM using a contrastive learning framework. To demonstrate and evaluate the proposed approach, we utilize the biomedical disease ontology MONDO. The results show that embedding-LLMs enhanced by ontological disease knowledge exhibit an improved capability to effectively evaluate the similarity of in-domain sentences from biomedical documents mentioning diseases, without compromising their out-of-domain performance.
[ { "created": "Thu, 30 May 2024 23:01:10 GMT", "version": "v1" } ]
2024-06-03
[ [ "Ronzano", "Francesco", "" ], [ "Nanavati", "Jay", "" ] ]
Taking advantage of the widespread use of ontologies to organise and harmonize knowledge across several distinct domains, this paper proposes a novel approach to improve an embedding-Large Language Model (embedding-LLM) of interest by infusing the knowledge formalized by a reference ontology: ontological knowledge infusion aims at boosting the ability of the considered LLM to effectively model the knowledge domain described by the infused ontology. The linguistic information (i.e. concept synonyms and descriptions) and structural information (i.e. is-a relations) formalized by the ontology are utilized to compile a comprehensive set of concept definitions, with the assistance of a powerful generative LLM (i.e. GPT-3.5-turbo). These concept definitions are then employed to fine-tune the target embedding-LLM using a contrastive learning framework. To demonstrate and evaluate the proposed approach, we utilize the biomedical disease ontology MONDO. The results show that embedding-LLMs enhanced by ontological disease knowledge exhibit an improved capability to effectively evaluate the similarity of in-domain sentences from biomedical documents mentioning diseases, without compromising their out-of-domain performance.
2308.04761
Zijian Li
Zijian Li, Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang
Feature Matching Data Synthesis for Non-IID Federated Learning
16 pages
null
null
null
cs.LG cs.AI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning (FL) has emerged as a privacy-preserving paradigm that trains neural networks on edge devices without collecting data at a central server. However, FL encounters an inherent challenge in dealing with non-independent and identically distributed (non-IID) data among devices. To address this challenge, this paper proposes a hard feature matching data synthesis (HFMDS) method to share auxiliary data besides local models. Specifically, synthetic data are generated by learning the essential class-relevant features of real samples and discarding the redundant features, which helps to effectively tackle the non-IID issue. For better privacy preservation, we propose a hard feature augmentation method to transfer real features towards the decision boundary, with which the synthetic data not only improve the model generalization but also erase the information of real features. By integrating the proposed HFMDS method with FL, we present a novel FL framework with data augmentation to relieve data heterogeneity. The theoretical analysis highlights the effectiveness of our proposed data synthesis method in solving the non-IID challenge. Simulation results further demonstrate that our proposed HFMDS-FL algorithm outperforms the baselines in terms of accuracy, privacy preservation, and computational cost on various benchmark datasets.
[ { "created": "Wed, 9 Aug 2023 07:49:39 GMT", "version": "v1" } ]
2023-08-10
[ [ "Li", "Zijian", "" ], [ "Sun", "Yuchang", "" ], [ "Shao", "Jiawei", "" ], [ "Mao", "Yuyi", "" ], [ "Wang", "Jessie Hui", "" ], [ "Zhang", "Jun", "" ] ]
Federated learning (FL) has emerged as a privacy-preserving paradigm that trains neural networks on edge devices without collecting data at a central server. However, FL encounters an inherent challenge in dealing with non-independent and identically distributed (non-IID) data among devices. To address this challenge, this paper proposes a hard feature matching data synthesis (HFMDS) method to share auxiliary data besides local models. Specifically, synthetic data are generated by learning the essential class-relevant features of real samples and discarding the redundant features, which helps to effectively tackle the non-IID issue. For better privacy preservation, we propose a hard feature augmentation method to transfer real features towards the decision boundary, with which the synthetic data not only improve the model generalization but also erase the information of real features. By integrating the proposed HFMDS method with FL, we present a novel FL framework with data augmentation to relieve data heterogeneity. The theoretical analysis highlights the effectiveness of our proposed data synthesis method in solving the non-IID challenge. Simulation results further demonstrate that our proposed HFMDS-FL algorithm outperforms the baselines in terms of accuracy, privacy preservation, and computational cost on various benchmark datasets.
2312.14211
Sergi Blanco-Cuaresma
Sergi Blanco-Cuaresma, Ioana Ciuc\u{a}, Alberto Accomazzi, Michael J. Kurtz, Edwin A. Henneken, Kelly E. Lockhart, Felix Grezes, Thomas Allen, Golnaz Shapurian, Carolyn S. Grant, Donna M. Thompson, Timothy W. Hostetler, Matthew R. Templeton, Shinyi Chen, Jennifer Koch, Taylor Jacovich, Daniel Chivvis, Fernanda de Macedo Alves, Jean-Claude Paquin, Jennifer Bartlett, Mugdha Polimera, and Stephanie Jarmak
Experimenting with Large Language Models and vector embeddings in NASA SciX
To appear in the proceedings of the 33th annual international Astronomical Data Analysis Software & Systems (ADASS XXXIII)
null
null
null
cs.CL astro-ph.IM cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open-source Large Language Models enable projects such as NASA SciX (i.e., NASA ADS) to think out of the box and try alternative approaches for information retrieval and data augmentation, while respecting data copyright and users' privacy. However, when large language models are directly prompted with questions without any context, they are prone to hallucination. At NASA SciX we have developed an experiment where we created semantic vectors for our large collection of abstracts and full-text content, and we designed a prompt system to ask questions using contextual chunks from our system. Based on a non-systematic human evaluation, the experiment shows a lower degree of hallucination and better responses when using Retrieval Augmented Generation. Further exploration is required to design new features and data augmentation processes at NASA SciX that leverage this technology while respecting the high level of trust and quality that the project holds.
[ { "created": "Thu, 21 Dec 2023 10:19:58 GMT", "version": "v1" } ]
2023-12-25
[ [ "Blanco-Cuaresma", "Sergi", "" ], [ "Ciucă", "Ioana", "" ], [ "Accomazzi", "Alberto", "" ], [ "Kurtz", "Michael J.", "" ], [ "Henneken", "Edwin A.", "" ], [ "Lockhart", "Kelly E.", "" ], [ "Grezes", "Felix", "" ], [ "Allen", "Thomas", "" ], [ "Shapurian", "Golnaz", "" ], [ "Grant", "Carolyn S.", "" ], [ "Thompson", "Donna M.", "" ], [ "Hostetler", "Timothy W.", "" ], [ "Templeton", "Matthew R.", "" ], [ "Chen", "Shinyi", "" ], [ "Koch", "Jennifer", "" ], [ "Jacovich", "Taylor", "" ], [ "Chivvis", "Daniel", "" ], [ "Alves", "Fernanda de Macedo", "" ], [ "Paquin", "Jean-Claude", "" ], [ "Bartlett", "Jennifer", "" ], [ "Polimera", "Mugdha", "" ], [ "Jarmak", "Stephanie", "" ] ]
Open-source Large Language Models enable projects such as NASA SciX (i.e., NASA ADS) to think out of the box and try alternative approaches for information retrieval and data augmentation, while respecting data copyright and users' privacy. However, when large language models are directly prompted with questions without any context, they are prone to hallucination. At NASA SciX we have developed an experiment where we created semantic vectors for our large collection of abstracts and full-text content, and we designed a prompt system to ask questions using contextual chunks from our system. Based on a non-systematic human evaluation, the experiment shows a lower degree of hallucination and better responses when using Retrieval Augmented Generation. Further exploration is required to design new features and data augmentation processes at NASA SciX that leverage this technology while respecting the high level of trust and quality that the project holds.
1501.07429
Nans Lefebvre
Nans Lefebvre
Convergence law for hyper-graphs with prescribed degree sequences
10 pages, 6 figures
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We view hyper-graphs as incidence graphs, i.e. bipartite graphs with a set of nodes representing vertices and a set of nodes representing hyper-edges, with two nodes being adjacent if the corresponding vertex belongs to the corresponding hyper-edge. This viewpoint defines a random hyper-multigraph specified by two distributions, one for the degrees of the vertices, and one for the sizes of the hyper-edges. We develop the logical analysis of this framework and first prove a convergence law for first-order logic, then characterise the limit first-order theories defined by a wide class of degree distributions. Convergence laws of other models follow, and in particular for the classical Erd\H{o}s-R\'enyi graphs and $k$-uniform hyper-graphs.
[ { "created": "Thu, 29 Jan 2015 12:07:25 GMT", "version": "v1" }, { "created": "Mon, 16 Feb 2015 01:02:47 GMT", "version": "v2" }, { "created": "Wed, 6 May 2015 21:36:34 GMT", "version": "v3" } ]
2015-05-08
[ [ "Lefebvre", "Nans", "" ] ]
We view hyper-graphs as incidence graphs, i.e. bipartite graphs with a set of nodes representing vertices and a set of nodes representing hyper-edges, with two nodes being adjacent if the corresponding vertex belongs to the corresponding hyper-edge. This viewpoint defines a random hyper-multigraph specified by two distributions, one for the degrees of the vertices, and one for the sizes of the hyper-edges. We develop the logical analysis of this framework and first prove a convergence law for first-order logic, then characterise the limit first-order theories defined by a wide class of degree distributions. Convergence laws of other models follow, and in particular for the classical Erd\H{o}s-R\'enyi graphs and $k$-uniform hyper-graphs.
2404.11757
Mike Merrill
Mike A. Merrill and Mingtian Tan and Vinayak Gupta and Tom Hartvigsen and Tim Althoff
Language Models Still Struggle to Zero-shot Reason about Time Series
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Time series are critical for decision-making in fields like finance and healthcare. Their importance has driven a recent influx of works passing time series into language models, leading to non-trivial forecasting on some datasets. But it remains unknown whether non-trivial forecasting implies that language models can reason about time series. To address this gap, we generate a first-of-its-kind evaluation framework for time series reasoning, including formal tasks and a corresponding dataset of multi-scale time series paired with text captions across ten domains. Using these data, we probe whether language models achieve three forms of reasoning: (1) Etiological Reasoning - given an input time series, can the language model identify the scenario that most likely created it? (2) Question Answering - can a language model answer factual questions about time series? (3) Context-Aided Forecasting - does highly relevant textual context improve a language model's time series forecasts? We find that otherwise highly-capable language models demonstrate surprisingly limited time series reasoning: they score marginally above random on etiological and question answering tasks (up to 30 percentage points worse than humans) and show modest success in using context to improve forecasting. These weaknesses showcase that time series reasoning is an impactful, yet deeply underdeveloped direction for language model research. We also make our datasets and code public to support further research in this direction at https://github.com/behavioral-data/TSandLanguage
[ { "created": "Wed, 17 Apr 2024 21:27:33 GMT", "version": "v1" } ]
2024-04-19
[ [ "Merrill", "Mike A.", "" ], [ "Tan", "Mingtian", "" ], [ "Gupta", "Vinayak", "" ], [ "Hartvigsen", "Tom", "" ], [ "Althoff", "Tim", "" ] ]
Time series are critical for decision-making in fields like finance and healthcare. Their importance has driven a recent influx of works passing time series into language models, leading to non-trivial forecasting on some datasets. But it remains unknown whether non-trivial forecasting implies that language models can reason about time series. To address this gap, we generate a first-of-its-kind evaluation framework for time series reasoning, including formal tasks and a corresponding dataset of multi-scale time series paired with text captions across ten domains. Using these data, we probe whether language models achieve three forms of reasoning: (1) Etiological Reasoning - given an input time series, can the language model identify the scenario that most likely created it? (2) Question Answering - can a language model answer factual questions about time series? (3) Context-Aided Forecasting - does highly relevant textual context improve a language model's time series forecasts? We find that otherwise highly-capable language models demonstrate surprisingly limited time series reasoning: they score marginally above random on etiological and question answering tasks (up to 30 percentage points worse than humans) and show modest success in using context to improve forecasting. These weaknesses showcase that time series reasoning is an impactful, yet deeply underdeveloped direction for language model research. We also make our datasets and code public to support further research in this direction at https://github.com/behavioral-data/TSandLanguage
2205.11194
Tao Shen
Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Guodong Long, Kai Zhang, Daxin Jiang
UnifieR: A Unified Retriever for Large-Scale Retrieval
To appear at KDD ADS 2023
null
null
null
cs.IR cs.CL
http://creativecommons.org/licenses/by/4.0/
Large-scale retrieval aims to recall relevant documents from a huge collection given a query. It relies on representation learning to embed documents and queries into a common semantic encoding space. According to the encoding space, recent retrieval methods based on pre-trained language models (PLM) can be coarsely categorized into either dense-vector or lexicon-based paradigms. These two paradigms unveil the PLMs' representation capability in different granularities, i.e., global sequence-level compression and local word-level contexts, respectively. Inspired by their complementary global-local contextualization and distinct representing views, we propose a new learning framework, UnifieR, which unifies dense-vector and lexicon-based retrieval in one model with a dual-representing capability. Experiments on passage retrieval benchmarks verify its effectiveness in both paradigms. A uni-retrieval scheme is further presented with even better retrieval quality. We lastly evaluate the model on the BEIR benchmark to verify its transferability.
[ { "created": "Mon, 23 May 2022 11:01:59 GMT", "version": "v1" }, { "created": "Sun, 4 Jun 2023 12:59:36 GMT", "version": "v2" } ]
2023-06-06
[ [ "Shen", "Tao", "" ], [ "Geng", "Xiubo", "" ], [ "Tao", "Chongyang", "" ], [ "Xu", "Can", "" ], [ "Long", "Guodong", "" ], [ "Zhang", "Kai", "" ], [ "Jiang", "Daxin", "" ] ]
Large-scale retrieval aims to recall relevant documents from a huge collection given a query. It relies on representation learning to embed documents and queries into a common semantic encoding space. According to the encoding space, recent retrieval methods based on pre-trained language models (PLM) can be coarsely categorized into either dense-vector or lexicon-based paradigms. These two paradigms unveil the PLMs' representation capability in different granularities, i.e., global sequence-level compression and local word-level contexts, respectively. Inspired by their complementary global-local contextualization and distinct representing views, we propose a new learning framework, UnifieR, which unifies dense-vector and lexicon-based retrieval in one model with a dual-representing capability. Experiments on passage retrieval benchmarks verify its effectiveness in both paradigms. A uni-retrieval scheme is further presented with even better retrieval quality. We lastly evaluate the model on the BEIR benchmark to verify its transferability.
2006.05935
Dieter B\"uchler
Dieter B\"uchler, Simon Guist, Roberto Calandra, Vincent Berenz, Bernhard Sch\"olkopf, Jan Peters
Learning to Play Table Tennis From Scratch using Muscular Robots
11 pages, 8 figures. Submitted to T-RO. For more information visit https://muscularTT.embodied.ml
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dynamic tasks like table tennis are relatively easy to learn for humans but pose significant challenges to robots. Such tasks require accurate control of fast movements and precise timing in the presence of imprecise state estimation of the flying ball and the robot. Reinforcement Learning (RL) has shown promise in learning of complex control tasks from data. However, applying step-based RL to dynamic tasks on real systems is safety-critical as RL requires exploring and failing safely for millions of time steps in high-speed regimes. In this paper, we demonstrate that safe learning of table tennis using model-free Reinforcement Learning can be achieved by using robot arms driven by pneumatic artificial muscles (PAMs). Softness and back-drivability properties of PAMs prevent the system from leaving the safe region of its state space. In this manner, RL empowers the robot to return and smash real balls at 5 m/s and 12 m/s on average to a desired landing point. Our setup allows the agent to learn this safety-critical task (i) without safety constraints in the algorithm, (ii) while maximizing the speed of returned balls directly in the reward function, (iii) using a stochastic policy that acts directly on the low-level controls of the real system, (iv) training for thousands of trials, and (v) from scratch without any prior knowledge. Additionally, we present HYSR, a practical hybrid sim and real training that avoids playing real balls during training by randomly replaying recorded ball trajectories in simulation and applying actions to the real robot. This work is the first to (a) fail-safe learn a safety-critical dynamic task using anthropomorphic robot arms, (b) learn a precision-demanding problem with a PAM-driven system despite the control challenges, and (c) train robots to play table tennis without real balls. Videos and datasets are available at muscularTT.embodied.ml.
[ { "created": "Wed, 10 Jun 2020 16:43:27 GMT", "version": "v1" } ]
2020-06-11
[ [ "Büchler", "Dieter", "" ], [ "Guist", "Simon", "" ], [ "Calandra", "Roberto", "" ], [ "Berenz", "Vincent", "" ], [ "Schölkopf", "Bernhard", "" ], [ "Peters", "Jan", "" ] ]
Dynamic tasks like table tennis are relatively easy to learn for humans but pose significant challenges to robots. Such tasks require accurate control of fast movements and precise timing in the presence of imprecise state estimation of the flying ball and the robot. Reinforcement Learning (RL) has shown promise in learning of complex control tasks from data. However, applying step-based RL to dynamic tasks on real systems is safety-critical as RL requires exploring and failing safely for millions of time steps in high-speed regimes. In this paper, we demonstrate that safe learning of table tennis using model-free Reinforcement Learning can be achieved by using robot arms driven by pneumatic artificial muscles (PAMs). Softness and back-drivability properties of PAMs prevent the system from leaving the safe region of its state space. In this manner, RL empowers the robot to return and smash real balls at 5 m/s and 12 m/s on average to a desired landing point. Our setup allows the agent to learn this safety-critical task (i) without safety constraints in the algorithm, (ii) while maximizing the speed of returned balls directly in the reward function, (iii) using a stochastic policy that acts directly on the low-level controls of the real system, (iv) training for thousands of trials, and (v) from scratch without any prior knowledge. Additionally, we present HYSR, a practical hybrid sim and real training that avoids playing real balls during training by randomly replaying recorded ball trajectories in simulation and applying actions to the real robot. This work is the first to (a) fail-safe learn a safety-critical dynamic task using anthropomorphic robot arms, (b) learn a precision-demanding problem with a PAM-driven system despite the control challenges, and (c) train robots to play table tennis without real balls. Videos and datasets are available at muscularTT.embodied.ml.
2102.00434
Gilad Yehudai
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir
The Connection Between Approximation, Depth Separation and Learnability in Neural Networks
COLT 2021 camera ready version
null
null
null
cs.LG cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several recent works have shown separation results between deep neural networks and hypothesis classes with inferior approximation capacity, such as shallow networks or kernel classes. On the other hand, the fact that deep networks can efficiently express a target function does not mean that this target function can be learned efficiently by deep neural networks. In this work we study the intricate connection between learnability and approximation capacity. We show that learnability with deep networks of a target function depends on the ability of simpler classes to approximate the target. Specifically, we show that a necessary condition for a function to be learnable by gradient descent on deep neural networks is to be able to approximate the function, at least in a weak sense, with shallow neural networks. We also show that a class of functions can be learned by an efficient statistical query algorithm if and only if it can be approximated in a weak sense by some kernel class. We give several examples of functions which demonstrate depth separation, and conclude that they cannot be efficiently learned, even by a hypothesis class that can efficiently approximate them.
[ { "created": "Sun, 31 Jan 2021 11:32:30 GMT", "version": "v1" }, { "created": "Sun, 18 Jul 2021 12:32:55 GMT", "version": "v2" } ]
2021-07-20
[ [ "Malach", "Eran", "" ], [ "Yehudai", "Gilad", "" ], [ "Shalev-Shwartz", "Shai", "" ], [ "Shamir", "Ohad", "" ] ]
Several recent works have shown separation results between deep neural networks and hypothesis classes with inferior approximation capacity, such as shallow networks or kernel classes. On the other hand, the fact that deep networks can efficiently express a target function does not mean that this target function can be learned efficiently by deep neural networks. In this work we study the intricate connection between learnability and approximation capacity. We show that learnability with deep networks of a target function depends on the ability of simpler classes to approximate the target. Specifically, we show that a necessary condition for a function to be learnable by gradient descent on deep neural networks is to be able to approximate the function, at least in a weak sense, with shallow neural networks. We also show that a class of functions can be learned by an efficient statistical query algorithm if and only if it can be approximated in a weak sense by some kernel class. We give several examples of functions which demonstrate depth separation, and conclude that they cannot be efficiently learned, even by a hypothesis class that can efficiently approximate them.
1909.01383
Elena Voita
Elena Voita, Rico Sennrich, Ivan Titov
Context-Aware Monolingual Repair for Neural Machine Translation
EMNLP 2019 (camera-ready)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern sentence-level NMT systems often produce plausible translations of isolated sentences. However, when put in context, these translations may end up being inconsistent with each other. We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations. DocRepair performs automatic post-editing on a sequence of sentence-level translations, refining translations of sentences in context of each other. For training, the DocRepair model requires only monolingual document-level data in the target language. It is trained as a monolingual sequence-to-sequence model that maps inconsistent groups of sentences into consistent ones. The consistent groups come from the original training data; the inconsistent groups are obtained by sampling round-trip translations for each isolated sentence. We show that this approach successfully imitates inconsistencies we aim to fix: using contrastive evaluation, we show large improvements in the translation of several contextual phenomena in an English-Russian translation task, as well as improvements in the BLEU score. We also conduct a human evaluation and show a strong preference of the annotators to corrected translations over the baseline ones. Moreover, we analyze which discourse phenomena are hard to capture using monolingual data only.
[ { "created": "Tue, 3 Sep 2019 18:12:36 GMT", "version": "v1" }, { "created": "Tue, 15 Oct 2019 11:13:52 GMT", "version": "v2" } ]
2019-10-16
[ [ "Voita", "Elena", "" ], [ "Sennrich", "Rico", "" ], [ "Titov", "Ivan", "" ] ]
Modern sentence-level NMT systems often produce plausible translations of isolated sentences. However, when put in context, these translations may end up being inconsistent with each other. We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations. DocRepair performs automatic post-editing on a sequence of sentence-level translations, refining translations of sentences in context of each other. For training, the DocRepair model requires only monolingual document-level data in the target language. It is trained as a monolingual sequence-to-sequence model that maps inconsistent groups of sentences into consistent ones. The consistent groups come from the original training data; the inconsistent groups are obtained by sampling round-trip translations for each isolated sentence. We show that this approach successfully imitates inconsistencies we aim to fix: using contrastive evaluation, we show large improvements in the translation of several contextual phenomena in an English-Russian translation task, as well as improvements in the BLEU score. We also conduct a human evaluation and show a strong preference of the annotators to corrected translations over the baseline ones. Moreover, we analyze which discourse phenomena are hard to capture using monolingual data only.
1406.1273
Will Rosenbaum
Rafail Ostrovsky and Will Rosenbaum
On The Communication Complexity of Finding an (Approximate) Stable Marriage
This paper has been subsumed by arXiv:1405.7709
null
null
null
cs.CC cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider the communication complexity of protocols that compute stable matchings. We work within the context of Gale and Shapley's original stable marriage problem\cite{GS62}: $n$ men and $n$ women each privately hold a total and strict ordering on all of the members of the opposite gender. They wish to collaborate in order to find a stable matching---a pairing of the men and women such that no unmatched pair mutually prefer each other to their assigned partners in the matching. We show that any communication protocol (deterministic, nondeterministic, or randomized) that correctly outputs a stable matching requires $\Omega(n^2)$ bits of communication. Thus, the original algorithm of Gale and Shapley is communication-optimal up to a logarithmic factor. We then introduce a "divorce metric" on the set of all matchings, which allows us to consider approximately stable matchings. We describe an efficient algorithm to compute the "distance to stability" of a given matching. We then show that even under the relaxed requirement that a protocol only yield an approximate stable matching, the $\Omega(n^2)$ communication lower bound still holds.
[ { "created": "Thu, 5 Jun 2014 05:33:15 GMT", "version": "v1" }, { "created": "Thu, 9 Oct 2014 04:17:30 GMT", "version": "v2" } ]
2014-10-10
[ [ "Ostrovsky", "Rafail", "" ], [ "Rosenbaum", "Will", "" ] ]
In this paper, we consider the communication complexity of protocols that compute stable matchings. We work within the context of Gale and Shapley's original stable marriage problem\cite{GS62}: $n$ men and $n$ women each privately hold a total and strict ordering on all of the members of the opposite gender. They wish to collaborate in order to find a stable matching---a pairing of the men and women such that no unmatched pair mutually prefer each other to their assigned partners in the matching. We show that any communication protocol (deterministic, nondeterministic, or randomized) that correctly outputs a stable matching requires $\Omega(n^2)$ bits of communication. Thus, the original algorithm of Gale and Shapley is communication-optimal up to a logarithmic factor. We then introduce a "divorce metric" on the set of all matchings, which allows us to consider approximately stable matchings. We describe an efficient algorithm to compute the "distance to stability" of a given matching. We then show that even under the relaxed requirement that a protocol only yield an approximate stable matching, the $\Omega(n^2)$ communication lower bound still holds.
2311.03367
Felix Hoops
Felix Hoops, Alexander M\"uhle, Florian Matthes, Christoph Meinel
A Taxonomy of Decentralized Identifier Methods for Practitioners
null
2023 IEEE International Conference on Decentralized Applications and Infrastructures (DAPPS), Athens, Greece, 2023, pp. 57-65
10.1109/DAPPS57946.2023.00017
null
cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
A core part of the new identity management paradigm of Self-Sovereign Identity (SSI) is the W3C Decentralized Identifiers (DIDs) standard. The diversity of interoperable implementations encouraged by the paradigm is key for a less centralized future, and it is made possible by the concept of DIDs. However, this leads to a dilemma of choice, where practitioners are faced with the difficult decision of which methods to choose and support in their applications. Due to the decentralized development of DID method specifications and the overwhelming number of different choices, it is hard to get an overview. In this paper, we propose a taxonomy of DID methods with the goal of empowering practitioners to make informed decisions when selecting DID methods. To that end, our taxonomy is designed to provide an overview of the current landscape while providing adoption-relevant characteristics. For this purpose, we rely on the Nickerson et al. methodology for taxonomy creation, utilizing both conceptual-to-empirical and empirical-to-conceptual approaches. During the iterative process, we collect and survey an extensive and potentially exhaustive list of around 160 DID methods from various sources. The taxonomy we arrive at uses a total of 7 dimensions and 22 characteristics to span the contemporary design space of DID methods from the perspective of a practitioner. In addition to elaborating on these characteristics, we also discuss how a practitioner can use the taxonomy to select suitable DID methods for a specific use case.
[ { "created": "Wed, 18 Oct 2023 13:01:40 GMT", "version": "v1" } ]
2023-11-08
[ [ "Hoops", "Felix", "" ], [ "Mühle", "Alexander", "" ], [ "Matthes", "Florian", "" ], [ "Meinel", "Christoph", "" ] ]
A core part of the new identity management paradigm of Self-Sovereign Identity (SSI) is the W3C Decentralized Identifiers (DIDs) standard. The diversity of interoperable implementations encouraged by the paradigm is key for a less centralized future, and it is made possible by the concept of DIDs. However, this leads to a dilemma of choice, where practitioners are faced with the difficult decision of which methods to choose and support in their applications. Due to the decentralized development of DID method specifications and the overwhelming number of different choices, it is hard to get an overview. In this paper, we propose a taxonomy of DID methods with the goal of empowering practitioners to make informed decisions when selecting DID methods. To that end, our taxonomy is designed to provide an overview of the current landscape while providing adoption-relevant characteristics. For this purpose, we rely on the Nickerson et al. methodology for taxonomy creation, utilizing both conceptual-to-empirical and empirical-to-conceptual approaches. During the iterative process, we collect and survey an extensive and potentially exhaustive list of around 160 DID methods from various sources. The taxonomy we arrive at uses a total of 7 dimensions and 22 characteristics to span the contemporary design space of DID methods from the perspective of a practitioner. In addition to elaborating on these characteristics, we also discuss how a practitioner can use the taxonomy to select suitable DID methods for a specific use case.
2005.06213
Giulio Ermanno Pibiri
Simon Gog, Giulio Ermanno Pibiri, and Rossano Venturini
Efficient and Effective Query Auto-Completion
Published in SIGIR 2020
SIGIR 2020: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. July 2020. Pages 2271-2280
10.1145/3397271.3401432
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Query Auto-Completion (QAC) is a ubiquitous feature of modern textual search systems, suggesting possible ways of completing the query being typed by the user. Efficiency is crucial for the system to achieve real-time responsiveness when operating in a million-scale search space. Prior work has extensively advocated the use of a trie data structure for fast prefix-search operations in compact space. However, searching by prefix has little discovery power in that only completions that are prefixed by the query are returned. This may negatively impact the effectiveness of the QAC system, with a consequent monetary loss for real applications like Web Search Engines and eCommerce. In this work we describe the implementation that empowers a new QAC system at eBay, and discuss its efficiency/effectiveness in relation to other approaches at the state of the art. The solution is based on the combination of an inverted index with succinct data structures, a much less explored direction in the literature. This system is replacing the previous implementation based on Apache SOLR, which was not always able to meet the required service-level agreement.
[ { "created": "Wed, 13 May 2020 09:07:43 GMT", "version": "v1" }, { "created": "Wed, 10 Jun 2020 08:28:57 GMT", "version": "v2" } ]
2022-02-08
[ [ "Gog", "Simon", "" ], [ "Pibiri", "Giulio Ermanno", "" ], [ "Venturini", "Rossano", "" ] ]
Query Auto-Completion (QAC) is a ubiquitous feature of modern textual search systems, suggesting possible ways of completing the query being typed by the user. Efficiency is crucial for the system to achieve real-time responsiveness when operating in a million-scale search space. Prior work has extensively advocated the use of a trie data structure for fast prefix-search operations in compact space. However, searching by prefix has little discovery power in that only completions that are prefixed by the query are returned. This may negatively impact the effectiveness of the QAC system, with a consequent monetary loss for real applications like Web Search Engines and eCommerce. In this work we describe the implementation that empowers a new QAC system at eBay, and discuss its efficiency/effectiveness in relation to other approaches at the state of the art. The solution is based on the combination of an inverted index with succinct data structures, a much less explored direction in the literature. This system is replacing the previous implementation based on Apache SOLR, which was not always able to meet the required service-level agreement.
1803.00005
Longquan Dai
Longquan Dai, Mengke Yuan, Zechao Li, Xiaopeng Zhang, Jinhui Tang
Hardware-Efficient Guided Image Filtering For Multi-Label Problem
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Guided Filter (GF) is well known for its linear complexity. However, when filtering an image with an n-channel guidance, GF needs to invert an n x n matrix for each pixel. To the best of our knowledge, existing matrix inversion algorithms are inefficient on current hardware. This shortcoming limits applications of multichannel guidance in computation-intensive systems such as multi-label systems. We need a new GF-like filter that can perform fast multichannel guided image filtering. Since the optimal linear complexity of GF cannot be reduced further, the only way forward is to bring the full potential of current parallel computing hardware into play. In this paper we propose a hardware-efficient Guided Filter (HGF), which solves the efficiency problem of multichannel guided image filtering and yields competent results when applied to multi-label problems with synthesized polynomial multichannel guidance. Specifically, in order to boost the filtering performance, HGF adopts a new matrix inversion algorithm which only involves two hardware-efficient operations: element-wise arithmetic calculations and box filtering. In order to break the linear model restriction, HGF synthesizes a polynomial multichannel guidance to introduce nonlinearity. Benefiting from our polynomial guidance and hardware-efficient matrix inversion algorithm, HGF not only is more sensitive to the underlying structure of the guidance but also achieves the fastest computing speed. Due to these merits, HGF obtains state-of-the-art results in terms of accuracy and efficiency in the computation-intensive multi-label
[ { "created": "Wed, 28 Feb 2018 07:27:43 GMT", "version": "v1" } ]
2018-03-02
[ [ "Dai", "Longquan", "" ], [ "Yuan", "Mengke", "" ], [ "Li", "Zechao", "" ], [ "Zhang", "Xiaopeng", "" ], [ "Tang", "Jinhui", "" ] ]
The Guided Filter (GF) is well known for its linear complexity. However, when filtering an image with an n-channel guidance, GF needs to invert an n x n matrix for each pixel. To the best of our knowledge, existing matrix inversion algorithms are inefficient on current hardware. This shortcoming limits applications of multichannel guidance in computation-intensive systems such as multi-label systems. We need a new GF-like filter that can perform fast multichannel guided image filtering. Since the optimal linear complexity of GF cannot be reduced further, the only way forward is to bring the full potential of current parallel computing hardware into play. In this paper we propose a hardware-efficient Guided Filter (HGF), which solves the efficiency problem of multichannel guided image filtering and yields competent results when applied to multi-label problems with synthesized polynomial multichannel guidance. Specifically, in order to boost the filtering performance, HGF adopts a new matrix inversion algorithm which only involves two hardware-efficient operations: element-wise arithmetic calculations and box filtering. In order to break the linear model restriction, HGF synthesizes a polynomial multichannel guidance to introduce nonlinearity. Benefiting from our polynomial guidance and hardware-efficient matrix inversion algorithm, HGF not only is more sensitive to the underlying structure of the guidance but also achieves the fastest computing speed. Due to these merits, HGF obtains state-of-the-art results in terms of accuracy and efficiency in the computation-intensive multi-label
2405.08760
Akhila Yerukola
Akhila Yerukola, Saujas Vaduguru, Daniel Fried, Maarten Sap
Is the Pope Catholic? Yes, the Pope is Catholic. Generative Evaluation of Non-Literal Intent Resolution in LLMs
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans often express their communicative intents indirectly or non-literally, which requires their interlocutors -- human or AI -- to understand beyond the literal meaning of words. While most existing work has focused on discriminative evaluations, we present a new approach to generatively evaluate large language models' (LLMs') intention understanding by examining their responses to non-literal utterances. Ideally, an LLM should respond in line with the true intention of a non-literal utterance, not its literal interpretation. Our findings show that LLMs struggle to generate pragmatically relevant responses to non-literal language, achieving only 50-55% accuracy on average. While explicitly providing oracle intentions significantly improves performance (e.g., 75% for Mistral-Instruct), this still indicates challenges in leveraging given intentions to produce appropriate responses. Using chain-of-thought to make models spell out intentions yields much smaller gains (60% for Mistral-Instruct). These findings suggest that LLMs are not yet effective pragmatic interlocutors, highlighting the need for better approaches for modeling intentions and utilizing them for pragmatic generation.
[ { "created": "Tue, 14 May 2024 16:48:56 GMT", "version": "v1" }, { "created": "Wed, 19 Jun 2024 19:07:47 GMT", "version": "v2" } ]
2024-06-21
[ [ "Yerukola", "Akhila", "" ], [ "Vaduguru", "Saujas", "" ], [ "Fried", "Daniel", "" ], [ "Sap", "Maarten", "" ] ]
Humans often express their communicative intents indirectly or non-literally, which requires their interlocutors -- human or AI -- to understand beyond the literal meaning of words. While most existing work has focused on discriminative evaluations, we present a new approach to generatively evaluate large language models' (LLMs') intention understanding by examining their responses to non-literal utterances. Ideally, an LLM should respond in line with the true intention of a non-literal utterance, not its literal interpretation. Our findings show that LLMs struggle to generate pragmatically relevant responses to non-literal language, achieving only 50-55% accuracy on average. While explicitly providing oracle intentions significantly improves performance (e.g., 75% for Mistral-Instruct), this still indicates challenges in leveraging given intentions to produce appropriate responses. Using chain-of-thought to make models spell out intentions yields much smaller gains (60% for Mistral-Instruct). These findings suggest that LLMs are not yet effective pragmatic interlocutors, highlighting the need for better approaches for modeling intentions and utilizing them for pragmatic generation.
2301.01221
Ali Rahimi
Vesal Ahsani, Ali Rahimi, Mehdi Letafati, Babak Hossein Khalaj
Unlocking Metaverse-as-a-Service. The three pillars to watch: Privacy and Security, Edge Computing, and Blockchain
21 pages, 4 figures, added references for section 3-A
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, the authors provide a comprehensive overview of three core pillars of metaverse-as-a-service (MaaS) platforms: privacy and security, edge computing, and blockchain technology. The article starts by investigating security aspects of wireless access to the metaverse. Then it goes through the privacy and security issues inside the metaverse from data-centric, learning-centric, and human-centric points of view. The authors address private and secure mechanisms for privatizing sensitive data attributes and securing machine learning algorithms running in a distributed manner within the metaverse platforms. Novel visions and less-investigated methods are reviewed to help mobile network operators and metaverse service providers facilitate the realization of secure and private MaaS through different layers of the metaverse, ranging from the access layer to the social interactions among clients. Later in the article, it is explained how the paradigm of edge computing can strengthen different aspects of the metaverse, and the challenges of using edge computing in the metaverse are comprehensively investigated. Additionally, the paper comprehensively investigates and analyzes 10 main challenges of MaaS platforms and thoroughly discusses how blockchain technology provides solutions for these constraints. Finally, future visions and directions, such as content-centric security and the zero-trust metaverse, as well as some of blockchain's unsolved challenges, are discussed to bring further insights for network designers in the metaverse era.
[ { "created": "Sun, 1 Jan 2023 15:34:18 GMT", "version": "v1" }, { "created": "Wed, 11 Jan 2023 22:38:32 GMT", "version": "v2" } ]
2023-01-13
[ [ "Ahsani", "Vesal", "" ], [ "Rahimi", "Ali", "" ], [ "Letafati", "Mehdi", "" ], [ "Khalaj", "Babak Hossein", "" ] ]
In this article, the authors provide a comprehensive overview of three core pillars of metaverse-as-a-service (MaaS) platforms: privacy and security, edge computing, and blockchain technology. The article starts by investigating security aspects of wireless access to the metaverse. Then it goes through the privacy and security issues inside the metaverse from data-centric, learning-centric, and human-centric points of view. The authors address private and secure mechanisms for privatizing sensitive data attributes and securing machine learning algorithms running in a distributed manner within the metaverse platforms. Novel visions and less-investigated methods are reviewed to help mobile network operators and metaverse service providers facilitate the realization of secure and private MaaS through different layers of the metaverse, ranging from the access layer to the social interactions among clients. Later in the article, it is explained how the paradigm of edge computing can strengthen different aspects of the metaverse, and the challenges of using edge computing in the metaverse are comprehensively investigated. Additionally, the paper comprehensively investigates and analyzes 10 main challenges of MaaS platforms and thoroughly discusses how blockchain technology provides solutions for these constraints. Finally, future visions and directions, such as content-centric security and the zero-trust metaverse, as well as some of blockchain's unsolved challenges, are discussed to bring further insights for network designers in the metaverse era.
2003.04826
Syed Mohammed Arshad Zaidi
Anuj Sharma, Syed Mohammed Arshad Zaidi
Optimizations to the Parallel Breadth First Search on Distributed Memory
5 pages
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graphs and their traversal are becoming significant as they are applicable to various areas of mathematics, science, and technology. Various problems in fields as varied as biochemistry (genomics), electrical engineering (communication networks), and computer science (algorithms and computation) can be modeled as graph problems. Real-world scenarios, including communities, their interconnections, and related properties, can be studied using graphs. Fast, scalable, low-cost execution of parallel graph algorithms is therefore very important. In this implementation of parallel breadth-first search of graphs, we implemented the parallel BFS algorithm with 1-D partitioning of the graph as described in [2] and reduced execution time by optimizing communication for local buffers.
[ { "created": "Tue, 10 Mar 2020 16:11:46 GMT", "version": "v1" } ]
2020-03-11
[ [ "Sharma", "Anuj", "" ], [ "Zaidi", "Syed Mohammed Arshad", "" ] ]
Graphs and their traversal are becoming significant as they are applicable to various areas of mathematics, science, and technology. Various problems in fields as varied as biochemistry (genomics), electrical engineering (communication networks), and computer science (algorithms and computation) can be modeled as graph problems. Real-world scenarios, including communities, their interconnections, and related properties, can be studied using graphs. Fast, scalable, low-cost execution of parallel graph algorithms is therefore very important. In this implementation of parallel breadth-first search of graphs, we implemented the parallel BFS algorithm with 1-D partitioning of the graph as described in [2] and reduced execution time by optimizing communication for local buffers.
1211.4206
Pascal Frossard
Enrico Magli, Mea Wang, Pascal Frossard, Athina Markopoulou
Network Coding Meets Multimedia: a Review
Part of this work is under publication in IEEE Transactions on Multimedia
null
10.1109/TMM.2013.2241415
null
cs.MM cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While every network node only relays messages in a traditional communication system, the recent network coding (NC) paradigm proposes to implement simple in-network processing with packet combinations in the nodes. NC extends the concept of "encoding" a message beyond source coding (for compression) and channel coding (for protection against errors and losses). It has been shown to increase network throughput compared to traditional network implementations, to reduce delay, and to provide robustness to transmission errors and network dynamics. These features are so appealing for multimedia applications that they have spurred a large research effort towards the development of multimedia-specific NC techniques. This paper reviews the recent work in NC for multimedia applications and focuses on the techniques that fill the gap between NC theory and practical applications. It outlines the benefits of NC and presents the open challenges in this area. The paper initially focuses on multimedia-specific aspects of network coding, in particular delay, in-network error control, and media-specific error control. These aspects permit to handle varying network conditions as well as client heterogeneity, which are critical to the design and deployment of multimedia systems. After introducing these general concepts, the paper reviews in detail two applications that lend themselves naturally to NC via the cooperation and broadcast models, namely peer-to-peer multimedia streaming and wireless networking.
[ { "created": "Sun, 18 Nov 2012 10:43:54 GMT", "version": "v1" } ]
2016-11-15
[ [ "Magli", "Enrico", "" ], [ "Wang", "Mea", "" ], [ "Frossard", "Pascal", "" ], [ "Markopoulou", "Athina", "" ] ]
While every network node only relays messages in a traditional communication system, the recent network coding (NC) paradigm proposes to implement simple in-network processing with packet combinations in the nodes. NC extends the concept of "encoding" a message beyond source coding (for compression) and channel coding (for protection against errors and losses). It has been shown to increase network throughput compared to traditional network implementations, to reduce delay, and to provide robustness to transmission errors and network dynamics. These features are so appealing for multimedia applications that they have spurred a large research effort towards the development of multimedia-specific NC techniques. This paper reviews the recent work in NC for multimedia applications and focuses on the techniques that fill the gap between NC theory and practical applications. It outlines the benefits of NC and presents the open challenges in this area. The paper initially focuses on multimedia-specific aspects of network coding, in particular delay, in-network error control, and media-specific error control. These aspects permit to handle varying network conditions as well as client heterogeneity, which are critical to the design and deployment of multimedia systems. After introducing these general concepts, the paper reviews in detail two applications that lend themselves naturally to NC via the cooperation and broadcast models, namely peer-to-peer multimedia streaming and wireless networking.
2205.06360
William Schultz
William Schultz, Ian Dardik, Stavros Tripakis
Plain and Simple Inductive Invariant Inference for Distributed Protocols in TLA+
null
null
null
null
cs.LO cs.DC
http://creativecommons.org/licenses/by/4.0/
We present a new technique for automatically inferring inductive invariants of parameterized distributed protocols specified in TLA+. Ours is the first such invariant inference technique to work directly on TLA+, an expressive, high-level specification language. To achieve this, we present a new algorithm for invariant inference that is based around a core procedure for generating plain, potentially non-inductive lemma invariants that are used as candidate conjuncts of an overall inductive invariant. We couple this with a greedy lemma invariant selection procedure that selects lemmas that eliminate the largest number of counterexamples to induction at each round of our inference procedure. We have implemented our algorithm in a tool, endive, and evaluate it on a diverse set of distributed protocol benchmarks, demonstrating competitive performance and the ability to uniquely solve an industrial-scale reconfiguration protocol.
[ { "created": "Thu, 12 May 2022 20:53:44 GMT", "version": "v1" }, { "created": "Sat, 1 Oct 2022 15:43:09 GMT", "version": "v2" } ]
2022-10-04
[ [ "Schultz", "William", "" ], [ "Dardik", "Ian", "" ], [ "Tripakis", "Stavros", "" ] ]
We present a new technique for automatically inferring inductive invariants of parameterized distributed protocols specified in TLA+. Ours is the first such invariant inference technique to work directly on TLA+, an expressive, high-level specification language. To achieve this, we present a new algorithm for invariant inference that is based around a core procedure for generating plain, potentially non-inductive lemma invariants that are used as candidate conjuncts of an overall inductive invariant. We couple this with a greedy lemma invariant selection procedure that selects lemmas that eliminate the largest number of counterexamples to induction at each round of our inference procedure. We have implemented our algorithm in a tool, endive, and evaluate it on a diverse set of distributed protocol benchmarks, demonstrating competitive performance and the ability to uniquely solve an industrial-scale reconfiguration protocol.
2303.13015
Marc Katzef
Marc Katzef, Andrew C. Cullen, Tansu Alpcan, Christopher Leckie, Justin Kopacz
Failure-tolerant Distributed Learning for Anomaly Detection in Wireless Networks
null
null
null
null
cs.LG cs.AI cs.DC
http://creativecommons.org/licenses/by/4.0/
The analysis of distributed techniques is often focused upon their efficiency, without considering their robustness (or lack thereof). Such a consideration is particularly important when devices or central servers can fail, which can potentially cripple distributed systems. When such failures arise in wireless communications networks, important services that they use/provide (like anomaly detection) can be left inoperable and can result in a cascade of security problems. In this paper, we present a novel method to address these risks by combining both flat- and star-topologies, combining the performance and reliability benefits of both. We refer to this method as "Tol-FL", due to its increased failure-tolerance as compared to the technique of Federated Learning. Our approach both limits device failure risks while outperforming prior methods by up to 8% in terms of anomaly detection AUROC in a range of realistic settings that consider client as well as server failure, all while reducing communication costs. This performance demonstrates that Tol-FL is a highly suitable method for distributed model training for anomaly detection, especially in the domain of wireless networks.
[ { "created": "Thu, 23 Mar 2023 03:39:12 GMT", "version": "v1" } ]
2023-03-24
[ [ "Katzef", "Marc", "" ], [ "Cullen", "Andrew C.", "" ], [ "Alpcan", "Tansu", "" ], [ "Leckie", "Christopher", "" ], [ "Kopacz", "Justin", "" ] ]
The analysis of distributed techniques is often focused upon their efficiency, without considering their robustness (or lack thereof). Such a consideration is particularly important when devices or central servers can fail, which can potentially cripple distributed systems. When such failures arise in wireless communications networks, important services that they use/provide (like anomaly detection) can be left inoperable and can result in a cascade of security problems. In this paper, we present a novel method to address these risks by combining both flat- and star-topologies, combining the performance and reliability benefits of both. We refer to this method as "Tol-FL", due to its increased failure-tolerance as compared to the technique of Federated Learning. Our approach both limits device failure risks while outperforming prior methods by up to 8% in terms of anomaly detection AUROC in a range of realistic settings that consider client as well as server failure, all while reducing communication costs. This performance demonstrates that Tol-FL is a highly suitable method for distributed model training for anomaly detection, especially in the domain of wireless networks.
2301.06549
Muhammad Mahboob Ur Rahman
Rabia Ahmed, Ahsan Mehmood, Muhammad Mahboob Ur Rahman, Octavia A. Dobre
A Deep Learning & Fast Wavelet Transform-based Hybrid Approach for Denoising of PPG Signals
4 pages, 8 figures
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
This letter presents a novel hybrid method that leverages deep learning to exploit the multi-resolution analysis capability of wavelets in order to denoise a photoplethysmography (PPG) signal. Under the proposed method, a noisy PPG sequence of length N is first decomposed into L detailed coefficients using the fast wavelet transform (FWT). Then, the clean PPG sequence is reconstructed as follows. A custom feedforward neural network (FFNN) provides the binary weights for each of the wavelet sub-signals outputted by the inverse-FWT block. This way, all those sub-signals which correspond to noise or artefacts are discarded during reconstruction. The FFNN is trained on the Beth Israel Deaconess Medical Center (BIDMC) dataset under the supervised learning framework, whereby we compute the mean squared error (MSE) between the denoised sequence and the reference clean PPG signal, and compute the gradient of the MSE for back-propagation. Numerical results show that the proposed method effectively denoises corrupted PPG and video-PPG signals.
[ { "created": "Mon, 16 Jan 2023 18:27:12 GMT", "version": "v1" } ]
2023-01-18
[ [ "Ahmed", "Rabia", "" ], [ "Mehmood", "Ahsan", "" ], [ "Rahman", "Muhammad Mahboob Ur", "" ], [ "Dobre", "Octavia A.", "" ] ]
This letter presents a novel hybrid method that leverages deep learning to exploit the multi-resolution analysis capability of wavelets in order to denoise a photoplethysmography (PPG) signal. Under the proposed method, a noisy PPG sequence of length N is first decomposed into L detailed coefficients using the fast wavelet transform (FWT). Then, the clean PPG sequence is reconstructed as follows. A custom feedforward neural network (FFNN) provides the binary weights for each of the wavelet sub-signals outputted by the inverse-FWT block. This way, all those sub-signals which correspond to noise or artefacts are discarded during reconstruction. The FFNN is trained on the Beth Israel Deaconess Medical Center (BIDMC) dataset under the supervised learning framework, whereby we compute the mean squared error (MSE) between the denoised sequence and the reference clean PPG signal, and compute the gradient of the MSE for back-propagation. Numerical results show that the proposed method effectively denoises corrupted PPG and video-PPG signals.
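The decompose / binary-gate / reconstruct pipeline the abstract describes can be sketched with a single-level Haar transform. The paper learns the binary mask with an FFNN and uses a deeper wavelet decomposition; here the mask is hard-coded and the wavelet is the simplest one, purely for illustration.

```python
import numpy as np

# Sketch of the idea: decompose a signal into approximation and detail
# sub-bands, zero out sub-bands flagged as noise by a binary weight
# (learned by an FFNN in the paper; hard-coded here), then reconstruct.

def haar_dwt(x):
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def gated_reconstruct(x, keep_detail):
    """Reconstruct x, keeping or discarding the detail sub-band."""
    a, d = haar_dwt(x)
    return haar_idwt(a, d * (1.0 if keep_detail else 0.0))
```

Keeping the detail band reproduces the input exactly (the transform is orthogonal); discarding it yields a smoothed signal, which is the mechanism by which noisy sub-bands are dropped.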
2212.00548
Ting Zhang
Ting Zhang, DongGyun Han, Venkatesh Vinayakarao, Ivana Clairine Irsan, Bowen Xu, Ferdian Thung, David Lo, Lingxiao Jiang
Duplicate Bug Report Detection: How Far Are We?
Accepted by ACM Transactions on Software Engineering and Methodology
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many Duplicate Bug Report Detection (DBRD) techniques have been proposed in the research literature, and industry uses some other techniques. Unfortunately, there is insufficient comparison among them, and it is unclear how far we have come. This work fills this gap by comparing the aforementioned techniques. To compare them, we first need a benchmark that can estimate how a tool would perform if applied in a realistic setting today. Thus, we first investigated potential biases that affect the fair comparison of the accuracy of DBRD techniques. Our experiments suggest that data age and issue tracking system choice cause a significant difference. Based on these findings, we prepared a new benchmark. We then used it to evaluate DBRD techniques to better estimate how far we have come. Surprisingly, a simpler technique outperforms recently proposed sophisticated techniques on most projects in our benchmark. In addition, we compared the DBRD techniques proposed in research with those used in Mozilla and VSCode, and we observe that a simple technique already adopted in practice can achieve results comparable to a recently proposed research tool. Our study reflects on the current state of DBRD, and we share our insights to benefit future DBRD research.
[ { "created": "Thu, 1 Dec 2022 14:54:45 GMT", "version": "v1" } ]
2022-12-02
[ [ "Zhang", "Ting", "" ], [ "Han", "DongGyun", "" ], [ "Vinayakarao", "Venkatesh", "" ], [ "Irsan", "Ivana Clairine", "" ], [ "Xu", "Bowen", "" ], [ "Thung", "Ferdian", "" ], [ "Lo", "David", "" ], [ "Jiang", "Lingxiao", "" ] ]
Many Duplicate Bug Report Detection (DBRD) techniques have been proposed in the research literature, and industry uses some other techniques. Unfortunately, there is insufficient comparison among them, and it is unclear how far we have come. This work fills this gap by comparing the aforementioned techniques. To compare them, we first need a benchmark that can estimate how a tool would perform if applied in a realistic setting today. Thus, we first investigated potential biases that affect the fair comparison of the accuracy of DBRD techniques. Our experiments suggest that data age and issue tracking system choice cause a significant difference. Based on these findings, we prepared a new benchmark. We then used it to evaluate DBRD techniques to better estimate how far we have come. Surprisingly, a simpler technique outperforms recently proposed sophisticated techniques on most projects in our benchmark. In addition, we compared the DBRD techniques proposed in research with those used in Mozilla and VSCode, and we observe that a simple technique already adopted in practice can achieve results comparable to a recently proposed research tool. Our study reflects on the current state of DBRD, and we share our insights to benefit future DBRD research.
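The "simpler technique" family the study finds competitive is classic information retrieval: rank candidate reports by textual similarity to the incoming one. The toy ranker below uses plain bag-of-words cosine similarity; real systems weight terms more carefully (e.g. BM25-style), so treat this as a minimal sketch, not the paper's benchmarked method.

```python
import math
from collections import Counter

# Toy duplicate-bug-report ranker: score each candidate report by
# bag-of-words cosine similarity to the new report, highest first.

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_duplicates(new_report, candidates):
    """Return candidates sorted from most to least similar."""
    return sorted(candidates, key=lambda c: cosine(new_report, c), reverse=True)
```

A benchmark in the spirit of the paper would feed each incoming report to such a ranker and check whether the true duplicate master appears in the top-k results.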
2109.00833
Dominik Martin
Dominik Martin, Niklas K\"uhl, Marcel Schwenk
Towards a Reference Architecture for Future Industrial Internet of Things Networks
null
Proceedings of the 2021 IEEE 23rd Conference on Business Informatics (CBI) Vol. 2
10.1109/CBI52690.2021.10049
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As sensor prices continue to fall and the communication and analytical capabilities of modern Internet of Things devices increase, the amount of continuously generated data keeps growing. Various use cases show the untapped potential of this data for new business models. However, conventional industrial IT networks of traditional manufacturing companies can hardly meet the modern requirements emerging with today's and future industrial Internet of Things applications. Outdated and rigid network infrastructures are one of the main reasons for hesitant innovation efforts and cross-organizational collaborations, as well as for the slow adoption of modern business models by traditional manufacturing companies. Following the design science research paradigm, our work contributes a comprehensive list of requirements for future industrial Internet of Things networks from a theoretical and practical perspective, as well as a proposed reference architecture acting as a blueprint for future implementations.
[ { "created": "Thu, 2 Sep 2021 10:33:53 GMT", "version": "v1" } ]
2021-09-03
[ [ "Martin", "Dominik", "" ], [ "Kühl", "Niklas", "" ], [ "Schwenk", "Marcel", "" ] ]
As sensor prices continue to fall and the communication and analytical capabilities of modern Internet of Things devices increase, the amount of continuously generated data keeps growing. Various use cases show the untapped potential of this data for new business models. However, conventional industrial IT networks of traditional manufacturing companies can hardly meet the modern requirements emerging with today's and future industrial Internet of Things applications. Outdated and rigid network infrastructures are one of the main reasons for hesitant innovation efforts and cross-organizational collaborations, as well as for the slow adoption of modern business models by traditional manufacturing companies. Following the design science research paradigm, our work contributes a comprehensive list of requirements for future industrial Internet of Things networks from a theoretical and practical perspective, as well as a proposed reference architecture acting as a blueprint for future implementations.
1509.03614
Karla Saur
Karla Saur and Joseph Collard and Nate Foster and Arjun Guha and Laurent Vanbever and Michael Hicks
Morpheus: Safe and Flexible Dynamic Updates for SDNs
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
SDN controllers must be periodically modified to add features, improve performance, and fix bugs, but current techniques for implementing dynamic updates are inadequate. Simply halting old controllers and bringing up new ones can cause state to be lost, which often leads to incorrect behavior; e.g., if the state represents hosts blacklisted by a firewall, then traffic that should be blocked may be allowed to pass through. Techniques based on record and replay can reconstruct state automatically, but they are expensive to deploy and can lead to incorrect behavior. Problematic scenarios are especially likely to arise in distributed controllers and with semantics-altering updates. This paper presents a new approach to implementing dynamic controller updates based on explicit state transfer. Instead of attempting to infer state changes automatically (an approach that is expensive and fundamentally incomplete), our framework gives programmers effective tools for implementing correct updates that avoid major disruptions. We develop primitives that enable programmers to directly (and easily, in most cases) initialize the new controller's state as a function of old state, and we design protocols that ensure consistent behavior during the transition. We also present a prototype implementation called Morpheus, and evaluate its effectiveness on representative case studies.
[ { "created": "Fri, 11 Sep 2015 19:10:43 GMT", "version": "v1" } ]
2015-09-14
[ [ "Saur", "Karla", "" ], [ "Collard", "Joseph", "" ], [ "Foster", "Nate", "" ], [ "Guha", "Arjun", "" ], [ "Vanbever", "Laurent", "" ], [ "Hicks", "Michael", "" ] ]
SDN controllers must be periodically modified to add features, improve performance, and fix bugs, but current techniques for implementing dynamic updates are inadequate. Simply halting old controllers and bringing up new ones can cause state to be lost, which often leads to incorrect behavior; e.g., if the state represents hosts blacklisted by a firewall, then traffic that should be blocked may be allowed to pass through. Techniques based on record and replay can reconstruct state automatically, but they are expensive to deploy and can lead to incorrect behavior. Problematic scenarios are especially likely to arise in distributed controllers and with semantics-altering updates. This paper presents a new approach to implementing dynamic controller updates based on explicit state transfer. Instead of attempting to infer state changes automatically (an approach that is expensive and fundamentally incomplete), our framework gives programmers effective tools for implementing correct updates that avoid major disruptions. We develop primitives that enable programmers to directly (and easily, in most cases) initialize the new controller's state as a function of old state, and we design protocols that ensure consistent behavior during the transition. We also present a prototype implementation called Morpheus, and evaluate its effectiveness on representative case studies.
2109.02199
Nan Xue
Rujiao Long and Wen Wang and Nan Xue and Feiyu Gao and Zhibo Yang and Yongpan Wang and Gui-Song Xia
Parsing Table Structures in the Wild
Accepted to ICCV 2021
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper tackles the problem of table structure parsing (TSP) from images in the wild. In contrast to existing studies that mainly focus on parsing well-aligned tabular images with simple layouts from scanned PDF documents, we aim to establish a practical table structure parsing system for real-world scenarios where tabular input images are taken or scanned with severe deformation, bending or occlusions. For designing such a system, we propose an approach named Cycle-CenterNet on top of CenterNet with a novel cycle-pairing module to simultaneously detect and group tabular cells into structured tables. In the cycle-pairing module, a new pairing loss function is proposed for network training. Alongside our Cycle-CenterNet, we also present a large-scale dataset, named Wired Table in the Wild (WTW), which includes well-annotated structure parsing of tables of multiple styles in several scenes such as photos, scanned files, and web pages. In experiments, we demonstrate that our Cycle-CenterNet consistently achieves the best accuracy of table structure parsing on the new WTW dataset, with a 24.6\% absolute improvement under the TEDS metric. A more comprehensive experimental analysis also validates the advantages of our proposed methods for the TSP task.
[ { "created": "Mon, 6 Sep 2021 01:05:48 GMT", "version": "v1" } ]
2021-09-07
[ [ "Long", "Rujiao", "" ], [ "Wang", "Wen", "" ], [ "Xue", "Nan", "" ], [ "Gao", "Feiyu", "" ], [ "Yang", "Zhibo", "" ], [ "Wang", "Yongpan", "" ], [ "Xia", "Gui-Song", "" ] ]
This paper tackles the problem of table structure parsing (TSP) from images in the wild. In contrast to existing studies that mainly focus on parsing well-aligned tabular images with simple layouts from scanned PDF documents, we aim to establish a practical table structure parsing system for real-world scenarios where tabular input images are taken or scanned with severe deformation, bending or occlusions. For designing such a system, we propose an approach named Cycle-CenterNet on top of CenterNet with a novel cycle-pairing module to simultaneously detect and group tabular cells into structured tables. In the cycle-pairing module, a new pairing loss function is proposed for network training. Alongside our Cycle-CenterNet, we also present a large-scale dataset, named Wired Table in the Wild (WTW), which includes well-annotated structure parsing of tables of multiple styles in several scenes such as photos, scanned files, and web pages. In experiments, we demonstrate that our Cycle-CenterNet consistently achieves the best accuracy of table structure parsing on the new WTW dataset, with a 24.6\% absolute improvement under the TEDS metric. A more comprehensive experimental analysis also validates the advantages of our proposed methods for the TSP task.
2303.14603
Hina Qayyum
Hina Qayyum, Benjamin Zi Hao Zhao, Ian D. Wood, Muhammad Ikram, Mohamed Ali Kaafar, Nicolas Kourtellis
A longitudinal study of the top 1% toxic Twitter profiles
null
null
10.1145/3578503.3583619
null
cs.SI cs.CY
http://creativecommons.org/licenses/by-sa/4.0/
Toxicity is endemic to online social networks, including Twitter. It follows a Pareto-like distribution, where most of the toxicity is generated by a very small number of profiles; as such, analyzing and characterizing these toxic profiles is critical. Prior research has largely focused on sporadic, event-centric toxic content to characterize toxicity on the platform. Instead, we approach the problem of characterizing toxic content from a profile-centric point of view. We study 143K Twitter profiles and focus on the behavior of the top 1 percent producers of toxic content on Twitter, based on toxicity scores of their tweets provided by the Perspective API. With a total of 293M tweets spanning 16 years of activity, the longitudinal data allow us to reconstruct the timelines of all profiles involved. We use these timelines to gauge the behavior of the most toxic Twitter profiles compared to the rest of the Twitter population. We study the pattern of tweet posting from highly toxic accounts, based on their frequency and how prolific they are, the nature of their hashtags and URLs, profile metadata, and Botometer scores. We find that the highly toxic profiles post coherent and well-articulated content, their tweets keep to a narrow theme with lower diversity in hashtags, URLs, and domains, they are thematically similar to each other, and they have a high likelihood of bot-like behavior, likely having progenitors with intentions to influence, based on high fake-follower scores. Our work contributes insight into the top 1 percent of toxic profiles on Twitter and shows that the profile-centric approach to investigating toxicity on Twitter is beneficial.
[ { "created": "Sun, 26 Mar 2023 01:55:28 GMT", "version": "v1" } ]
2023-03-28
[ [ "Qayyum", "Hina", "" ], [ "Zhao", "Benjamin Zi Hao", "" ], [ "Wood", "Ian D.", "" ], [ "Ikram", "Muhammad", "" ], [ "Kaafar", "Mohamed Ali", "" ], [ "Kourtellis", "Nicolas", "" ] ]
Toxicity is endemic to online social networks, including Twitter. It follows a Pareto-like distribution, where most of the toxicity is generated by a very small number of profiles; as such, analyzing and characterizing these toxic profiles is critical. Prior research has largely focused on sporadic, event-centric toxic content to characterize toxicity on the platform. Instead, we approach the problem of characterizing toxic content from a profile-centric point of view. We study 143K Twitter profiles and focus on the behavior of the top 1 percent producers of toxic content on Twitter, based on toxicity scores of their tweets provided by the Perspective API. With a total of 293M tweets spanning 16 years of activity, the longitudinal data allow us to reconstruct the timelines of all profiles involved. We use these timelines to gauge the behavior of the most toxic Twitter profiles compared to the rest of the Twitter population. We study the pattern of tweet posting from highly toxic accounts, based on their frequency and how prolific they are, the nature of their hashtags and URLs, profile metadata, and Botometer scores. We find that the highly toxic profiles post coherent and well-articulated content, their tweets keep to a narrow theme with lower diversity in hashtags, URLs, and domains, they are thematically similar to each other, and they have a high likelihood of bot-like behavior, likely having progenitors with intentions to influence, based on high fake-follower scores. Our work contributes insight into the top 1 percent of toxic profiles on Twitter and shows that the profile-centric approach to investigating toxicity on Twitter is beneficial.
1904.05173
Stanis{\l}aw Ambroszkiewicz
Stanislaw Ambroszkiewicz
Combinatorial constructions of intrinsic geometries
The final version (hopefully)
null
null
null
cs.CG math.DG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A generic method for combinatorial constructions of intrinsic geometrical spaces is presented. It is based on the well-known inverse sequences of finite graphs that determine (in the limit) topological spaces. If the pattern of the construction is sufficiently regular and uniform, then the notions of metric, geodesic and curvature can be defined in the space as the limits of their finite versions in the graphs. This makes it possible to regard the graphs with metrics as finite approximations of the geometry of the space. On the basis of simple and generic examples, several nonstandard and novel notions are proposed for the Foundations of Geometry. They may be considered a subject for critical discussion.
[ { "created": "Wed, 10 Apr 2019 13:21:08 GMT", "version": "v1" }, { "created": "Tue, 9 Jul 2019 20:07:11 GMT", "version": "v2" }, { "created": "Mon, 23 Sep 2019 19:14:57 GMT", "version": "v3" }, { "created": "Wed, 6 Nov 2019 15:36:29 GMT", "version": "v4" }, { "created": "Sat, 7 Dec 2019 11:12:42 GMT", "version": "v5" }, { "created": "Mon, 27 Apr 2020 11:10:15 GMT", "version": "v6" }, { "created": "Thu, 8 Oct 2020 15:57:59 GMT", "version": "v7" } ]
2020-10-09
[ [ "Ambroszkiewicz", "Stanislaw", "" ] ]
A generic method for combinatorial constructions of intrinsic geometrical spaces is presented. It is based on the well-known inverse sequences of finite graphs that determine (in the limit) topological spaces. If the pattern of the construction is sufficiently regular and uniform, then the notions of metric, geodesic and curvature can be defined in the space as the limits of their finite versions in the graphs. This makes it possible to regard the graphs with metrics as finite approximations of the geometry of the space. On the basis of simple and generic examples, several nonstandard and novel notions are proposed for the Foundations of Geometry. They may be considered a subject for critical discussion.
1407.5718
Kamal Rahimi Malekshan
K. Rahimi Malekshan, F. Lahouti
Distributed Cross-layer Dynamic Route Selection in Wireless Multiuser Multihop Networks
Submitted to IEEE Transactions on Wireless Communications
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In wireless ad-hoc networks, forwarding data through intermediate relays extends the coverage area and enhances the network throughput. We consider a general wireless multiuser multihop transmission, where each data flow is subject to a constraint on the end-to-end buffering delay and the associated packet drop rate as a quality of service (QoS) requirement. The objective is to maximize the weighted sum-rate between source-destination pairs, while the corresponding QoS requirements are satisfied. We introduce two new distributed cross-layer dynamic route selection schemes in this setting that are designed involving the physical, MAC, and network layers. In the proposed opportunistic cross-layer dynamic route selection scheme, routes are assigned dynamically based on the state of network nodes' buffers and the instantaneous state of fading channels. In the same setting, the proposed time-division cross-layer dynamic route selection scheme instead utilizes the average quality of channels for more efficient implementation. Detailed results and comparisons are provided, which demonstrate the superior performance of the proposed cross-layer dynamic route selection schemes.
[ { "created": "Tue, 22 Jul 2014 03:08:49 GMT", "version": "v1" } ]
2014-07-23
[ [ "Malekshan", "K. Rahimi", "" ], [ "Lahouti", "F.", "" ] ]
In wireless ad-hoc networks, forwarding data through intermediate relays extends the coverage area and enhances the network throughput. We consider a general wireless multiuser multihop transmission, where each data flow is subject to a constraint on the end-to-end buffering delay and the associated packet drop rate as a quality of service (QoS) requirement. The objective is to maximize the weighted sum-rate between source-destination pairs, while the corresponding QoS requirements are satisfied. We introduce two new distributed cross-layer dynamic route selection schemes in this setting that are designed involving the physical, MAC, and network layers. In the proposed opportunistic cross-layer dynamic route selection scheme, routes are assigned dynamically based on the state of network nodes' buffers and the instantaneous state of fading channels. In the same setting, the proposed time-division cross-layer dynamic route selection scheme instead utilizes the average quality of channels for more efficient implementation. Detailed results and comparisons are provided, which demonstrate the superior performance of the proposed cross-layer dynamic route selection schemes.
2308.04689
Piyush Vyas
Piyush Vyas, Akhilesh Chauhan, Tushar Mandge, Surbhi Hardikar
Web crawler strategies for web pages under robot.txt restriction
null
null
null
null
cs.AI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, everyone knows about the World Wide Web and works over the Internet daily. In this paper, we introduce how search engines work on the keywords entered by users to find something. A search engine uses different search algorithms to provide convenient results to the net surfer. Net surfers go with the top search results, but how did those web pages earn higher ranks on the search engine, and how did the search engine gather all those web pages into its database? This paper answers these basic questions. Web crawlers working for search engines and the robot exclusion protocol rules for web crawlers are also addressed in this research paper. Webmasters use different restriction directives in the robots.txt file to instruct web crawlers; some basic formats of robots.txt are also presented in this paper.
[ { "created": "Wed, 9 Aug 2023 03:52:48 GMT", "version": "v1" }, { "created": "Wed, 28 Feb 2024 21:29:43 GMT", "version": "v2" } ]
2024-03-01
[ [ "Vyas", "Piyush", "" ], [ "Chauhan", "Akhilesh", "" ], [ "Mandge", "Tushar", "" ], [ "Hardikar", "Surbhi", "" ] ]
Nowadays, everyone knows about the World Wide Web and works over the Internet daily. In this paper, we introduce how search engines work on the keywords entered by users to find something. A search engine uses different search algorithms to provide convenient results to the net surfer. Net surfers go with the top search results, but how did those web pages earn higher ranks on the search engine, and how did the search engine gather all those web pages into its database? This paper answers these basic questions. Web crawlers working for search engines and the robot exclusion protocol rules for web crawlers are also addressed in this research paper. Webmasters use different restriction directives in the robots.txt file to instruct web crawlers; some basic formats of robots.txt are also presented in this paper.
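The robots-exclusion rules the paper discusses can be checked directly with the Python standard library. The rules text and URLs below are made up for illustration; the parsing and lookup calls are standard `urllib.robotparser` API.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content: block /private/, allow everything else.
rules = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# A polite crawler consults the parsed rules before fetching each URL.
print(rp.can_fetch("MyCrawler", "https://example.com/public/page.html"))   # True
print(rp.can_fetch("MyCrawler", "https://example.com/private/page.html"))  # False
```

In a real crawler one would call `rp.set_url(".../robots.txt")` and `rp.read()` to fetch the live file instead of parsing an inline string.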
1909.02384
Yukuan Yang
Yukuan Yang, Shuang Wu, Lei Deng, Tianyi Yan, Yuan Xie, Guoqi Li
Training High-Performance and Large-Scale Deep Neural Networks with Full 8-bit Integers
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural network (DNN) quantization, which converts floating-point (FP) data in the network to integers (INT), is an effective way to shrink the model size for memory saving and to simplify the operations for compute acceleration. Recently, research on DNN quantization has developed from inference to training, laying a foundation for online training on accelerators. However, most existing schemes leave batch normalization (BN) untouched during training and perform incomplete quantization that still adopts high-precision FP in some parts of the data paths. Currently, there is no solution that can use only low bit-width INT data during the whole training process of large-scale DNNs with acceptable accuracy. In this work, by decomposing all the computation steps in DNNs and fusing three special quantization functions to satisfy the different precision requirements, we propose a unified complete quantization framework termed ``WAGEUBN'' to quantize DNNs involving all data paths, including W (Weights), A (Activation), G (Gradient), E (Error), U (Update), and BN. Moreover, the Momentum optimizer is also quantized to realize a completely quantized framework. Experiments on ResNet18/34/50 models demonstrate that WAGEUBN can achieve competitive accuracy on the ImageNet dataset. For the first time, the study of quantization in large-scale DNNs is advanced to the full 8-bit INT level. In this way, all the operations in training and inference can be bit-wise operations, pushing towards faster processing speed, decreased memory cost, and higher energy efficiency. Our thorough quantization framework has great potential for future efficient portable devices with online learning ability.
[ { "created": "Thu, 5 Sep 2019 13:17:38 GMT", "version": "v1" }, { "created": "Tue, 31 Dec 2019 14:31:20 GMT", "version": "v2" } ]
2020-01-01
[ [ "Yang", "Yukuan", "" ], [ "Wu", "Shuang", "" ], [ "Deng", "Lei", "" ], [ "Yan", "Tianyi", "" ], [ "Xie", "Yuan", "" ], [ "Li", "Guoqi", "" ] ]
Deep neural network (DNN) quantization, which converts floating-point (FP) data in the network to integers (INT), is an effective way to shrink the model size for memory saving and to simplify the operations for compute acceleration. Recently, research on DNN quantization has developed from inference to training, laying a foundation for online training on accelerators. However, most existing schemes leave batch normalization (BN) untouched during training and perform incomplete quantization that still adopts high-precision FP in some parts of the data paths. Currently, there is no solution that can use only low bit-width INT data during the whole training process of large-scale DNNs with acceptable accuracy. In this work, by decomposing all the computation steps in DNNs and fusing three special quantization functions to satisfy the different precision requirements, we propose a unified complete quantization framework termed ``WAGEUBN'' to quantize DNNs involving all data paths, including W (Weights), A (Activation), G (Gradient), E (Error), U (Update), and BN. Moreover, the Momentum optimizer is also quantized to realize a completely quantized framework. Experiments on ResNet18/34/50 models demonstrate that WAGEUBN can achieve competitive accuracy on the ImageNet dataset. For the first time, the study of quantization in large-scale DNNs is advanced to the full 8-bit INT level. In this way, all the operations in training and inference can be bit-wise operations, pushing towards faster processing speed, decreased memory cost, and higher energy efficiency. Our thorough quantization framework has great potential for future efficient portable devices with online learning ability.
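The elementary operation underlying any full-INT8 scheme is mapping FP values to 8-bit integers and back. The sketch below shows plain symmetric, scale-only uniform quantization; WAGEUBN's per-data-path quantization functions and BN handling are far more involved and are not reproduced here.

```python
import numpy as np

# Minimal symmetric INT8 quantization: map floats into [-127, 127]
# with a single per-tensor scale, then dequantize by multiplying back.

def quantize_int8(x):
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.array([-1.0, 0.25, 0.5, 1.0], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)  # reconstruction error is at most one scale step
```

In a full-INT training pipeline, weights, activations, gradients, errors, and updates would each get such a quantizer with its own bit-width and scale policy.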
2304.02811
Haoyang Zheng
Haoyang Zheng, Yao Huang, Ziyang Huang, Wenrui Hao, Guang Lin
HomPINNs: homotopy physics-informed neural networks for solving the inverse problems of nonlinear differential equations with multiple solutions
20 pages, 15 figures, 7 tables
Journal of Computational Physics, Volume 500, 2024
10.1016/j.jcp.2023.112751
null
cs.LG cs.AI cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the complex behavior arising from non-uniqueness, symmetry, and bifurcations in the solution space, solving inverse problems of nonlinear differential equations (DEs) with multiple solutions is a challenging task. To address this, we propose homotopy physics-informed neural networks (HomPINNs), a novel framework that leverages homotopy continuation and neural networks (NNs) to solve inverse problems. The proposed framework begins with the use of NNs to simultaneously approximate unlabeled observations across diverse solutions while adhering to DE constraints. Through homotopy continuation, the proposed method solves the inverse problem by tracing the observations and identifying multiple solutions. The experiments involve testing the performance of the proposed method on one-dimensional DEs and applying it to solve a two-dimensional Gray-Scott simulation. Our findings demonstrate that the proposed method is scalable and adaptable, providing an effective solution for solving DEs with multiple solutions and unknown parameters. Moreover, it has significant potential for various applications in scientific computing, such as modeling complex systems and solving inverse problems in physics, chemistry, biology, etc.
[ { "created": "Thu, 6 Apr 2023 01:20:23 GMT", "version": "v1" }, { "created": "Wed, 17 Jan 2024 18:14:20 GMT", "version": "v2" } ]
2024-01-18
[ [ "Zheng", "Haoyang", "" ], [ "Huang", "Yao", "" ], [ "Huang", "Ziyang", "" ], [ "Hao", "Wenrui", "" ], [ "Lin", "Guang", "" ] ]
Due to the complex behavior arising from non-uniqueness, symmetry, and bifurcations in the solution space, solving inverse problems of nonlinear differential equations (DEs) with multiple solutions is a challenging task. To address this, we propose homotopy physics-informed neural networks (HomPINNs), a novel framework that leverages homotopy continuation and neural networks (NNs) to solve inverse problems. The proposed framework begins with the use of NNs to simultaneously approximate unlabeled observations across diverse solutions while adhering to DE constraints. Through homotopy continuation, the proposed method solves the inverse problem by tracing the observations and identifying multiple solutions. The experiments involve testing the performance of the proposed method on one-dimensional DEs and applying it to solve a two-dimensional Gray-Scott simulation. Our findings demonstrate that the proposed method is scalable and adaptable, providing an effective solution for solving DEs with multiple solutions and unknown parameters. Moreover, it has significant potential for various applications in scientific computing, such as modeling complex systems and solving inverse problems in physics, chemistry, biology, etc.
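The homotopy-continuation idea HomPINNs build on can be shown on a scalar root-finding problem: deform an easy equation g(x)=0 into the target f(x)=0 via H(x, t) = (1-t)g(x) + t f(x) and track the root as t goes from 0 to 1. The paper couples continuation with neural-network approximation of DE solutions; the sketch below only traces a single root with Newton corrections, with all function choices made up for illustration.

```python
import numpy as np

# Trace the root of H(x, t) = (1 - t) * g(x) + t * f(x) from the easy
# problem g (t = 0) to the target problem f (t = 1), correcting with a
# few Newton steps at each continuation step.

def trace_root(f, df, g, dg, x0, steps=50, newton_iters=5):
    x = x0
    for t in np.linspace(0.0, 1.0, steps):
        for _ in range(newton_iters):
            h = (1 - t) * g(x) + t * f(x)
            dh = (1 - t) * dg(x) + t * df(x)
            x -= h / dh
    return x

# Target: x^3 - 2x - 5 = 0 (real root near 2.0946); start from x - 1 = 0.
root = trace_root(lambda x: x**3 - 2 * x - 5, lambda x: 3 * x**2 - 2,
                  lambda x: x - 1.0, lambda x: 1.0, x0=1.0)
```

Running different starting systems g (or starting points) is what lets continuation methods reach multiple solutions of the same target problem, which is the multi-solution setting HomPINNs address.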
2105.11210
Chenliang Li
Chenliang Li, Bin Bi, Ming Yan, Wei Wang, Songfang Huang, Fei Huang and Luo Si
StructuralLM: Structural Pre-training for Form Understanding
Accepted by ACL2021 main conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large pre-trained language models achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, they almost exclusively focus on text-only representation, while neglecting cell-level layout information that is important for form image understanding. In this paper, we propose a new pre-training approach, StructuralLM, to jointly leverage cell and layout information from scanned documents. Specifically, we pre-train StructuralLM with two new designs to make the most of the interactions of cell and layout information: 1) each cell as a semantic unit; 2) classification of cell positions. The pre-trained StructuralLM achieves new state-of-the-art results in different types of downstream tasks, including form understanding (from 78.95 to 85.14), document visual question answering (from 72.59 to 83.94) and document image classification (from 94.43 to 96.08).
[ { "created": "Mon, 24 May 2021 11:33:20 GMT", "version": "v1" } ]
2021-05-25
[ [ "Li", "Chenliang", "" ], [ "Bi", "Bin", "" ], [ "Yan", "Ming", "" ], [ "Wang", "Wei", "" ], [ "Huang", "Songfang", "" ], [ "Huang", "Fei", "" ], [ "Si", "Luo", "" ] ]
Large pre-trained language models achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, they almost exclusively focus on text-only representation, while neglecting cell-level layout information that is important for form image understanding. In this paper, we propose a new pre-training approach, StructuralLM, to jointly leverage cell and layout information from scanned documents. Specifically, we pre-train StructuralLM with two new designs to make the most of the interactions of cell and layout information: 1) each cell as a semantic unit; 2) classification of cell positions. The pre-trained StructuralLM achieves new state-of-the-art results in different types of downstream tasks, including form understanding (from 78.95 to 85.14), document visual question answering (from 72.59 to 83.94) and document image classification (from 94.43 to 96.08).
1906.02461
Yichong Leng
Yichong Leng, Xu Tan, Tao Qin, Xiang-Yang Li and Tie-Yan Liu
Unsupervised Pivot Translation for Distant Languages
Accepted by ACL-2019
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised neural machine translation (NMT) has attracted a lot of attention recently. While state-of-the-art methods for unsupervised translation usually perform well between similar languages (e.g., English-German translation), they perform poorly between distant languages, because unsupervised alignment does not work well for distant languages. In this work, we introduce unsupervised pivot translation for distant languages, which translates a language to a distant language through multiple hops, and the unsupervised translation on each hop is relatively easier than the original direct translation. We propose a learning to route (LTR) method to choose the translation path between the source and target languages. LTR is trained on language pairs whose best translation path is available and is applied on the unseen language pairs for path selection. Experiments on 20 languages and 294 distant language pairs demonstrate the advantages of the unsupervised pivot translation for distant languages, as well as the effectiveness of the proposed LTR for path selection. Specifically, in the best case, LTR achieves an improvement of 5.58 BLEU points over the conventional direct unsupervised method.
[ { "created": "Thu, 6 Jun 2019 07:48:36 GMT", "version": "v1" }, { "created": "Wed, 12 Jun 2019 05:07:08 GMT", "version": "v2" }, { "created": "Tue, 25 Jun 2019 02:58:06 GMT", "version": "v3" } ]
2019-06-26
[ [ "Leng", "Yichong", "" ], [ "Tan", "Xu", "" ], [ "Qin", "Tao", "" ], [ "Li", "Xiang-Yang", "" ], [ "Liu", "Tie-Yan", "" ] ]
Unsupervised neural machine translation (NMT) has attracted a lot of attention recently. While state-of-the-art methods for unsupervised translation usually perform well between similar languages (e.g., English-German translation), they perform poorly between distant languages, because unsupervised alignment does not work well for distant languages. In this work, we introduce unsupervised pivot translation for distant languages, which translates a language to a distant language through multiple hops, and the unsupervised translation on each hop is relatively easier than the original direct translation. We propose a learning to route (LTR) method to choose the translation path between the source and target languages. LTR is trained on language pairs whose best translation path is available and is applied on the unseen language pairs for path selection. Experiments on 20 languages and 294 distant language pairs demonstrate the advantages of the unsupervised pivot translation for distant languages, as well as the effectiveness of the proposed LTR for path selection. Specifically, in the best case, LTR achieves an improvement of 5.58 BLEU points over the conventional direct unsupervised method.
2210.15173
Gasper Begus
Ga\v{s}per Begu\v{s}, Alan Zhou, Peter Wu, Gopala K Anumanchipalli
Articulation GAN: Unsupervised modeling of articulatory learning
ICASSP 2023
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing
10.1109/ICASSP49357.2023.10096800
null
cs.SD cs.AI cs.CL eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative deep neural networks are widely used for speech synthesis, but most existing models directly generate waveforms or spectral outputs. Humans, however, produce speech by controlling articulators, which results in the production of speech sounds through physical properties of sound propagation. We introduce the Articulatory Generator to the Generative Adversarial Network paradigm, a new unsupervised generative model of speech production/synthesis. The Articulatory Generator more closely mimics human speech production by learning to generate articulatory representations (electromagnetic articulography or EMA) in a fully unsupervised manner. A separate pre-trained physical model (ema2wav) then transforms the generated EMA representations to speech waveforms, which get sent to the Discriminator for evaluation. Articulatory analysis suggests that the network learns to control articulators in a similar manner to humans during speech production. Acoustic analysis of the outputs suggests that the network learns to generate words that are both present and absent in the training distribution. We additionally discuss implications of articulatory representations for cognitive models of human language and speech technology in general.
[ { "created": "Thu, 27 Oct 2022 05:07:04 GMT", "version": "v1" }, { "created": "Sun, 12 Mar 2023 20:28:46 GMT", "version": "v2" } ]
2023-05-10
[ [ "Beguš", "Gašper", "" ], [ "Zhou", "Alan", "" ], [ "Wu", "Peter", "" ], [ "Anumanchipalli", "Gopala K", "" ] ]
Generative deep neural networks are widely used for speech synthesis, but most existing models directly generate waveforms or spectral outputs. Humans, however, produce speech by controlling articulators, which results in the production of speech sounds through physical properties of sound propagation. We introduce the Articulatory Generator to the Generative Adversarial Network paradigm, a new unsupervised generative model of speech production/synthesis. The Articulatory Generator more closely mimics human speech production by learning to generate articulatory representations (electromagnetic articulography or EMA) in a fully unsupervised manner. A separate pre-trained physical model (ema2wav) then transforms the generated EMA representations to speech waveforms, which get sent to the Discriminator for evaluation. Articulatory analysis suggests that the network learns to control articulators in a similar manner to humans during speech production. Acoustic analysis of the outputs suggests that the network learns to generate words that are both present and absent in the training distribution. We additionally discuss implications of articulatory representations for cognitive models of human language and speech technology in general.
1709.07957
Zackory Erickson
Zackory Erickson, Maggie Collier, Ariel Kapusta, and Charles C. Kemp
Tracking Human Pose During Robot-Assisted Dressing using Single-Axis Capacitive Proximity Sensing
8 pages, 13 figures, 2018 IEEE Robotics and Automation Letters (RA-L)
null
10.1109/LRA.2018.2812912
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dressing is a fundamental task of everyday living and robots offer an opportunity to assist people with motor impairments. While several robotic systems have explored robot-assisted dressing, few have considered how a robot can manage errors in human pose estimation, or adapt to human motion in real time during dressing assistance. In addition, estimating pose changes due to human motion can be challenging with vision-based techniques since dressing is often intended to visually occlude the body with clothing. We present a method to track a person's pose in real time using capacitive proximity sensing. This sensing approach gives direct estimates of distance with low latency, has a high signal-to-noise ratio, and has low computational requirements. Using our method, a robot can adjust for errors in the estimated pose of a person and physically follow the contours and movements of the person while providing dressing assistance. As part of an evaluation of our method, the robot successfully pulled the sleeve of a hospital gown and a cardigan onto the right arms of 10 human participants, despite arm motions and large errors in the initially estimated pose of the person's arm. We also show that a capacitive sensor is unaffected by visual occlusion of the body and can sense a person's body through cotton clothing.
[ { "created": "Fri, 22 Sep 2017 21:26:49 GMT", "version": "v1" }, { "created": "Fri, 24 May 2019 21:41:53 GMT", "version": "v2" } ]
2019-05-28
[ [ "Erickson", "Zackory", "" ], [ "Collier", "Maggie", "" ], [ "Kapusta", "Ariel", "" ], [ "Kemp", "Charles C.", "" ] ]
Dressing is a fundamental task of everyday living and robots offer an opportunity to assist people with motor impairments. While several robotic systems have explored robot-assisted dressing, few have considered how a robot can manage errors in human pose estimation, or adapt to human motion in real time during dressing assistance. In addition, estimating pose changes due to human motion can be challenging with vision-based techniques since dressing is often intended to visually occlude the body with clothing. We present a method to track a person's pose in real time using capacitive proximity sensing. This sensing approach gives direct estimates of distance with low latency, has a high signal-to-noise ratio, and has low computational requirements. Using our method, a robot can adjust for errors in the estimated pose of a person and physically follow the contours and movements of the person while providing dressing assistance. As part of an evaluation of our method, the robot successfully pulled the sleeve of a hospital gown and a cardigan onto the right arms of 10 human participants, despite arm motions and large errors in the initially estimated pose of the person's arm. We also show that a capacitive sensor is unaffected by visual occlusion of the body and can sense a person's body through cotton clothing.
2208.10695
Jinpeng Li
Lingfeng Li, Huaiwei Cong, Gangming Zhao, Junran Peng, Zheng Zhang, and Jinpeng Li
Structure Regularized Attentive Network for Automatic Femoral Head Necrosis Diagnosis and Localization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, several works have adopted convolutional neural networks (CNNs) to diagnose avascular necrosis of the femoral head (AVNFH) based on X-ray images or magnetic resonance imaging (MRI). However, due to tissue overlap, X-ray images struggle to provide fine-grained features for early diagnosis. MRI, on the other hand, has a long imaging time and is more expensive, making it impractical for mass screening. Computed tomography (CT) shows layer-wise tissues, is faster to image, and is less costly than MRI. However, to our knowledge, there is no work on CT-based automated diagnosis of AVNFH. In this work, we collected and labeled a large-scale dataset for AVNFH ranking. In addition, existing end-to-end CNNs only yield the classification result and can hardly provide more information for doctors in diagnosis. To address this issue, we propose the structure regularized attentive network (SRANet), which is able to highlight the necrotic regions during classification based on patch attention. SRANet extracts features in chunks of images, obtains weights via the attention mechanism to aggregate the features, and constrains them by a structural regularizer with prior knowledge to improve generalization. SRANet was evaluated on our AVNFH-CT dataset. Experimental results show that SRANet is superior to CNNs for AVNFH classification; moreover, it can localize lesions and provide more information to assist doctors in diagnosis. Our code is made public at https://github.com/tomas-lilingfeng/SRANet.
[ { "created": "Tue, 23 Aug 2022 02:31:38 GMT", "version": "v1" } ]
2022-08-24
[ [ "Li", "Lingfeng", "" ], [ "Cong", "Huaiwei", "" ], [ "Zhao", "Gangming", "" ], [ "Peng", "Junran", "" ], [ "Zhang", "Zheng", "" ], [ "Li", "Jinpeng", "" ] ]
In recent years, several works have adopted convolutional neural networks (CNNs) to diagnose avascular necrosis of the femoral head (AVNFH) based on X-ray images or magnetic resonance imaging (MRI). However, due to tissue overlap, X-ray images struggle to provide fine-grained features for early diagnosis. MRI, on the other hand, has a long imaging time and is more expensive, making it impractical for mass screening. Computed tomography (CT) shows layer-wise tissues, is faster to image, and is less costly than MRI. However, to our knowledge, there is no work on CT-based automated diagnosis of AVNFH. In this work, we collected and labeled a large-scale dataset for AVNFH ranking. In addition, existing end-to-end CNNs only yield the classification result and can hardly provide more information for doctors in diagnosis. To address this issue, we propose the structure regularized attentive network (SRANet), which is able to highlight the necrotic regions during classification based on patch attention. SRANet extracts features in chunks of images, obtains weights via the attention mechanism to aggregate the features, and constrains them by a structural regularizer with prior knowledge to improve generalization. SRANet was evaluated on our AVNFH-CT dataset. Experimental results show that SRANet is superior to CNNs for AVNFH classification; moreover, it can localize lesions and provide more information to assist doctors in diagnosis. Our code is made public at https://github.com/tomas-lilingfeng/SRANet.
2305.14779
Nikita Srivatsan
Nikita Srivatsan, Sofia Samaniego, Omar Florez, Taylor Berg-Kirkpatrick
Alt-Text with Context: Improving Accessibility for Images on Twitter
ICLR 2024
null
null
null
cs.CV cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
In this work we present an approach for generating alternative text (or alt-text) descriptions for images shared on social media, specifically Twitter. More than just a special case of image captioning, alt-text is both more literally descriptive and context-specific. Also critically, images posted to Twitter are often accompanied by user-written text that, despite not necessarily describing the image, may provide useful context which, if properly leveraged, can be informative. We address this task with a multimodal model that conditions on both textual information from the associated social media post as well as visual signal from the image, and demonstrate that the utility of these two information sources stacks. We put forward a new dataset of 371k images paired with alt-text and tweets scraped from Twitter and evaluate on it across a variety of automated metrics as well as human evaluation. We show that our approach of conditioning on both tweet text and visual information significantly outperforms prior work, by more than 2x on BLEU@4.
[ { "created": "Wed, 24 May 2023 06:35:26 GMT", "version": "v1" }, { "created": "Tue, 3 Oct 2023 23:01:05 GMT", "version": "v2" }, { "created": "Thu, 29 Feb 2024 22:31:53 GMT", "version": "v3" } ]
2024-03-04
[ [ "Srivatsan", "Nikita", "" ], [ "Samaniego", "Sofia", "" ], [ "Florez", "Omar", "" ], [ "Berg-Kirkpatrick", "Taylor", "" ] ]
In this work we present an approach for generating alternative text (or alt-text) descriptions for images shared on social media, specifically Twitter. More than just a special case of image captioning, alt-text is both more literally descriptive and context-specific. Also critically, images posted to Twitter are often accompanied by user-written text that, despite not necessarily describing the image, may provide useful context which, if properly leveraged, can be informative. We address this task with a multimodal model that conditions on both textual information from the associated social media post as well as visual signal from the image, and demonstrate that the utility of these two information sources stacks. We put forward a new dataset of 371k images paired with alt-text and tweets scraped from Twitter and evaluate on it across a variety of automated metrics as well as human evaluation. We show that our approach of conditioning on both tweet text and visual information significantly outperforms prior work, by more than 2x on BLEU@4.
2208.00231
Samuel Yang
Alexander Liu, Samuel Yang
Masked Autoencoders As The Unified Learners For Pre-Trained Sentence Representation
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Despite the progress on pre-trained language models, there is a lack of unified frameworks for pre-trained sentence representation. As a result, different pre-training methods are needed for specific scenarios, and the pre-trained models are likely to be limited in their universality and representation quality. In this work, we extend the recently proposed MAE-style pre-training strategy, RetroMAE, such that it may effectively support a wide variety of sentence representation tasks. The extended framework consists of two stages, with RetroMAE conducted throughout the process. The first stage performs RetroMAE over generic corpora, like Wikipedia, BookCorpus, etc., from which the base model is learned. The second stage takes place on domain-specific data, e.g., MS MARCO and NLI, where the base model is continually trained based on RetroMAE and contrastive learning. The pre-training outputs at the two stages may serve different applications, whose effectiveness is verified with comprehensive experiments. Concretely, the base model proves effective for zero-shot retrieval, with remarkable performance achieved on the BEIR benchmark. The continually pre-trained models further benefit more downstream tasks, including domain-specific dense retrieval on MS MARCO and Natural Questions, and the quality of sentence embeddings for standard STS and transfer tasks in SentEval. The empirical insights of this work may inspire the future design of sentence representation pre-training. Our pre-trained models and source code will be released to the public communities.
[ { "created": "Sat, 30 Jul 2022 14:34:55 GMT", "version": "v1" } ]
2022-08-02
[ [ "Liu", "Alexander", "" ], [ "Yang", "Samuel", "" ] ]
Despite the progress on pre-trained language models, there is a lack of unified frameworks for pre-trained sentence representation. As a result, different pre-training methods are needed for specific scenarios, and the pre-trained models are likely to be limited in their universality and representation quality. In this work, we extend the recently proposed MAE-style pre-training strategy, RetroMAE, such that it may effectively support a wide variety of sentence representation tasks. The extended framework consists of two stages, with RetroMAE conducted throughout the process. The first stage performs RetroMAE over generic corpora, like Wikipedia, BookCorpus, etc., from which the base model is learned. The second stage takes place on domain-specific data, e.g., MS MARCO and NLI, where the base model is continually trained based on RetroMAE and contrastive learning. The pre-training outputs at the two stages may serve different applications, whose effectiveness is verified with comprehensive experiments. Concretely, the base model proves effective for zero-shot retrieval, with remarkable performance achieved on the BEIR benchmark. The continually pre-trained models further benefit more downstream tasks, including domain-specific dense retrieval on MS MARCO and Natural Questions, and the quality of sentence embeddings for standard STS and transfer tasks in SentEval. The empirical insights of this work may inspire the future design of sentence representation pre-training. Our pre-trained models and source code will be released to the public communities.
2304.07674
Nathan Klein
Nathan Klein and Neil Olver
Thin trees for laminar families
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the laminar-constrained spanning tree problem, the goal is to find a minimum-cost spanning tree which respects upper bounds on the number of times each cut in a given laminar family is crossed. This generalizes the well-studied degree-bounded spanning tree problem, as well as a previously studied setting where a chain of cuts is given. We give the first constant-factor approximation algorithm; in particular we show how to obtain a multiplicative violation of the crossing bounds of less than 22 while losing less than a factor of 5 in terms of cost. Our result compares to the natural LP relaxation. As a consequence, our results show that given a $k$-edge-connected graph and a laminar family $\mathcal{L} \subseteq 2^V$ of cuts, there exists a spanning tree which contains only an $O(1/k)$ fraction of the edges across every cut in $\mathcal{L}$. This can be viewed as progress towards the Thin Tree Conjecture, which (in a strong form) states that this guarantee can be obtained for all cuts simultaneously.
[ { "created": "Sun, 16 Apr 2023 02:33:04 GMT", "version": "v1" } ]
2023-04-18
[ [ "Klein", "Nathan", "" ], [ "Olver", "Neil", "" ] ]
In the laminar-constrained spanning tree problem, the goal is to find a minimum-cost spanning tree which respects upper bounds on the number of times each cut in a given laminar family is crossed. This generalizes the well-studied degree-bounded spanning tree problem, as well as a previously studied setting where a chain of cuts is given. We give the first constant-factor approximation algorithm; in particular we show how to obtain a multiplicative violation of the crossing bounds of less than 22 while losing less than a factor of 5 in terms of cost. Our result compares to the natural LP relaxation. As a consequence, our results show that given a $k$-edge-connected graph and a laminar family $\mathcal{L} \subseteq 2^V$ of cuts, there exists a spanning tree which contains only an $O(1/k)$ fraction of the edges across every cut in $\mathcal{L}$. This can be viewed as progress towards the Thin Tree Conjecture, which (in a strong form) states that this guarantee can be obtained for all cuts simultaneously.
2203.04524
Arundhati Banerjee
Arundhati Banerjee, Ramina Ghods, Jeff Schneider
Multi-Agent Active Search using Detection and Location Uncertainty
Accepted to ICRA 2023
null
null
null
cs.RO cs.LG cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Active search, in applications like environment monitoring or disaster response missions, involves autonomous agents detecting targets in a search space using decision making algorithms that adapt to the history of their observations. Active search algorithms must contend with two types of uncertainty: detection uncertainty and location uncertainty. The more common approach in robotics is to focus on location uncertainty and remove detection uncertainty by thresholding the detection probability to zero or one. In contrast, it is common in the sparse signal processing literature to assume the target location is accurate and instead focus on the uncertainty of its detection. In this work, we first propose an inference method to jointly handle both target detection and location uncertainty. We then build a decision making algorithm on this inference method that uses Thompson sampling to enable decentralized multi-agent active search. We perform simulation experiments to show that our algorithms outperform competing baselines that only account for either target detection or location uncertainty. We finally demonstrate the real world transferability of our algorithms using a realistic simulation environment we created on the Unreal Engine 4 platform with an AirSim plugin.
[ { "created": "Wed, 9 Mar 2022 04:53:37 GMT", "version": "v1" }, { "created": "Mon, 22 May 2023 06:09:36 GMT", "version": "v2" } ]
2023-05-23
[ [ "Banerjee", "Arundhati", "" ], [ "Ghods", "Ramina", "" ], [ "Schneider", "Jeff", "" ] ]
Active search, in applications like environment monitoring or disaster response missions, involves autonomous agents detecting targets in a search space using decision making algorithms that adapt to the history of their observations. Active search algorithms must contend with two types of uncertainty: detection uncertainty and location uncertainty. The more common approach in robotics is to focus on location uncertainty and remove detection uncertainty by thresholding the detection probability to zero or one. In contrast, it is common in the sparse signal processing literature to assume the target location is accurate and instead focus on the uncertainty of its detection. In this work, we first propose an inference method to jointly handle both target detection and location uncertainty. We then build a decision making algorithm on this inference method that uses Thompson sampling to enable decentralized multi-agent active search. We perform simulation experiments to show that our algorithms outperform competing baselines that only account for either target detection or location uncertainty. We finally demonstrate the real world transferability of our algorithms using a realistic simulation environment we created on the Unreal Engine 4 platform with an AirSim plugin.
2204.00656
Samrudhdhi Bharatkumar Rangrej
Samrudhdhi B. Rangrej, Chetan L. Srinidhi, James J. Clark
Consistency driven Sequential Transformers Attention Model for Partially Observable Scenes
Accepted to CVPR 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most hard attention models initially observe a complete scene to locate and sense informative glimpses, and predict the class label of a scene based on glimpses. However, in many applications (e.g., aerial imaging), observing an entire scene is not always feasible due to the limited time and resources available for acquisition. In this paper, we develop a Sequential Transformers Attention Model (STAM) that only partially observes a complete image and predicts informative glimpse locations solely based on past glimpses. We design our agent using DeiT-distilled and train it with a one-step actor-critic algorithm. Furthermore, to improve classification performance, we introduce a novel training objective, which enforces consistency between the class distribution predicted by a teacher model from a complete image and the class distribution predicted by our agent using glimpses. When the agent senses only 4% of the total image area, the inclusion of the proposed consistency loss in our training objective yields 3% and 8% higher accuracy on ImageNet and fMoW datasets, respectively. Moreover, our agent outperforms previous state-of-the-art by observing nearly 27% and 42% fewer pixels in glimpses on ImageNet and fMoW.
[ { "created": "Fri, 1 Apr 2022 18:51:55 GMT", "version": "v1" } ]
2022-04-05
[ [ "Rangrej", "Samrudhdhi B.", "" ], [ "Srinidhi", "Chetan L.", "" ], [ "Clark", "James J.", "" ] ]
Most hard attention models initially observe a complete scene to locate and sense informative glimpses, and predict the class label of a scene based on glimpses. However, in many applications (e.g., aerial imaging), observing an entire scene is not always feasible due to the limited time and resources available for acquisition. In this paper, we develop a Sequential Transformers Attention Model (STAM) that only partially observes a complete image and predicts informative glimpse locations solely based on past glimpses. We design our agent using DeiT-distilled and train it with a one-step actor-critic algorithm. Furthermore, to improve classification performance, we introduce a novel training objective, which enforces consistency between the class distribution predicted by a teacher model from a complete image and the class distribution predicted by our agent using glimpses. When the agent senses only 4% of the total image area, the inclusion of the proposed consistency loss in our training objective yields 3% and 8% higher accuracy on ImageNet and fMoW datasets, respectively. Moreover, our agent outperforms previous state-of-the-art by observing nearly 27% and 42% fewer pixels in glimpses on ImageNet and fMoW.
2004.12615
Jingjing Li
Li Jingjing, Chen Erpeng, Ding Zhengming, Zhu Lei, Lu Ke, Shen Heng Tao
Maximum Density Divergence for Domain Adaptation
Published on IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
null
10.1109/TPAMI.2020.2991050
null
cs.CV cs.LG cs.MM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised domain adaptation addresses the problem of transferring knowledge from a well-labeled source domain to an unlabeled target domain where the two domains have distinctive data distributions. Thus, the essence of domain adaptation is to mitigate the distribution divergence between the two domains. The state-of-the-art methods practice this very idea by either conducting adversarial training or minimizing a metric which defines the distribution gaps. In this paper, we propose a new domain adaptation method named Adversarial Tight Match (ATM), which enjoys the benefits of both adversarial training and metric learning. Specifically, we first propose a novel distance loss, named Maximum Density Divergence (MDD), to quantify the distribution divergence. MDD minimizes the inter-domain divergence ("match" in ATM) and maximizes the intra-class density ("tight" in ATM). Then, to address the equilibrium challenge in adversarial domain adaptation, we leverage the proposed MDD within an adversarial domain adaptation framework. Finally, we tailor the proposed MDD into a practical learning loss and present our ATM. Both empirical evaluation and theoretical analysis are reported to verify the effectiveness of the proposed method. The experimental results on four benchmarks, both classical and large-scale, show that our method is able to achieve new state-of-the-art performance on most evaluations. Codes and datasets used in this paper are available at {\it github.com/lijin118/ATM}.
[ { "created": "Mon, 27 Apr 2020 07:35:06 GMT", "version": "v1" } ]
2020-04-28
[ [ "Jingjing", "Li", "" ], [ "Erpeng", "Chen", "" ], [ "Zhengming", "Ding", "" ], [ "Lei", "Zhu", "" ], [ "Ke", "Lu", "" ], [ "Tao", "Shen Heng", "" ] ]
Unsupervised domain adaptation addresses the problem of transferring knowledge from a well-labeled source domain to an unlabeled target domain where the two domains have distinctive data distributions. Thus, the essence of domain adaptation is to mitigate the distribution divergence between the two domains. The state-of-the-art methods practice this very idea by either conducting adversarial training or minimizing a metric which defines the distribution gaps. In this paper, we propose a new domain adaptation method named Adversarial Tight Match (ATM), which enjoys the benefits of both adversarial training and metric learning. Specifically, we first propose a novel distance loss, named Maximum Density Divergence (MDD), to quantify the distribution divergence. MDD minimizes the inter-domain divergence ("match" in ATM) and maximizes the intra-class density ("tight" in ATM). Then, to address the equilibrium challenge in adversarial domain adaptation, we leverage the proposed MDD within an adversarial domain adaptation framework. Finally, we tailor the proposed MDD into a practical learning loss and present our ATM. Both empirical evaluation and theoretical analysis are reported to verify the effectiveness of the proposed method. The experimental results on four benchmarks, both classical and large-scale, show that our method is able to achieve new state-of-the-art performance on most evaluations. Codes and datasets used in this paper are available at {\it github.com/lijin118/ATM}.
1905.13352
Hafiz Muhammad Mohsin Bashir
Hafiz Mohsin Bashir, Abdullah Bin Faisal, Muhammad Asim Jamshed, Peter Vondras, Ali Musa Iftikhar, Ihsan Ayyub Qazi, Fahad R. Dogar
Reducing Tail Latency via Safe and Simple Duplication
null
null
null
null
cs.NI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Duplication can be a powerful strategy for overcoming stragglers in cloud services, but is often used conservatively because of the risk of overloading the system. We present duplicate-aware scheduling, or DAS, which makes duplication safe and easy to use by leveraging the two well-known primitives of prioritization and purging. To support DAS across diverse layers of a cloud system (e.g., network, storage, etc.), we propose the D-Stage abstraction, which decouples the duplication policy from the mechanism and facilitates working with legacy layers of a system. Using this abstraction, we evaluate the benefits of DAS for two data parallel applications (HDFS, an in-memory workload generator) and a network function (Snort-based IDS cluster). Our experiments on the public cloud and Emulab show that DAS is safe to use, and the tail latency improvement holds across a wide range of workloads.
[ { "created": "Thu, 30 May 2019 23:30:56 GMT", "version": "v1" } ]
2019-06-03
[ [ "Bashir", "Hafiz Mohsin", "" ], [ "Faisal", "Abdullah Bin", "" ], [ "Jamshed", "Muhammad Asim", "" ], [ "Vondras", "Peter", "" ], [ "Iftikhar", "Ali Musa", "" ], [ "Qazi", "Ihsan Ayyub", "" ], [ "Dogar", "Fahad R.", "" ] ]
Duplication can be a powerful strategy for overcoming stragglers in cloud services, but is often used conservatively because of the risk of overloading the system. We present duplicate-aware scheduling, or DAS, which makes duplication safe and easy to use by leveraging the two well-known primitives of prioritization and purging. To support DAS across diverse layers of a cloud system (e.g., network, storage, etc.), we propose the D-Stage abstraction, which decouples the duplication policy from the mechanism and facilitates working with legacy layers of a system. Using this abstraction, we evaluate the benefits of DAS for two data parallel applications (HDFS, an in-memory workload generator) and a network function (Snort-based IDS cluster). Our experiments on the public cloud and Emulab show that DAS is safe to use, and the tail latency improvement holds across a wide range of workloads.
2111.13091
Bas Van Den Heuvel
Bas van den Heuvel and Jorge A. P\'erez
Asynchronous Session-Based Concurrency: Deadlock-freedom in Cyclic Process Networks
Extended version of arXiv:2110.00146, doi:10.4204/EPTCS.347.3 and arXiv:2209.06820, doi:10.4204/EPTCS.368.5
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
We tackle the challenge of ensuring the deadlock-freedom property for message-passing processes that communicate asynchronously in cyclic process networks. Our contributions are twofold. First, we present Asynchronous Priority-based Classical Processes (APCP), a session-typed process framework that supports asynchronous communication, delegation, and recursion in cyclic process networks. Building upon the Curry-Howard correspondences between linear logic and session types, we establish essential meta-theoretical results for APCP, most notably deadlock freedom. Second, we present a new concurrent $\lambda$-calculus with asynchronous session types, dubbed LASTn. We illustrate LASTn by example and establish its meta-theoretical results; in particular, we show how to soundly transfer the deadlock-freedom guarantee from APCP. To this end, we develop a translation of terms in LASTn into processes in APCP that satisfies a strong formulation of operational correspondence.
[ { "created": "Thu, 25 Nov 2021 14:00:40 GMT", "version": "v1" }, { "created": "Fri, 1 Jul 2022 14:46:22 GMT", "version": "v2" }, { "created": "Thu, 6 Oct 2022 14:20:44 GMT", "version": "v3" }, { "created": "Mon, 22 Jan 2024 14:46:38 GMT", "version": "v4" }, { "created": "Thu, 4 Jul 2024 13:52:30 GMT", "version": "v5" } ]
2024-07-08
[ [ "Heuvel", "Bas van den", "" ], [ "Pérez", "Jorge A.", "" ] ]
We tackle the challenge of ensuring the deadlock-freedom property for message-passing processes that communicate asynchronously in cyclic process networks. Our contributions are twofold. First, we present Asynchronous Priority-based Classical Processes (APCP), a session-typed process framework that supports asynchronous communication, delegation, and recursion in cyclic process networks. Building upon the Curry-Howard correspondences between linear logic and session types, we establish essential meta-theoretical results for APCP, most notably deadlock freedom. Second, we present a new concurrent $\lambda$-calculus with asynchronous session types, dubbed LASTn. We illustrate LASTn by example and establish its meta-theoretical results; in particular, we show how to soundly transfer the deadlock-freedom guarantee from APCP. To this end, we develop a translation of terms in LASTn into processes in APCP that satisfies a strong formulation of operational correspondence.
2112.12589
Fateme Golivand Darvishvand
Moein Latifi, Fateme Golivand Darvishvand, Omid Khandel, Mobin Latifi Nowsoud
A deep reinforcement learning model for predictive maintenance planning of road assets: Integrating LCA and LCCA
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Road maintenance planning is an integral part of road asset management. One of the main challenges in Maintenance and Rehabilitation (M&R) practices is to determine maintenance type and timing. This research proposes a framework using Reinforcement Learning (RL) based on the Long Term Pavement Performance (LTPP) database to determine the type and timing of M&R practices. A predictive DNN model is first developed in the proposed algorithm, which serves as the Environment for the RL algorithm. For the Policy estimation of the RL model, both DQN and PPO models are developed. However, PPO has been selected in the end due to better convergence and higher sample efficiency. Indicators used in this study are International Roughness Index (IRI) and Rutting Depth (RD). Initially, we considered Cracking Metric (CM) as the third indicator, but it was then excluded due to having much less data than the other indicators, which resulted in lower accuracy of the results. Furthermore, in the cost-effectiveness calculation (reward), we considered both the economic and environmental impacts of M&R treatments. Costs and environmental impacts have been evaluated with paLATE 2.0 software. Our method is tested on a hypothetical case study of a 23-kilometer six-lane highway located in Texas, which has a warm and wet climate. The results propose a 20-year M&R plan in which road condition remains in an excellent condition range. Because the initial state of the road is at a good level of service, there is no need for heavy maintenance practices in the first years. Later, after heavy M&R actions, there are several 1-2 year periods with no need for treatments. All of this shows that the proposed plan has a logical result. Decision-makers and transportation agencies can use this scheme to conduct better maintenance practices that can prevent budget waste and, at the same time, minimize the environmental impacts.
[ { "created": "Mon, 20 Dec 2021 13:46:39 GMT", "version": "v1" }, { "created": "Fri, 24 Dec 2021 18:17:08 GMT", "version": "v2" }, { "created": "Mon, 27 Nov 2023 18:29:31 GMT", "version": "v3" } ]
2023-11-28
[ [ "Latifi", "Moein", "" ], [ "Darvishvand", "Fateme Golivand", "" ], [ "Khandel", "Omid", "" ], [ "Nowsoud", "Mobin Latifi", "" ] ]
Road maintenance planning is an integral part of road asset management. One of the main challenges in Maintenance and Rehabilitation (M&R) practices is to determine maintenance type and timing. This research proposes a framework using Reinforcement Learning (RL) based on the Long Term Pavement Performance (LTPP) database to determine the type and timing of M&R practices. A predictive DNN model is first developed in the proposed algorithm, which serves as the Environment for the RL algorithm. For the Policy estimation of the RL model, both DQN and PPO models are developed. However, PPO has been selected in the end due to better convergence and higher sample efficiency. Indicators used in this study are International Roughness Index (IRI) and Rutting Depth (RD). Initially, we considered Cracking Metric (CM) as the third indicator, but it was then excluded due to having much less data than the other indicators, which resulted in lower accuracy of the results. Furthermore, in the cost-effectiveness calculation (reward), we considered both the economic and environmental impacts of M&R treatments. Costs and environmental impacts have been evaluated with paLATE 2.0 software. Our method is tested on a hypothetical case study of a 23-kilometer six-lane highway located in Texas, which has a warm and wet climate. The results propose a 20-year M&R plan in which road condition remains in an excellent condition range. Because the initial state of the road is at a good level of service, there is no need for heavy maintenance practices in the first years. Later, after heavy M&R actions, there are several 1-2 year periods with no need for treatments. All of this shows that the proposed plan has a logical result. Decision-makers and transportation agencies can use this scheme to conduct better maintenance practices that can prevent budget waste and, at the same time, minimize the environmental impacts.
1708.02497
Janne Lepp\"a-aho
Janne Lepp\"a-aho, Santeri R\"ais\"anen, Xiao Yang, Teemu Roos
Learning non-parametric Markov networks with mutual information
null
null
null
null
cs.LG cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a method for learning Markov network structures for continuous data without invoking any assumptions about the distribution of the variables. The method makes use of previous work on a non-parametric estimator for mutual information which is used to create a non-parametric test for multivariate conditional independence. This independence test is then combined with an efficient constraint-based algorithm for learning the graph structure. The performance of the method is evaluated on several synthetic data sets and it is shown to learn considerably more accurate structures than competing methods when the dependencies between the variables involve non-linearities.
[ { "created": "Tue, 8 Aug 2017 14:10:55 GMT", "version": "v1" } ]
2017-08-09
[ [ "Leppä-aho", "Janne", "" ], [ "Räisänen", "Santeri", "" ], [ "Yang", "Xiao", "" ], [ "Roos", "Teemu", "" ] ]
We propose a method for learning Markov network structures for continuous data without invoking any assumptions about the distribution of the variables. The method makes use of previous work on a non-parametric estimator for mutual information which is used to create a non-parametric test for multivariate conditional independence. This independence test is then combined with an efficient constraint-based algorithm for learning the graph structure. The performance of the method is evaluated on several synthetic data sets and it is shown to learn considerably more accurate structures than competing methods when the dependencies between the variables involve non-linearities.
1402.6556
Csaba P\u{a}tca\c{s}
Csaba Patcas and Attila Bartha
Evolutionary solving of the debts' clearing problem
13 pages, 5 figures
null
null
null
cs.NE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The debts' clearing problem is about clearing all the debts in a group of n entities (persons, companies etc.) using a minimal number of money transaction operations. The problem is known to be NP-hard in the strong sense. As for many intractable problems, techniques from the field of artificial intelligence are useful in finding solutions close to optimum for large inputs. An evolutionary algorithm for solving the debts' clearing problem is proposed.
[ { "created": "Wed, 26 Feb 2014 14:39:57 GMT", "version": "v1" } ]
2014-02-27
[ [ "Patcas", "Csaba", "" ], [ "Bartha", "Attila", "" ] ]
The debts' clearing problem is about clearing all the debts in a group of n entities (persons, companies etc.) using a minimal number of money transaction operations. The problem is known to be NP-hard in the strong sense. As for many intractable problems, techniques from the field of artificial intelligence are useful in finding solutions close to optimum for large inputs. An evolutionary algorithm for solving the debts' clearing problem is proposed.
1103.2501
Anas Chaaban
Anas Chaaban, Aydin Sezgin, Bernd Bandemer, Arogyaswami Paulraj
On Gaussian Multiple Access Channels with Interference: Achievable Rates and Upper Bounds
submitted to MACOM 2011
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the interaction between two interfering Gaussian 2-user multiple access channels. The capacity region is characterized under mixed strong--extremely strong interference and individually very strong interference. Furthermore, the sum capacity is derived under a less restrictive definition of very strong interference. Finally, a general upper bound on the sum capacity is provided, which is nearly tight for weak cross links.
[ { "created": "Sun, 13 Mar 2011 06:57:41 GMT", "version": "v1" } ]
2011-03-15
[ [ "Chaaban", "Anas", "" ], [ "Sezgin", "Aydin", "" ], [ "Bandemer", "Bernd", "" ], [ "Paulraj", "Arogyaswami", "" ] ]
We study the interaction between two interfering Gaussian 2-user multiple access channels. The capacity region is characterized under mixed strong--extremely strong interference and individually very strong interference. Furthermore, the sum capacity is derived under a less restrictive definition of very strong interference. Finally, a general upper bound on the sum capacity is provided, which is nearly tight for weak cross links.
2309.05317
Anthony Frion
Anthony Frion, Lucas Drumetz, Mauro Dalla Mura, Guillaume Tochon, Abdeldjalil A\"issa El Bey
Neural Koopman prior for data assimilation
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
With the increasing availability of large scale datasets, computational power and tools like automatic differentiation and expressive neural network architectures, sequential data are now often treated in a data-driven way, with a dynamical model trained from the observation data. While neural networks are often seen as uninterpretable black-box architectures, they can still benefit from physical priors on the data and from mathematical knowledge. In this paper, we use a neural network architecture which leverages the long-known Koopman operator theory to embed dynamical systems in latent spaces where their dynamics can be described linearly, enabling a number of appealing features. We introduce methods that make it possible to train such a model for long-term continuous reconstruction, even in difficult contexts where the data comes in irregularly-sampled time series. The potential for self-supervised learning is also demonstrated, as we show the promising use of trained dynamical models as priors for variational data assimilation techniques, with applications to, e.g., time series interpolation and forecasting.
[ { "created": "Mon, 11 Sep 2023 09:04:36 GMT", "version": "v1" }, { "created": "Wed, 6 Mar 2024 14:57:23 GMT", "version": "v2" }, { "created": "Fri, 21 Jun 2024 20:14:59 GMT", "version": "v3" } ]
2024-06-25
[ [ "Frion", "Anthony", "" ], [ "Drumetz", "Lucas", "" ], [ "Mura", "Mauro Dalla", "" ], [ "Tochon", "Guillaume", "" ], [ "Bey", "Abdeldjalil Aïssa El", "" ] ]
With the increasing availability of large scale datasets, computational power and tools like automatic differentiation and expressive neural network architectures, sequential data are now often treated in a data-driven way, with a dynamical model trained from the observation data. While neural networks are often seen as uninterpretable black-box architectures, they can still benefit from physical priors on the data and from mathematical knowledge. In this paper, we use a neural network architecture which leverages the long-known Koopman operator theory to embed dynamical systems in latent spaces where their dynamics can be described linearly, enabling a number of appealing features. We introduce methods that make it possible to train such a model for long-term continuous reconstruction, even in difficult contexts where the data comes in irregularly-sampled time series. The potential for self-supervised learning is also demonstrated, as we show the promising use of trained dynamical models as priors for variational data assimilation techniques, with applications to, e.g., time series interpolation and forecasting.
2401.06161
Pavel Novoa-Hern\'andez Dr.
Marcelino Cabrera and Carlos Cruz and Pavel Novoa-Hern\'andez and David A. Pelta and Jos\'e Luis Verdegay
Trustworthy human-centric based Automated Decision-Making Systems
16 pages, 1 Table
null
null
null
cs.CY cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Automated Decision-Making Systems (ADS) have become pervasive across various fields, activities, and occupations, to enhance performance. However, this widespread adoption introduces potential risks, including the misuse of ADS. Such misuse may manifest when ADS is employed in situations where it is unnecessary or when essential requirements, conditions, and terms are overlooked, leading to unintended consequences. This research paper presents a thorough examination of the implications, distinctions, and ethical considerations associated with digitalization, digital transformation, and the utilization of ADS in contemporary society and future contexts. Emphasis is placed on the imperative need for regulation, transparency, and ethical conduct in the deployment of ADS.
[ { "created": "Fri, 22 Dec 2023 11:02:57 GMT", "version": "v1" } ]
2024-01-15
[ [ "Cabrera", "Marcelino", "" ], [ "Cruz", "Carlos", "" ], [ "Novoa-Hernández", "Pavel", "" ], [ "Pelta", "David A.", "" ], [ "Verdegay", "José Luis", "" ] ]
Automated Decision-Making Systems (ADS) have become pervasive across various fields, activities, and occupations, to enhance performance. However, this widespread adoption introduces potential risks, including the misuse of ADS. Such misuse may manifest when ADS is employed in situations where it is unnecessary or when essential requirements, conditions, and terms are overlooked, leading to unintended consequences. This research paper presents a thorough examination of the implications, distinctions, and ethical considerations associated with digitalization, digital transformation, and the utilization of ADS in contemporary society and future contexts. Emphasis is placed on the imperative need for regulation, transparency, and ethical conduct in the deployment of ADS.
2212.03519
Bowen Xie
Bowen Xie, Yuxuan Sun, Sheng Zhou, Zhisheng Niu, Yang Xu, Jingran Chen, Deniz G\"und\"uz
MOB-FL: Mobility-Aware Federated Learning for Intelligent Connected Vehicles
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning (FL) is a promising approach to enable the future Internet of vehicles consisting of intelligent connected vehicles (ICVs) with powerful sensing, computing and communication capabilities. We consider a base station (BS) coordinating nearby ICVs to train a neural network in a collaborative yet distributed manner, in order to limit data traffic and privacy leakage. However, due to the mobility of vehicles, the connections between the BS and ICVs are short-lived, which affects the resource utilization of ICVs, and thus, the convergence speed of the training process. In this paper, we propose an accelerated FL-ICV framework, by optimizing the duration of each training round and the number of local iterations, for better convergence performance of FL. We propose a mobility-aware optimization algorithm called MOB-FL, which aims at maximizing the resource utilization of ICVs under short-lived wireless connections, so as to increase the convergence speed. Simulation results based on the beam selection and the trajectory prediction tasks verify the effectiveness of the proposed solution.
[ { "created": "Wed, 7 Dec 2022 08:53:53 GMT", "version": "v1" } ]
2022-12-08
[ [ "Xie", "Bowen", "" ], [ "Sun", "Yuxuan", "" ], [ "Zhou", "Sheng", "" ], [ "Niu", "Zhisheng", "" ], [ "Xu", "Yang", "" ], [ "Chen", "Jingran", "" ], [ "Gündüz", "Deniz", "" ] ]
Federated learning (FL) is a promising approach to enable the future Internet of vehicles consisting of intelligent connected vehicles (ICVs) with powerful sensing, computing and communication capabilities. We consider a base station (BS) coordinating nearby ICVs to train a neural network in a collaborative yet distributed manner, in order to limit data traffic and privacy leakage. However, due to the mobility of vehicles, the connections between the BS and ICVs are short-lived, which affects the resource utilization of ICVs, and thus, the convergence speed of the training process. In this paper, we propose an accelerated FL-ICV framework, by optimizing the duration of each training round and the number of local iterations, for better convergence performance of FL. We propose a mobility-aware optimization algorithm called MOB-FL, which aims at maximizing the resource utilization of ICVs under short-lived wireless connections, so as to increase the convergence speed. Simulation results based on the beam selection and the trajectory prediction tasks verify the effectiveness of the proposed solution.
2109.13860
Chengcheng Ye
Chengcheng Ye
Introduce the Result Into Self-Attention
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional self-attention mechanisms in convolutional networks tend to use only the output of the previous layer as input to the attention network, such as SENet, CBAM, etc. In this paper, we propose a new attention modification method that tries to get the output of the classification network in advance and use it as a part of the input of the attention network. We used the auxiliary classifier proposed in GoogLeNet to obtain the results in advance and pass them into attention networks. We added this mechanism to SE-ResNet for our experiments and achieved a classification accuracy improvement of up to 1.94% on CIFAR-100.
[ { "created": "Tue, 21 Sep 2021 02:16:00 GMT", "version": "v1" } ]
2021-09-29
[ [ "Ye", "Chengcheng", "" ] ]
Traditional self-attention mechanisms in convolutional networks tend to use only the output of the previous layer as input to the attention network, such as SENet, CBAM, etc. In this paper, we propose a new attention modification method that tries to get the output of the classification network in advance and use it as a part of the input of the attention network. We used the auxiliary classifier proposed in GoogLeNet to obtain the results in advance and pass them into attention networks. We added this mechanism to SE-ResNet for our experiments and achieved a classification accuracy improvement of up to 1.94% on CIFAR-100.
1812.11859
Eric Benhamou
Eric Benhamou, Jamal Atif, Rida Laraki
A discrete version of CMA-ES
11 pages
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern machine learning uses more and more advanced optimization techniques to find optimal hyperparameters. Whenever the objective function is non-convex, non-continuous and has potentially multiple local minima, standard gradient descent optimization methods fail. A last-resort and very different method is to assume that the optimum(s), not necessarily unique, is/are distributed according to a distribution and to iteratively adapt the distribution according to tested points. These strategies, which originated in the early 1960s under the name Evolution Strategies (ES), have culminated with CMA-ES (Covariance Matrix Adaptation ES). It relies on a multivariate normal distribution and is considered state of the art for general optimization programs. However, it is far from being optimal for discrete variables. In this paper, we extend the method to multivariate binomial correlated distributions. For such a distribution, we show that it shares similar features with the multivariate normal: independence and uncorrelatedness are equivalent, and correlation is efficiently modeled by interaction between different variables. We discuss this distribution in the framework of the exponential family. We prove that the model can estimate not only pairwise interactions between variables but is also capable of modeling higher-order interactions. This allows creating a version of CMA-ES that can accommodate discrete variables efficiently. We provide the corresponding algorithm and conclude.
[ { "created": "Thu, 27 Dec 2018 23:21:47 GMT", "version": "v1" }, { "created": "Mon, 11 Feb 2019 19:59:07 GMT", "version": "v2" } ]
2019-02-13
[ [ "Benhamou", "Eric", "" ], [ "Atif", "Jamal", "" ], [ "Laraki", "Rida", "" ] ]
Modern machine learning uses more and more advanced optimization techniques to find optimal hyperparameters. Whenever the objective function is non-convex, non-continuous and has potentially multiple local minima, standard gradient descent optimization methods fail. A last-resort and very different method is to assume that the optimum(s), not necessarily unique, is/are distributed according to a distribution and to iteratively adapt the distribution according to tested points. These strategies, which originated in the early 1960s under the name Evolution Strategies (ES), have culminated with CMA-ES (Covariance Matrix Adaptation ES). It relies on a multivariate normal distribution and is considered state of the art for general optimization programs. However, it is far from being optimal for discrete variables. In this paper, we extend the method to multivariate binomial correlated distributions. For such a distribution, we show that it shares similar features with the multivariate normal: independence and uncorrelatedness are equivalent, and correlation is efficiently modeled by interaction between different variables. We discuss this distribution in the framework of the exponential family. We prove that the model can estimate not only pairwise interactions between variables but is also capable of modeling higher-order interactions. This allows creating a version of CMA-ES that can accommodate discrete variables efficiently. We provide the corresponding algorithm and conclude.
1510.03560
Julien Duchateau
Julien Duchateau, Fran\c{c}ois Rousselle, Nicolas Maquignon, Gilles Roussel, Christophe Renaud
A progressive mesh method for physical simulations using lattice Boltzmann method on single-node multi-gpu architectures
15 pages, International Journal of Distributed and Parallel Systems (IJDPS) Vol.6, No.5, September 2015
null
10.5121/ijdps.2015.6501
null
cs.DC physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a new progressive mesh algorithm is introduced in order to perform fast physical simulations using a lattice Boltzmann method (LBM) on a single-node multi-GPU architecture. This algorithm is able to mesh the simulation domain automatically according to the propagation of fluids. This method can also be useful for performing various types of simulations on complex geometries. The use of this algorithm combined with the massive parallelism of GPUs allows obtaining very good performance in comparison with the static mesh method used in the literature. Several simulations are shown in order to evaluate the algorithm.
[ { "created": "Tue, 13 Oct 2015 07:32:24 GMT", "version": "v1" } ]
2015-10-14
[ [ "Duchateau", "Julien", "" ], [ "Rousselle", "François", "" ], [ "Maquignon", "Nicolas", "" ], [ "Roussel", "Gilles", "" ], [ "Renaud", "Christophe", "" ] ]
In this paper, a new progressive mesh algorithm is introduced in order to perform fast physical simulations using a lattice Boltzmann method (LBM) on a single-node multi-GPU architecture. This algorithm is able to mesh the simulation domain automatically according to the propagation of fluids. This method can also be useful for performing various types of simulations on complex geometries. The use of this algorithm combined with the massive parallelism of GPUs allows obtaining very good performance in comparison with the static mesh method used in the literature. Several simulations are shown in order to evaluate the algorithm.
2305.12559
Zsolt Pocze
Zsolt Pocze
Multi-scale information content measurement method based on Shannon information
12 pages, 5 figures, 4 tables
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
In this paper, we present a new multi-scale information content calculation method based on Shannon information (and Shannon entropy). The original method described by Claude E. Shannon and based on the logarithm of the probability of elements gives an upper limit to the information content of discrete patterns, but in many cases (for example, in the case of repeating patterns) it is inaccurate and does not approximate the true information content of the pattern well enough. The new mathematical method presented here provides a more accurate estimate of the (internal) information content of any discrete pattern based on Shannon's original function. The method is tested on different data sets and the results are compared with the results of other methods like compression algorithms.
[ { "created": "Sun, 21 May 2023 20:14:03 GMT", "version": "v1" } ]
2023-05-23
[ [ "Pocze", "Zsolt", "" ] ]
In this paper, we present a new multi-scale information content calculation method based on Shannon information (and Shannon entropy). The original method described by Claude E. Shannon and based on the logarithm of the probability of elements gives an upper limit to the information content of discrete patterns, but in many cases (for example, in the case of repeating patterns) it is inaccurate and does not approximate the true information content of the pattern well enough. The new mathematical method presented here provides a more accurate estimate of the (internal) information content of any discrete pattern based on Shannon's original function. The method is tested on different data sets and the results are compared with the results of other methods like compression algorithms.
2103.07054
Wei Chen
Wei Chen, Xi Jia, Hyung Jin Chang, Jinming Duan, Linlin Shen, Ales Leonardis
FS-Net: Fast Shape-based Network for Category-Level 6D Object Pose Estimation with Decoupled Rotation Mechanism
accepted by CVPR2021, oral
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we focus on category-level 6D pose and size estimation from monocular RGB-D image. Previous methods suffer from inefficient category-level pose feature extraction which leads to low accuracy and inference speed. To tackle this problem, we propose a fast shape-based network (FS-Net) with efficient category-level feature extraction for 6D pose estimation. First, we design an orientation aware autoencoder with 3D graph convolution for latent feature extraction. The learned latent feature is insensitive to point shift and object size thanks to the shift and scale-invariance properties of the 3D graph convolution. Then, to efficiently decode category-level rotation information from the latent feature, we propose a novel decoupled rotation mechanism that employs two decoders to complementarily access the rotation information. Meanwhile, we estimate translation and size by two residuals, which are the difference between the mean of object points and ground truth translation, and the difference between the mean size of the category and ground truth size, respectively. Finally, to increase the generalization ability of FS-Net, we propose an online box-cage based 3D deformation mechanism to augment the training data. Extensive experiments on two benchmark datasets show that the proposed method achieves state-of-the-art performance in both category- and instance-level 6D object pose estimation. Especially in category-level pose estimation, without extra synthetic data, our method outperforms existing methods by 6.3% on the NOCS-REAL dataset.
[ { "created": "Fri, 12 Mar 2021 03:07:24 GMT", "version": "v1" }, { "created": "Sun, 6 Jun 2021 09:50:51 GMT", "version": "v2" } ]
2021-06-08
[ [ "Chen", "Wei", "" ], [ "Jia", "Xi", "" ], [ "Chang", "Hyung Jin", "" ], [ "Duan", "Jinming", "" ], [ "Shen", "Linlin", "" ], [ "Leonardis", "Ales", "" ] ]
In this paper, we focus on category-level 6D pose and size estimation from a monocular RGB-D image. Previous methods suffer from inefficient category-level pose feature extraction, which leads to low accuracy and inference speed. To tackle this problem, we propose a fast shape-based network (FS-Net) with efficient category-level feature extraction for 6D pose estimation. First, we design an orientation-aware autoencoder with 3D graph convolution for latent feature extraction. The learned latent feature is insensitive to point shift and object size thanks to the shift and scale-invariance properties of the 3D graph convolution. Then, to efficiently decode category-level rotation information from the latent feature, we propose a novel decoupled rotation mechanism that employs two decoders to complementarily access the rotation information. Meanwhile, we estimate translation and size by two residuals, which are the difference between the mean of object points and ground truth translation, and the difference between the mean size of the category and ground truth size, respectively. Finally, to increase the generalization ability of FS-Net, we propose an online box-cage based 3D deformation mechanism to augment the training data. Extensive experiments on two benchmark datasets show that the proposed method achieves state-of-the-art performance in both category- and instance-level 6D object pose estimation. Especially in category-level pose estimation, without extra synthetic data, our method outperforms existing methods by 6.3% on the NOCS-REAL dataset.
2403.13658
Mohammod Suvon
Mohammod N. I. Suvon, Prasun C. Tripathi, Wenrui Fan, Shuo Zhou, Xianyuan Liu, Samer Alabed, Venet Osmani, Andrew J. Swift, Chen Chen, Haiping Lu
Multimodal Variational Autoencoder for Low-cost Cardiac Hemodynamics Instability Detection
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advancements in non-invasive detection of cardiac hemodynamic instability (CHDI) primarily focus on applying machine learning techniques to a single data modality, e.g. cardiac magnetic resonance imaging (MRI). Despite their potential, these approaches often fall short, especially when the size of labeled patient data is limited, a common challenge in the medical domain. Furthermore, only a few studies have explored multimodal methods to study CHDI, which mostly rely on costly modalities such as cardiac MRI and echocardiogram. In response to these limitations, we propose a novel multimodal variational autoencoder ($\text{CardioVAE}_\text{X,G}$) to integrate low-cost chest X-ray (CXR) and electrocardiogram (ECG) modalities with pre-training on a large unlabeled dataset. Specifically, $\text{CardioVAE}_\text{X,G}$ introduces a novel tri-stream pre-training strategy to learn both shared and modality-specific features, thus enabling fine-tuning with both unimodal and multimodal datasets. We pre-train $\text{CardioVAE}_\text{X,G}$ on a large, unlabeled dataset of $50,982$ subjects from a subset of the MIMIC database and then fine-tune the pre-trained model on a labeled dataset of $795$ subjects from the ASPIRE registry. Comprehensive evaluations against existing methods show that $\text{CardioVAE}_\text{X,G}$ offers promising performance (AUROC $=0.79$ and Accuracy $=0.77$), representing a significant step forward in non-invasive prediction of CHDI. Our model also excels in producing fine interpretations of predictions directly associated with clinical features, thereby supporting clinical decision-making.
[ { "created": "Wed, 20 Mar 2024 15:06:49 GMT", "version": "v1" }, { "created": "Thu, 20 Jun 2024 15:07:51 GMT", "version": "v2" }, { "created": "Fri, 5 Jul 2024 15:42:25 GMT", "version": "v3" } ]
2024-07-08
[ [ "Suvon", "Mohammod N. I.", "" ], [ "Tripathi", "Prasun C.", "" ], [ "Fan", "Wenrui", "" ], [ "Zhou", "Shuo", "" ], [ "Liu", "Xianyuan", "" ], [ "Alabed", "Samer", "" ], [ "Osmani", "Venet", "" ], [ "Swift", "Andrew J.", "" ], [ "Chen", "Chen", "" ], [ "Lu", "Haiping", "" ] ]
Recent advancements in non-invasive detection of cardiac hemodynamic instability (CHDI) primarily focus on applying machine learning techniques to a single data modality, e.g. cardiac magnetic resonance imaging (MRI). Despite their potential, these approaches often fall short, especially when the size of labeled patient data is limited, a common challenge in the medical domain. Furthermore, only a few studies have explored multimodal methods to study CHDI, which mostly rely on costly modalities such as cardiac MRI and echocardiogram. In response to these limitations, we propose a novel multimodal variational autoencoder ($\text{CardioVAE}_\text{X,G}$) to integrate low-cost chest X-ray (CXR) and electrocardiogram (ECG) modalities with pre-training on a large unlabeled dataset. Specifically, $\text{CardioVAE}_\text{X,G}$ introduces a novel tri-stream pre-training strategy to learn both shared and modality-specific features, thus enabling fine-tuning with both unimodal and multimodal datasets. We pre-train $\text{CardioVAE}_\text{X,G}$ on a large, unlabeled dataset of $50,982$ subjects from a subset of the MIMIC database and then fine-tune the pre-trained model on a labeled dataset of $795$ subjects from the ASPIRE registry. Comprehensive evaluations against existing methods show that $\text{CardioVAE}_\text{X,G}$ offers promising performance (AUROC $=0.79$ and Accuracy $=0.77$), representing a significant step forward in non-invasive prediction of CHDI. Our model also excels in producing fine interpretations of predictions directly associated with clinical features, thereby supporting clinical decision-making.
2012.08761
Satoshi Sunada
Genki Furuhata, Tomoaki Niiyama, and Satoshi Sunada
Physical deep learning based on optimal control of dynamical systems
11 pages, 9 figures
Phys. Rev. Applied 15, 034092 (2021)
10.1103/PhysRevApplied.15.034092
null
cs.NE cs.ET physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning is the backbone of artificial intelligence technologies, and it can be regarded as a kind of multilayer feedforward neural network. An essence of deep learning is information propagation through layers. This suggests that there is a connection between deep neural networks and dynamical systems in the sense that information propagation is explicitly modeled by the time-evolution of dynamical systems. In this study, we perform pattern recognition based on the optimal control of continuous-time dynamical systems, which is suitable for physical hardware implementation. The learning is based on the adjoint method to optimally control dynamical systems, and the deep (virtual) network structures based on the time evolution of the systems are used for processing input information. As a key example, we apply the dynamics-based recognition approach to an optoelectronic delay system and demonstrate that the use of the delay system allows for image recognition and nonlinear classifications using only a few control signals. This is in contrast to conventional multilayer neural networks, which require a large number of weight parameters to be trained. The proposed approach provides insight into the mechanisms of deep network processing in the framework of an optimal control problem and presents a pathway for realizing physical computing hardware.
[ { "created": "Wed, 16 Dec 2020 06:38:01 GMT", "version": "v1" }, { "created": "Thu, 1 Apr 2021 06:43:47 GMT", "version": "v2" } ]
2021-04-02
[ [ "Furuhata", "Genki", "" ], [ "Niiyama", "Tomoaki", "" ], [ "Sunada", "Satoshi", "" ] ]
Deep learning is the backbone of artificial intelligence technologies, and it can be regarded as a kind of multilayer feedforward neural network. An essence of deep learning is information propagation through layers. This suggests that there is a connection between deep neural networks and dynamical systems in the sense that information propagation is explicitly modeled by the time-evolution of dynamical systems. In this study, we perform pattern recognition based on the optimal control of continuous-time dynamical systems, which is suitable for physical hardware implementation. The learning is based on the adjoint method to optimally control dynamical systems, and the deep (virtual) network structures based on the time evolution of the systems are used for processing input information. As a key example, we apply the dynamics-based recognition approach to an optoelectronic delay system and demonstrate that the use of the delay system allows for image recognition and nonlinear classifications using only a few control signals. This is in contrast to conventional multilayer neural networks, which require a large number of weight parameters to be trained. The proposed approach provides insight into the mechanisms of deep network processing in the framework of an optimal control problem and presents a pathway for realizing physical computing hardware.
2105.15114
Spyridon Doukakis
Maria Chionidou-Moskofoglou, Spyridon Doukakis, Amalia Lappa
The use of e-portfolios in teaching and assessment
8 pages
Proceedings of the 7th International Conference on Technology in Mathematics Teaching, pp. 224-232, 2005
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
In this paper, we will initially go through the results of assessment in mathematics according to the international assessment programs PISA and TIMSS (2003), with respect to students' portfolios. Furthermore, we will present the forms and the ways of assessment and will focus on the assessment that refers to the use of e-portfolios.
[ { "created": "Fri, 21 May 2021 05:53:11 GMT", "version": "v1" } ]
2021-06-01
[ [ "Chionidou-Moskofoglou", "Maria", "" ], [ "Doukakis", "Spyridon", "" ], [ "Lappa", "Amalia", "" ] ]
In this paper, we will initially go through the results of assessment in mathematics according to the international assessment programs PISA and TIMSS (2003), with respect to students' portfolios. Furthermore, we will present the forms and the ways of assessment and will focus on the assessment that refers to the use of e-portfolios.
1802.07117
Ana Paula Appel
Ana Paula Appel, Paulo Rodrigo Cavalin, Marisa Affonso Vasconcelos, Claudio Santos Pinhanez
Combining Textual Content and Structure to Improve Dialog Similarity
5 pages
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chatbots, taking advantage of the success of messaging apps and recent advances in Artificial Intelligence, have become very popular, from helping businesses improve customer service to chatting with users for the sake of conversation and engagement (celebrity or personal bots). However, developing and improving a chatbot requires understanding the data generated by its users. Dialog data has a different nature from a simple question-and-answer interaction, in which context and temporal properties (turn order) create a different understanding of such data. In this paper, we propose a novel metric to compute dialogs' similarity based not only on the text content but also on the information related to the dialog structure. Our experimental results performed over the Switchboard dataset show that using evidence from both textual content and the dialog structure leads to more accurate results than using each measure in isolation.
[ { "created": "Tue, 20 Feb 2018 14:05:48 GMT", "version": "v1" } ]
2018-02-21
[ [ "Appel", "Ana Paula", "" ], [ "Cavalin", "Paulo Rodrigo", "" ], [ "Vasconcelos", "Marisa Affonso", "" ], [ "Pinhanez", "Claudio Santos", "" ] ]
Chatbots, taking advantage of the success of messaging apps and recent advances in Artificial Intelligence, have become very popular, from helping businesses improve customer service to chatting with users for the sake of conversation and engagement (celebrity or personal bots). However, developing and improving a chatbot requires understanding the data generated by its users. Dialog data has a different nature from a simple question-and-answer interaction, in which context and temporal properties (turn order) create a different understanding of such data. In this paper, we propose a novel metric to compute dialogs' similarity based not only on the text content but also on the information related to the dialog structure. Our experimental results performed over the Switchboard dataset show that using evidence from both textual content and the dialog structure leads to more accurate results than using each measure in isolation.
2208.03799
Martin Nisser
Martin Nisser and Yashaswini Makaram and Faraz Faruqi and Ryo Suzuki and Stefanie Mueller
Selective Self-Assembly using Re-Programmable Magnetic Pixels
2022 IEEE International Conference on Intelligent Robots and Systems (IROS)
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
This paper introduces a method to generate highly selective encodings that can be magnetically "programmed" onto physical modules to enable them to self-assemble in chosen configurations. We generate these encodings based on Hadamard matrices, and show how to design the faces of modules to be maximally attractive to their intended mate, while remaining maximally agnostic to other faces. We derive guarantees on these bounds, and verify their attraction and agnosticism experimentally. Using cubic modules whose faces have been covered in soft magnetic material, we show how inexpensive, passive modules with planar faces can be used to selectively self-assemble into target shapes without geometric guides. We show that these modules can be easily re-programmed for new target shapes using a CNC-based magnetic plotter, and demonstrate self-assembly of 8 cubes in a water tank.
[ { "created": "Sun, 7 Aug 2022 20:18:12 GMT", "version": "v1" } ]
2022-08-09
[ [ "Nisser", "Martin", "" ], [ "Makaram", "Yashaswini", "" ], [ "Faruqi", "Faraz", "" ], [ "Suzuki", "Ryo", "" ], [ "Mueller", "Stefanie", "" ] ]
This paper introduces a method to generate highly selective encodings that can be magnetically "programmed" onto physical modules to enable them to self-assemble in chosen configurations. We generate these encodings based on Hadamard matrices, and show how to design the faces of modules to be maximally attractive to their intended mate, while remaining maximally agnostic to other faces. We derive guarantees on these bounds, and verify their attraction and agnosticism experimentally. Using cubic modules whose faces have been covered in soft magnetic material, we show how inexpensive, passive modules with planar faces can be used to selectively self-assemble into target shapes without geometric guides. We show that these modules can be easily re-programmed for new target shapes using a CNC-based magnetic plotter, and demonstrate self-assembly of 8 cubes in a water tank.
cs/0112024
Thomas Schmidt
B. Feustel, T.C. Schmidt
Media Objects in Time - A Multimedia Streaming System
9 pdf pages
Computer Networks 37,6 (2001), pp. 729 - 737
null
null
cs.NI cs.MM
null
The widespread availability of networked multimedia potentials embedded in an infrastructure of qualitatively superior kind gives rise to new approaches in the areas of teleteaching and internet presentation: the distribution of professionally styled multimedia streams has come within the realm of possibility. This paper presents a prototype - both model and runtime environment - of a time-directed media system treating any kind of presentational contribution as reusable media object components. The plug-in-free runtime system is based on a database and allows for flexible support of static media types as well as for easy extensions by streaming media servers. The prototypic implementation includes a preliminary Web Authoring platform.
[ { "created": "Fri, 28 Dec 2001 20:19:09 GMT", "version": "v1" } ]
2007-05-23
[ [ "Feustel", "B.", "" ], [ "Schmidt", "T. C.", "" ] ]
The widespread availability of networked multimedia potentials embedded in an infrastructure of qualitatively superior kind gives rise to new approaches in the areas of teleteaching and internet presentation: the distribution of professionally styled multimedia streams has come within the realm of possibility. This paper presents a prototype - both model and runtime environment - of a time-directed media system treating any kind of presentational contribution as reusable media object components. The plug-in-free runtime system is based on a database and allows for flexible support of static media types as well as for easy extensions by streaming media servers. The prototypic implementation includes a preliminary Web Authoring platform.
2406.07141
Avinash Kori
Avinash Kori, Francesco Locatello, Ainkaran Santhirasekaram, Francesca Toni, Ben Glocker, Fabio De Sousa Ribeiro
Identifiable Object-Centric Representation Learning via Probabilistic Slot Attention
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Learning modular object-centric representations is crucial for systematic generalization. Existing methods show promising object-binding capabilities empirically, but theoretical identifiability guarantees remain relatively underdeveloped. Understanding when object-centric representations can theoretically be identified is crucial for scaling slot-based methods to high-dimensional images with correctness guarantees. To that end, we propose a probabilistic slot-attention algorithm that imposes an aggregate mixture prior over object-centric slot representations, thereby providing slot identifiability guarantees without supervision, up to an equivalence relation. We provide empirical verification of our theoretical identifiability result using both simple 2-dimensional data and high-resolution imaging datasets.
[ { "created": "Tue, 11 Jun 2024 10:40:54 GMT", "version": "v1" } ]
2024-06-12
[ [ "Kori", "Avinash", "" ], [ "Locatello", "Francesco", "" ], [ "Santhirasekaram", "Ainkaran", "" ], [ "Toni", "Francesca", "" ], [ "Glocker", "Ben", "" ], [ "Ribeiro", "Fabio De Sousa", "" ] ]
Learning modular object-centric representations is crucial for systematic generalization. Existing methods show promising object-binding capabilities empirically, but theoretical identifiability guarantees remain relatively underdeveloped. Understanding when object-centric representations can theoretically be identified is crucial for scaling slot-based methods to high-dimensional images with correctness guarantees. To that end, we propose a probabilistic slot-attention algorithm that imposes an aggregate mixture prior over object-centric slot representations, thereby providing slot identifiability guarantees without supervision, up to an equivalence relation. We provide empirical verification of our theoretical identifiability result using both simple 2-dimensional data and high-resolution imaging datasets.
2406.02826
Yu-Wen Chen
Yu-Wen Chen, Julia Hirschberg
Exploring Robustness in Doctor-Patient Conversation Summarization: An Analysis of Out-of-Domain SOAP Notes
Clinical NLP Workshop 2024
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Summarizing medical conversations poses unique challenges due to the specialized domain and the difficulty of collecting in-domain training data. In this study, we investigate the performance of state-of-the-art doctor-patient conversation generative summarization models on out-of-domain data. We divide the summarization model of doctor-patient conversation into two configurations: (1) a general model, without specifying subjective (S), objective (O), assessment (A), and plan (P) notes; (2) a SOAP-oriented model that generates a summary with SOAP sections. We analyzed the limitations and strengths of fine-tuned language model-based methods and GPTs on both configurations. We also conducted a Linguistic Inquiry and Word Count analysis to compare the SOAP notes from different datasets. The results exhibit a strong correlation for reference notes across different datasets, indicating that format mismatch (i.e., discrepancies in word distribution) is not the main cause of performance decline on out-of-domain data. Lastly, a detailed analysis of SOAP notes is included to provide insights into missing information and hallucinations introduced by the models.
[ { "created": "Wed, 5 Jun 2024 00:11:20 GMT", "version": "v1" } ]
2024-06-06
[ [ "Chen", "Yu-Wen", "" ], [ "Hirschberg", "Julia", "" ] ]
Summarizing medical conversations poses unique challenges due to the specialized domain and the difficulty of collecting in-domain training data. In this study, we investigate the performance of state-of-the-art doctor-patient conversation generative summarization models on out-of-domain data. We divide the summarization model of doctor-patient conversation into two configurations: (1) a general model, without specifying subjective (S), objective (O), assessment (A), and plan (P) notes; (2) a SOAP-oriented model that generates a summary with SOAP sections. We analyzed the limitations and strengths of fine-tuned language model-based methods and GPTs on both configurations. We also conducted a Linguistic Inquiry and Word Count analysis to compare the SOAP notes from different datasets. The results exhibit a strong correlation for reference notes across different datasets, indicating that format mismatch (i.e., discrepancies in word distribution) is not the main cause of performance decline on out-of-domain data. Lastly, a detailed analysis of SOAP notes is included to provide insights into missing information and hallucinations introduced by the models.
2209.09341
Nermin Samet
Georgy Ponimatkin, Nermin Samet, Yang Xiao, Yuming Du, Renaud Marlet, Vincent Lepetit
A Simple and Powerful Global Optimization for Unsupervised Video Object Segmentation
Accepted to the IEEE Winter Conference on Applications of Computer Vision (WACV) 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a simple, yet powerful approach for unsupervised object segmentation in videos. We introduce an objective function whose minimum represents the mask of the main salient object over the input sequence. It only relies on independent image features and optical flows, which can be obtained using off-the-shelf self-supervised methods. It scales with the length of the sequence with no need for superpixels or sparsification, and it generalizes to different datasets without any specific training. This objective function can actually be derived from a form of spectral clustering applied to the entire video. Our method achieves on-par performance with the state of the art on standard benchmarks (DAVIS2016, SegTrack-v2, FBMS59), while being conceptually and practically much simpler. Code is available at https://ponimatkin.github.io/ssl-vos.
[ { "created": "Mon, 19 Sep 2022 20:41:26 GMT", "version": "v1" }, { "created": "Wed, 19 Oct 2022 14:45:08 GMT", "version": "v2" } ]
2022-10-20
[ [ "Ponimatkin", "Georgy", "" ], [ "Samet", "Nermin", "" ], [ "Xiao", "Yang", "" ], [ "Du", "Yuming", "" ], [ "Marlet", "Renaud", "" ], [ "Lepetit", "Vincent", "" ] ]
We propose a simple, yet powerful approach for unsupervised object segmentation in videos. We introduce an objective function whose minimum represents the mask of the main salient object over the input sequence. It only relies on independent image features and optical flows, which can be obtained using off-the-shelf self-supervised methods. It scales with the length of the sequence with no need for superpixels or sparsification, and it generalizes to different datasets without any specific training. This objective function can actually be derived from a form of spectral clustering applied to the entire video. Our method achieves on-par performance with the state of the art on standard benchmarks (DAVIS2016, SegTrack-v2, FBMS59), while being conceptually and practically much simpler. Code is available at https://ponimatkin.github.io/ssl-vos.
1409.2073
Tobias Kortkamp
Tobias Kortkamp
An NLP Assistant for Clide
Bachelor Report
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/3.0/
This report describes an NLP assistant for the collaborative development environment Clide that supports the development of NLP applications by providing easy access to some common NLP data structures. The assistant visualizes text fragments and their dependencies by displaying the semantic graph of a sentence, the coreference chain of a paragraph, and mined triples that are extracted from a paragraph's semantic graphs and linked using its coreference chain. Using this information and a logic programming library, we create an NLP database, which is used by a series of queries to mine the triples. The algorithm is tested by translating a natural language text describing a graph to an actual graph that is shown as an annotation in the text editor.
[ { "created": "Sun, 7 Sep 2014 02:31:03 GMT", "version": "v1" } ]
2014-09-09
[ [ "Kortkamp", "Tobias", "" ] ]
This report describes an NLP assistant for the collaborative development environment Clide that supports the development of NLP applications by providing easy access to some common NLP data structures. The assistant visualizes text fragments and their dependencies by displaying the semantic graph of a sentence, the coreference chain of a paragraph, and mined triples that are extracted from a paragraph's semantic graphs and linked using its coreference chain. Using this information and a logic programming library, we create an NLP database, which is used by a series of queries to mine the triples. The algorithm is tested by translating a natural language text describing a graph to an actual graph that is shown as an annotation in the text editor.
1304.3111
Randall Smith
Randall Smith, Matthew Self, Peter Cheeseman
Estimating Uncertain Spatial Relationships in Robotics
Appears in Proceedings of the Second Conference on Uncertainty in Artificial Intelligence (UAI1986)
null
null
UAI-P-1986-PG-267-288
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we describe a representation for spatial information, called the stochastic map, and associated procedures for building it, reading information from it, and revising it incrementally as new information is obtained. The map contains the estimates of relationships among objects in the map, and their uncertainties, given all the available information. The procedures provide a general solution to the problem of estimating uncertain relative spatial relationships. The estimates are probabilistic in nature, an advance over the previous, very conservative, worst-case approaches to the problem. Finally, the procedures are developed in the context of state-estimation and filtering theory, which provides a solid basis for numerous extensions.
[ { "created": "Wed, 27 Mar 2013 19:54:21 GMT", "version": "v1" } ]
2013-04-12
[ [ "Smith", "Randall", "" ], [ "Self", "Matthew", "" ], [ "Cheeseman", "Peter", "" ] ]
In this paper, we describe a representation for spatial information, called the stochastic map, and associated procedures for building it, reading information from it, and revising it incrementally as new information is obtained. The map contains the estimates of relationships among objects in the map, and their uncertainties, given all the available information. The procedures provide a general solution to the problem of estimating uncertain relative spatial relationships. The estimates are probabilistic in nature, an advance over the previous, very conservative, worst-case approaches to the problem. Finally, the procedures are developed in the context of state-estimation and filtering theory, which provides a solid basis for numerous extensions.
0908.4413
Laurent Romary
Patrice Lopez (IDSL), Laurent Romary (IDSL, INRIA Saclay - Ile de France)
Multiple Retrieval Models and Regression Models for Prior Art Search
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents the system called PATATRAS (PATent and Article Tracking, Retrieval and AnalysiS) realized for the IP track of CLEF 2009. Our approach presents three main characteristics: 1. The usage of multiple retrieval models (KL, Okapi) and term index definitions (lemma, phrase, concept) for the three languages considered in the present track (English, French, German) producing ten different sets of ranked results. 2. The merging of the different results based on multiple regression models using an additional validation set created from the patent collection. 3. The exploitation of patent metadata and of the citation structures for creating restricted initial working sets of patents and for producing a final re-ranking regression model. As we exploit specific metadata of the patent documents and the citation relations only at the creation of initial working sets and during the final post ranking step, our architecture remains generic and easy to extend.
[ { "created": "Sun, 30 Aug 2009 18:50:19 GMT", "version": "v1" } ]
2009-09-01
[ [ "Lopez", "Patrice", "", "IDSL" ], [ "Romary", "Laurent", "", "IDSL, INRIA Saclay - Ile de\n France" ] ]
This paper presents the system called PATATRAS (PATent and Article Tracking, Retrieval and AnalysiS) realized for the IP track of CLEF 2009. Our approach presents three main characteristics: 1. The usage of multiple retrieval models (KL, Okapi) and term index definitions (lemma, phrase, concept) for the three languages considered in the present track (English, French, German) producing ten different sets of ranked results. 2. The merging of the different results based on multiple regression models using an additional validation set created from the patent collection. 3. The exploitation of patent metadata and of the citation structures for creating restricted initial working sets of patents and for producing a final re-ranking regression model. As we exploit specific metadata of the patent documents and the citation relations only at the creation of initial working sets and during the final post ranking step, our architecture remains generic and easy to extend.
1711.01243
Mohammad Ghasemzadeh
Mohammad Ghasemzadeh, Mohammad Samragh, Farinaz Koushanfar
ReBNet: Residual Binarized Neural Network
To Appear In The 26th IEEE International Symposium on Field-Programmable Custom Computing Machines
null
null
null
cs.LG cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes ReBNet, an end-to-end framework for training reconfigurable binary neural networks in software and developing efficient accelerators for execution on FPGAs. Binary neural networks offer an intriguing opportunity for deploying large-scale deep learning models on resource-constrained devices. Binarization reduces the memory footprint and replaces the power-hungry matrix multiplication with light-weight XnorPopcount operations. However, binary networks suffer from degraded accuracy compared to their fixed-point counterparts. We show that state-of-the-art methods for optimizing binary network accuracy significantly increase the implementation cost and complexity. To compensate for the degraded accuracy while adhering to the simplicity of binary networks, we devise the first reconfigurable scheme that can adjust the classification accuracy based on the application. Our proposition improves the classification accuracy by representing features with multiple levels of residual binarization. Unlike previous methods, our approach does not exacerbate the area cost of the hardware accelerator. Instead, it provides a tradeoff between throughput and accuracy while the area overhead of multi-level binarization is negligible.
[ { "created": "Fri, 3 Nov 2017 17:12:15 GMT", "version": "v1" }, { "created": "Tue, 28 Nov 2017 00:45:57 GMT", "version": "v2" }, { "created": "Tue, 27 Mar 2018 20:58:01 GMT", "version": "v3" } ]
2018-03-29
[ [ "Ghasemzadeh", "Mohammad", "" ], [ "Samragh", "Mohammad", "" ], [ "Koushanfar", "Farinaz", "" ] ]
This paper proposes ReBNet, an end-to-end framework for training reconfigurable binary neural networks on software and developing efficient accelerators for execution on FPGA. Binary neural networks offer an intriguing opportunity for deploying large-scale deep learning models on resource-constrained devices. Binarization reduces the memory footprint and replaces the power-hungry matrix multiplication with lightweight XnorPopcount operations. However, binary networks suffer from degraded accuracy compared to their fixed-point counterparts. We show that the state-of-the-art methods for optimizing binary network accuracy significantly increase the implementation cost and complexity. To compensate for the degraded accuracy while adhering to the simplicity of binary networks, we devise the first reconfigurable scheme that can adjust the classification accuracy based on the application. Our proposition improves the classification accuracy by representing features with multiple levels of residual binarization. Unlike previous methods, our approach does not exacerbate the area cost of the hardware accelerator. Instead, it provides a tradeoff between throughput and accuracy while the area overhead of multi-level binarization is negligible.
2407.18981
Jan Clusmann
Jan Clusmann, Dyke Ferber, Isabella C. Wiest, Carolin V. Schneider, Titus J. Brinker, Sebastian Foersch, Daniel Truhn, Jakob N. Kather
Prompt Injection Attacks on Large Language Models in Oncology
57 Pages, 5 Figures
null
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by/4.0/
Vision-language artificial intelligence models (VLMs) possess medical knowledge and can be employed in healthcare in numerous ways, including as image interpreters, virtual scribes, and general decision support systems. However, here, we demonstrate that current VLMs applied to medical tasks exhibit a fundamental security flaw: they can be attacked by prompt injection attacks, which can be used to output harmful information just by interacting with the VLM, without any access to its parameters. We performed a quantitative study to evaluate the vulnerabilities to these attacks in four state-of-the-art VLMs which have been proposed to be of utility in healthcare: Claude 3 Opus, Claude 3.5 Sonnet, Reka Core, and GPT-4o. Using a set of N=297 attacks, we show that all of these models are susceptible. Specifically, we show that embedding sub-visual prompts in medical imaging data can cause the model to provide harmful output, and that these prompts are non-obvious to human observers. Thus, our study demonstrates a key vulnerability in medical VLMs which should be mitigated before widespread clinical adoption.
[ { "created": "Tue, 23 Jul 2024 15:29:57 GMT", "version": "v1" } ]
2024-07-30
[ [ "Clusmann", "Jan", "" ], [ "Ferber", "Dyke", "" ], [ "Wiest", "Isabella C.", "" ], [ "Schneider", "Carolin V.", "" ], [ "Brinker", "Titus J.", "" ], [ "Foersch", "Sebastian", "" ], [ "Truhn", "Daniel", "" ], [ "Kather", "Jakob N.", "" ] ]
Vision-language artificial intelligence models (VLMs) possess medical knowledge and can be employed in healthcare in numerous ways, including as image interpreters, virtual scribes, and general decision support systems. However, here, we demonstrate that current VLMs applied to medical tasks exhibit a fundamental security flaw: they can be attacked by prompt injection attacks, which can be used to output harmful information just by interacting with the VLM, without any access to its parameters. We performed a quantitative study to evaluate the vulnerabilities to these attacks in four state-of-the-art VLMs which have been proposed to be of utility in healthcare: Claude 3 Opus, Claude 3.5 Sonnet, Reka Core, and GPT-4o. Using a set of N=297 attacks, we show that all of these models are susceptible. Specifically, we show that embedding sub-visual prompts in medical imaging data can cause the model to provide harmful output, and that these prompts are non-obvious to human observers. Thus, our study demonstrates a key vulnerability in medical VLMs which should be mitigated before widespread clinical adoption.
0802.3441
Javier D. Garcia-Lasheras
Javier D. Garcia-Lasheras
Efficient implementation of GALS systems over commercial synchronous FPGAs: a new approach
English version of the paper presented in the Spanish Workshop on Reconfigurable Computing and Applications, Zaragoza (2007)
"Implementacion eficiente de sistemas GALS sobre FPGAs", Jornadas de Computacion Reconfigurable y Aplicaciones (JCRA'07), Zaragoza (2007)
null
null
cs.AR
null
The approach presented here aims to overcome the logic overhead issues that previous works exhibit when applying GALS techniques to programmable logic devices. The proposed scheme relies on a two-phase, bundled-data, parity-based protocol for data transfer and clock generation. The introduced methodology's capability for smart real-time delay selection allows the implementation of a variety of new techniques for electromagnetic interference mitigation and adaptation to changes in the device environment.
[ { "created": "Sat, 23 Feb 2008 13:11:13 GMT", "version": "v1" } ]
2008-02-26
[ [ "Garcia-Lasheras", "Javier D.", "" ] ]
The approach presented here aims to overcome the logic overhead issues that previous works exhibit when applying GALS techniques to programmable logic devices. The proposed scheme relies on a two-phase, bundled-data, parity-based protocol for data transfer and clock generation. The introduced methodology's capability for smart real-time delay selection allows the implementation of a variety of new techniques for electromagnetic interference mitigation and adaptation to changes in the device environment.
2211.01751
Pavel Andreev
Pavel Andreev, Nicholas Babaev, Azat Saginbaev, Ivan Shchekotov, Aibek Alanov
Iterative autoregression: a novel trick to improve your low-latency speech enhancement model
Accepted to Interspeech 2023
null
null
null
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Streaming models are an essential component of real-time speech enhancement tools. The streaming regime constrains speech enhancement models to use only a tiny context of future information. As a result, the low-latency streaming setup is generally considered a challenging task and has a significant negative impact on the model's quality. However, the sequential nature of streaming generation offers a natural possibility for autoregression, that is, utilizing previous predictions while making current ones. The conventional method for training autoregressive models is teacher forcing, but its primary drawback lies in the training-inference mismatch that can lead to a substantial degradation in quality. In this study, we propose a straightforward yet effective alternative technique for training autoregressive low-latency speech enhancement models. We demonstrate that the proposed approach leads to stable improvement across diverse architectures and training scenarios.
[ { "created": "Thu, 3 Nov 2022 12:32:33 GMT", "version": "v1" }, { "created": "Thu, 1 Jun 2023 15:50:00 GMT", "version": "v2" }, { "created": "Tue, 27 Jun 2023 13:54:39 GMT", "version": "v3" }, { "created": "Tue, 5 Dec 2023 11:36:32 GMT", "version": "v4" } ]
2023-12-06
[ [ "Andreev", "Pavel", "" ], [ "Babaev", "Nicholas", "" ], [ "Saginbaev", "Azat", "" ], [ "Shchekotov", "Ivan", "" ], [ "Alanov", "Aibek", "" ] ]
Streaming models are an essential component of real-time speech enhancement tools. The streaming regime constrains speech enhancement models to use only a tiny context of future information. As a result, the low-latency streaming setup is generally considered a challenging task and has a significant negative impact on the model's quality. However, the sequential nature of streaming generation offers a natural possibility for autoregression, that is, utilizing previous predictions while making current ones. The conventional method for training autoregressive models is teacher forcing, but its primary drawback lies in the training-inference mismatch that can lead to a substantial degradation in quality. In this study, we propose a straightforward yet effective alternative technique for training autoregressive low-latency speech enhancement models. We demonstrate that the proposed approach leads to stable improvement across diverse architectures and training scenarios.
2401.07314
Jiaqi Chen
Jiaqi Chen, Bingqian Lin, Ran Xu, Zhenhua Chai, Xiaodan Liang, Kwan-Yee K. Wong
MapGPT: Map-Guided Prompting with Adaptive Path Planning for Vision-and-Language Navigation
LLM/VLM-based VLN Agents. Accepted to ACL 2024. Project: https://chen-judge.github.io/MapGPT/
null
null
null
cs.AI cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Embodied agents equipped with GPT as their brains have exhibited extraordinary decision-making and generalization abilities across various tasks. However, existing zero-shot agents for vision-and-language navigation (VLN) only prompt GPT-4 to select potential locations within localized environments, without constructing an effective "global-view" for the agent to understand the overall environment. In this work, we present a novel map-guided GPT-based agent, dubbed MapGPT, which introduces an online linguistic-formed map to encourage global exploration. Specifically, we build an online map and incorporate it into the prompts that include node information and topological relationships, to help GPT understand the spatial environment. Benefiting from this design, we further propose an adaptive planning mechanism to assist the agent in performing multi-step path planning based on a map, systematically exploring multiple candidate nodes or sub-goals step by step. Extensive experiments demonstrate that our MapGPT is applicable to both GPT-4 and GPT-4V, achieving state-of-the-art zero-shot performance on R2R and REVERIE simultaneously (~10% and ~12% improvements in SR), and showcasing the newly emergent global thinking and path planning abilities of the GPT.
[ { "created": "Sun, 14 Jan 2024 15:34:48 GMT", "version": "v1" }, { "created": "Sun, 25 Feb 2024 14:39:48 GMT", "version": "v2" }, { "created": "Thu, 20 Jun 2024 07:23:45 GMT", "version": "v3" } ]
2024-06-21
[ [ "Chen", "Jiaqi", "" ], [ "Lin", "Bingqian", "" ], [ "Xu", "Ran", "" ], [ "Chai", "Zhenhua", "" ], [ "Liang", "Xiaodan", "" ], [ "Wong", "Kwan-Yee K.", "" ] ]
Embodied agents equipped with GPT as their brains have exhibited extraordinary decision-making and generalization abilities across various tasks. However, existing zero-shot agents for vision-and-language navigation (VLN) only prompt GPT-4 to select potential locations within localized environments, without constructing an effective "global-view" for the agent to understand the overall environment. In this work, we present a novel map-guided GPT-based agent, dubbed MapGPT, which introduces an online linguistic-formed map to encourage global exploration. Specifically, we build an online map and incorporate it into the prompts that include node information and topological relationships, to help GPT understand the spatial environment. Benefiting from this design, we further propose an adaptive planning mechanism to assist the agent in performing multi-step path planning based on a map, systematically exploring multiple candidate nodes or sub-goals step by step. Extensive experiments demonstrate that our MapGPT is applicable to both GPT-4 and GPT-4V, achieving state-of-the-art zero-shot performance on R2R and REVERIE simultaneously (~10% and ~12% improvements in SR), and showcasing the newly emergent global thinking and path planning abilities of the GPT.
1406.3693
Partha Pratim Ray
Partha Pratim Ray
Channel Modeling of Human Somatosensory Nanonetwork: Body Discriminative Touch and Proprioception Perspective
11 pages, 6 figures
International Journal on Computer Science and Engineering, Vol. 5 No. 10, pp. 874-884, Oct 2013
null
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nanonetwork design and analysis has become a very interesting topic in recent years. Though this area of research is in its formative stage, it shows strong potential for numerous applications in medical and allied sciences. Nanonetworking is indeed a nature-built foundation that comprises human intra-body communications. The somatosensory system is one of the critical, indispensable systems of the human body. This work concentrates on the discriminative touch and proprioception mechanisms of the somatosensory system. This particular system is well architected around the medial lemniscal pathway, which transduces touch and proprioceptive information in the human body. This paper develops a novel communication channel model of the somatosensory system. The working principle of the channel model is established by an equivalent Moore machine. A novel algorithm, MLP, is proposed, named after the medial lemniscal pathway. A novel nanomachine and an appropriate processing unit are also devised, based on the automaton.
[ { "created": "Sat, 14 Jun 2014 07:04:30 GMT", "version": "v1" } ]
2014-06-17
[ [ "Ray", "Partha Pratim", "" ] ]
Nanonetwork design and analysis has become a very interesting topic in recent years. Though this area of research is in its formative stage, it shows strong potential for numerous applications in medical and allied sciences. Nanonetworking is indeed a nature-built foundation that comprises human intra-body communications. The somatosensory system is one of the critical, indispensable systems of the human body. This work concentrates on the discriminative touch and proprioception mechanisms of the somatosensory system. This particular system is well architected around the medial lemniscal pathway, which transduces touch and proprioceptive information in the human body. This paper develops a novel communication channel model of the somatosensory system. The working principle of the channel model is established by an equivalent Moore machine. A novel algorithm, MLP, is proposed, named after the medial lemniscal pathway. A novel nanomachine and an appropriate processing unit are also devised, based on the automaton.
cs/0307011
Naren Ramakrishnan
Atul Shenoy, Naren Ramakrishnan, Manuel A. Perez-Quinones, and Srinidhi Varadarajan
Supporting Out-of-turn Interactions in a Multimodal Web Interface
null
null
null
null
cs.IR cs.HC
null
Multimodal interfaces are becoming increasingly important with the advent of mobile devices, accessibility considerations, and novel software technologies that combine diverse interaction media. This article investigates systems support for web browsing in a multimodal interface. Specifically, we outline the design and implementation of a software framework that integrates hyperlink and speech modes of interaction. Instead of viewing speech as merely an alternative interaction medium, the framework uses it to support out-of-turn interaction, providing a flexibility of information access not possible with hyperlinks alone. This approach enables the creation of websites that adapt to the needs of users, yet permits the designer fine-grained control over what interactions to support. Design methodology, implementation details, and two case studies are presented.
[ { "created": "Fri, 4 Jul 2003 13:44:04 GMT", "version": "v1" } ]
2007-05-23
[ [ "Shenoy", "Atul", "" ], [ "Ramakrishnan", "Naren", "" ], [ "Perez-Quinones", "Manuel A.", "" ], [ "Varadarajan", "Srinidhi", "" ] ]
Multimodal interfaces are becoming increasingly important with the advent of mobile devices, accessibility considerations, and novel software technologies that combine diverse interaction media. This article investigates systems support for web browsing in a multimodal interface. Specifically, we outline the design and implementation of a software framework that integrates hyperlink and speech modes of interaction. Instead of viewing speech as merely an alternative interaction medium, the framework uses it to support out-of-turn interaction, providing a flexibility of information access not possible with hyperlinks alone. This approach enables the creation of websites that adapt to the needs of users, yet permits the designer fine-grained control over what interactions to support. Design methodology, implementation details, and two case studies are presented.