Dataset schema (column name: type, observed size range):

id: string, length 9 to 10
submitter: string, length 1 to 64
authors: string, length 4 to 20.7k
title: string, length 4 to 246
comments: string, length 1 to 523
journal-ref: string, length 4 to 404
doi: string, length 11 to 153
report-no: string, length 2 to 254
categories: string, length 5 to 98
license: string, 9 distinct values
orig_abstract: string, length 14 to 3.35k
versions: list, length 1 to 60
update_date: string, length 10
authors_parsed: list, length 1 to 1.35k
abstract: string, length 11 to 3.34k
id: 1210.0386
submitter: Junlin Hu
authors: Junlin Hu and Ping Guo
title: Combined Descriptors in Spatial Pyramid Domain for Image Classification
comments: 9 pages, 5 figures
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Recently, spatial pyramid matching (SPM) with the scale-invariant feature transform (SIFT) descriptor has been used successfully in image classification. Unfortunately, the codebook generation and feature quantization procedures for SIFT features have high complexity in both time and space. To address this problem, we propose an approach that combines local binary patterns (LBP) and three-patch local binary patterns (TPLBP) in the spatial pyramid domain. The proposed method requires neither codebook learning nor feature quantization, and is therefore very efficient. Experiments on two popular benchmark datasets demonstrate that the proposed method consistently outperforms the widely used SPM-based SIFT descriptor method in both running time and classification accuracy.
versions: [ { "created": "Mon, 1 Oct 2012 13:05:20 GMT", "version": "v1" }, { "created": "Tue, 2 Oct 2012 06:03:23 GMT", "version": "v2" }, { "created": "Wed, 3 Oct 2012 02:48:47 GMT", "version": "v3" } ]
update_date: 2012-10-04
authors_parsed: [ [ "Hu", "Junlin", "" ], [ "Guo", "Ping", "" ] ]
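The LBP descriptor at the core of the method above can be illustrated with a minimal sketch (plain Python with illustrative helper names; the paper's full pipeline also uses TPLBP and spatial pyramid pooling, which are omitted here). Each pixel is encoded by thresholding its 8 neighbours against the centre value, and the resulting codes are pooled into a 256-bin histogram:

```python
def lbp_code(img, y, x):
    """Basic 8-neighbour local binary pattern code for pixel (y, x).

    Each neighbour contributes one bit: 1 if it is >= the centre pixel,
    0 otherwise, read clockwise starting from the top-left neighbour.
    """
    centre = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for dy, dx in offsets:
        code = (code << 1) | (1 if img[y + dy][x + dx] >= centre else 0)
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

Because the histogram needs no learned codebook, computing it is a single pass over the image, which is the source of the efficiency claim above.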
id: 1905.07113
submitter: Ye Zhu
authors: Ye Zhu
title: High Throughput Push Based Storage Manager
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DB
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The storage manager, as a key component of the database system, is responsible for organizing, reading, and delivering data to the execution engine for processing. According to the data serving mechanism, existing storage managers are either pull-based, incurring high latency, or push-based, leading to a high number of I/O requests when the CPU is busy. To improve these shortcomings, this thesis proposes a push-based prefetching strategy in a column-wise storage manager. The proposed strategy implements an efficient cache layer to store shared data among queries to reduce the number of I/O requests. The capacity of the cache is maintained by a time access-aware eviction mechanism. Our strategy enables the storage manager to coordinate multiple queries by merging their requests and dynamically generate an optimal read order that maximizes the overall I/O throughput. We evaluated our storage manager both over a disk-based redundant array of independent disks (RAID) and an NVM Express (NVMe) solid-state drive (SSD). With the high read performance of the SSD, we successfully minimized the total read time and number of I/O accesses.
versions: [ { "created": "Fri, 17 May 2019 04:50:53 GMT", "version": "v1" } ]
update_date: 2019-05-20
authors_parsed: [ [ "Zhu", "Ye", "" ] ]
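The shared cache with a time access-aware eviction mechanism described above might look roughly like the following sketch (an assumption about the design; `TimeAwareCache` is a hypothetical name, and the thesis's actual policy is more elaborate). Each entry records its last logical access time, and the entry with the oldest access time is evicted when the cache is full:

```python
import itertools

class TimeAwareCache:
    """Fixed-capacity cache that evicts the least recently accessed entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}         # key -> cached value
        self.last_access = {}  # key -> logical access timestamp
        self.clock = itertools.count()

    def get(self, key):
        if key in self.data:
            self.last_access[key] = next(self.clock)  # refresh on hit
            return self.data[key]
        return None  # miss: caller would issue an I/O request

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # Evict the entry whose last access is oldest.
            victim = min(self.last_access, key=self.last_access.get)
            del self.data[victim], self.last_access[victim]
        self.data[key] = value
        self.last_access[key] = next(self.clock)
```

In a push-based design, prefetched column chunks would be `put` into this layer so that queries sharing data hit the cache instead of re-issuing reads.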
id: 1811.04363
submitter: Asaf Cohen
authors: Neri Merhav and Asaf Cohen
title: Universal Randomized Guessing with Application to Asynchronous Decentralized Brute-Force Attacks
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Consider the problem of guessing the realization of a random vector $\textbf{X}$ by repeatedly submitting queries (guesses) of the form "Is $\textbf{X}$ equal to $\textbf{x}$?" until an affirmative answer is obtained. In this setup, a key figure of merit is the number of queries required until the right vector is identified, a number that is termed the \emph{guesswork}. Typically, one wishes to devise a guessing strategy which minimizes a certain guesswork moment. In this work, we study a universal, decentralized scenario where the guesser does not know the distribution of $\textbf{X}$, and is not allowed to use a strategy which prepares a list of words to be guessed in advance, or even remember which words were already used. Such a scenario is useful, for example, if bots within a botnet carry out a brute-force attack in order to guess a password or decrypt a message, yet cannot coordinate the guesses between them or even know how many bots actually participate in the attack. We devise universal decentralized guessing strategies, first, for memoryless sources, and then generalize them for finite-state sources. In each case, we derive the guessing exponent, and then prove its asymptotic optimality by deriving a compatible converse bound. The strategies are based on randomized guessing using a universal distribution. We also extend the results to guessing with side information. Finally, for all above scenarios, we design efficient algorithms in order to sample from the universal distributions, resulting in strategies which do not depend on the source distribution, are efficient to implement, and can be used asynchronously by multiple agents.
versions: [ { "created": "Sun, 11 Nov 2018 07:00:19 GMT", "version": "v1" } ]
update_date: 2018-11-13
authors_parsed: [ [ "Merhav", "Neri", "" ], [ "Cohen", "Asaf", "" ] ]
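The memoryless randomized guessing strategy can be simulated in a few lines (a toy illustration using a uniform guessing distribution over fixed-length binary strings; the paper's universal distributions are more refined). The guesser keeps no list and no memory of past guesses, which is exactly what lets independent agents run the same loop asynchronously:

```python
import random

def randomized_guesswork(secret, alphabet="01", rng=None):
    """Number of i.i.d. uniform random guesses until `secret` is hit.

    No guess list is prepared and past guesses are not remembered,
    so repeated guesses are possible by design.
    """
    rng = rng or random.Random()
    n = len(secret)
    guesses = 0
    while True:
        guesses += 1
        guess = "".join(rng.choice(alphabet) for _ in range(n))
        if guess == secret:
            return guesses

# For uniform guessing over the 2^4 length-4 binary strings,
# the expected guesswork is 2^4 = 16.
rng = random.Random(0)
trials = [randomized_guesswork("1011", rng=rng) for _ in range(200)]
mean = sum(trials) / len(trials)
```

The guesswork here is geometrically distributed; the paper studies how close a universal (source-independent) guessing distribution can come to the optimal exponent.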
id: 1912.03383
submitter: Yan Wang
authors: Yan Wang, Xu Wei, Fengze Liu, Jieneng Chen, Yuyin Zhou, Wei Shen, Elliot K. Fishman, Alan L. Yuille
title: Deep Distance Transform for Tubular Structure Segmentation in CT Scans
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Tubular structure segmentation in medical images, e.g., segmenting vessels in CT scans, serves as a vital step in the use of computers to aid in screening early stages of related diseases. However, automatic tubular structure segmentation in CT scans is a challenging problem, due to issues such as poor contrast, noise, and complicated backgrounds. A tubular structure usually has a cylinder-like shape which can be well represented by its skeleton and cross-sectional radii (scales). Inspired by this, we propose a geometry-aware tubular structure segmentation method, Deep Distance Transform (DDT), which combines intuitions from the classical distance transform for skeletonization and modern deep segmentation networks. DDT first learns a multi-task network to predict a segmentation mask for a tubular structure and a distance map. Each value in the map represents the distance from each tubular structure voxel to the tubular structure surface. Then the segmentation mask is refined by leveraging the shape prior reconstructed from the distance map. We apply our DDT on six medical image datasets. The experiments show that (1) DDT can boost tubular structure segmentation performance significantly (e.g., over 13% improvement measured by DSC for pancreatic duct segmentation), and (2) DDT additionally provides a geometrical measurement for a tubular structure, which is important for clinical diagnosis (e.g., the cross-sectional scale of a pancreatic duct can be an indicator for pancreatic cancer).
versions: [ { "created": "Fri, 6 Dec 2019 23:04:51 GMT", "version": "v1" } ]
update_date: 2019-12-10
authors_parsed: [ [ "Wang", "Yan", "" ], [ "Wei", "Xu", "" ], [ "Liu", "Fengze", "" ], [ "Chen", "Jieneng", "" ], [ "Zhou", "Yuyin", "" ], [ "Shen", "Wei", "" ], [ "Fishman", "Elliot K.", "" ], [ "Yuille", "Alan L.", "" ] ]
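The distance map that DDT learns to predict can be computed exactly on a ground-truth mask with a multi-source breadth-first search (a toy 2-D stand-in that counts 4-connected BFS steps rather than Euclidean distance; `distance_to_surface` is an illustrative helper, not the paper's code). Each foreground pixel gets its distance to the nearest background pixel, i.e. to the structure surface:

```python
from collections import deque

def distance_to_surface(mask):
    """Per-pixel distance (in 4-connected steps) from each foreground
    pixel to the nearest background pixel. Background pixels get 0."""
    h, w = len(mask), len(mask[0])
    INF = float("inf")
    dist = [[0 if mask[y][x] == 0 else INF for x in range(w)]
            for y in range(h)]
    # Seed the BFS with every background pixel (multi-source BFS).
    q = deque((y, x) for y in range(h) for x in range(w) if mask[y][x] == 0)
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] == INF:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist
```

In DDT such maps serve as regression targets; at test time the predicted map supplies the cross-sectional scale used to reconstruct the shape prior.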
id: 1903.09513
submitter: Julian Theis
authors: Julian Theis, Ilia Mokhtarian, and Houshang Darabi
title: Process Mining of Programmable Logic Controllers: Input/Output Event Logs
comments: null
journal-ref: null
doi: 10.1109/COASE.2019.8842900
report-no: null
categories: cs.SY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: This paper presents an approach to model an unknown Ladder Logic based Programmable Logic Controller (PLC) program consisting of Boolean logic and counters using Process Mining techniques. First, we tap the inputs and outputs of a PLC to create a data flow log. Second, we propose a method to translate the obtained data flow log to an event log suitable for Process Mining. In a third step, we propose a hybrid Petri net (PN) and neural network approach to approximate the logic of the actual underlying PLC program. We demonstrate the applicability of our proposed approach on a case study with three simulated scenarios.
versions: [ { "created": "Fri, 22 Mar 2019 14:05:30 GMT", "version": "v1" } ]
update_date: 2019-09-24
authors_parsed: [ [ "Theis", "Julian", "" ], [ "Mokhtarian", "Ilia", "" ], [ "Darabi", "Houshang", "" ] ]
id: 2110.00755
submitter: Kashif Ahmad
authors: Imran Khan, Kashif Ahmad, Namra Gul, Talhat Khan, Nasir Ahmad, Ala Al-Fuqaha
title: Explainable Event Recognition
comments: 16 pages, 10 figures, 6 tables
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI cs.LG
license: http://creativecommons.org/licenses/by/4.0/
abstract: The literature shows outstanding capabilities of CNNs for event recognition in images. However, fewer attempts have been made to analyze the potential causes behind the models' decisions and to explore whether the predictions are based on event-salient objects or regions. To explore this important aspect of event recognition, in this work we propose an explainable event recognition framework relying on Grad-CAM and an Xception-architecture-based CNN model. Experiments are conducted on three large-scale datasets covering a diversified set of natural disaster, social, and sports events. Overall, the model showed outstanding generalization capabilities, obtaining overall F1-scores of 0.91, 0.94, and 0.97 on natural disaster, social, and sports events, respectively. Moreover, for a subjective analysis of the activation maps generated through Grad-CAM for the model's predicted samples, a crowdsourcing study is conducted to analyze whether the model's predictions are based on event-related objects/regions. The results of the study indicate that 78%, 84%, and 78% of the model's decisions on the natural disaster, sports, and social events datasets, respectively, are based on event-related objects or regions.
versions: [ { "created": "Sat, 2 Oct 2021 08:40:33 GMT", "version": "v1" }, { "created": "Sun, 10 Oct 2021 12:27:27 GMT", "version": "v2" } ]
update_date: 2021-10-12
authors_parsed: [ [ "Khan", "Imran", "" ], [ "Ahmad", "Kashif", "" ], [ "Gul", "Namra", "" ], [ "Khan", "Talhat", "" ], [ "Ahmad", "Nasir", "" ], [ "Al-Fuqaha", "Ala", "" ] ]
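The Grad-CAM computation at the heart of the framework reduces to a gradient-weighted sum of convolutional activation channels. A minimal sketch on toy arrays (pure Python, no deep-learning framework; in practice the activations and gradients come from a trained CNN such as the Xception model mentioned above): each channel weight is the global average of that channel's gradient map, and the heat map is the ReLU of the weighted sum of activation channels:

```python
def grad_cam(activations, gradients):
    """Grad-CAM heat map for one conv layer.

    activations: list of K feature maps (H x W nested lists)
    gradients:   list of K maps of d(class score)/d(activation), same shape
    """
    h, w = len(activations[0]), len(activations[0][0])
    # Channel weights: global average pooling of each gradient map.
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    # ReLU of the weighted combination of activation channels.
    cam = [[max(0.0, sum(wk * a[y][x]
                         for wk, a in zip(weights, activations)))
            for x in range(w)] for y in range(h)]
    return cam
```

The crowdsourcing study above asks annotators whether the high-activation regions of such maps coincide with event-related objects.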
id: 2304.07013
submitter: Xiaodan Hu
authors: Xiaodan Hu, Yan Zhang, Hideaki Uchiyama, Naoya Isoyama, Nobuchika Sakata, Kiyoshi Kiyokawa
title: Smart Dimming Sunglasses for Photophobia Using Spatial Light Modulator
comments: null
journal-ref: Elsevier Displays 81 (2024) 102611
doi: 10.1016/j.displa.2023.102611
report-no: null
categories: cs.HC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We present a smart sunglasses system engineered to assist individuals experiencing photophobia, particularly those highly sensitive to light intensity. The system integrates a high dynamic range (HDR) camera and a liquid crystal spatial light modulator (SLM) to dynamically regulate light, adapting to environmental scenes by modifying pixel transmittance through a specialized control algorithm, thereby offering adaptable light management to meet the users' visual needs. Nonetheless, a conventional occlusion mask on the SLM, intended to block incoming light, emerges blurred and insufficient due to a misaligned focal plane. To address the challenge of imprecise light filtering, we introduce an optimization algorithm that meticulously adjusts the light attenuation process, effectively diminishing excessive brightness in targeted areas without adversely impacting regions with acceptable levels of luminance.
versions: [ { "created": "Fri, 14 Apr 2023 09:17:27 GMT", "version": "v1" }, { "created": "Thu, 29 Jun 2023 07:40:46 GMT", "version": "v2" }, { "created": "Sun, 9 Jul 2023 13:51:56 GMT", "version": "v3" }, { "created": "Tue, 10 Oct 2023 11:54:19 GMT", "version": "v4" } ]
update_date: 2023-12-11
authors_parsed: [ [ "Hu", "Xiaodan", "" ], [ "Zhang", "Yan", "" ], [ "Uchiyama", "Hideaki", "" ], [ "Isoyama", "Naoya", "" ], [ "Sakata", "Nobuchika", "" ], [ "Kiyokawa", "Kiyoshi", "" ] ]
id: 1006.1186
submitter: Secretary Aircc Journal
authors: A. Nag (1), S. Biswas (2), D. Sarkar (2) and P. P. Sarkar (2) ((1) Academy of Technology - Hoogly, India and (2) University of Kalyani, India)
title: A novel technique for image steganography based on Block-DCT and Huffman Encoding
comments: 10 pages
journal-ref: International Journal of Computer Science and Information Technology 2.3 (2010) 103-112
doi: 10.5121/ijcsit.2010.2308
report-no: null
categories: cs.MM
license: http://creativecommons.org/licenses/by-nc-sa/3.0/
abstract: Image steganography is the art of hiding information in a cover image. This paper presents a novel technique for image steganography based on block-DCT, where the DCT is used to transform blocks of the original (cover) image from the spatial domain to the frequency domain. First, a gray-level image of size M x N is divided into disjoint (non-overlapping) 8 x 8 blocks and a two-dimensional Discrete Cosine Transform (2-D DCT) is performed on each of the P = MN / 64 blocks. Huffman encoding is then performed on the secret message/image before embedding, and each bit of the Huffman code of the secret message/image is embedded in the frequency domain by altering the least significant bit of each of the DCT coefficients of the cover image blocks. The experimental results show that the algorithm has a high capacity and good invisibility. Moreover, the PSNR of the cover image with respect to the stego-image compares favorably with other existing steganography approaches. Furthermore, satisfactory security is maintained, since the secret message/image cannot be extracted without knowing the decoding rules and the Huffman table.
versions: [ { "created": "Mon, 7 Jun 2010 06:59:18 GMT", "version": "v1" } ]
update_date: 2010-07-15
authors_parsed: [ [ "Nag", "A.", "" ], [ "Biswas", "S.", "" ], [ "Sarkar", "D.", "" ], [ "Sarkar", "P. P.", "" ] ]
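The coefficient-domain part of the embedding pipeline (2-D DCT of a block, then LSB substitution in the coefficients) can be sketched as follows. This is an illustration with an orthonormal DCT-II and integer-rounded coefficients; the Huffman-coding stage and the inverse transform back to a stego-image are omitted, and the helper names are illustrative:

```python
import math

def dct2(block):
    """Orthonormal 2-D DCT-II of an N x N block (pure Python)."""
    n = len(block)

    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[y][x]
                    * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * x + 1) * v * math.pi / (2 * n))
                    for y in range(n) for x in range(n))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def embed_bits(coeffs, bits):
    """Set the LSB of each rounded DCT coefficient to one message bit."""
    flat = [round(c) for row in coeffs for c in row]
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & ~1) | b
    return flat

def extract_bits(flat, k):
    """Read the first k message bits back from the coefficient LSBs."""
    return [c & 1 for c in flat[:k]]
```

The round trip through `embed_bits`/`extract_bits` shows why the scheme is lossless for the hidden payload as long as the coefficients survive unchanged.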
id: 1911.03083
submitter: Bhavan Jasani
authors: Bhavan Jasani, Rohit Girdhar, Deva Ramanan
title: Are we asking the right questions in MovieQA?
comments: Spotlight presentation at CLVL workshop, ICCV 2019. Project page: https://bhavanj.github.io/MovieQAWithoutMovies/
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Joint vision and language tasks like visual question answering are fascinating because they explore high-level understanding, but at the same time, can be more prone to language biases. In this paper, we explore the biases in the MovieQA dataset and propose a strikingly simple model which can exploit them. We find that using the right word embedding is of utmost importance. By using an appropriately trained word embedding, about half the Question-Answers (QAs) can be answered by looking at the questions and answers alone, completely ignoring narrative context from video clips, subtitles, and movie scripts. Compared to the best published papers on the leaderboard, our simple question + answer only model improves accuracy by 5% for the video + subtitle category, 5% for subtitles, 15% for DVS, and 6% for scripts.
versions: [ { "created": "Fri, 8 Nov 2019 06:49:45 GMT", "version": "v1" } ]
update_date: 2019-11-11
authors_parsed: [ [ "Jasani", "Bhavan", "" ], [ "Girdhar", "Rohit", "" ], [ "Ramanan", "Deva", "" ] ]
id: 1312.0882
submitter: Yi Li
authors: Yi Li, M. Cenk Gursoy, Senem Velipasalar
title: On the Throughput of Hybrid-ARQ under QoS Constraints
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Hybrid Automatic Repeat Request (HARQ) is a high performance communication protocol, leading to effective use of the wireless channel and the resources with only limited feedback about the channel state information (CSI) to the transmitter. In this paper, the throughput of HARQ with incremental redundancy (IR) and fixed transmission rate is studied in the presence of quality of service (QoS) constraints imposed as limitations on buffer overflow probabilities. In particular, tools from the theory of renewal processes and stochastic network calculus are employed to characterize the maximum arrival rates that can be supported by the wireless channel when HARQ-IR is adopted. Effective capacity is employed as the throughput metric and a closed-form expression for the effective capacity of HARQ-IR is determined for small values of the QoS exponent. The impact of the fixed transmission rate, QoS constraints, and hard deadline limitations on the throughput is investigated and comparisons with regular ARQ operation are provided.
versions: [ { "created": "Tue, 3 Dec 2013 17:21:42 GMT", "version": "v1" } ]
update_date: 2013-12-04
authors_parsed: [ [ "Li", "Yi", "" ], [ "Gursoy", "M. Cenk", "" ], [ "Velipasalar", "Senem", "" ] ]
id: 1904.10454
submitter: Daniel Wiebking
authors: Daniel Wiebking
title: Normalizers and permutational isomorphisms in simply-exponential time
comments: 12 pages
journal-ref: null
doi: null
report-no: null
categories: cs.DS cs.CC cs.DM math.GR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We show that normalizers and permutational isomorphisms of permutation groups given by generating sets can be computed in time simply exponential in the degree of the groups. The result is obtained by exploiting canonical forms for permutation groups (up to permutational isomorphism).
versions: [ { "created": "Wed, 24 Apr 2019 17:28:15 GMT", "version": "v1" } ]
update_date: 2019-06-11
authors_parsed: [ [ "Wiebking", "Daniel", "" ] ]
id: 1502.00115
submitter: Canyi Lu
authors: Can-Yi Lu, De-Shuang Huang
title: Optimized Projection for Sparse Representation Based Classification
comments: Neurocomputing 2013
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Dimensionality reduction (DR) methods have been commonly used as a principled way to understand high-dimensional data such as facial images. In this paper, we propose a new supervised DR method called Optimized Projection for Sparse Representation based Classification (OP-SRC), which is based on the recent face recognition method, Sparse Representation based Classification (SRC). SRC seeks a sparse linear combination of all the training data for a given query image, and makes the decision based on the minimal reconstruction residual. OP-SRC is designed around the decision rule of SRC: it aims to reduce the within-class reconstruction residual and simultaneously increase the between-class reconstruction residual on the training data. The projections are optimized and match well with the mechanism of SRC. Therefore, SRC performs well in the OP-SRC transformed space. The feasibility and effectiveness of the proposed method are verified on the Yale, ORL, and UMIST databases with promising results.
versions: [ { "created": "Sat, 31 Jan 2015 14:44:05 GMT", "version": "v1" } ]
update_date: 2015-02-03
authors_parsed: [ [ "Lu", "Can-Yi", "" ], [ "Huang", "De-Shuang", "" ] ]
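SRC's minimal-reconstruction-residual decision rule can be sketched as follows. This is a simplified stand-in that uses per-class least squares in place of the l1-regularized sparse coding SRC actually solves over the pooled training set; all helper names are illustrative:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def class_residual(train, query):
    """Least-squares reconstruction residual of `query` from one
    class's training vectors (normal equations: (D^T D) a = D^T q)."""
    k, d = len(train), len(query)
    G = [[sum(train[i][t] * train[j][t] for t in range(d))
          for j in range(k)] for i in range(k)]
    rhs = [sum(train[i][t] * query[t] for t in range(d)) for i in range(k)]
    a = solve(G, rhs)
    recon = [sum(a[i] * train[i][t] for i in range(k)) for t in range(d)]
    return math.sqrt(sum((query[t] - recon[t]) ** 2 for t in range(d)))

def src_classify(classes, query):
    """Label of the class with the smallest reconstruction residual."""
    return min(classes, key=lambda c: class_residual(classes[c], query))
```

OP-SRC learns a projection that shrinks exactly this within-class residual while enlarging the between-class one, so the rule above becomes more discriminative after projection.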
id: 2103.06587
submitter: Yuki Asano
authors: Peiyang He, Charlie Griffin, Krzysztof Kacprzyk, Artjom Joosen, Michael Collyer, Aleksandar Shtedritski, Yuki M. Asano
title: Privacy-preserving Object Detection
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract: Privacy considerations and bias in datasets are quickly becoming high-priority issues that the computer vision community needs to face. So far, little attention has been given to practical solutions that do not involve the collection of new datasets. In this work, we show that for object detection on COCO, both anonymizing the dataset by blurring faces and swapping faces in a balanced manner along the gender and skin-tone dimensions can retain object detection performance while preserving privacy and partially balancing bias.
versions: [ { "created": "Thu, 11 Mar 2021 10:34:54 GMT", "version": "v1" } ]
update_date: 2021-03-12
authors_parsed: [ [ "He", "Peiyang", "" ], [ "Griffin", "Charlie", "" ], [ "Kacprzyk", "Krzysztof", "" ], [ "Joosen", "Artjom", "" ], [ "Collyer", "Michael", "" ], [ "Shtedritski", "Aleksandar", "" ], [ "Asano", "Yuki M.", "" ] ]
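Face anonymization by blurring, as used above, can be illustrated with a toy box blur applied inside a face bounding box (a sketch on grayscale images as nested lists; a real pipeline would use detected boxes and a stronger Gaussian blur, and `blur_region` is an illustrative name):

```python
def blur_region(img, top, left, height, width, k=1):
    """Box-blur one rectangular region of a grayscale image.

    Each pixel inside the box becomes the mean of its (2k+1) x (2k+1)
    neighbourhood, with the window clipped at the image border.
    Returns a new image; pixels outside the box are unchanged.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(top, min(top + height, h)):
        for x in range(left, min(left + width, w)):
            ys = range(max(0, y - k), min(h, y + k + 1))
            xs = range(max(0, x - k), min(w, x + k + 1))
            vals = [img[yy][xx] for yy in ys for xx in xs]
            out[y][x] = sum(vals) / len(vals)
    return out
```

The point of the paper is that a detector trained on images anonymized this way loses little accuracy on non-face objects, since only the face box is degraded.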
1311.1976
Otfried Cheong
Otfried Cheong, Sariel Har-Peled, Heuna Kim, Hyo-Sil Kim
On the Number of Edges of Fan-Crossing Free Graphs
null
null
null
null
cs.CG cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A graph drawn in the plane with n vertices is k-fan-crossing free for k > 1 if there are no k+1 edges $g,e_1,...e_k$, such that $e_1,e_2,...e_k$ have a common endpoint and $g$ crosses all $e_i$. We prove a tight bound of 4n-8 on the maximum number of edges of a 2-fan-crossing free graph, and a tight 4n-9 bound for a straight-edge drawing. For k > 2, we prove an upper bound of 3(k-1)(n-2) edges. We also discuss generalizations to monotone graph properties.
[ { "created": "Fri, 8 Nov 2013 14:16:56 GMT", "version": "v1" } ]
2013-11-11
[ [ "Cheong", "Otfried", "" ], [ "Har-Peled", "Sariel", "" ], [ "Kim", "Heuna", "" ], [ "Kim", "Hyo-Sil", "" ] ]
A graph drawn in the plane with n vertices is k-fan-crossing free for k > 1 if there are no k+1 edges $g, e_1, \ldots, e_k$, such that $e_1, e_2, \ldots, e_k$ have a common endpoint and $g$ crosses all $e_i$. We prove a tight bound of 4n-8 on the maximum number of edges of a 2-fan-crossing free graph, and a tight 4n-9 bound for a straight-edge drawing. For k > 2, we prove an upper bound of 3(k-1)(n-2) edges. We also discuss generalizations to monotone graph properties.
2403.13941
Leonardo Borgioli
Leonardo Borgioli, Ki-Hwan Oh, Alberto Mangano, Alvaro Ducas, Luciano Ambrosini, Federico Pinto, Paula A Lopez, Jessica Cassiani, Milos Zefran, Liaohai Chen and Pier Cristoforo Giulianotti
Sensory Glove-Based Surgical Robot User Interface
6 pages, 5 figures, 7 tables, submitted to International Conference on Intelligent Robots and Systems (IROS)2024
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robotic surgery has reached a high level of maturity and has become an integral part of standard surgical care. However, existing surgeon consoles are bulky and take up valuable space in the operating room, present challenges for surgical team coordination, and their proprietary nature makes it difficult to take advantage of recent technological advances, especially in virtual and augmented reality. One potential area for further improvement is the integration of modern sensory gloves into robotic platforms, allowing surgeons to control robotic arms directly with their hand movements intuitively. We propose one such system that combines an HTC Vive tracker, a Manus Meta Prime 3 XR sensory glove, and God Vision wireless smart glasses. The system controls one arm of a da Vinci surgical robot. In addition to moving the arm, the surgeon can use fingers to control the end-effector of the surgical instrument. Hand gestures are used to implement clutching and similar functions. In particular, we introduce clutching of the instrument orientation, a functionality not available in the da Vinci system. The vibrotactile elements of the glove are used to provide feedback to the user when gesture commands are invoked. A preliminary evaluation of the system shows that it has excellent tracking accuracy and allows surgeons to efficiently perform common surgical training tasks with minimal practice with the new interface; this suggests that the interface is highly intuitive. The proposed system is inexpensive, allows rapid prototyping, and opens opportunities for further innovations in the design of surgical robot interfaces.
[ { "created": "Wed, 20 Mar 2024 19:26:27 GMT", "version": "v1" } ]
2024-03-22
[ [ "Borgioli", "Leonardo", "" ], [ "Oh", "Ki-Hwan", "" ], [ "Mangano", "Alberto", "" ], [ "Ducas", "Alvaro", "" ], [ "Ambrosini", "Luciano", "" ], [ "Pinto", "Federico", "" ], [ "Lopez", "Paula A", "" ], [ "Cassiani", "Jessica", "" ], [ "Zefran", "Milos", "" ], [ "Chen", "Liaohai", "" ], [ "Giulianotti", "Pier Cristoforo", "" ] ]
Robotic surgery has reached a high level of maturity and has become an integral part of standard surgical care. However, existing surgeon consoles are bulky and take up valuable space in the operating room, present challenges for surgical team coordination, and their proprietary nature makes it difficult to take advantage of recent technological advances, especially in virtual and augmented reality. One potential area for further improvement is the integration of modern sensory gloves into robotic platforms, allowing surgeons to control robotic arms directly with their hand movements intuitively. We propose one such system that combines an HTC Vive tracker, a Manus Meta Prime 3 XR sensory glove, and God Vision wireless smart glasses. The system controls one arm of a da Vinci surgical robot. In addition to moving the arm, the surgeon can use fingers to control the end-effector of the surgical instrument. Hand gestures are used to implement clutching and similar functions. In particular, we introduce clutching of the instrument orientation, a functionality not available in the da Vinci system. The vibrotactile elements of the glove are used to provide feedback to the user when gesture commands are invoked. A preliminary evaluation of the system shows that it has excellent tracking accuracy and allows surgeons to efficiently perform common surgical training tasks with minimal practice with the new interface; this suggests that the interface is highly intuitive. The proposed system is inexpensive, allows rapid prototyping, and opens opportunities for further innovations in the design of surgical robot interfaces.
2101.05718
Luiz Rodrigues
Luiz Rodrigues, Armando M. Toda, Wilk Oliveira, Paula T. Palomino, Julita Vassileva, Seiji Isotani
Automating Gamification Personalization: To the User and Beyond
14 pages, 2 figures, 8 tables. IEEE Transactions on Learning Technologies (2022)
null
10.1109/TLT.2022.3162409
null
cs.HC cs.AI
http://creativecommons.org/licenses/by/4.0/
Personalized gamification explores knowledge about the users to tailor gamification designs to improve one-size-fits-all gamification. The tailoring process should simultaneously consider user and contextual characteristics (e.g., activity to be done and geographic location), which leads to several occasions to tailor. Consequently, tools for automating gamification personalization are needed. The problems that emerge are that which of those characteristics are relevant, and how to do such tailoring, remain open questions, and that the required automation tools are lacking. We tackled these problems in two steps. First, we conducted an exploratory study, collecting participants' opinions on the game elements they consider the most useful for different learning activity types (LAT) via a survey. Then, we modeled the opinions through conditional decision trees to address the aforementioned tailoring process. Second, as a product of the first step, we implemented a recommender system that suggests personalized gamification designs (which game elements to use), addressing the problem of automating gamification personalization. Our findings i) present empirical evidence that LAT, geographic locations, and other user characteristics affect users' preferences, ii) enable defining gamification designs tailored to user and contextual features simultaneously, and iii) provide technological aid for those interested in designing personalized gamification. The main implications are that demographics, game-related characteristics, geographic location, and the LAT to be done, as well as the interaction between different kinds of information (user and contextual characteristics), should be considered in defining gamification designs, and that personalizing gamification designs can be improved with aid from our recommender system.
[ { "created": "Thu, 14 Jan 2021 16:47:00 GMT", "version": "v1" } ]
2022-03-29
[ [ "Rodrigues", "Luiz", "" ], [ "Toda", "Armando M.", "" ], [ "Oliveira", "Wilk", "" ], [ "Palomino", "Paula T.", "" ], [ "Vassileva", "Julita", "" ], [ "Isotani", "Seiji", "" ] ]
Personalized gamification explores knowledge about the users to tailor gamification designs to improve one-size-fits-all gamification. The tailoring process should simultaneously consider user and contextual characteristics (e.g., activity to be done and geographic location), which leads to several occasions to tailor. Consequently, tools for automating gamification personalization are needed. The problems that emerge are that which of those characteristics are relevant, and how to do such tailoring, remain open questions, and that the required automation tools are lacking. We tackled these problems in two steps. First, we conducted an exploratory study, collecting participants' opinions on the game elements they consider the most useful for different learning activity types (LAT) via a survey. Then, we modeled the opinions through conditional decision trees to address the aforementioned tailoring process. Second, as a product of the first step, we implemented a recommender system that suggests personalized gamification designs (which game elements to use), addressing the problem of automating gamification personalization. Our findings i) present empirical evidence that LAT, geographic locations, and other user characteristics affect users' preferences, ii) enable defining gamification designs tailored to user and contextual features simultaneously, and iii) provide technological aid for those interested in designing personalized gamification. The main implications are that demographics, game-related characteristics, geographic location, and the LAT to be done, as well as the interaction between different kinds of information (user and contextual characteristics), should be considered in defining gamification designs, and that personalizing gamification designs can be improved with aid from our recommender system.
1911.02744
Haoxuan You
Can Qin, Haoxuan You, Lichen Wang, C.-C. Jay Kuo, Yun Fu
PointDAN: A Multi-Scale 3D Domain Adaption Network for Point Cloud Representation
12 pages, 4 figures, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019)
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Domain Adaptation (DA) approaches have achieved significant improvements in a wide range of machine learning and computer vision tasks (i.e., classification, detection, and segmentation). However, as far as we are aware, there are few methods yet to achieve domain adaptation directly on 3D point cloud data. The unique challenge of point cloud data lies in its abundant spatial geometric information, and the semantics of the whole object are contributed by its regional geometric structures. Specifically, most general-purpose DA methods that strive for global feature alignment and ignore local geometric information are not suitable for 3D domain alignment. In this paper, we propose a novel 3D Domain Adaptation Network for point cloud data (PointDAN). PointDAN jointly aligns global and local features at multiple levels. For local alignment, we propose a Self-Adaptive (SA) node module with an adjusted receptive field to model the discriminative local structures for aligning domains. To represent hierarchically scaled features, a node-attention module is further introduced to weight the relationships of SA nodes across objects and domains. For global alignment, an adversarial-training strategy is employed to learn and align global features across domains. Since there is no common evaluation benchmark for the 3D point cloud DA scenario, we build a general benchmark (i.e., PointDA-10) extracted from three popular 3D object/scene datasets (i.e., ModelNet, ShapeNet and ScanNet) for cross-domain 3D object classification. Extensive experiments on PointDA-10 illustrate the superiority of our model over the state-of-the-art general-purpose DA methods.
[ { "created": "Thu, 7 Nov 2019 04:03:07 GMT", "version": "v1" } ]
2019-11-26
[ [ "Qin", "Can", "" ], [ "You", "Haoxuan", "" ], [ "Wang", "Lichen", "" ], [ "Kuo", "C. -C. Jay", "" ], [ "Fu", "Yun", "" ] ]
Domain Adaptation (DA) approaches have achieved significant improvements in a wide range of machine learning and computer vision tasks (i.e., classification, detection, and segmentation). However, as far as we are aware, there are few methods yet to achieve domain adaptation directly on 3D point cloud data. The unique challenge of point cloud data lies in its abundant spatial geometric information, and the semantics of the whole object are contributed by its regional geometric structures. Specifically, most general-purpose DA methods that strive for global feature alignment and ignore local geometric information are not suitable for 3D domain alignment. In this paper, we propose a novel 3D Domain Adaptation Network for point cloud data (PointDAN). PointDAN jointly aligns global and local features at multiple levels. For local alignment, we propose a Self-Adaptive (SA) node module with an adjusted receptive field to model the discriminative local structures for aligning domains. To represent hierarchically scaled features, a node-attention module is further introduced to weight the relationships of SA nodes across objects and domains. For global alignment, an adversarial-training strategy is employed to learn and align global features across domains. Since there is no common evaluation benchmark for the 3D point cloud DA scenario, we build a general benchmark (i.e., PointDA-10) extracted from three popular 3D object/scene datasets (i.e., ModelNet, ShapeNet and ScanNet) for cross-domain 3D object classification. Extensive experiments on PointDA-10 illustrate the superiority of our model over the state-of-the-art general-purpose DA methods.
1209.5077
Farhad Farokhi
Farhad Farokhi, Henrik Sandberg, Karl H. Johansson
Complexity Reduction for Parameter-Dependent Linear Systems
null
null
null
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a complexity reduction algorithm for a family of parameter-dependent linear systems when the system parameters belong to a compact semi-algebraic set. This algorithm potentially describes the underlying dynamical system with fewer parameters or state variables. To do so, it minimizes the distance (i.e., H-infinity-norm of the difference) between the original system and its reduced version. We present a sub-optimal solution to this problem using sum-of-squares optimization methods. We present the results for both continuous-time and discrete-time systems. Lastly, we illustrate the applicability of our proposed algorithm on numerical examples.
[ { "created": "Sun, 23 Sep 2012 15:32:09 GMT", "version": "v1" } ]
2012-09-25
[ [ "Farokhi", "Farhad", "" ], [ "Sandberg", "Henrik", "" ], [ "Johansson", "Karl H.", "" ] ]
We present a complexity reduction algorithm for a family of parameter-dependent linear systems when the system parameters belong to a compact semi-algebraic set. This algorithm potentially describes the underlying dynamical system with fewer parameters or state variables. To do so, it minimizes the distance (i.e., H-infinity-norm of the difference) between the original system and its reduced version. We present a sub-optimal solution to this problem using sum-of-squares optimization methods. We present the results for both continuous-time and discrete-time systems. Lastly, we illustrate the applicability of our proposed algorithm on numerical examples.
2206.02659
Hongyang Zhang
Haotian Ju, Dongyue Li, Hongyang R. Zhang
Robust Fine-Tuning of Deep Neural Networks with Hessian-based Generalization Guarantees
38 pages. Appeared in ICML 2022
null
null
null
cs.LG cs.CV math.ST stat.ML stat.TH
http://creativecommons.org/licenses/by/4.0/
We consider fine-tuning a pretrained deep neural network on a target task. We study the generalization properties of fine-tuning to understand the problem of overfitting, which has often been observed (e.g., when the target dataset is small or when the training labels are noisy). Existing generalization measures for deep networks depend on notions such as distance from the initialization (i.e., the pretrained network) of the fine-tuned model and noise stability properties of deep networks. This paper identifies a Hessian-based distance measure through PAC-Bayesian analysis, which is shown to correlate well with observed generalization gaps of fine-tuned models. Theoretically, we prove Hessian distance-based generalization bounds for fine-tuned models. We also describe an extended study of fine-tuning against label noise, where overfitting remains a critical problem. We present an algorithm and a generalization error guarantee for this algorithm under a class conditional independent noise model. Empirically, we observe that the Hessian-based distance measure can match the scale of the observed generalization gap of fine-tuned models in practice. We also test our algorithm on several image classification tasks with noisy training labels, showing gains over prior methods and decreases in the Hessian distance measure of the fine-tuned model.
[ { "created": "Mon, 6 Jun 2022 14:52:46 GMT", "version": "v1" }, { "created": "Mon, 29 Aug 2022 00:20:04 GMT", "version": "v2" }, { "created": "Sat, 5 Nov 2022 06:03:40 GMT", "version": "v3" }, { "created": "Sun, 5 Feb 2023 07:16:05 GMT", "version": "v4" }, { "created": "Mon, 7 Aug 2023 01:20:01 GMT", "version": "v5" }, { "created": "Fri, 22 Dec 2023 20:36:36 GMT", "version": "v6" } ]
2023-12-27
[ [ "Ju", "Haotian", "" ], [ "Li", "Dongyue", "" ], [ "Zhang", "Hongyang R.", "" ] ]
We consider fine-tuning a pretrained deep neural network on a target task. We study the generalization properties of fine-tuning to understand the problem of overfitting, which has often been observed (e.g., when the target dataset is small or when the training labels are noisy). Existing generalization measures for deep networks depend on notions such as distance from the initialization (i.e., the pretrained network) of the fine-tuned model and noise stability properties of deep networks. This paper identifies a Hessian-based distance measure through PAC-Bayesian analysis, which is shown to correlate well with observed generalization gaps of fine-tuned models. Theoretically, we prove Hessian distance-based generalization bounds for fine-tuned models. We also describe an extended study of fine-tuning against label noise, where overfitting remains a critical problem. We present an algorithm and a generalization error guarantee for this algorithm under a class conditional independent noise model. Empirically, we observe that the Hessian-based distance measure can match the scale of the observed generalization gap of fine-tuned models in practice. We also test our algorithm on several image classification tasks with noisy training labels, showing gains over prior methods and decreases in the Hessian distance measure of the fine-tuned model.
2402.18495
Xiaowei Li
Qin Zhang, Xiaowei Li, Jiexin Lu, Liping Qiu, Shirui Pan, Xiaojun Chen, Junyang Chen
ROG$_{PL}$: Robust Open-Set Graph Learning via Region-Based Prototype Learning
9 pages, 5 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Open-set graph learning is a practical task that aims to classify the known class nodes and to identify unknown class samples as unknowns. Conventional node classification methods usually perform unsatisfactorily in open-set scenarios due to the complex data they encounter, such as out-of-distribution (OOD) data and in-distribution (IND) noise. OOD data are samples that do not belong to any known classes. They are outliers if they occur in training (OOD noise), and open-set samples if they occur in testing. IND noise consists of training samples that are assigned incorrect labels. The existence of IND noise and OOD noise is prevalent, and it usually causes the ambiguity problem, including the intra-class variety problem and the inter-class confusion problem. Thus, exploring robust open-set learning methods is necessary but difficult, and it becomes even more difficult for non-IID graph data. To this end, we propose a unified framework named ROG$_{PL}$ to achieve robust open-set learning on complex noisy graph data by introducing prototype learning. Specifically, ROG$_{PL}$ consists of two modules, i.e., denoising via label propagation and open-set prototype learning via regions. The first module corrects noisy labels through similarity-based label propagation and removes low-confidence samples, to solve the intra-class variety problem caused by noise. The second module learns open-set prototypes for each known class via non-overlapped regions and retains both interior and border prototypes to remedy the inter-class confusion problem. The two modules are iteratively updated under the constraints of classification loss and prototype diversity loss. To the best of our knowledge, the proposed ROG$_{PL}$ is the first robust open-set node classification method for graph data with complex noise.
[ { "created": "Wed, 28 Feb 2024 17:25:06 GMT", "version": "v1" }, { "created": "Thu, 29 Feb 2024 13:02:50 GMT", "version": "v2" } ]
2024-03-01
[ [ "Zhang", "Qin", "" ], [ "Li", "Xiaowei", "" ], [ "Lu", "Jiexin", "" ], [ "Qiu", "Liping", "" ], [ "Pan", "Shirui", "" ], [ "Chen", "Xiaojun", "" ], [ "Chen", "Junyang", "" ] ]
Open-set graph learning is a practical task that aims to classify the known class nodes and to identify unknown class samples as unknowns. Conventional node classification methods usually perform unsatisfactorily in open-set scenarios due to the complex data they encounter, such as out-of-distribution (OOD) data and in-distribution (IND) noise. OOD data are samples that do not belong to any known classes. They are outliers if they occur in training (OOD noise), and open-set samples if they occur in testing. IND noise consists of training samples that are assigned incorrect labels. The existence of IND noise and OOD noise is prevalent, and it usually causes the ambiguity problem, including the intra-class variety problem and the inter-class confusion problem. Thus, exploring robust open-set learning methods is necessary but difficult, and it becomes even more difficult for non-IID graph data. To this end, we propose a unified framework named ROG$_{PL}$ to achieve robust open-set learning on complex noisy graph data by introducing prototype learning. Specifically, ROG$_{PL}$ consists of two modules, i.e., denoising via label propagation and open-set prototype learning via regions. The first module corrects noisy labels through similarity-based label propagation and removes low-confidence samples, to solve the intra-class variety problem caused by noise. The second module learns open-set prototypes for each known class via non-overlapped regions and retains both interior and border prototypes to remedy the inter-class confusion problem. The two modules are iteratively updated under the constraints of classification loss and prototype diversity loss. To the best of our knowledge, the proposed ROG$_{PL}$ is the first robust open-set node classification method for graph data with complex noise.
1905.07573
Sherif Saad
Sherif Saad, William Briguglio and Haytham Elmiligi
The Curious Case of Machine Learning In Malware Detection
9 pages
5th International Conference on Information Systems Security and Privacy, 2019
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we argue that machine learning techniques are not ready for malware detection in the wild. Given the current trend in malware development and the increase of unconventional malware attacks, we expect that dynamic malware analysis is the future for antimalware detection and prevention systems. A comprehensive review of machine learning for malware detection is presented. Then, we discuss how malware detection in the wild presents unique challenges for the current state-of-the-art machine learning techniques. We define three critical problems that limit the success of malware detectors powered by machine learning in the wild. Next, we discuss possible solutions to these challenges and present the requirements of next-generation malware detection. Finally, we outline potential research directions in machine learning for malware detection.
[ { "created": "Sat, 18 May 2019 10:34:36 GMT", "version": "v1" } ]
2019-05-21
[ [ "Saad", "Sherif", "" ], [ "Briguglio", "William", "" ], [ "Elmiligi", "Haytham", "" ] ]
In this paper, we argue that machine learning techniques are not ready for malware detection in the wild. Given the current trend in malware development and the increase of unconventional malware attacks, we expect that dynamic malware analysis is the future for antimalware detection and prevention systems. A comprehensive review of machine learning for malware detection is presented. Then, we discuss how malware detection in the wild presents unique challenges for the current state-of-the-art machine learning techniques. We define three critical problems that limit the success of malware detectors powered by machine learning in the wild. Next, we discuss possible solutions to these challenges and present the requirements of next-generation malware detection. Finally, we outline potential research directions in machine learning for malware detection.
2109.14985
Christophe De Wagter
Christophe De Wagter and Federico Paredes-Vall\'es and Nilay Sheth and Guido de Croon
The Artificial Intelligence behind the winning entry to the 2019 AI Robotic Racing Competition
null
null
10.55417/fr.2022042
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
Robotics is the next frontier in the progress of Artificial Intelligence (AI), as the real world in which robots operate represents an enormous, complex, continuous state space with inherent real-time requirements. One extreme challenge in robotics is currently formed by autonomous drone racing. Human drone racers can fly through complex tracks at speeds of up to 190 km/h. Achieving similar speeds with autonomous drones signifies tackling fundamental problems in AI under extreme restrictions in terms of resources. In this article, we present the winning solution of the first AI Robotic Racing (AIRR) Circuit, a competition consisting of four races in which all participating teams used the same drone, to which they had limited access. The core of our approach is inspired by how human pilots combine noisy observations of the race gates with their mental model of the drone's dynamics to achieve fast control. Our approach has a large focus on gate detection with an efficient deep neural segmentation network and active vision. Further, we make contributions to robust state estimation and risk-based control. This allowed us to reach speeds of ~9.2m/s in the last race, unrivaled by previous autonomous drone race competitions. Although our solution was the fastest and most robust, it still lost against one of the best human pilots, Gab707. The presented approach indicates a promising direction to close the gap with human drone pilots, forming an important step in bringing AI to the real world.
[ { "created": "Thu, 30 Sep 2021 10:32:23 GMT", "version": "v1" } ]
2022-06-23
[ [ "De Wagter", "Christophe", "" ], [ "Paredes-Vallés", "Federico", "" ], [ "Sheth", "Nilay", "" ], [ "de Croon", "Guido", "" ] ]
Robotics is the next frontier in the progress of Artificial Intelligence (AI), as the real world in which robots operate represents an enormous, complex, continuous state space with inherent real-time requirements. One extreme challenge in robotics is currently formed by autonomous drone racing. Human drone racers can fly through complex tracks at speeds of up to 190 km/h. Achieving similar speeds with autonomous drones signifies tackling fundamental problems in AI under extreme restrictions in terms of resources. In this article, we present the winning solution of the first AI Robotic Racing (AIRR) Circuit, a competition consisting of four races in which all participating teams used the same drone, to which they had limited access. The core of our approach is inspired by how human pilots combine noisy observations of the race gates with their mental model of the drone's dynamics to achieve fast control. Our approach has a large focus on gate detection with an efficient deep neural segmentation network and active vision. Further, we make contributions to robust state estimation and risk-based control. This allowed us to reach speeds of ~9.2m/s in the last race, unrivaled by previous autonomous drone race competitions. Although our solution was the fastest and most robust, it still lost against one of the best human pilots, Gab707. The presented approach indicates a promising direction to close the gap with human drone pilots, forming an important step in bringing AI to the real world.
2111.07648
Gonzalo Imaz
Gonzalo E. Imaz
The Possibilistic Horn Non-Clausal Knowledge Bases
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Possibilistic logic is the most widely adopted approach to handle uncertain and partially inconsistent information. Regarding normal forms, advances in possibilistic reasoning are mostly focused on clausal form. Yet, the encoding of real-world problems usually results in a non-clausal (NC) formula, and NC-to-clausal translators produce severe drawbacks that heavily limit the practical performance of clausal reasoning. Thus, by computing formulas in their original NC form, we propose several contributions showing that notable advances are also possible in possibilistic non-clausal reasoning. {\em Firstly,} we define the class of {\em Possibilistic Horn Non-Clausal Knowledge Bases,} or $\mathcal{\overline{H}}_\Sigma$, which subsumes the classes: possibilistic Horn and propositional Horn-NC. $\mathcal{\overline{H}}_\Sigma$ is shown to be a kind of NC analogue of the standard Horn class. {\em Secondly}, we define {\em Possibilistic Non-Clausal Unit-Resolution,} or $\mathcal{UR}_\Sigma$, and prove that $\mathcal{UR}_\Sigma$ correctly computes the inconsistency degree of $\mathcal{\overline{H}}_\Sigma$ members. $\mathcal{UR}_\Sigma$ had not been proposed before and is formulated in a clausal-like manner, which eases its understanding, formal proofs and future extension towards non-clausal resolution. {\em Thirdly}, we prove that computing the inconsistency degree of $\mathcal{\overline{H}}_\Sigma$ members takes polynomial time. Although there already exist tractable classes in possibilistic logic, all of them are clausal, and thus, $\mathcal{\overline{H}}_\Sigma$ turns out to be the first characterized polynomial non-clausal class within possibilistic reasoning.
[ { "created": "Mon, 15 Nov 2021 10:18:49 GMT", "version": "v1" } ]
2021-11-16
[ [ "Imaz", "Gonzalo E.", "" ] ]
Possibilistic logic is the most widely adopted approach to handle uncertain and partially inconsistent information. Regarding normal forms, advances in possibilistic reasoning are mostly focused on clausal form. Yet, the encoding of real-world problems usually results in a non-clausal (NC) formula, and NC-to-clausal translators produce severe drawbacks that heavily limit the practical performance of clausal reasoning. Thus, by computing formulas in their original NC form, we propose several contributions showing that notable advances are also possible in possibilistic non-clausal reasoning. {\em Firstly,} we define the class of {\em Possibilistic Horn Non-Clausal Knowledge Bases,} or $\mathcal{\overline{H}}_\Sigma$, which subsumes the classes: possibilistic Horn and propositional Horn-NC. $\mathcal{\overline{H}}_\Sigma$ is shown to be a kind of NC analogue of the standard Horn class. {\em Secondly}, we define {\em Possibilistic Non-Clausal Unit-Resolution,} or $\mathcal{UR}_\Sigma$, and prove that $\mathcal{UR}_\Sigma$ correctly computes the inconsistency degree of $\mathcal{\overline{H}}_\Sigma$ members. $\mathcal{UR}_\Sigma$ had not been proposed before and is formulated in a clausal-like manner, which eases its understanding, formal proofs and future extension towards non-clausal resolution. {\em Thirdly}, we prove that computing the inconsistency degree of $\mathcal{\overline{H}}_\Sigma$ members takes polynomial time. Although there already exist tractable classes in possibilistic logic, all of them are clausal, and thus, $\mathcal{\overline{H}}_\Sigma$ turns out to be the first characterized polynomial non-clausal class within possibilistic reasoning.
2309.05828
Shan Zhao
Shan Zhao, Sudipan Saha, Zhitong Xiong, Niklas Boers, Xiao Xiang Zhu
Exploring Geometric Deep Learning For Precipitation Nowcasting
submitted and accepted in IGARSS2023
null
null
null
cs.LG cs.AI physics.ao-ph
http://creativecommons.org/licenses/by/4.0/
Precipitation nowcasting (up to a few hours) remains a challenge due to the highly complex local interactions that need to be captured accurately. Convolutional Neural Networks rely on convolutional kernels convolving with grid data, and the extracted features are trapped by a limited receptive field, which typically results in excessively smooth output compared to ground truth. Thus they lack the capacity to model complex spatial relationships among the grids. Geometric deep learning aims to generalize neural network models to non-Euclidean domains. Such models are more flexible in defining nodes and edges and can effectively capture dynamic spatial relationships among geographical grids. Motivated by this, we explore a geometric deep learning-based temporal Graph Convolutional Network (GCN) for precipitation nowcasting. The adjacency matrix that simulates the interactions among grid cells is learned automatically by minimizing the L1 loss between predicted and ground truth pixel values during the training procedure. Then, the spatial relationship is refined by GCN layers while the temporal information is extracted by 1D convolution with various kernel lengths. The neighboring information is fed as auxiliary input layers to improve the final result. We test the model on sequences of radar reflectivity maps over the Trento/Italy area. The results show that the GCN improves the effectiveness of modeling the local details of the cloud profile as well as the prediction accuracy, achieving decreased error measures.
[ { "created": "Mon, 11 Sep 2023 21:14:55 GMT", "version": "v1" } ]
2023-09-13
[ [ "Zhao", "Shan", "" ], [ "Saha", "Sudipan", "" ], [ "Xiong", "Zhitong", "" ], [ "Boers", "Niklas", "" ], [ "Zhu", "Xiao Xiang", "" ] ]
Precipitation nowcasting (up to a few hours) remains a challenge due to the highly complex local interactions that need to be captured accurately. Convolutional Neural Networks rely on convolutional kernels convolving with grid data, and the extracted features are trapped by a limited receptive field, which typically results in excessively smooth output compared to ground truth. Thus they lack the capacity to model complex spatial relationships among the grids. Geometric deep learning aims to generalize neural network models to non-Euclidean domains. Such models are more flexible in defining nodes and edges and can effectively capture dynamic spatial relationships among geographical grids. Motivated by this, we explore a geometric deep learning-based temporal Graph Convolutional Network (GCN) for precipitation nowcasting. The adjacency matrix that simulates the interactions among grid cells is learned automatically by minimizing the L1 loss between predicted and ground truth pixel values during the training procedure. Then, the spatial relationship is refined by GCN layers while the temporal information is extracted by 1D convolution with various kernel lengths. The neighboring information is fed as auxiliary input layers to improve the final result. We test the model on sequences of radar reflectivity maps over the Trento/Italy area. The results show that the GCN improves the effectiveness of modeling the local details of the cloud profile as well as the prediction accuracy, achieving decreased error measures.
1312.0317
Chunxiao Jiang
Chunxiao Jiang and Yan Chen and K. J. Ray Liu
Evolutionary Dynamics of Information Diffusion over Social Networks
arXiv admin note: substantial text overlap with arXiv:1309.2920
null
10.1109/JSTSP.2014.2313024
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current social networks are of extremely large scale, generating tremendous information flows at every moment. How information diffuses over social networks has attracted much attention from both industry and academia. Most of the existing works on information diffusion analysis are based on machine learning methods focusing on social network structure analysis and empirical data mining. However, the dynamics of information diffusion, which are heavily influenced by network users' decisions, actions and their socio-economic interactions, are generally ignored by most existing works. In this paper, we propose an evolutionary game theoretic framework to model the dynamic information diffusion process in social networks. Specifically, we derive the information diffusion dynamics in complete networks, uniform degree and non-uniform degree networks, with the highlight of two special networks, the Erd\H{o}s-R\'enyi random network and the Barab\'asi-Albert scale-free network. We find that the dynamics of information diffusion over these three kinds of networks are scale-free and the same as each other when the network scale is sufficiently large. To verify our theoretical analysis, we perform simulations for the information diffusion over synthetic networks and real-world Facebook networks. Moreover, we also conduct experiments on a Twitter hashtags dataset, which shows that the proposed game theoretic model can well fit and predict the information diffusion over real social networks.
[ { "created": "Mon, 2 Dec 2013 03:21:28 GMT", "version": "v1" } ]
2015-06-18
[ [ "Jiang", "Chunxiao", "" ], [ "Chen", "Yan", "" ], [ "Liu", "K. J. Ray", "" ] ]
Current social networks are of extremely large scale, generating tremendous information flows at every moment. How information diffuses over social networks has attracted much attention from both industry and academia. Most of the existing works on information diffusion analysis are based on machine learning methods focusing on social network structure analysis and empirical data mining. However, the dynamics of information diffusion, which are heavily influenced by network users' decisions, actions and their socio-economic interactions, are generally ignored by most existing works. In this paper, we propose an evolutionary game theoretic framework to model the dynamic information diffusion process in social networks. Specifically, we derive the information diffusion dynamics in complete networks, uniform degree and non-uniform degree networks, with the highlight of two special networks, the Erd\H{o}s-R\'enyi random network and the Barab\'asi-Albert scale-free network. We find that the dynamics of information diffusion over these three kinds of networks are scale-free and the same as each other when the network scale is sufficiently large. To verify our theoretical analysis, we perform simulations for the information diffusion over synthetic networks and real-world Facebook networks. Moreover, we also conduct experiments on a Twitter hashtags dataset, which shows that the proposed game theoretic model can well fit and predict the information diffusion over real social networks.
1811.09712
Clement Fung
Clement Fung, Jamie Koerner, Stewart Grant, Ivan Beschastnikh
Dancing in the Dark: Private Multi-Party Machine Learning in an Untrusted Setting
16 pages
null
null
null
cs.CR cs.DC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed machine learning (ML) systems today use an unsophisticated threat model: data sources must trust a central ML process. We propose a brokered learning abstraction that allows data sources to contribute towards a globally-shared model with provable privacy guarantees in an untrusted setting. We realize this abstraction by building on federated learning, the state of the art in multi-party ML, to construct TorMentor: an anonymous hidden service that supports private multi-party ML. We define a new threat model by characterizing, developing and evaluating new attacks in the brokered learning setting, along with new defenses for these attacks. We show that TorMentor effectively protects data providers against known ML attacks while providing them with a tunable trade-off between model accuracy and privacy. We evaluate TorMentor with local and geo-distributed deployments on Azure/Tor. In an experiment with 200 clients and 14 MB of data per client, our prototype trained a logistic regression model using stochastic gradient descent in 65s. Code is available at: https://github.com/DistributedML/TorML
[ { "created": "Fri, 23 Nov 2018 22:00:39 GMT", "version": "v1" }, { "created": "Sun, 24 Feb 2019 00:40:45 GMT", "version": "v2" } ]
2019-02-26
[ [ "Fung", "Clement", "" ], [ "Koerner", "Jamie", "" ], [ "Grant", "Stewart", "" ], [ "Beschastnikh", "Ivan", "" ] ]
Distributed machine learning (ML) systems today use an unsophisticated threat model: data sources must trust a central ML process. We propose a brokered learning abstraction that allows data sources to contribute towards a globally-shared model with provable privacy guarantees in an untrusted setting. We realize this abstraction by building on federated learning, the state of the art in multi-party ML, to construct TorMentor: an anonymous hidden service that supports private multi-party ML. We define a new threat model by characterizing, developing and evaluating new attacks in the brokered learning setting, along with new defenses for these attacks. We show that TorMentor effectively protects data providers against known ML attacks while providing them with a tunable trade-off between model accuracy and privacy. We evaluate TorMentor with local and geo-distributed deployments on Azure/Tor. In an experiment with 200 clients and 14 MB of data per client, our prototype trained a logistic regression model using stochastic gradient descent in 65s. Code is available at: https://github.com/DistributedML/TorML
2309.10326
Kunlun Zhu
Kunlun Zhu, Shihao Liang, Xu Han, Zhi Zheng, Guoyang Zeng, Zhiyuan Liu, Maosong Sun
QASnowball: An Iterative Bootstrapping Framework for High-Quality Question-Answering Data Generation
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have witnessed the success of question answering (QA), especially its potential to be a foundation paradigm for tackling diverse NLP tasks. However, obtaining sufficient data to build an effective and stable QA system remains an open problem. To address this problem, we introduce an iterative bootstrapping framework for QA data augmentation (named QASnowball), which can iteratively generate large-scale high-quality QA data based on a seed set of supervised examples. Specifically, QASnowball consists of three modules: an answer extractor to extract core phrases in unlabeled documents as candidate answers, a question generator to generate questions based on documents and candidate answers, and a QA data filter to select high-quality QA data. Moreover, QASnowball can be self-enhanced by reseeding the seed set to fine-tune itself in different iterations, leading to continual improvements in the generation quality. We conduct experiments in the high-resource English scenario and the medium-resource Chinese scenario, and the experimental results show that the data generated by QASnowball can facilitate QA models: (1) training models on the generated data achieves comparable results to using supervised data, and (2) pre-training on the generated data and fine-tuning on supervised data can achieve better performance. Our code and generated data will be released to advance further work.
[ { "created": "Tue, 19 Sep 2023 05:20:36 GMT", "version": "v1" }, { "created": "Wed, 20 Sep 2023 01:57:10 GMT", "version": "v2" } ]
2023-09-21
[ [ "Zhu", "Kunlun", "" ], [ "Liang", "Shihao", "" ], [ "Han", "Xu", "" ], [ "Zheng", "Zhi", "" ], [ "Zeng", "Guoyang", "" ], [ "Liu", "Zhiyuan", "" ], [ "Sun", "Maosong", "" ] ]
Recent years have witnessed the success of question answering (QA), especially its potential to be a foundation paradigm for tackling diverse NLP tasks. However, obtaining sufficient data to build an effective and stable QA system remains an open problem. To address this problem, we introduce an iterative bootstrapping framework for QA data augmentation (named QASnowball), which can iteratively generate large-scale high-quality QA data based on a seed set of supervised examples. Specifically, QASnowball consists of three modules: an answer extractor to extract core phrases in unlabeled documents as candidate answers, a question generator to generate questions based on documents and candidate answers, and a QA data filter to select high-quality QA data. Moreover, QASnowball can be self-enhanced by reseeding the seed set to fine-tune itself in different iterations, leading to continual improvements in the generation quality. We conduct experiments in the high-resource English scenario and the medium-resource Chinese scenario, and the experimental results show that the data generated by QASnowball can facilitate QA models: (1) training models on the generated data achieves comparable results to using supervised data, and (2) pre-training on the generated data and fine-tuning on supervised data can achieve better performance. Our code and generated data will be released to advance further work.
2112.11956
Jim Magiera
Maria Alk\"amper, Jim Magiera, Christian Rohde
An Interface Preserving Moving Mesh in Multiple Space Dimensions
[1] Maria Alk\"{a}mper and Jim Magiera. 2021. Interface Preserving Moving Mesh (Code). DOI: 10.18419/darus-1671
null
null
null
cs.CG cs.NA math.NA
http://creativecommons.org/licenses/by/4.0/
An interface preserving moving mesh algorithm in two or higher dimensions is presented. It resolves a moving $(d-1)$-dimensional manifold directly within the $d$-dimensional mesh, which means that the interface is represented by a subset of moving mesh cell-surfaces. The underlying mesh is a conforming simplicial partition that fulfills the Delaunay property. The local remeshing algorithms allow for strong interface deformations. We give a proof that the given algorithms preserve the interface after interface deformation and remeshing steps. Originating from various numerical methods, data is attached cell-wise to the mesh. After each remeshing operation the interface preserving moving mesh retains valid data by projecting the data to the new mesh cells. An open source implementation of the moving mesh algorithm is available at [1].
[ { "created": "Wed, 22 Dec 2021 15:24:12 GMT", "version": "v1" } ]
2021-12-23
[ [ "Alkämper", "Maria", "" ], [ "Magiera", "Jim", "" ], [ "Rohde", "Christian", "" ] ]
An interface preserving moving mesh algorithm in two or higher dimensions is presented. It resolves a moving $(d-1)$-dimensional manifold directly within the $d$-dimensional mesh, which means that the interface is represented by a subset of moving mesh cell-surfaces. The underlying mesh is a conforming simplicial partition that fulfills the Delaunay property. The local remeshing algorithms allow for strong interface deformations. We give a proof that the given algorithms preserve the interface after interface deformation and remeshing steps. Originating from various numerical methods, data is attached cell-wise to the mesh. After each remeshing operation the interface preserving moving mesh retains valid data by projecting the data to the new mesh cells. An open source implementation of the moving mesh algorithm is available at [1].
2111.05319
Shubhendu Jena
Shubhendu Jena, Franck Multon, Adnane Boukhayma
Monocular Human Shape and Pose with Dense Mesh-borne Local Image Features
FG 2021
null
10.1109/FG52635.2021.9666993
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose to improve on graph convolution based approaches for human shape and pose estimation from monocular input, using pixel-aligned local image features. Given a single input color image, existing graph convolutional network (GCN) based techniques for human shape and pose estimation use a single convolutional neural network (CNN) generated global image feature appended to all mesh vertices equally to initialize the GCN stage, which transforms a template T-posed mesh into the target pose. In contrast, we propose for the first time the idea of using local image features per vertex. These features are sampled from the CNN image feature maps by utilizing pixel-to-mesh correspondences generated with DensePose. Our quantitative and qualitative results on standard benchmarks show that using local features improves on global ones and leads to competitive performances with respect to the state-of-the-art.
[ { "created": "Tue, 9 Nov 2021 18:43:18 GMT", "version": "v1" }, { "created": "Wed, 10 Nov 2021 02:00:05 GMT", "version": "v2" }, { "created": "Thu, 11 Nov 2021 08:38:08 GMT", "version": "v3" } ]
2022-08-12
[ [ "Jena", "Shubhendu", "" ], [ "Multon", "Franck", "" ], [ "Boukhayma", "Adnane", "" ] ]
We propose to improve on graph convolution based approaches for human shape and pose estimation from monocular input, using pixel-aligned local image features. Given a single input color image, existing graph convolutional network (GCN) based techniques for human shape and pose estimation use a single convolutional neural network (CNN) generated global image feature appended to all mesh vertices equally to initialize the GCN stage, which transforms a template T-posed mesh into the target pose. In contrast, we propose for the first time the idea of using local image features per vertex. These features are sampled from the CNN image feature maps by utilizing pixel-to-mesh correspondences generated with DensePose. Our quantitative and qualitative results on standard benchmarks show that using local features improves on global ones and leads to competitive performances with respect to the state-of-the-art.
2104.08671
Lucia Zheng
Lucia Zheng, Neel Guha, Brandon R. Anderson, Peter Henderson, Daniel E. Ho
When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset
ICAIL 2021. Code & data available at https://github.com/reglab/casehold
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
While self-supervised learning has made rapid advances in natural language processing, it remains unclear when researchers should engage in resource-intensive domain-specific pretraining (domain pretraining). The law, puzzlingly, has yielded few documented instances of substantial gains to domain pretraining in spite of the fact that legal language is widely seen to be unique. We hypothesize that these existing results stem from the fact that existing legal NLP tasks are too easy and fail to meet conditions for when domain pretraining can help. To address this, we first present CaseHOLD (Case Holdings On Legal Decisions), a new dataset comprising over 53,000 multiple choice questions to identify the relevant holding of a cited case. This dataset presents a fundamental task to lawyers and is both legally meaningful and difficult from an NLP perspective (F1 of 0.4 with a BiLSTM baseline). Second, we assess performance gains on CaseHOLD and existing legal NLP datasets. While a Transformer architecture (BERT) pretrained on a general corpus (Google Books and Wikipedia) improves performance, domain pretraining (using a corpus of approximately 3.5M decisions across all courts in the U.S. that is larger than BERT's) with a custom legal vocabulary exhibits the most substantial performance gains with CaseHOLD (gain of 7.2% on F1, representing a 12% improvement on BERT) and consistent performance gains across two other legal tasks. Third, we show that domain pretraining may be warranted when the task exhibits sufficient similarity to the pretraining corpus: the level of performance increase in three legal tasks was directly tied to the domain specificity of the task. Our findings inform when researchers should engage in resource-intensive pretraining and show that Transformer-based architectures, too, learn embeddings suggestive of distinct legal language.
[ { "created": "Sun, 18 Apr 2021 00:57:16 GMT", "version": "v1" }, { "created": "Mon, 17 May 2021 22:45:11 GMT", "version": "v2" }, { "created": "Tue, 6 Jul 2021 00:56:00 GMT", "version": "v3" } ]
2021-07-07
[ [ "Zheng", "Lucia", "" ], [ "Guha", "Neel", "" ], [ "Anderson", "Brandon R.", "" ], [ "Henderson", "Peter", "" ], [ "Ho", "Daniel E.", "" ] ]
While self-supervised learning has made rapid advances in natural language processing, it remains unclear when researchers should engage in resource-intensive domain-specific pretraining (domain pretraining). The law, puzzlingly, has yielded few documented instances of substantial gains to domain pretraining in spite of the fact that legal language is widely seen to be unique. We hypothesize that these existing results stem from the fact that existing legal NLP tasks are too easy and fail to meet conditions for when domain pretraining can help. To address this, we first present CaseHOLD (Case Holdings On Legal Decisions), a new dataset comprising over 53,000 multiple choice questions to identify the relevant holding of a cited case. This dataset presents a fundamental task to lawyers and is both legally meaningful and difficult from an NLP perspective (F1 of 0.4 with a BiLSTM baseline). Second, we assess performance gains on CaseHOLD and existing legal NLP datasets. While a Transformer architecture (BERT) pretrained on a general corpus (Google Books and Wikipedia) improves performance, domain pretraining (using a corpus of approximately 3.5M decisions across all courts in the U.S. that is larger than BERT's) with a custom legal vocabulary exhibits the most substantial performance gains with CaseHOLD (gain of 7.2% on F1, representing a 12% improvement on BERT) and consistent performance gains across two other legal tasks. Third, we show that domain pretraining may be warranted when the task exhibits sufficient similarity to the pretraining corpus: the level of performance increase in three legal tasks was directly tied to the domain specificity of the task. Our findings inform when researchers should engage in resource-intensive pretraining and show that Transformer-based architectures, too, learn embeddings suggestive of distinct legal language.
2408.04683
Weisong Sun
Weisong Sun and Yuchen Chen and Chunrong Fang and Yebo Feng and Yuan Xiao and An Guo and Quanjun Zhang and Yang Liu and Baowen Xu and Zhenyu Chen
Eliminating Backdoors in Neural Code Models via Trigger Inversion
Under review
null
null
null
cs.CR cs.AI cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural code models (NCMs) have been widely used for addressing various code understanding tasks, such as defect detection and clone detection. However, numerous recent studies reveal that such models are vulnerable to backdoor attacks. Backdoored NCMs function normally on normal code snippets, but exhibit adversary-expected behavior on poisoned code snippets injected with the adversary-crafted trigger. This poses a significant security threat. For example, a backdoored defect detection model may misclassify user-submitted defective code as non-defective. If this insecure code is then integrated into critical systems, like autonomous driving systems, it could endanger life safety. Thus, there is an urgent need for effective defenses against backdoor attacks targeting NCMs. To address this issue, in this paper, we propose a backdoor defense technique based on trigger inversion, called EliBadCode. EliBadCode first filters the model vocabulary for trigger tokens to reduce the search space for trigger inversion, thereby enhancing the efficiency of trigger inversion. Then, EliBadCode introduces a sample-specific trigger position identification method, which can reduce the interference of adversarial perturbations for subsequent trigger inversion, thereby producing effective inverted triggers efficiently. Subsequently, EliBadCode employs a Greedy Coordinate Gradient algorithm to optimize the inverted trigger and designs a trigger anchoring method to purify the inverted trigger. Finally, EliBadCode eliminates backdoors through model unlearning. We evaluate the effectiveness of EliBadCode in eliminating backdoor attacks against multiple NCMs used for three safety-critical code understanding tasks. The results demonstrate that EliBadCode can effectively eliminate backdoors while having minimal adverse effects on the normal functionality of the model.
[ { "created": "Thu, 8 Aug 2024 08:23:03 GMT", "version": "v1" } ]
2024-08-12
[ [ "Sun", "Weisong", "" ], [ "Chen", "Yuchen", "" ], [ "Fang", "Chunrong", "" ], [ "Feng", "Yebo", "" ], [ "Xiao", "Yuan", "" ], [ "Guo", "An", "" ], [ "Zhang", "Quanjun", "" ], [ "Liu", "Yang", "" ], [ "Xu", "Baowen", "" ], [ "Chen", "Zhenyu", "" ] ]
Neural code models (NCMs) have been widely used for addressing various code understanding tasks, such as defect detection and clone detection. However, numerous recent studies reveal that such models are vulnerable to backdoor attacks. Backdoored NCMs function normally on normal code snippets, but exhibit adversary-expected behavior on poisoned code snippets injected with the adversary-crafted trigger. This poses a significant security threat. For example, a backdoored defect detection model may misclassify user-submitted defective code as non-defective. If this insecure code is then integrated into critical systems, like autonomous driving systems, it could endanger life safety. Thus, there is an urgent need for effective defenses against backdoor attacks targeting NCMs. To address this issue, in this paper, we propose a backdoor defense technique based on trigger inversion, called EliBadCode. EliBadCode first filters the model vocabulary for trigger tokens to reduce the search space for trigger inversion, thereby enhancing the efficiency of trigger inversion. Then, EliBadCode introduces a sample-specific trigger position identification method, which can reduce the interference of adversarial perturbations for subsequent trigger inversion, thereby producing effective inverted triggers efficiently. Subsequently, EliBadCode employs a Greedy Coordinate Gradient algorithm to optimize the inverted trigger and designs a trigger anchoring method to purify the inverted trigger. Finally, EliBadCode eliminates backdoors through model unlearning. We evaluate the effectiveness of EliBadCode in eliminating backdoor attacks against multiple NCMs used for three safety-critical code understanding tasks. The results demonstrate that EliBadCode can effectively eliminate backdoors while having minimal adverse effects on the normal functionality of the model.
2407.13499
Kaiyi Pang
Minhao Bai, Jinshuai Yang, Kaiyi Pang, Xu Xin, Yongfeng Huang
Three-State Information Hiding: Provably Secure Asymmetric Steganography
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rise of language models has provided fertile ground for the application of steganography. Due to their high-quality output, steganographic texts have become similar to human text and have attracted most steganography researchers' attention. However, running a language model requires a strong computation platform. This limits the applicable scenarios of steganography, since the electronic devices controlled by the decoder may not even be equipped with a GPU. Traditional provably secure steganography methods cannot be applied to this low-resource scenario. Therefore, we aim to design a novel steganography framework that is practical in a low-resource setting. We start from a rigorous probability analysis with the help of hypothesis testing techniques to construct a theoretical framework. Then we prove the security and robustness of our framework and point out its optimization goal. We test our theoretical framework on some well-known LLMs and the results have proved its usability. There are still some practical problems, and this gives the direction of future work. We hope that this work will expand the practical scope of steganography and create a new branch of steganography.
[ { "created": "Thu, 18 Jul 2024 13:32:00 GMT", "version": "v1" } ]
2024-07-19
[ [ "Bai", "Minhao", "" ], [ "Yang", "Jinshuai", "" ], [ "Pang", "Kaiyi", "" ], [ "Xin", "Xu", "" ], [ "Huang", "Yongfeng", "" ] ]
The rise of language models has provided fertile ground for the application of steganography. Due to their high-quality output, steganographic texts have become similar to human text and have attracted most steganography researchers' attention. However, running a language model requires a strong computation platform. This limits the applicable scenarios of steganography, since the electronic devices controlled by the decoder may not even be equipped with a GPU. Traditional provably secure steganography methods cannot be applied to this low-resource scenario. Therefore, we aim to design a novel steganography framework that is practical in a low-resource setting. We start from a rigorous probability analysis with the help of hypothesis testing techniques to construct a theoretical framework. Then we prove the security and robustness of our framework and point out its optimization goal. We test our theoretical framework on some well-known LLMs and the results have proved its usability. There are still some practical problems, and this gives the direction of future work. We hope that this work will expand the practical scope of steganography and create a new branch of steganography.
1909.00991
Joel Robertson
Joel Robertson
Modelling Bushfire Evacuation Behaviours
84 pages
null
null
null
cs.MA cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bushfires pose a significant threat to Australia's regional areas. To minimise risk and increase resilience, communities need robust evacuation strategies that account for people's likely behaviour both before and during a bushfire. Agent-based modelling (ABM) offers a practical way to simulate a range of bushfire evacuation scenarios. However, the ABM should reflect the diversity of possible human responses in a given community. The Belief-Desire-Intention (BDI) cognitive model captures behaviour in a compact representation that is understandable by domain experts. Within a BDI-ABM simulation, individual BDI agents can be assigned profiles that determine their likely behaviour. Over a population of agents their collective behaviour will characterise the community response. These profiles are drawn from existing human behaviour research and consultation with emergency services personnel and capture the expected behaviours of identified groups in the population, both prior to and during an evacuation. A realistic representation of each community can then be formed, and evacuation scenarios within the simulation can be used to explore the possible impact of population structure on outcomes. It is hoped that this will give an improved understanding of the risks associated with evacuation, and lead to tailored evacuation plans for each community to help them prepare for and respond to bushfire.
[ { "created": "Tue, 3 Sep 2019 08:07:27 GMT", "version": "v1" } ]
2019-09-04
[ [ "Robertson", "Joel", "" ] ]
Bushfires pose a significant threat to Australia's regional areas. To minimise risk and increase resilience, communities need robust evacuation strategies that account for people's likely behaviour both before and during a bushfire. Agent-based modelling (ABM) offers a practical way to simulate a range of bushfire evacuation scenarios. However, the ABM should reflect the diversity of possible human responses in a given community. The Belief-Desire-Intention (BDI) cognitive model captures behaviour in a compact representation that is understandable by domain experts. Within a BDI-ABM simulation, individual BDI agents can be assigned profiles that determine their likely behaviour. Over a population of agents their collective behaviour will characterise the community response. These profiles are drawn from existing human behaviour research and consultation with emergency services personnel and capture the expected behaviours of identified groups in the population, both prior to and during an evacuation. A realistic representation of each community can then be formed, and evacuation scenarios within the simulation can be used to explore the possible impact of population structure on outcomes. It is hoped that this will give an improved understanding of the risks associated with evacuation, and lead to tailored evacuation plans for each community to help them prepare for and respond to bushfire.
1208.0959
Misha Denil
Misha Denil and Nando de Freitas
Recklessly Approximate Sparse Coding
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has recently been observed that certain extremely simple feature encoding techniques are able to achieve state of the art performance on several standard image classification benchmarks including deep belief networks, convolutional nets, factored RBMs, mcRBMs, convolutional RBMs, sparse autoencoders and several others. Moreover, these "triangle" or "soft threshold" encodings are extremely efficient to compute. Several intuitive arguments have been put forward to explain this remarkable performance, yet no mathematical justification has been offered. The main result of this report is to show that these features are realized as an approximate solution to a non-negative sparse coding problem. Using this connection we describe several variants of the soft threshold features and demonstrate their effectiveness on two image classification benchmark tasks.
[ { "created": "Sat, 4 Aug 2012 21:48:52 GMT", "version": "v1" }, { "created": "Sun, 6 Jan 2013 19:00:48 GMT", "version": "v2" } ]
2013-01-08
[ [ "Denil", "Misha", "" ], [ "de Freitas", "Nando", "" ] ]
It has recently been observed that certain extremely simple feature encoding techniques are able to achieve state of the art performance on several standard image classification benchmarks including deep belief networks, convolutional nets, factored RBMs, mcRBMs, convolutional RBMs, sparse autoencoders and several others. Moreover, these "triangle" or "soft threshold" encodings are extremely efficient to compute. Several intuitive arguments have been put forward to explain this remarkable performance, yet no mathematical justification has been offered. The main result of this report is to show that these features are realized as an approximate solution to a non-negative sparse coding problem. Using this connection we describe several variants of the soft threshold features and demonstrate their effectiveness on two image classification benchmark tasks.
2101.00122
Xiulong Yang
Xiulong Yang, Hui Ye, Yang Ye, Xiang Li, Shihao Ji
Generative Max-Mahalanobis Classifiers for Image Classification, Generation and More
Accepted as a conference paper at ECML2021
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
The Joint Energy-based Model (JEM) of Grathwohl et al. shows that a standard softmax classifier can be reinterpreted as an energy-based model (EBM) for the joint distribution p(x,y); the resulting model can be optimized to improve calibration, robustness, and out-of-distribution detection, while generating samples rivaling the quality of recent GAN-based approaches. However, the softmax classifier that JEM exploits is inherently discriminative and its latent feature space is not well formulated as probabilistic distributions, which may hinder its potential for image generation and incur training instability. We hypothesize that generative classifiers, such as Linear Discriminant Analysis (LDA), might be more suitable for image generation since generative classifiers model the data generation process explicitly. This paper therefore investigates an LDA classifier for image classification and generation. In particular, the Max-Mahalanobis Classifier (MMC), a special case of LDA, fits our goal very well. We show that our Generative MMC (GMMC) can be trained discriminatively, generatively, or jointly for image classification and generation. Extensive experiments on multiple datasets show that GMMC achieves state-of-the-art discriminative and generative performances, while outperforming JEM in calibration, adversarial robustness, and out-of-distribution detection by a significant margin. Our source code is available at https://github.com/sndnyang/GMMC.
[ { "created": "Fri, 1 Jan 2021 00:42:04 GMT", "version": "v1" }, { "created": "Thu, 25 Feb 2021 13:35:35 GMT", "version": "v2" }, { "created": "Fri, 2 Apr 2021 22:30:49 GMT", "version": "v3" }, { "created": "Thu, 1 Jul 2021 21:29:26 GMT", "version": "v4" } ]
2021-07-05
[ [ "Yang", "Xiulong", "" ], [ "Ye", "Hui", "" ], [ "Ye", "Yang", "" ], [ "Li", "Xiang", "" ], [ "Ji", "Shihao", "" ] ]
The Joint Energy-based Model (JEM) of Grathwohl et al. shows that a standard softmax classifier can be reinterpreted as an energy-based model (EBM) for the joint distribution p(x,y); the resulting model can be optimized to improve calibration, robustness, and out-of-distribution detection, while generating samples rivaling the quality of recent GAN-based approaches. However, the softmax classifier that JEM exploits is inherently discriminative and its latent feature space is not well formulated as probabilistic distributions, which may hinder its potential for image generation and incur training instability. We hypothesize that generative classifiers, such as Linear Discriminant Analysis (LDA), might be more suitable for image generation since generative classifiers model the data generation process explicitly. This paper therefore investigates an LDA classifier for image classification and generation. In particular, the Max-Mahalanobis Classifier (MMC), a special case of LDA, fits our goal very well. We show that our Generative MMC (GMMC) can be trained discriminatively, generatively, or jointly for image classification and generation. Extensive experiments on multiple datasets show that GMMC achieves state-of-the-art discriminative and generative performances, while outperforming JEM in calibration, adversarial robustness, and out-of-distribution detection by a significant margin. Our source code is available at https://github.com/sndnyang/GMMC.
1910.11105
Lior Wolf
Barak Battash, Lior Wolf
Adaptive and Iteratively Improving Recurrent Lateral Connections
null
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The current leading computer vision models are typically feed forward neural models, in which the output of one computational block is passed to the next one sequentially. This is in sharp contrast to the organization of the primate visual cortex, in which feedback and lateral connections are abundant. In this work, we propose a computational model for the role of lateral connections in a given block, in which the weights of the block vary dynamically as a function of its activations, and the input from the upstream blocks is iteratively reintroduced. We demonstrate how this novel architectural modification can lead to sizable gains in performance, when applied to visual action recognition without pretraining and that it outperforms the literature architectures with recurrent feedback processing on ImageNet.
[ { "created": "Wed, 16 Oct 2019 16:58:26 GMT", "version": "v1" } ]
2019-10-25
[ [ "Battash", "Barak", "" ], [ "Wolf", "Lior", "" ] ]
The current leading computer vision models are typically feed forward neural models, in which the output of one computational block is passed to the next one sequentially. This is in sharp contrast to the organization of the primate visual cortex, in which feedback and lateral connections are abundant. In this work, we propose a computational model for the role of lateral connections in a given block, in which the weights of the block vary dynamically as a function of its activations, and the input from the upstream blocks is iteratively reintroduced. We demonstrate how this novel architectural modification can lead to sizable gains in performance, when applied to visual action recognition without pretraining and that it outperforms the literature architectures with recurrent feedback processing on ImageNet.
1705.03186
Yiwei Zhang
Yiwei Zhang and Gennian Ge
Private Information Retrieval from MDS Coded Databases with Colluding Servers under Several Variant Models
The current draft is extended by considering several PIR models. The original version named "Multi-file Private Information Retrieval from MDS Coded Databases with Colluding Servers" is abridged into a section within the current draft. arXiv admin note: text overlap with arXiv:1704.06785
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Private information retrieval (PIR) has received renewed attention due to its information-theoretic reformulation and its application in distributed storage systems (DSS). The general PIR model considers a coded database containing $N$ servers storing $M$ files. Each file is stored independently via the same arbitrary $(N,K)$-MDS code. A user wants to retrieve a specific file from the database privately against an arbitrary set of $T$ colluding servers. A key problem is to analyze the PIR capacity, defined as the maximal number of bits privately retrieved per one downloaded bit. Several extensions of the general model arise by bringing in various additional constraints. In this paper, we propose a general PIR scheme for several variant PIR models including: PIR with robust servers, PIR with Byzantine servers, the multi-file PIR model and PIR with arbitrary collusion patterns.
[ { "created": "Tue, 9 May 2017 05:41:06 GMT", "version": "v1" }, { "created": "Wed, 11 Oct 2017 00:44:43 GMT", "version": "v2" } ]
2017-10-12
[ [ "Zhang", "Yiwei", "" ], [ "Ge", "Gennian", "" ] ]
Private information retrieval (PIR) has received renewed attention due to its information-theoretic reformulation and its application in distributed storage systems (DSS). The general PIR model considers a coded database containing $N$ servers storing $M$ files. Each file is stored independently via the same arbitrary $(N,K)$-MDS code. A user wants to retrieve a specific file from the database privately against an arbitrary set of $T$ colluding servers. A key problem is to analyze the PIR capacity, defined as the maximal number of bits privately retrieved per one downloaded bit. Several extensions of the general model arise by bringing in various additional constraints. In this paper, we propose a general PIR scheme for several variant PIR models including: PIR with robust servers, PIR with Byzantine servers, the multi-file PIR model and PIR with arbitrary collusion patterns.
2202.04006
Wojciech Przybyszewski
Wojciech Przybyszewski
Distal combinatorial tools for graphs of bounded twin-width
Accepted to LICS 2023
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study set systems formed by neighborhoods in graphs of bounded twin-width. We start by proving that such graphs have linear neighborhood complexity, in analogy to previous results concerning graphs from classes with bounded expansion and of bounded clique-width. Next, we shift our attention to the notions of distality and abstract cell decomposition, which come from model theory. We give a direct combinatorial proof that the edge relation is distal in classes of ordered graphs of bounded twin-width. This allows us to apply the distal cutting lemma and the distal regularity lemma, so we obtain powerful combinatorial tools for graphs of bounded twin-width.
[ { "created": "Tue, 8 Feb 2022 17:23:17 GMT", "version": "v1" }, { "created": "Wed, 26 Apr 2023 14:30:54 GMT", "version": "v2" } ]
2023-04-27
[ [ "Przybyszewski", "Wojciech", "" ] ]
We study set systems formed by neighborhoods in graphs of bounded twin-width. We start by proving that such graphs have linear neighborhood complexity, in analogy to previous results concerning graphs from classes with bounded expansion and of bounded clique-width. Next, we shift our attention to the notions of distality and abstract cell decomposition, which come from model theory. We give a direct combinatorial proof that the edge relation is distal in classes of ordered graphs of bounded twin-width. This allows us to apply the distal cutting lemma and the distal regularity lemma, so we obtain powerful combinatorial tools for graphs of bounded twin-width.
2210.15063
Sharman Tan
Sharman Tan, Piyush Behre, Nick Kibre, Issac Alphonso, Shuangyu Chang
Four-in-One: A Joint Approach to Inverse Text Normalization, Punctuation, Capitalization, and Disfluency for Automatic Speech Recognition
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Features such as punctuation, capitalization, and formatting of entities are important for readability, understanding, and natural language processing tasks. However, Automatic Speech Recognition (ASR) systems produce spoken-form text devoid of formatting, and tagging approaches to formatting address just one or two features at a time. In this paper, we unify spoken-to-written text conversion via a two-stage process: First, we use a single transformer tagging model to jointly produce token-level tags for inverse text normalization (ITN), punctuation, capitalization, and disfluencies. Then, we apply the tags to generate written-form text and use weighted finite state transducer (WFST) grammars to format tagged ITN entity spans. Despite joining four models into one, our unified tagging approach matches or outperforms task-specific models across all four tasks on benchmark test sets across several domains.
[ { "created": "Wed, 26 Oct 2022 22:21:03 GMT", "version": "v1" } ]
2022-10-28
[ [ "Tan", "Sharman", "" ], [ "Behre", "Piyush", "" ], [ "Kibre", "Nick", "" ], [ "Alphonso", "Issac", "" ], [ "Chang", "Shuangyu", "" ] ]
Features such as punctuation, capitalization, and formatting of entities are important for readability, understanding, and natural language processing tasks. However, Automatic Speech Recognition (ASR) systems produce spoken-form text devoid of formatting, and tagging approaches to formatting address just one or two features at a time. In this paper, we unify spoken-to-written text conversion via a two-stage process: First, we use a single transformer tagging model to jointly produce token-level tags for inverse text normalization (ITN), punctuation, capitalization, and disfluencies. Then, we apply the tags to generate written-form text and use weighted finite state transducer (WFST) grammars to format tagged ITN entity spans. Despite joining four models into one, our unified tagging approach matches or outperforms task-specific models across all four tasks on benchmark test sets across several domains.
2304.11330
Shaoteng Liu
Shaoteng Liu, Xiangyu Zhang, Tao Hu, Jiaya Jia
Self-supervised Learning by View Synthesis
13 pages, 12 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present view-synthesis autoencoders (VSA) in this paper, which is a self-supervised learning framework designed for vision transformers. Different from traditional 2D pretraining methods, VSA can be pre-trained with multi-view data. In each iteration, the input to VSA is one view (or multiple views) of a 3D object and the output is a synthesized image in another target pose. The decoder of VSA has several cross-attention blocks, which use the source view as value, source pose as key, and target pose as query. They achieve cross-attention to synthesize the target view. This simple approach realizes large-angle view synthesis and learns a spatially invariant representation, where the latter is a decent initialization for transformers on downstream tasks, such as 3D classification on ModelNet40, ShapeNet Core55, and ScanObjectNN. VSA outperforms existing methods significantly for linear probing and is competitive for fine-tuning. The code will be made publicly available.
[ { "created": "Sat, 22 Apr 2023 06:12:13 GMT", "version": "v1" } ]
2023-04-25
[ [ "Liu", "Shaoteng", "" ], [ "Zhang", "Xiangyu", "" ], [ "Hu", "Tao", "" ], [ "Jia", "Jiaya", "" ] ]
We present view-synthesis autoencoders (VSA) in this paper, which is a self-supervised learning framework designed for vision transformers. Different from traditional 2D pretraining methods, VSA can be pre-trained with multi-view data. In each iteration, the input to VSA is one view (or multiple views) of a 3D object and the output is a synthesized image in another target pose. The decoder of VSA has several cross-attention blocks, which use the source view as value, source pose as key, and target pose as query. They achieve cross-attention to synthesize the target view. This simple approach realizes large-angle view synthesis and learns a spatially invariant representation, where the latter is a decent initialization for transformers on downstream tasks, such as 3D classification on ModelNet40, ShapeNet Core55, and ScanObjectNN. VSA outperforms existing methods significantly for linear probing and is competitive for fine-tuning. The code will be made publicly available.
2101.01508
N M Anoop Krishnan
Vineeth Venugopal, Sourav Sahoo, Mohd Zaki, Manish Agarwal, Nitya Nand Gosvami, N. M. Anoop Krishnan
Looking Through Glass: Knowledge Discovery from Materials Science Literature using Natural Language Processing
17 pages, 5 figures
null
null
null
cs.DL physics.comp-ph physics.data-an
http://creativecommons.org/licenses/by-sa/4.0/
Most of the knowledge in materials science literature is in the form of unstructured data such as text and images. Here, we present a framework employing natural language processing, which automates text and image comprehension and precision knowledge extraction from inorganic glasses' literature. The abstracts are automatically categorized using latent Dirichlet allocation (LDA), providing a way to classify and search semantically linked publications. Similarly, a comprehensive summary of images and plots is presented using the 'Caption Cluster Plot' (CCP), which provides direct access to the images buried in the papers. Finally, we combine the LDA and CCP with the chemical elements occurring in the manuscript to present an 'Elemental map', a topical and image-wise distribution of chemical elements in the literature. Overall, the framework presented here can be a generic and powerful tool to extract and disseminate material-specific information on composition-structure-processing-property dataspaces, allowing insights into fundamental problems relevant to the materials science community and accelerated materials discovery.
[ { "created": "Tue, 5 Jan 2021 13:48:22 GMT", "version": "v1" } ]
2021-01-06
[ [ "Venugopal", "Vineeth", "" ], [ "Sahoo", "Sourav", "" ], [ "Zaki", "Mohd", "" ], [ "Agarwal", "Manish", "" ], [ "Gosvami", "Nitya Nand", "" ], [ "Krishnan", "N. M. Anoop", "" ] ]
Most of the knowledge in materials science literature is in the form of unstructured data such as text and images. Here, we present a framework employing natural language processing, which automates text and image comprehension and precision knowledge extraction from inorganic glasses' literature. The abstracts are automatically categorized using latent Dirichlet allocation (LDA), providing a way to classify and search semantically linked publications. Similarly, a comprehensive summary of images and plots is presented using the 'Caption Cluster Plot' (CCP), which provides direct access to the images buried in the papers. Finally, we combine the LDA and CCP with the chemical elements occurring in the manuscript to present an 'Elemental map', a topical and image-wise distribution of chemical elements in the literature. Overall, the framework presented here can be a generic and powerful tool to extract and disseminate material-specific information on composition-structure-processing-property dataspaces, allowing insights into fundamental problems relevant to the materials science community and accelerated materials discovery.
2006.14360
Lauren Watson
Lauren Watson, Benedek Rozemberczki, Rik Sarkar
Stability Enhanced Privacy and Applications in Private Stochastic Gradient Descent
null
null
null
null
cs.LG cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Private machine learning involves addition of noise while training, resulting in lower accuracy. Intuitively, greater stability can imply greater privacy and improve this privacy-utility tradeoff. We study this role of stability in private empirical risk minimization, where differential privacy is achieved by output perturbation, and establish a corresponding theoretical result showing that for strongly-convex loss functions, an algorithm with uniform stability of $\beta$ implies a bound of $O(\sqrt{\beta})$ on the scale of noise required for differential privacy. The result applies to both explicit regularization and to implicitly stabilized ERM, such as adaptations of Stochastic Gradient Descent that are known to be stable. Thus, it generalizes recent results that improve privacy through modifications to SGD, and establishes stability as the unifying perspective. It implies new privacy guarantees for optimizations with uniform stability guarantees, where a corresponding differential privacy guarantee was previously not known. Experimental results validate the utility of stability enhanced privacy in several problems, including application of elastic nets and feature selection.
[ { "created": "Thu, 25 Jun 2020 13:04:18 GMT", "version": "v1" } ]
2020-06-26
[ [ "Watson", "Lauren", "" ], [ "Rozemberczki", "Benedek", "" ], [ "Sarkar", "Rik", "" ] ]
Private machine learning involves addition of noise while training, resulting in lower accuracy. Intuitively, greater stability can imply greater privacy and improve this privacy-utility tradeoff. We study this role of stability in private empirical risk minimization, where differential privacy is achieved by output perturbation, and establish a corresponding theoretical result showing that for strongly-convex loss functions, an algorithm with uniform stability of $\beta$ implies a bound of $O(\sqrt{\beta})$ on the scale of noise required for differential privacy. The result applies to both explicit regularization and to implicitly stabilized ERM, such as adaptations of Stochastic Gradient Descent that are known to be stable. Thus, it generalizes recent results that improve privacy through modifications to SGD, and establishes stability as the unifying perspective. It implies new privacy guarantees for optimizations with uniform stability guarantees, where a corresponding differential privacy guarantee was previously not known. Experimental results validate the utility of stability enhanced privacy in several problems, including application of elastic nets and feature selection.
2004.14071
Noa Fish
Noa Fish, Richard Zhang, Lilach Perry, Daniel Cohen-Or, Eli Shechtman, Connelly Barnes
Image Morphing with Perceptual Constraints and STN Alignment
null
null
10.1111/cgf.14027
null
cs.GR cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In image morphing, a sequence of plausible frames is synthesized and composited together to form a smooth transformation between given instances. Intermediates must remain faithful to the input, stand on their own as members of the set, and maintain a well-paced visual transition from one to the next. In this paper, we propose a conditional GAN morphing framework operating on a pair of input images. The network is trained to synthesize frames corresponding to temporal samples along the transformation, and learns a proper shape prior that enhances the plausibility of intermediate frames. While individual frame plausibility is boosted by the adversarial setup, a special training protocol producing sequences of frames, combined with a perceptual similarity loss, promotes smooth transformation over time. Explicit stating of correspondences is replaced with a grid-based freeform deformation spatial transformer that predicts the geometric warp between the inputs, instituting the smooth geometric effect by bringing the shapes into an initial alignment. We provide comparisons to classic as well as latent space morphing techniques, and demonstrate that, given a set of images for self-supervision, our network learns to generate visually pleasing morphing effects featuring believable in-betweens, with robustness to changes in shape and texture, requiring no correspondence annotation.
[ { "created": "Wed, 29 Apr 2020 10:49:10 GMT", "version": "v1" } ]
2020-05-05
[ [ "Fish", "Noa", "" ], [ "Zhang", "Richard", "" ], [ "Perry", "Lilach", "" ], [ "Cohen-Or", "Daniel", "" ], [ "Shechtman", "Eli", "" ], [ "Barnes", "Connelly", "" ] ]
In image morphing, a sequence of plausible frames is synthesized and composited together to form a smooth transformation between given instances. Intermediates must remain faithful to the input, stand on their own as members of the set, and maintain a well-paced visual transition from one to the next. In this paper, we propose a conditional GAN morphing framework operating on a pair of input images. The network is trained to synthesize frames corresponding to temporal samples along the transformation, and learns a proper shape prior that enhances the plausibility of intermediate frames. While individual frame plausibility is boosted by the adversarial setup, a special training protocol producing sequences of frames, combined with a perceptual similarity loss, promotes smooth transformation over time. Explicit stating of correspondences is replaced with a grid-based freeform deformation spatial transformer that predicts the geometric warp between the inputs, instituting the smooth geometric effect by bringing the shapes into an initial alignment. We provide comparisons to classic as well as latent space morphing techniques, and demonstrate that, given a set of images for self-supervision, our network learns to generate visually pleasing morphing effects featuring believable in-betweens, with robustness to changes in shape and texture, requiring no correspondence annotation.
0810.1631
Danny Bickson
Danny Bickson, Yoav Tock, Ori Shental and Danny Dolev
Polynomial Linear Programming with Gaussian Belief Propagation
7 pages, 1 figure, appeared in the 46th Annual Allerton Conference on Communication, Control and Computing, Allerton House, Illinois, Sept. 2008
The 46th Annual Allerton Conference on Communication, Control and Computing, Allerton House, Illinois, Sept. 2008
10.1109/ALLERTON.2008.4797652
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interior-point methods are state-of-the-art algorithms for solving linear programming (LP) problems with polynomial complexity. Specifically, the Karmarkar algorithm typically solves LP problems in time O(n^{3.5}), where $n$ is the number of unknown variables. Karmarkar's celebrated algorithm is known to be an instance of the log-barrier method using the Newton iteration. The main computational overhead of this method is in inverting the Hessian matrix of the Newton iteration. In this contribution, we propose the application of the Gaussian belief propagation (GaBP) algorithm as part of an efficient and distributed LP solver that exploits the sparse and symmetric structure of the Hessian matrix and avoids the need for direct matrix inversion. This approach shifts the computation from the realm of linear algebra to that of probabilistic inference on graphical models, thus applying GaBP as an efficient inference engine. Our construction is general and can be used for any interior-point algorithm which uses the Newton method, including non-linear program solvers.
[ { "created": "Thu, 9 Oct 2008 11:49:12 GMT", "version": "v1" } ]
2009-04-16
[ [ "Bickson", "Danny", "" ], [ "Tock", "Yoav", "" ], [ "Shental", "Ori", "" ], [ "Dolev", "Danny", "" ] ]
Interior-point methods are state-of-the-art algorithms for solving linear programming (LP) problems with polynomial complexity. Specifically, the Karmarkar algorithm typically solves LP problems in time O(n^{3.5}), where $n$ is the number of unknown variables. Karmarkar's celebrated algorithm is known to be an instance of the log-barrier method using the Newton iteration. The main computational overhead of this method is in inverting the Hessian matrix of the Newton iteration. In this contribution, we propose the application of the Gaussian belief propagation (GaBP) algorithm as part of an efficient and distributed LP solver that exploits the sparse and symmetric structure of the Hessian matrix and avoids the need for direct matrix inversion. This approach shifts the computation from the realm of linear algebra to that of probabilistic inference on graphical models, thus applying GaBP as an efficient inference engine. Our construction is general and can be used for any interior-point algorithm which uses the Newton method, including non-linear program solvers.
2201.11451
Hans-Martin Heyn
Hans-Martin Heyn and Padmini Subbiash and Jennifer Linder and Eric Knauss and Olof Eriksson
Setting AI in context: A case study on defining the context and operational design domain for automated driving
Accepted for the 28th International Working Conference on Requirement Engineering: Foundation for Software Quality
null
null
null
cs.SE cs.LG
http://creativecommons.org/licenses/by/4.0/
[Context and motivation] For automated driving systems, the operational context needs to be known in order to state guarantees on performance and safety. The operational design domain (ODD) is an abstraction of the operational context, and its definition is an integral part of the system development process. [Question / problem] There are still major uncertainties in how to clearly define and document the operational context in a diverse and distributed development environment such as the automotive industry. This case study investigates the challenges with context definitions for the development of perception functions that use machine learning for automated driving. [Principal ideas/results] Based on qualitative analysis of data from semi-structured interviews, the case study shows that there is a lack of standardisation for context definitions across the industry, ambiguities in the processes that lead to deriving the ODD, missing documentation of assumptions about the operational context, and a lack of involvement of function developers in the context definition. [Contribution] The results outline challenges experienced by an automotive supplier company when defining the operational context for systems using machine learning. Furthermore, the study collected ideas for potential solutions from the perspective of practitioners.
[ { "created": "Thu, 27 Jan 2022 11:26:32 GMT", "version": "v1" } ]
2022-01-28
[ [ "Heyn", "Hans-Martin", "" ], [ "Subbiash", "Padmini", "" ], [ "Linder", "Jennifer", "" ], [ "Knauss", "Eric", "" ], [ "Eriksson", "Olof", "" ] ]
[Context and motivation] For automated driving systems, the operational context needs to be known in order to state guarantees on performance and safety. The operational design domain (ODD) is an abstraction of the operational context, and its definition is an integral part of the system development process. [Question / problem] There are still major uncertainties in how to clearly define and document the operational context in a diverse and distributed development environment such as the automotive industry. This case study investigates the challenges with context definitions for the development of perception functions that use machine learning for automated driving. [Principal ideas/results] Based on qualitative analysis of data from semi-structured interviews, the case study shows that there is a lack of standardisation for context definitions across the industry, ambiguities in the processes that lead to deriving the ODD, missing documentation of assumptions about the operational context, and a lack of involvement of function developers in the context definition. [Contribution] The results outline challenges experienced by an automotive supplier company when defining the operational context for systems using machine learning. Furthermore, the study collected ideas for potential solutions from the perspective of practitioners.
cs/0512062
Matteo Gagliolo
Juergen Schmidhuber, Matteo Gagliolo, Daan Wierstra, Faustino Gomez
Evolino for recurrent support vector machines
10 pages, 2 figures
null
null
IDSIA-19-05 version 2.0
cs.NE
null
Traditional Support Vector Machines (SVMs) need pre-wired finite time windows to predict and classify time series. They do not have an internal state necessary to deal with sequences involving arbitrary long-term dependencies. Here we introduce a new class of recurrent, truly sequential SVM-like devices with internal adaptive states, trained by a novel method called EVOlution of systems with KErnel-based outputs (Evoke), an instance of the recent Evolino class of methods. Evoke evolves recurrent neural networks to detect and represent temporal dependencies while using quadratic programming/support vector regression to produce precise outputs. Evoke is the first SVM-based mechanism learning to classify a context-sensitive language. It also outperforms recent state-of-the-art gradient-based recurrent neural networks (RNNs) on various time series prediction tasks.
[ { "created": "Thu, 15 Dec 2005 15:05:22 GMT", "version": "v1" } ]
2007-05-23
[ [ "Schmidhuber", "Juergen", "" ], [ "Gagliolo", "Matteo", "" ], [ "Wierstra", "Daan", "" ], [ "Gomez", "Faustino", "" ] ]
Traditional Support Vector Machines (SVMs) need pre-wired finite time windows to predict and classify time series. They do not have an internal state necessary to deal with sequences involving arbitrary long-term dependencies. Here we introduce a new class of recurrent, truly sequential SVM-like devices with internal adaptive states, trained by a novel method called EVOlution of systems with KErnel-based outputs (Evoke), an instance of the recent Evolino class of methods. Evoke evolves recurrent neural networks to detect and represent temporal dependencies while using quadratic programming/support vector regression to produce precise outputs. Evoke is the first SVM-based mechanism learning to classify a context-sensitive language. It also outperforms recent state-of-the-art gradient-based recurrent neural networks (RNNs) on various time series prediction tasks.
1809.02958
Nare Karapetyan
Jason Moulton, Nare Karapetyan, Alberto Quattrini Li, and Ioannis Rekleitis
External Force Field Modeling for Autonomous Surface Vehicles
In proceedings of International Symposium of Experimental Robotics (ISER), 2018
null
10.1007/978-3-030-33950-0_29
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Operating in the presence of strong adverse forces is a particularly challenging problem in field robotics. In most robotic operations where the robot is not firmly grounded, such as aerial, surface, and underwater operations, minimal external forces are assumed as the standard operating procedure. The first step for operating in the presence of non-trivial forces is modeling the forces and their effect on the robot's motion. In this work an Autonomous Surface Vehicle (ASV), operating on lakes and rivers with varying winds and currents, collects wind and current measurements with an inexpensive custom-made sensor suite, and generates a model of the force field. The modeling process takes into account depth, wind, and current measurements along with the ASV's trajectory from GPS. In this work, we propose a method for an ASV to build an environmental force map by integrating in a Gaussian Process the wind, depth, and current measurements gathered at the surface. We run extensive experimental field trials of our approach on real Jetyak ASVs. Experimental results from different locations validate the proposed modeling approach.
[ { "created": "Sun, 9 Sep 2018 11:36:42 GMT", "version": "v1" } ]
2021-01-13
[ [ "Moulton", "Jason", "" ], [ "Karapetyan", "Nare", "" ], [ "Li", "Alberto Quattrini", "" ], [ "Rekleitis", "Ioannis", "" ] ]
Operating in the presence of strong adverse forces is a particularly challenging problem in field robotics. In most robotic operations where the robot is not firmly grounded, such as aerial, surface, and underwater operations, minimal external forces are assumed as the standard operating procedure. The first step for operating in the presence of non-trivial forces is modeling the forces and their effect on the robot's motion. In this work an Autonomous Surface Vehicle (ASV), operating on lakes and rivers with varying winds and currents, collects wind and current measurements with an inexpensive custom-made sensor suite, and generates a model of the force field. The modeling process takes into account depth, wind, and current measurements along with the ASV's trajectory from GPS. In this work, we propose a method for an ASV to build an environmental force map by integrating in a Gaussian Process the wind, depth, and current measurements gathered at the surface. We run extensive experimental field trials of our approach on real Jetyak ASVs. Experimental results from different locations validate the proposed modeling approach.
1501.06042
Qi Zhang
Qi Zhang, Meizhu Li, Yong Deng, Sankaran Mahadevan
Tsallis entropy of complex networks
12 pages
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How complex a complex network is has attracted many researchers' attention. Entropy is a useful method to describe the degree of complexity of a complex network. In this paper, a new method based on the Tsallis entropy is proposed to describe the complexity of complex networks. The results in this paper show that the complexity of a complex network is decided not only by the structural properties of the network, but is also influenced by the relationships between its nodes. In other words, which kinds of nodes are chosen as the main part of the complex network will influence the value of the entropy of the network. The value of $q$ in the Tsallis entropy of a complex network is used to decide which kinds of nodes are chosen as the main part of the network. The proposed Tsallis entropy of complex networks is a generalised method to describe the properties of complex networks.
[ { "created": "Sat, 24 Jan 2015 13:51:28 GMT", "version": "v1" } ]
2015-01-29
[ [ "Zhang", "Qi", "" ], [ "Li", "Meizhu", "" ], [ "Deng", "Yong", "" ], [ "Mahadevan", "Sankaran", "" ] ]
How complex a complex network is has attracted many researchers' attention. Entropy is a useful method to describe the degree of complexity of a complex network. In this paper, a new method based on the Tsallis entropy is proposed to describe the complexity of complex networks. The results in this paper show that the complexity of a complex network is decided not only by the structural properties of the network, but is also influenced by the relationships between its nodes. In other words, which kinds of nodes are chosen as the main part of the complex network will influence the value of the entropy of the network. The value of $q$ in the Tsallis entropy of a complex network is used to decide which kinds of nodes are chosen as the main part of the network. The proposed Tsallis entropy of complex networks is a generalised method to describe the properties of complex networks.
1810.10188
Sanket Biswas
Subhajit Maity, Sujan Sarkar, Avinaba Tapadar, Ayan Dutta, Sanket Biswas, Sayon Nayek, Pritam Saha
Fault Area Detection in Leaf Diseases using k-means Clustering
This article is of 5 pages in IEEE format. It has been presented as a full paper in International Conference on Trends in Electronics and Informatics (ICOEI 2018) and is currently under the proceedings of the conference and yet to be published in IEEE Xplore
null
10.1109/ICOEI.2018.8553913
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
With an increasing population, the food crisis is getting bigger day by day. In this time of crisis, leaf diseases of crops are among the biggest problems in the food industry. In this paper, we address that problem and propose an efficient method to detect leaf disease. Leaf diseases can be detected from sample images of the leaf with the help of image processing and segmentation. Using k-means clustering and Otsu's method, the faulty region in a leaf is detected, which helps to determine the proper course of action to be taken. Further, the ratio of normal to faulty region, if calculated, would be able to predict whether the leaf can be cured at all.
[ { "created": "Wed, 24 Oct 2018 05:08:08 GMT", "version": "v1" } ]
2021-05-18
[ [ "Maity", "Subhajit", "" ], [ "Sarkar", "Sujan", "" ], [ "Tapadar", "Avinaba", "" ], [ "Dutta", "Ayan", "" ], [ "Biswas", "Sanket", "" ], [ "Nayek", "Sayon", "" ], [ "Saha", "Pritam", "" ] ]
With an increasing population, the food crisis is getting bigger day by day. In this time of crisis, leaf diseases of crops are among the biggest problems in the food industry. In this paper, we address that problem and propose an efficient method to detect leaf disease. Leaf diseases can be detected from sample images of the leaf with the help of image processing and segmentation. Using k-means clustering and Otsu's method, the faulty region in a leaf is detected, which helps to determine the proper course of action to be taken. Further, the ratio of normal to faulty region, if calculated, would be able to predict whether the leaf can be cured at all.
2208.04190
Mehdi Khoshboresh-Masouleh
Mehdi Khoshboresh-Masouleh and Reza Shah-Hosseini
SA-NET.v2: Real-time vehicle detection from oblique UAV images with use of uncertainty estimation in deep meta-learning
null
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2022
10.5194/isprs-archives-XLVI-M-2-2022-141-2022
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
In recent years, unmanned aerial vehicle (UAV) imaging has become a suitable solution for real-time monitoring of different vehicles at the urban scale. Real-time vehicle detection with the use of uncertainty estimation in deep meta-learning for portable platforms (e.g., UAVs) potentially improves video understanding in real-world applications with a small training dataset, while many vehicle monitoring approaches are limited to single-time detection with a big training dataset. The purpose of real-time vehicle detection from oblique UAV images is to locate vehicles in time series UAV images by using semantic segmentation. Real-time vehicle detection is more difficult due to the variety of vehicle depths and scales in oblique-view UAV images. Motivated by these facts, in this manuscript we consider the problem of real-time vehicle detection for oblique UAV images based on a small training dataset and deep meta-learning. The proposed architecture, called SA-Net.v2, is a method developed from the SA-CNN for real-time vehicle detection by reformulating the squeeze-and-attention mechanism. SA-Net.v2 is composed of two components: the squeeze-and-attention function that extracts high-level features based on a small training dataset, and the gated CNN. For the real-time vehicle detection scenario, we test our model on the UAVid dataset. UAVid is a time series oblique UAV image dataset consisting of 30 video sequences. We examine the proposed method's applicability for real-time vehicle detection in urban environments using time series UAV images. The experiments show that SA-Net.v2 achieves promising performance on time series oblique UAV images.
[ { "created": "Thu, 4 Aug 2022 09:08:47 GMT", "version": "v1" } ]
2022-08-09
[ [ "Khoshboresh-Masouleh", "Mehdi", "" ], [ "Shah-Hosseini", "Reza", "" ] ]
In recent years, unmanned aerial vehicle (UAV) imaging has become a suitable solution for real-time monitoring of different vehicles at the urban scale. Real-time vehicle detection with the use of uncertainty estimation in deep meta-learning for portable platforms (e.g., UAVs) potentially improves video understanding in real-world applications with a small training dataset, while many vehicle monitoring approaches are limited to single-time detection with a big training dataset. The purpose of real-time vehicle detection from oblique UAV images is to locate vehicles in time series UAV images by using semantic segmentation. Real-time vehicle detection is more difficult due to the variety of vehicle depths and scales in oblique-view UAV images. Motivated by these facts, in this manuscript we consider the problem of real-time vehicle detection for oblique UAV images based on a small training dataset and deep meta-learning. The proposed architecture, called SA-Net.v2, is a method developed from the SA-CNN for real-time vehicle detection by reformulating the squeeze-and-attention mechanism. SA-Net.v2 is composed of two components: the squeeze-and-attention function that extracts high-level features based on a small training dataset, and the gated CNN. For the real-time vehicle detection scenario, we test our model on the UAVid dataset. UAVid is a time series oblique UAV image dataset consisting of 30 video sequences. We examine the proposed method's applicability for real-time vehicle detection in urban environments using time series UAV images. The experiments show that SA-Net.v2 achieves promising performance on time series oblique UAV images.
2001.02568
Xishun Wang
Xishun Wang and Zhouwang Yang and Xingye Yue and Hui Wang
A Group Norm Regularized Factorization Model for Subspace Segmentation
null
IEEE ACCESS,8:106601-106613,2020
10.1109/ACCESS.2020.3000816
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Subspace segmentation assumes that data comes from the union of different subspaces and the purpose of segmentation is to partition the data into the corresponding subspace. Low-rank representation (LRR) is a classic spectral-type method for solving subspace segmentation problems, that is, one first obtains an affinity matrix by solving a LRR model and then performs spectral clustering for segmentation. This paper proposes a group norm regularized factorization model (GNRFM) inspired by the LRR model for subspace segmentation and then designs an Accelerated Augmented Lagrangian Method (AALM) algorithm to solve this model. Specifically, we adopt group norm regularization to make the columns of the factor matrix sparse, thereby achieving a purpose of low rank, which means no Singular Value Decompositions (SVD) are required and the computational complexity of each step is greatly reduced. We obtain affinity matrices by using different LRR models and then performing cluster testing on different sets of synthetic noisy data and real data, respectively. Compared with traditional models and algorithms, the proposed method is faster and more robust to noise, so the final clustering results are better. Moreover, the numerical results show that our algorithm converges fast and only requires approximately ten iterations.
[ { "created": "Wed, 8 Jan 2020 15:20:51 GMT", "version": "v1" }, { "created": "Tue, 14 Jul 2020 09:13:40 GMT", "version": "v2" } ]
2020-07-15
[ [ "Wang", "Xishun", "" ], [ "Yang", "Zhouwang", "" ], [ "Yue", "Xingye", "" ], [ "Wang", "Hui", "" ] ]
Subspace segmentation assumes that data comes from the union of different subspaces and the purpose of segmentation is to partition the data into the corresponding subspace. Low-rank representation (LRR) is a classic spectral-type method for solving subspace segmentation problems, that is, one first obtains an affinity matrix by solving a LRR model and then performs spectral clustering for segmentation. This paper proposes a group norm regularized factorization model (GNRFM) inspired by the LRR model for subspace segmentation and then designs an Accelerated Augmented Lagrangian Method (AALM) algorithm to solve this model. Specifically, we adopt group norm regularization to make the columns of the factor matrix sparse, thereby achieving a purpose of low rank, which means no Singular Value Decompositions (SVD) are required and the computational complexity of each step is greatly reduced. We obtain affinity matrices by using different LRR models and then performing cluster testing on different sets of synthetic noisy data and real data, respectively. Compared with traditional models and algorithms, the proposed method is faster and more robust to noise, so the final clustering results are better. Moreover, the numerical results show that our algorithm converges fast and only requires approximately ten iterations.
2301.08140
Alistair Weld
Alistair Weld, Joao Cartucho, Chi Xu, Joseph Davids and Stamatia Giannarou
Regularising disparity estimation via multi task learning with structured light reconstruction
null
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022
10.1080/21681163.2022.2156391
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D reconstruction is a useful tool for surgical planning and guidance. However, the lack of available medical data stunts research and development in this field, as supervised deep learning methods for accurate disparity estimation rely heavily on large datasets containing ground truth information. Alternative approaches to supervision have been explored, such as self-supervision, which can reduce or remove entirely the need for ground truth. However, no proposed alternatives have demonstrated performance capabilities close to what would be expected from a supervised setup. This work aims to alleviate this issue. In this paper, we investigate the learning of structured light projections to enhance the development of direct disparity estimation networks. We show for the first time that it is possible to accurately learn the projection of structured light on a scene, implicitly learning disparity. Secondly, we \textcolor{black}{explore the use of a multi task learning (MTL) framework for the joint training of structured light and disparity. We present results which show that MTL with structured light improves disparity training; without increasing the number of model parameters. Our MTL setup outperformed the single task learning (STL) network in every validation test. Notably, in the medical generalisation test, the STL error was 1.4 times worse than that of the best MTL performance. The benefit of using MTL is emphasised when the training data is limited.} A dataset containing stereoscopic images, disparity maps and structured light projections on medical phantoms and ex vivo tissue was created for evaluation together with virtual scenes. This dataset will be made publicly available in the future.
[ { "created": "Thu, 19 Jan 2023 15:54:52 GMT", "version": "v1" } ]
2023-04-06
[ [ "Weld", "Alistair", "" ], [ "Cartucho", "Joao", "" ], [ "Xu", "Chi", "" ], [ "Davids", "Joseph", "" ], [ "Giannarou", "Stamatia", "" ] ]
3D reconstruction is a useful tool for surgical planning and guidance. However, the lack of available medical data stunts research and development in this field, as supervised deep learning methods for accurate disparity estimation rely heavily on large datasets containing ground truth information. Alternative approaches to supervision have been explored, such as self-supervision, which can reduce or remove entirely the need for ground truth. However, no proposed alternatives have demonstrated performance capabilities close to what would be expected from a supervised setup. This work aims to alleviate this issue. In this paper, we investigate the learning of structured light projections to enhance the development of direct disparity estimation networks. We show for the first time that it is possible to accurately learn the projection of structured light on a scene, implicitly learning disparity. Secondly, we \textcolor{black}{explore the use of a multi task learning (MTL) framework for the joint training of structured light and disparity. We present results which show that MTL with structured light improves disparity training; without increasing the number of model parameters. Our MTL setup outperformed the single task learning (STL) network in every validation test. Notably, in the medical generalisation test, the STL error was 1.4 times worse than that of the best MTL performance. The benefit of using MTL is emphasised when the training data is limited.} A dataset containing stereoscopic images, disparity maps and structured light projections on medical phantoms and ex vivo tissue was created for evaluation together with virtual scenes. This dataset will be made publicly available in the future.
1909.10034
Kevin Lynch
Jian Shi and Kevin M. Lynch
In-hand Sliding Regrasp with Spring-Sliding Compliance
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate in-hand regrasping by pushing an object against an external constraint and allowing sliding at the fingertips. Each fingertip is modeled as attached to a multidimensional spring mounted to a position-controlled anchor. Spring compliance maps contact forces to spring compressions, ensuring the fingers remain in contact, and sliding "compliance" governs the relationship between sliding motions and tangential contact forces. A spring-sliding compliant regrasp is achieved by controlling the finger anchor motions. We derive the fingertip sliding mechanics for multifingered sliding regrasps and analyze robust regrasping conditions in the presence of finger contact wrench uncertainties. The results are verified in simulation and experiment with a two-fingered sliding regrasp designed to maximize robustness of the operation.
[ { "created": "Sun, 22 Sep 2019 15:51:59 GMT", "version": "v1" } ]
2019-09-24
[ [ "Shi", "Jian", "" ], [ "Lynch", "Kevin M.", "" ] ]
We investigate in-hand regrasping by pushing an object against an external constraint and allowing sliding at the fingertips. Each fingertip is modeled as attached to a multidimensional spring mounted to a position-controlled anchor. Spring compliance maps contact forces to spring compressions, ensuring the fingers remain in contact, and sliding "compliance" governs the relationship between sliding motions and tangential contact forces. A spring-sliding compliant regrasp is achieved by controlling the finger anchor motions. We derive the fingertip sliding mechanics for multifingered sliding regrasps and analyze robust regrasping conditions in the presence of finger contact wrench uncertainties. The results are verified in simulation and experiment with a two-fingered sliding regrasp designed to maximize robustness of the operation.
2001.06209
Florentin Liebmann MSc
Florentin Liebmann, Simon Roner, Marco von Atzigen, Florian Wanivenhaus, Caroline Neuhaus, Jos\'e Spirig, Davide Scaramuzza, Reto Sutter, Jess Snedeker, Mazda Farshad, Philipp F\"urnstahl
Registration made easy -- standalone orthopedic navigation with HoloLens
6 pages, 5 figures, accepted at CVPR 2019 workshop on Computer Vision Applications for Mixed Reality Headsets (https://docs.microsoft.com/en-us/windows/mixed-reality/cvpr-2019)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In surgical navigation, finding correspondence between preoperative plan and intraoperative anatomy, the so-called registration task, is imperative. One promising approach is to intraoperatively digitize anatomy and register it with the preoperative plan. State-of-the-art commercial navigation systems implement such approaches for pedicle screw placement in spinal fusion surgery. Although these systems improve surgical accuracy, they are not the gold standard in clinical practice. Besides economic reasons, this may be due to their difficult integration into clinical workflows and unintuitive navigation feedback. Augmented Reality has the potential to overcome these limitations. Consequently, we propose a surgical navigation approach comprising intraoperative surface digitization for registration and intuitive holographic navigation for pedicle screw placement that runs entirely on the Microsoft HoloLens. Preliminary results from phantom experiments suggest that the method may meet clinical accuracy requirements.
[ { "created": "Fri, 17 Jan 2020 09:22:21 GMT", "version": "v1" } ]
2020-01-20
[ [ "Liebmann", "Florentin", "" ], [ "Roner", "Simon", "" ], [ "von Atzigen", "Marco", "" ], [ "Wanivenhaus", "Florian", "" ], [ "Neuhaus", "Caroline", "" ], [ "Spirig", "José", "" ], [ "Scaramuzza", "Davide", "" ], [ "Sutter", "Reto", "" ], [ "Snedeker", "Jess", "" ], [ "Farshad", "Mazda", "" ], [ "Fürnstahl", "Philipp", "" ] ]
In surgical navigation, finding correspondence between preoperative plan and intraoperative anatomy, the so-called registration task, is imperative. One promising approach is to intraoperatively digitize anatomy and register it with the preoperative plan. State-of-the-art commercial navigation systems implement such approaches for pedicle screw placement in spinal fusion surgery. Although these systems improve surgical accuracy, they are not the gold standard in clinical practice. Besides economic reasons, this may be due to their difficult integration into clinical workflows and unintuitive navigation feedback. Augmented Reality has the potential to overcome these limitations. Consequently, we propose a surgical navigation approach comprising intraoperative surface digitization for registration and intuitive holographic navigation for pedicle screw placement that runs entirely on the Microsoft HoloLens. Preliminary results from phantom experiments suggest that the method may meet clinical accuracy requirements.
2106.02677
Jing Cheng
Jing Cheng, Chao Shen
Relay Selection and Resource Allocation for Ultra-Reliable Uplink Transmission in Smart Factory Scenarios
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a relay-aided two-phase transmission protocol for smart factory scenarios is proposed. This protocol aims at enabling ultra-reliable transmission of a target number of uplink critical data bits from all robots within a latency constraint by jointly optimizing the relay selection, resource block (RB) assignment, and transmit power allocation. Such a protocol design is formulated as a mixed-integer and strictly non-convex problem in which the optimization variables are mutually coupled, which is definitely challenging. Instead of conventional methods designed for solving the problem, we leverage the properties of the relative entropy function to equivalently transform the problem without introducing extra constraints. As the packet error probability requirements of each robot under two possible transmission modes are coupled in one overall reliability constraint, the big-M technique is applied to decouple it into two corresponding reliability constraints. One is for the direct transmission mode, and the other is for the cooperative transmission mode. Moreover, both non-convex penalty (NCP) and quadratic penalty (QP) approaches are utilized to deal with the binary indicator constraints. Based on such penalty methods, a sequence of penalized approximated convex problems can be iteratively solved for sub-optimal solutions. Numerical results demonstrate the efficiency of the two penalty methods from the perspectives of the sub-optimal values of total transmit power and the convergence rate. Further, the impacts of reliability, the number and location of relays, the number of robots, and the target number of data bits on the total power consumption are analyzed.
[ { "created": "Fri, 4 Jun 2021 19:17:30 GMT", "version": "v1" } ]
2021-06-08
[ [ "Cheng", "Jing", "" ], [ "Shen", "Chao", "" ] ]
In this paper, a relay-aided two-phase transmission protocol for smart factory scenarios is proposed. This protocol aims at enabling ultra-reliable transmission of a target number of uplink critical data bits from all robots within a latency constraint by jointly optimizing the relay selection, resource block (RB) assignment, and transmit power allocation. Such a protocol design is formulated as a mixed-integer and strictly non-convex problem in which the optimization variables are mutually coupled, which is definitely challenging. Instead of conventional methods designed for solving the problem, we leverage the properties of the relative entropy function to equivalently transform the problem without introducing extra constraints. As the packet error probability requirements of each robot under two possible transmission modes are coupled in one overall reliability constraint, the big-M technique is applied to decouple it into two corresponding reliability constraints. One is for the direct transmission mode, and the other is for the cooperative transmission mode. Moreover, both non-convex penalty (NCP) and quadratic penalty (QP) approaches are utilized to deal with the binary indicator constraints. Based on such penalty methods, a sequence of penalized approximated convex problems can be iteratively solved for sub-optimal solutions. Numerical results demonstrate the efficiency of the two penalty methods from the perspectives of the sub-optimal values of total transmit power and the convergence rate. Further, the impacts of reliability, the number and location of relays, the number of robots, and the target number of data bits on the total power consumption are analyzed.
1506.07933
Amir Gholami
Amir Gholami, Judith Hill, Dhairya Malhotra, George Biros
AccFFT: A library for distributed-memory FFT on CPU and GPU architectures
Parallel FFT Library
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new library for parallel distributed Fast Fourier Transforms (FFT). The importance of FFT in science and engineering and the advances in high performance computing necessitate further improvements. AccFFT extends existing FFT libraries for CUDA-enabled Graphics Processing Units (GPUs) to distributed memory clusters. We use an overlapping communication method to reduce the overhead of PCIe transfers from/to the GPU. We present numerical results on the Maverick platform at the Texas Advanced Computing Center (TACC) and on the Titan system at the Oak Ridge National Laboratory (ORNL). We present the scaling of the library up to 4,096 K20 GPUs of Titan.
[ { "created": "Fri, 26 Jun 2015 01:19:31 GMT", "version": "v1" }, { "created": "Tue, 22 Sep 2015 19:58:27 GMT", "version": "v2" }, { "created": "Wed, 25 May 2016 20:06:16 GMT", "version": "v3" } ]
2016-05-27
[ [ "Gholami", "Amir", "" ], [ "Hill", "Judith", "" ], [ "Malhotra", "Dhairya", "" ], [ "Biros", "George", "" ] ]
We present a new library for parallel distributed Fast Fourier Transforms (FFT). The importance of FFT in science and engineering and the advances in high performance computing necessitate further improvements. AccFFT extends existing FFT libraries for CUDA-enabled Graphics Processing Units (GPUs) to distributed memory clusters. We use an overlapping communication method to reduce the overhead of PCIe transfers from/to the GPU. We present numerical results on the Maverick platform at the Texas Advanced Computing Center (TACC) and on the Titan system at the Oak Ridge National Laboratory (ORNL). We present the scaling of the library up to 4,096 K20 GPUs of Titan.
1909.03983
Debarpita Santra
Debarpita Santra, S. K. Basu, J. K. Mondal, Subrata Goswami
Lattice-Based Fuzzy Medical Expert System for Low Back Pain Management
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Low Back Pain (LBP) is a common medical condition that deprives many individuals worldwide of their normal routine activities. In the absence of external biomarkers, diagnosis of LBP is quite challenging. It requires dealing with several clinical variables, which have no precisely quantified values. Aiming at the development of a fuzzy medical expert system for LBP management, this research proposes an attractive lattice-based knowledge representation scheme for handling imprecision in knowledge, offering a suitable design methodology for a fuzzy knowledge base and a fuzzy inference system. The fuzzy knowledge base is constructed in modular fashion, with each module capturing interrelated medical knowledge about the relevant clinical history, clinical examinations and laboratory investigation results. This approach in design ensures optimality, consistency and preciseness in the knowledge base and scalability. The fuzzy inference system, which uses the Mamdani method, adopts the triangular membership function for fuzzification and the Centroid of Area technique for defuzzification. A prototype of this system has been built using the knowledge extracted from the domain expert physicians. The inference of the system against a few available patient records at the ESI Hospital, Sealdah has been checked. It was found to be acceptable by the verifying medical experts.
[ { "created": "Mon, 9 Sep 2019 16:44:51 GMT", "version": "v1" } ]
2019-09-10
[ [ "Santra", "Debarpita", "" ], [ "Basu", "S. K.", "" ], [ "Mondal", "J. K.", "" ], [ "Goswami", "Subrata", "" ] ]
Low Back Pain (LBP) is a common medical condition that deprives many individuals worldwide of their normal routine activities. In the absence of external biomarkers, diagnosis of LBP is quite challenging. It requires dealing with several clinical variables, which have no precisely quantified values. Aiming at the development of a fuzzy medical expert system for LBP management, this research proposes an attractive lattice-based knowledge representation scheme for handling imprecision in knowledge, offering a suitable design methodology for a fuzzy knowledge base and a fuzzy inference system. The fuzzy knowledge base is constructed in a modular fashion, with each module capturing interrelated medical knowledge about the relevant clinical history, clinical examinations and laboratory investigation results. This design approach ensures optimality, consistency, precision and scalability of the knowledge base. The fuzzy inference system, which uses the Mamdani method, adopts the triangular membership function for fuzzification and the Centroid of Area technique for defuzzification. A prototype of this system has been built using the knowledge extracted from the domain expert physicians. The system's inferences have been checked against a few available patient records at the ESI Hospital, Sealdah, and were found acceptable by the verifying medical experts.
0801.1282
Shashi Kiran Chilappagari
Shashi Kiran Chilappagari, Anantha Raman Krishnan, Bane Vasic
LDPC Codes Which Can Correct Three Errors Under Iterative Decoding
5 pages, 3 figures, submitted to IEEE Information Theory Workshop (ITW), 2008
null
10.1109/ITW.2008.4578696
null
cs.IT math.IT
null
In this paper, we provide necessary and sufficient conditions for a column-weight-three LDPC code to correct three errors when decoded using Gallager A algorithm. We then provide a construction technique which results in a code satisfying the above conditions. We also provide numerical assessment of code performance via simulation results.
[ { "created": "Tue, 8 Jan 2008 17:12:21 GMT", "version": "v1" } ]
2016-11-17
[ [ "Chilappagari", "Shashi Kiran", "" ], [ "Krishnan", "Anantha Raman", "" ], [ "Vasic", "Bane", "" ] ]
In this paper, we provide necessary and sufficient conditions for a column-weight-three LDPC code to correct three errors when decoded using the Gallager A algorithm. We then provide a construction technique that results in a code satisfying the above conditions. We also provide a numerical assessment of code performance via simulation results.
1312.4077
Srinjoy Ganguly Mr.
Arpita Chakraborty, Srinjoy Ganguly, Mrinal Kanti Naskar and Anupam Karmakar
A Trust Based Congestion Aware Hybrid Ant Colony Optimization Algorithm for Energy Efficient Routing in Wireless Sensor Networks (TC-ACO)
6 pages, 5 figures and 2 tables (Conference Paper)
Proceedings of the IEEE International Conference on Advanced Computing (ICoAC)-2013, pp.XX-XX,Chennai, India, 18 - 20 December (2013)
10.1109/ICoAC.2013.6921940
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Congestion is a problem of paramount importance in resource constrained Wireless Sensor Networks, especially for large networks, where the traffic loads exceed the available capacity of the resources. Sensor nodes are prone to failure and the misbehavior of these faulty nodes creates further congestion. The resulting effect is a degradation in network performance, additional computation and increased energy consumption, which in turn decreases network lifetime. Hence, the data packet routing algorithm should consider congestion as one of the parameters, in addition to the role of the faulty nodes and not merely energy efficient protocols. Unfortunately most of the researchers have tried to make the routing schemes energy efficient without considering congestion factor and the effect of the faulty nodes. In this paper we have proposed a congestion aware, energy efficient, routing approach that utilizes Ant Colony Optimization algorithm, in which faulty nodes are isolated by means of the concept of trust. The merits of the proposed scheme are verified through simulations where they are compared with other protocols.
[ { "created": "Sat, 14 Dec 2013 18:41:22 GMT", "version": "v1" } ]
2016-11-18
[ [ "Chakraborty", "Arpita", "" ], [ "Ganguly", "Srinjoy", "" ], [ "Naskar", "Mrinal Kanti", "" ], [ "Karmakar", "Anupam", "" ] ]
Congestion is a problem of paramount importance in resource-constrained Wireless Sensor Networks, especially for large networks, where the traffic loads exceed the available capacity of the resources. Sensor nodes are prone to failure, and the misbehavior of these faulty nodes creates further congestion. The resulting effect is a degradation in network performance, additional computation and increased energy consumption, which in turn decreases network lifetime. Hence, the data packet routing algorithm should consider congestion as one of its parameters, in addition to the role of the faulty nodes, rather than merely being energy efficient. Unfortunately, most researchers have tried to make routing schemes energy efficient without considering the congestion factor and the effect of faulty nodes. In this paper, we propose a congestion-aware, energy-efficient routing approach that utilizes the Ant Colony Optimization algorithm, in which faulty nodes are isolated by means of the concept of trust. The merits of the proposed scheme are verified through simulations in which it is compared with other protocols.
2405.06931
Jaekeol Choi
Jaekeol Choi
Identifying Key Terms in Prompts for Relevance Evaluation with GPT Models
19pages, 2 figures
International Journal of Natural Language Computing, April 2024, Volume 13, Number 2
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relevance evaluation of a query and a passage is essential in Information Retrieval (IR). Recently, numerous studies have been conducted on tasks related to relevance judgment using Large Language Models (LLMs) such as GPT-4, demonstrating significant improvements. However, the efficacy of LLMs is considerably influenced by the design of the prompt. The purpose of this paper is to identify which specific terms in prompts positively or negatively impact relevance evaluation with LLMs. We employed two types of prompts: those used in previous research and generated automatically by LLMs. By comparing the performance of these prompts in both few-shot and zero-shot settings, we analyze the influence of specific terms in the prompts. We have observed two main findings from our study. First, we discovered that prompts using the term answerlead to more effective relevance evaluations than those using relevant. This indicates that a more direct approach, focusing on answering the query, tends to enhance performance. Second, we noted the importance of appropriately balancing the scope of relevance. While the term relevant can extend the scope too broadly, resulting in less precise evaluations, an optimal balance in defining relevance is crucial for accurate assessments. The inclusion of few-shot examples helps in more precisely defining this balance. By providing clearer contexts for the term relevance, few-shot examples contribute to refine relevance criteria. In conclusion, our study highlights the significance of carefully selecting terms in prompts for relevance evaluation with LLMs.
[ { "created": "Sat, 11 May 2024 06:30:13 GMT", "version": "v1" } ]
2024-05-14
[ [ "Choi", "Jaekeol", "" ] ]
Relevance evaluation of a query and a passage is essential in Information Retrieval (IR). Recently, numerous studies have been conducted on tasks related to relevance judgment using Large Language Models (LLMs) such as GPT-4, demonstrating significant improvements. However, the efficacy of LLMs is considerably influenced by the design of the prompt. The purpose of this paper is to identify which specific terms in prompts positively or negatively impact relevance evaluation with LLMs. We employed two types of prompts: those used in previous research and those generated automatically by LLMs. By comparing the performance of these prompts in both few-shot and zero-shot settings, we analyze the influence of specific terms in the prompts. We have observed two main findings from our study. First, we discovered that prompts using the term "answer" lead to more effective relevance evaluations than those using "relevant". This indicates that a more direct approach, focusing on answering the query, tends to enhance performance. Second, we noted the importance of appropriately balancing the scope of relevance. While the term "relevant" can extend the scope too broadly, resulting in less precise evaluations, an optimal balance in defining relevance is crucial for accurate assessments. The inclusion of few-shot examples helps in more precisely defining this balance. By providing clearer contexts for the term "relevance", few-shot examples contribute to refining the relevance criteria. In conclusion, our study highlights the significance of carefully selecting terms in prompts for relevance evaluation with LLMs.
cs/0312019
Aniello Buonocore
Laura Bozzelli, Massimo Benerecetti and Adriano Peron
Verification of recursive parallel systems
49 pages, 1 figure
null
null
null
cs.LO
null
In this paper we consider the problem of proving properties of infinite behaviour of formalisms suitable to describe (infinite state) systems with recursion and parallelism. As a formal setting, we consider the framework of Process Rewriting Systems (PRSs). For a meaningfull fragment of PRSs, allowing to accommodate both Pushdown Automata and Petri Nets, we state decidability results for a class of properties about infinite derivations (infinite term rewritings). The given results can be exploited for the automatic verification of some classes of linear time properties of infinite state systems described by PRSs. In order to exemplify the assessed results, we introduce a meaningful automaton based formalism which allows to express both recursion and multi--treading.
[ { "created": "Thu, 11 Dec 2003 14:54:06 GMT", "version": "v1" } ]
2011-11-09
[ [ "Bozzelli", "Laura", "" ], [ "Benerecetti", "Massimo", "" ], [ "Peron", "Adriano", "" ] ]
In this paper we consider the problem of proving properties of the infinite behaviour of formalisms suitable for describing (infinite-state) systems with recursion and parallelism. As a formal setting, we consider the framework of Process Rewriting Systems (PRSs). For a meaningful fragment of PRSs, which can accommodate both Pushdown Automata and Petri Nets, we state decidability results for a class of properties about infinite derivations (infinite term rewritings). The given results can be exploited for the automatic verification of some classes of linear-time properties of infinite-state systems described by PRSs. In order to exemplify the assessed results, we introduce a meaningful automaton-based formalism that can express both recursion and multi-threading.
2006.02081
Olivier Bailleux
Olivier Bailleux (LIB), Yacine Boufkhad (LIAFA)
Constraint Reductions
null
null
null
null
cs.AI cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is a commentary on the CP 2003 paper "Efficient cnf encoding of boolean cardinality constraints". After recalling its context, we outline a classification of Constraints with respect to their deductive power regarding General Arc Consistency (GAC).
[ { "created": "Wed, 3 Jun 2020 07:37:05 GMT", "version": "v1" } ]
2020-06-04
[ [ "Bailleux", "Olivier", "", "LIB" ], [ "Boufkhad", "Yacine", "", "LIAFA" ] ]
This is a commentary on the CP 2003 paper "Efficient cnf encoding of boolean cardinality constraints". After recalling its context, we outline a classification of constraints with respect to their deductive power regarding General Arc Consistency (GAC).
1901.06722
Jean Lienard
Jean F. Li\'enard
Fitting 3D Shapes from Partial and Noisy Point Clouds with Evolutionary Computing
null
null
null
null
cs.CV q-bio.QM
http://creativecommons.org/licenses/by-sa/4.0/
Point clouds obtained from photogrammetry are noisy and incomplete models of reality. We propose an evolutionary optimization methodology that is able to approximate the underlying object geometry on such point clouds. This approach assumes a priori knowledge on the 3D structure modeled and enables the identification of a collection of primitive shapes approximating the scene. Built-in mechanisms that enforce high shape diversity and adaptive population size make this method suitable to modeling both simple and complex scenes. We focus here on the case of cylinder approximations and we describe, test, and compare a set of mutation operators designed for optimal exploration of their search space. We assess the robustness and limitations of this algorithm through a series of synthetic examples, and we finally demonstrate its general applicability on two real-life cases in vegetation and industrial settings.
[ { "created": "Sun, 20 Jan 2019 20:12:34 GMT", "version": "v1" } ]
2019-01-23
[ [ "Liénard", "Jean F.", "" ] ]
Point clouds obtained from photogrammetry are noisy and incomplete models of reality. We propose an evolutionary optimization methodology that is able to approximate the underlying object geometry on such point clouds. This approach assumes a priori knowledge of the 3D structure modeled and enables the identification of a collection of primitive shapes approximating the scene. Built-in mechanisms that enforce high shape diversity and adaptive population size make this method suitable for modeling both simple and complex scenes. We focus here on the case of cylinder approximations and we describe, test, and compare a set of mutation operators designed for optimal exploration of their search space. We assess the robustness and limitations of this algorithm through a series of synthetic examples, and we finally demonstrate its general applicability on two real-life cases in vegetation and industrial settings.
2203.10225
Xingda Wei
Xingda Wei, Fangming Lu, Tianxia Wang, Jinyu Gu, Yuhan Yang, Rong Chen, and Haibo Chen
No Provisioned Concurrency: Fast RDMA-codesigned Remote Fork for Serverless Computing
To appear in OSDI'23
null
null
null
cs.OS cs.DC
http://creativecommons.org/licenses/by/4.0/
Serverless platforms essentially face a tradeoff between container startup time and provisioned concurrency (i.e., cached instances), which is further exaggerated by the frequent need for remote container initialization. This paper presents MITOSIS, an operating system primitive that provides fast remote fork, which exploits a deep codesign of the OS kernel with RDMA. By leveraging the fast remote read capability of RDMA and partial state transfer across serverless containers, MITOSIS bridges the performance gap between local and remote container initialization. MITOSIS is the first to fork over 10,000 new containers from one instance across multiple machines within a second, while allowing the new containers to efficiently transfer the pre-materialized states of the forked one. We have implemented MITOSIS on Linux and integrated it with FN, a popular serverless platform. Under load spikes in real-world serverless workloads, MITOSIS reduces the function tail latency by 89% with orders of magnitude lower memory usage. For serverless workflow that requires state transfer, MITOSIS improves its execution time by 86%.
[ { "created": "Sat, 19 Mar 2022 02:49:55 GMT", "version": "v1" }, { "created": "Thu, 25 Aug 2022 03:35:43 GMT", "version": "v2" }, { "created": "Sat, 17 Sep 2022 01:52:44 GMT", "version": "v3" } ]
2022-09-20
[ [ "Wei", "Xingda", "" ], [ "Lu", "Fangming", "" ], [ "Wang", "Tianxia", "" ], [ "Gu", "Jinyu", "" ], [ "Yang", "Yuhan", "" ], [ "Chen", "Rong", "" ], [ "Chen", "Haibo", "" ] ]
Serverless platforms essentially face a tradeoff between container startup time and provisioned concurrency (i.e., cached instances), which is further exaggerated by the frequent need for remote container initialization. This paper presents MITOSIS, an operating system primitive that provides fast remote fork, which exploits a deep codesign of the OS kernel with RDMA. By leveraging the fast remote read capability of RDMA and partial state transfer across serverless containers, MITOSIS bridges the performance gap between local and remote container initialization. MITOSIS is the first to fork over 10,000 new containers from one instance across multiple machines within a second, while allowing the new containers to efficiently transfer the pre-materialized states of the forked one. We have implemented MITOSIS on Linux and integrated it with FN, a popular serverless platform. Under load spikes in real-world serverless workloads, MITOSIS reduces the function tail latency by 89% with orders of magnitude lower memory usage. For serverless workflow that requires state transfer, MITOSIS improves its execution time by 86%.
2109.09051
Chunming Tang
Qi Liu, Cunsheng Ding, Sihem Mesnager, Chunming Tang, Vladimir D. Tonchev
On Infinite Families of Narrow-Sense Antiprimitive BCH Codes Admitting 3-Transitive Automorphism Groups and their Consequences
arXiv admin note: text overlap with arXiv:2010.09448
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Bose-Chaudhuri-Hocquenghem (BCH) codes are a well-studied subclass of cyclic codes that have found numerous applications in error correction and notably in quantum information processing. A subclass of attractive BCH codes is the narrow-sense BCH codes over the Galois field $\mathrm{GF}(q)$ with length $q+1$, which are closely related to the action of the projective general linear group of degree two on the projective line. This paper aims to study some of the codes within this class and specifically narrow-sense antiprimitive BCH codes (these codes are also linear complementary duals (LCD) codes that have interesting practical recent applications in cryptography, among other benefits). We shall use tools and combine arguments from algebraic coding theory, combinatorial designs, and group theory (group actions, representation theory of finite groups, etc.) to investigate narrow-sense antiprimitive BCH Codes and extend results from the recent literature. Notably, the dimension, the minimum distance of some $q$-ary BCH codes with length $q+1$, and their duals are determined in this paper. The dual codes of the narrow-sense antiprimitive BCH codes derived in this paper include almost MDS codes. Furthermore, the classification of $\mathrm{PGL} (2, p^m)$-invariant codes over $\mathrm{GF} (p^h)$ is completed. As an application of this result, the $p$-ranks of all incidence structures invariant under the projective general linear group $\mathrm{ PGL }(2, p^m)$ are determined. Furthermore, infinite families of narrow-sense BCH codes admitting a $3$-transitive automorphism group are obtained. Via these BCH codes, a coding-theory approach to constructing the Witt spherical geometry designs is presented. The BCH codes proposed in this paper are good candidates for permutation decoding, as they have a relatively large group of automorphisms.
[ { "created": "Sun, 19 Sep 2021 03:10:59 GMT", "version": "v1" } ]
2021-09-21
[ [ "Liu", "Qi", "" ], [ "Ding", "Cunsheng", "" ], [ "Mesnager", "Sihem", "" ], [ "Tang", "Chunming", "" ], [ "Tonchev", "Vladimir D.", "" ] ]
The Bose-Chaudhuri-Hocquenghem (BCH) codes are a well-studied subclass of cyclic codes that have found numerous applications in error correction and notably in quantum information processing. A subclass of attractive BCH codes is the narrow-sense BCH codes over the Galois field $\mathrm{GF}(q)$ with length $q+1$, which are closely related to the action of the projective general linear group of degree two on the projective line. This paper aims to study some of the codes within this class and specifically narrow-sense antiprimitive BCH codes (these codes are also linear complementary duals (LCD) codes that have interesting practical recent applications in cryptography, among other benefits). We shall use tools and combine arguments from algebraic coding theory, combinatorial designs, and group theory (group actions, representation theory of finite groups, etc.) to investigate narrow-sense antiprimitive BCH Codes and extend results from the recent literature. Notably, the dimension, the minimum distance of some $q$-ary BCH codes with length $q+1$, and their duals are determined in this paper. The dual codes of the narrow-sense antiprimitive BCH codes derived in this paper include almost MDS codes. Furthermore, the classification of $\mathrm{PGL} (2, p^m)$-invariant codes over $\mathrm{GF} (p^h)$ is completed. As an application of this result, the $p$-ranks of all incidence structures invariant under the projective general linear group $\mathrm{ PGL }(2, p^m)$ are determined. Furthermore, infinite families of narrow-sense BCH codes admitting a $3$-transitive automorphism group are obtained. Via these BCH codes, a coding-theory approach to constructing the Witt spherical geometry designs is presented. The BCH codes proposed in this paper are good candidates for permutation decoding, as they have a relatively large group of automorphisms.
1803.10146
Ke Wang
Ke Wang, Junbo Zhang, Yujun Wang, Lei Xie
Empirical Evaluation of Speaker Adaptation on DNN based Acoustic Model
Interspeech 2018
Proceedings of Interspeech, 2018, pp. 2429-2433
10.21437/Interspeech.2018-1897
null
cs.SD cs.CL eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Speaker adaptation aims to estimate a speaker specific acoustic model from a speaker independent one to minimize the mismatch between the training and testing conditions arisen from speaker variabilities. A variety of neural network adaptation methods have been proposed since deep learning models have become the main stream. But there still lacks an experimental comparison between different methods, especially when DNN-based acoustic models have been advanced greatly. In this paper, we aim to close this gap by providing an empirical evaluation of three typical speaker adaptation methods: LIN, LHUC and KLD. Adaptation experiments, with different size of adaptation data, are conducted on a strong TDNN-LSTM acoustic model. More challengingly, here, the source and target we are concerned with are standard Mandarin speaker model and accented Mandarin speaker model. We compare the performances of different methods and their combinations. Speaker adaptation performance is also examined by speaker's accent degree.
[ { "created": "Tue, 27 Mar 2018 15:39:46 GMT", "version": "v1" }, { "created": "Sun, 17 Jun 2018 08:14:42 GMT", "version": "v2" }, { "created": "Thu, 25 Oct 2018 07:11:54 GMT", "version": "v3" } ]
2019-01-01
[ [ "Wang", "Ke", "" ], [ "Zhang", "Junbo", "" ], [ "Wang", "Yujun", "" ], [ "Xie", "Lei", "" ] ]
Speaker adaptation aims to estimate a speaker-specific acoustic model from a speaker-independent one to minimize the mismatch between the training and testing conditions arising from speaker variability. A variety of neural network adaptation methods have been proposed since deep learning models became the mainstream. However, an experimental comparison between different methods is still lacking, especially now that DNN-based acoustic models have advanced greatly. In this paper, we aim to close this gap by providing an empirical evaluation of three typical speaker adaptation methods: LIN, LHUC and KLD. Adaptation experiments, with different sizes of adaptation data, are conducted on a strong TDNN-LSTM acoustic model. More challengingly, the source and target models we are concerned with here are a standard Mandarin speaker model and an accented Mandarin speaker model. We compare the performances of the different methods and their combinations. Speaker adaptation performance is also examined with respect to the speaker's accent degree.
2004.05224
Yaodong Cui
Yaodong Cui, Ren Chen, Wenbo Chu, Long Chen, Daxin Tian, Ying Li, Dongpu Cao
Deep Learning for Image and Point Cloud Fusion in Autonomous Driving: A Review
null
IEEE Transactions on Intelligent Transportation Systems.(2021)
10.1109/TITS.2020.3023541
null
cs.CV cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous vehicles were experiencing rapid development in the past few years. However, achieving full autonomy is not a trivial task, due to the nature of the complex and dynamic driving environment. Therefore, autonomous vehicles are equipped with a suite of different sensors to ensure robust, accurate environmental perception. In particular, the camera-LiDAR fusion is becoming an emerging research theme. However, so far there has been no critical review that focuses on deep-learning-based camera-LiDAR fusion methods. To bridge this gap and motivate future research, this paper devotes to review recent deep-learning-based data fusion approaches that leverage both image and point cloud. This review gives a brief overview of deep learning on image and point cloud data processing. Followed by in-depth reviews of camera-LiDAR fusion methods in depth completion, object detection, semantic segmentation, tracking and online cross-sensor calibration, which are organized based on their respective fusion levels. Furthermore, we compare these methods on publicly available datasets. Finally, we identified gaps and over-looked challenges between current academic researches and real-world applications. Based on these observations, we provide our insights and point out promising research directions.
[ { "created": "Fri, 10 Apr 2020 20:43:14 GMT", "version": "v1" }, { "created": "Wed, 9 Sep 2020 14:12:13 GMT", "version": "v2" } ]
2021-04-08
[ [ "Cui", "Yaodong", "" ], [ "Chen", "Ren", "" ], [ "Chu", "Wenbo", "" ], [ "Chen", "Long", "" ], [ "Tian", "Daxin", "" ], [ "Li", "Ying", "" ], [ "Cao", "Dongpu", "" ] ]
Autonomous vehicles have experienced rapid development in the past few years. However, achieving full autonomy is not a trivial task, due to the nature of the complex and dynamic driving environment. Therefore, autonomous vehicles are equipped with a suite of different sensors to ensure robust, accurate environmental perception. In particular, camera-LiDAR fusion is becoming an emerging research theme. However, so far there has been no critical review that focuses on deep-learning-based camera-LiDAR fusion methods. To bridge this gap and motivate future research, this paper is devoted to reviewing recent deep-learning-based data fusion approaches that leverage both image and point cloud data. This review gives a brief overview of deep learning on image and point cloud data processing. This is followed by in-depth reviews of camera-LiDAR fusion methods in depth completion, object detection, semantic segmentation, tracking and online cross-sensor calibration, which are organized based on their respective fusion levels. Furthermore, we compare these methods on publicly available datasets. Finally, we identify gaps and overlooked challenges between current academic research and real-world applications. Based on these observations, we provide our insights and point out promising research directions.
1609.08513
Sebastian Wild
Markus E. Nebel, Elisabeth Neumann, Sebastian Wild
Median-of-k Jumplists and Dangling-Min BSTs
appears in ANALCO 2019
null
10.1137/1.9781611975505.8
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We extend randomized jumplists introduced by Br\"onnimann et al. (STACS 2003) to choose jump-pointer targets as median of a small sample for better search costs, and present randomized algorithms with expected $O(\log n)$ time complexity that maintain the probability distribution of jump pointers upon insertions and deletions. We analyze the expected costs to search, insert and delete a random element, and we show that omitting jump pointers in small sublists hardly affects search costs, but significantly reduces the memory consumption. We use a bijection between jumplists and "dangling-min BSTs", a variant of (fringe-balanced) binary search trees for the analysis. Despite their similarities, some standard analysis techniques for search trees fail for dangling-min trees (and hence for jumplists).
[ { "created": "Tue, 27 Sep 2016 16:05:10 GMT", "version": "v1" }, { "created": "Wed, 28 Sep 2016 08:59:31 GMT", "version": "v2" }, { "created": "Tue, 30 Oct 2018 15:01:46 GMT", "version": "v3" } ]
2019-05-07
[ [ "Nebel", "Markus E.", "" ], [ "Neumann", "Elisabeth", "" ], [ "Wild", "Sebastian", "" ] ]
We extend randomized jumplists introduced by Br\"onnimann et al. (STACS 2003) to choose jump-pointer targets as the median of a small sample for better search costs, and present randomized algorithms with expected $O(\log n)$ time complexity that maintain the probability distribution of jump pointers upon insertions and deletions. We analyze the expected costs to search, insert and delete a random element, and we show that omitting jump pointers in small sublists hardly affects search costs, but significantly reduces the memory consumption. We use a bijection between jumplists and "dangling-min BSTs", a variant of (fringe-balanced) binary search trees, for the analysis. Despite their similarities, some standard analysis techniques for search trees fail for dangling-min trees (and hence for jumplists).
1811.08586
Changjian Li
Changjian Li, Krzysztof Czarnecki
Urban Driving with Multi-Objective Deep Reinforcement Learning
Accepted at AAMAS 2019
null
null
null
cs.LG cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
Autonomous driving is a challenging domain that entails multiple aspects: a vehicle should be able to drive to its destination as fast as possible while avoiding collision, obeying traffic rules and ensuring the comfort of passengers. In this paper, we present a deep learning variant of thresholded lexicographic Q-learning for the task of urban driving. Our multi-objective DQN agent learns to drive on multi-lane roads and intersections, yielding and changing lanes according to traffic rules. We also propose an extension for factored Markov Decision Processes to the DQN architecture that provides auxiliary features for the Q function. This is shown to significantly improve data efficiency. We then show that the learned policy is able to zero-shot transfer to a ring road without sacrificing performance.
[ { "created": "Wed, 21 Nov 2018 03:36:52 GMT", "version": "v1" }, { "created": "Tue, 26 Feb 2019 22:03:26 GMT", "version": "v2" } ]
2019-02-28
[ [ "Li", "Changjian", "" ], [ "Czarnecki", "Krzysztof", "" ] ]
Autonomous driving is a challenging domain that entails multiple aspects: a vehicle should be able to drive to its destination as fast as possible while avoiding collision, obeying traffic rules and ensuring the comfort of passengers. In this paper, we present a deep learning variant of thresholded lexicographic Q-learning for the task of urban driving. Our multi-objective DQN agent learns to drive on multi-lane roads and intersections, yielding and changing lanes according to traffic rules. We also propose an extension for factored Markov Decision Processes to the DQN architecture that provides auxiliary features for the Q function. This is shown to significantly improve data efficiency. We then show that the learned policy is able to zero-shot transfer to a ring road without sacrificing performance.
1912.04376
Mohammad Rashidi
Tyler Dauphinee, Nikunj Patel, Mohammad Rashidi
Modular Multimodal Architecture for Document Classification
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Page classification is a crucial component to any document analysis system, allowing for complex branching control flows for different components of a given document. Utilizing both the visual and textual content of a page, the proposed method exceeds the current state-of-the-art performance on the RVL-CDIP benchmark at 93.03% test accuracy.
[ { "created": "Mon, 9 Dec 2019 21:06:15 GMT", "version": "v1" } ]
2019-12-11
[ [ "Dauphinee", "Tyler", "" ], [ "Patel", "Nikunj", "" ], [ "Rashidi", "Mohammad", "" ] ]
Page classification is a crucial component of any document analysis system, allowing for complex branching control flows for different components of a given document. Utilizing both the visual and textual content of a page, the proposed method exceeds the current state-of-the-art performance on the RVL-CDIP benchmark at 93.03% test accuracy.
cs/0206034
Atsushi Fujii
Masatoshi Fukui, Shigeto Higuchi, Youichi Nakatani, Masao Tanaka, Atsushi Fujii and Tetsuya Ishikawa
Applying a Hybrid Query Translation Method to Japanese/English Cross-Language Patent Retrieval
null
ACM SIGIR 2000 Workshop on Patent Retrieval, July, 2000
null
null
cs.CL
null
This paper applies an existing query translation method to cross-language patent retrieval. In our method, multiple dictionaries are used to derive all possible translations for an input query, and collocational statistics are used to resolve translation ambiguity. We used Japanese/English parallel patent abstracts to perform comparative experiments, where our method outperformed a simple dictionary-based query translation method, and achieved 76% of monolingual retrieval in terms of average precision.
[ { "created": "Mon, 24 Jun 2002 07:46:06 GMT", "version": "v1" } ]
2007-05-23
[ [ "Fukui", "Masatoshi", "" ], [ "Higuchi", "Shigeto", "" ], [ "Nakatani", "Youichi", "" ], [ "Tanaka", "Masao", "" ], [ "Fujii", "Atsushi", "" ], [ "Ishikawa", "Tetsuya", "" ] ]
This paper applies an existing query translation method to cross-language patent retrieval. In our method, multiple dictionaries are used to derive all possible translations for an input query, and collocational statistics are used to resolve translation ambiguity. We used Japanese/English parallel patent abstracts to perform comparative experiments, where our method outperformed a simple dictionary-based query translation method, and achieved 76% of monolingual retrieval in terms of average precision.
2206.13072
Yan-Li Lee
Yan-Li Lee, Tao Zhou, Kexin Yang, Yajun Du, Liming Pan
Personalized recommendation system based on social relationships and historical behaviors
28 pages, 7 figures
null
null
null
cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Previous studies show that recommendation algorithms based on historical behaviors of users can provide satisfactory recommendation performance. Many of these algorithms pay attention to the interests of users, while ignoring the influence of social relationships on user behaviors. Social relationships not only carry intrinsic information of similar consumption tastes or behaviors, but also imply the influence of an individual on its neighbors. In this paper, we assume that social relationships and historical behaviors of users are related to the same factors. Based on this assumption, we propose an algorithm to focus on social relationships useful for recommendation systems through mutual constraints from both types of information. We test the performance of our algorithm on four types of users, including all users, active users, inactive users and cold-start users. Results show that the proposed algorithm outperforms benchmarks in four types of scenarios with respect to recommendation accuracy and diversity metrics. We further design a randomization model to explore the contribution of social relationships to recommendation performance, and the result shows that the contribution of social relationships in the proposed algorithm depends on the coupling strength of social relationships and historical behaviors.
[ { "created": "Mon, 27 Jun 2022 06:33:01 GMT", "version": "v1" }, { "created": "Thu, 14 Jul 2022 13:05:14 GMT", "version": "v2" } ]
2022-07-15
[ [ "Lee", "Yan-Li", "" ], [ "Zhou", "Tao", "" ], [ "Yang", "Kexin", "" ], [ "Du", "Yajun", "" ], [ "Pan", "Liming", "" ] ]
Previous studies show that recommendation algorithms based on historical behaviors of users can provide satisfactory recommendation performance. Many of these algorithms pay attention to the interests of users, while ignoring the influence of social relationships on user behaviors. Social relationships not only carry intrinsic information of similar consumption tastes or behaviors, but also imply the influence of an individual on its neighbors. In this paper, we assume that social relationships and historical behaviors of users are related to the same factors. Based on this assumption, we propose an algorithm to focus on social relationships useful for recommendation systems through mutual constraints from both types of information. We test the performance of our algorithm on four types of users, including all users, active users, inactive users and cold-start users. Results show that the proposed algorithm outperforms benchmarks in four types of scenarios with respect to recommendation accuracy and diversity metrics. We further design a randomization model to explore the contribution of social relationships to recommendation performance, and the result shows that the contribution of social relationships in the proposed algorithm depends on the coupling strength of social relationships and historical behaviors.
1202.0116
Yuriy Ostapov
Yuriy Ostapov
Inference and Plausible Reasoning in a Natural Language Understanding System Based on Object-Oriented Semantics
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Algorithms of inference in a computer system oriented to the input and semantic processing of text information are presented. Such inference is necessary for logical questions when the direct comparison of objects from a question and a database cannot give a result. The following classes of problems are considered: a check of hypotheses for persons and non-typical actions, the determination of persons and circumstances for non-typical actions, planning actions, and the determination of event causes and the states of persons. To form an answer, both deduction and plausible reasoning are used. As the knowledge domain under consideration is the social behavior of persons, plausible reasoning is based on the laws of social psychology. The proposed algorithms of inference and plausible reasoning can be realized in computer systems closely connected with text processing (criminology, operation of business, medicine, document systems).
[ { "created": "Wed, 1 Feb 2012 08:36:50 GMT", "version": "v1" } ]
2012-02-02
[ [ "Ostapov", "Yuriy", "" ] ]
Algorithms of inference in a computer system oriented to the input and semantic processing of text information are presented. Such inference is necessary for logical questions when the direct comparison of objects from a question and a database cannot give a result. The following classes of problems are considered: a check of hypotheses for persons and non-typical actions, the determination of persons and circumstances for non-typical actions, planning actions, and the determination of event causes and the states of persons. To form an answer, both deduction and plausible reasoning are used. As the knowledge domain under consideration is the social behavior of persons, plausible reasoning is based on the laws of social psychology. The proposed algorithms of inference and plausible reasoning can be realized in computer systems closely connected with text processing (criminology, operation of business, medicine, document systems).
1602.04034
Christopher Blake
Christopher G. Blake and Frank R. Kschischang
On Scaling Rules for Energy of VLSI Polar Encoders and Decoders
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is shown that all polar encoding schemes of rate $R>\frac{1}{2}$ of block length $N$ implemented according to the Thompson VLSI model must take energy $E\ge\Omega\left(N^{3/2}\right)$. This lower bound is achievable up to polylogarithmic factors using a mesh network topology defined by Thompson and the encoding algorithm defined by Arikan. A general class of circuits that compute successive cancellation decoding adapted from Arikan's butterfly network algorithm is defined. It is shown that such decoders implemented on a rectangle grid for codes of rate $R>2/3$ must take energy $E\ge\Omega(N^{3/2})$, and this can also be reached up to polylogarithmic factors using a mesh network. Capacity approaching sequences of energy optimal polar encoders and decoders, as a function of reciprocal gap to capacity $\chi = (1-R/C)^{-1}$, have energy that scales as $\Omega\left(\chi^{5.325}\right)\le E \le O\left(\chi^{7.05}\log^{4}\left(\chi\right)\right)$.
[ { "created": "Fri, 12 Feb 2016 12:38:58 GMT", "version": "v1" } ]
2016-02-15
[ [ "Blake", "Christopher G.", "" ], [ "Kschischang", "Frank R.", "" ] ]
It is shown that all polar encoding schemes of rate $R>\frac{1}{2}$ of block length $N$ implemented according to the Thompson VLSI model must take energy $E\ge\Omega\left(N^{3/2}\right)$. This lower bound is achievable up to polylogarithmic factors using a mesh network topology defined by Thompson and the encoding algorithm defined by Arikan. A general class of circuits that compute successive cancellation decoding adapted from Arikan's butterfly network algorithm is defined. It is shown that such decoders implemented on a rectangle grid for codes of rate $R>2/3$ must take energy $E\ge\Omega(N^{3/2})$, and this can also be reached up to polylogarithmic factors using a mesh network. Capacity approaching sequences of energy optimal polar encoders and decoders, as a function of reciprocal gap to capacity $\chi = (1-R/C)^{-1}$, have energy that scales as $\Omega\left(\chi^{5.325}\right)\le E \le O\left(\chi^{7.05}\log^{4}\left(\chi\right)\right)$.
1812.10315
Amit Kirschenbaum
Milan Dojchinovski and Julio Hernandez and Markus Ackermann and Amit Kirschenbaum and Sebastian Hellmann
DBpedia NIF: Open, Large-Scale and Multilingual Knowledge Extraction Corpus
15 pages, 1 figure, 4 tables, 1 listing
null
null
null
cs.CL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past decade, the DBpedia community has put a significant amount of effort into developing technical infrastructure and methods for the efficient extraction of structured information from Wikipedia. These efforts have been primarily focused on harvesting, refining and publishing semi-structured information found in Wikipedia articles, such as information from infoboxes, categorization information, images, wikilinks and citations. Nevertheless, a vast amount of valuable information is still contained in the unstructured Wikipedia article texts. In this paper, we present DBpedia NIF - a large-scale and multilingual knowledge extraction corpus. The aim of the dataset is two-fold: to dramatically broaden and deepen the amount of structured information in DBpedia, and to provide a large-scale and multilingual language resource for the development of various NLP and IR tasks. The dataset provides the content of all articles for 128 Wikipedia languages. We describe the dataset creation process and the NLP Interchange Format (NIF) used to model the content, links and structure of the information in the Wikipedia articles. The dataset has been further enriched with about 25% more links, and selected partitions have been published as Linked Data. Finally, we describe the maintenance and sustainability plans, and selected use cases of the dataset from the TextExt knowledge extraction challenge.
[ { "created": "Wed, 26 Dec 2018 13:50:50 GMT", "version": "v1" } ]
2018-12-27
[ [ "Dojchinovski", "Milan", "" ], [ "Hernandez", "Julio", "" ], [ "Ackermann", "Markus", "" ], [ "Kirschenbaum", "Amit", "" ], [ "Hellmann", "Sebastian", "" ] ]
In the past decade, the DBpedia community has put a significant amount of effort into developing technical infrastructure and methods for the efficient extraction of structured information from Wikipedia. These efforts have been primarily focused on harvesting, refining and publishing semi-structured information found in Wikipedia articles, such as information from infoboxes, categorization information, images, wikilinks and citations. Nevertheless, a vast amount of valuable information is still contained in the unstructured Wikipedia article texts. In this paper, we present DBpedia NIF - a large-scale and multilingual knowledge extraction corpus. The aim of the dataset is two-fold: to dramatically broaden and deepen the amount of structured information in DBpedia, and to provide a large-scale and multilingual language resource for the development of various NLP and IR tasks. The dataset provides the content of all articles for 128 Wikipedia languages. We describe the dataset creation process and the NLP Interchange Format (NIF) used to model the content, links and structure of the information in the Wikipedia articles. The dataset has been further enriched with about 25% more links, and selected partitions have been published as Linked Data. Finally, we describe the maintenance and sustainability plans, and selected use cases of the dataset from the TextExt knowledge extraction challenge.
1411.6027
Bernhard Rumpe
Radu Grosu, Bernhard Rumpe
Concurrent Timed Port Automata
34 pages, 3 figures, Technical Report TUM-I9533, TU Munich, 1995
null
null
TUM-I9533
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new and powerful class of automata which are explicitly concurrent and allow a very simple definition of composition. The novelty of these automata is their time-synchronous message-asynchronous communication mechanism. Time synchrony is obtained by using a global clock. Message asynchrony is obtained by requiring the automata to react to every input. Explicit concurrency is obtained by marking each transition with a set of input and output messages. We compare these automata with a history-based approach which uses the same communication mechanism and show that they are equivalent.
[ { "created": "Mon, 10 Nov 2014 13:07:05 GMT", "version": "v1" } ]
2014-11-25
[ [ "Grosu", "Radu", "" ], [ "Rumpe", "Bernhard", "" ] ]
We present a new and powerful class of automata which are explicitly concurrent and allow a very simple definition of composition. The novelty of these automata is their time-synchronous message-asynchronous communication mechanism. Time synchrony is obtained by using a global clock. Message asynchrony is obtained by requiring the automata to react to every input. Explicit concurrency is obtained by marking each transition with a set of input and output messages. We compare these automata with a history-based approach which uses the same communication mechanism and show that they are equivalent.
1804.08850
Bin Chen
Bin Chen, Chigo Okonkwo, Hartmut Hafermann, Alex Alvarado
Increasing Achievable Information Rates via Geometric Shaping
Additional references have been added
null
10.1109/ECOC.2018.8535358
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Achievable information rates are used as a metric to design novel modulation formats via geometric shaping. The proposed geometrically shaped 256-ary constellation achieves SNR gains of up to 1.18 dB.
[ { "created": "Tue, 24 Apr 2018 06:12:25 GMT", "version": "v1" }, { "created": "Tue, 1 May 2018 15:59:41 GMT", "version": "v2" } ]
2020-06-05
[ [ "Chen", "Bin", "" ], [ "Okonkwo", "Chigo", "" ], [ "Hafermann", "Hartmut", "" ], [ "Alvarado", "Alex", "" ] ]
Achievable information rates are used as a metric to design novel modulation formats via geometric shaping. The proposed geometrically shaped 256-ary constellation achieves SNR gains of up to 1.18 dB.
2407.15326
Xu Long
Long Xu
Intelligence Preschool Education System based on Multimodal Interaction Systems and AI
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rapid progress in AI technologies has generated considerable interest in their potential to address challenges in every field, and education is no exception. Improving learning outcomes and providing relevant education to all have been dominant themes universally, both in the developed and the developing world, and they have taken on greater significance in the current era of technology-driven personalization.
[ { "created": "Mon, 22 Jul 2024 02:12:42 GMT", "version": "v1" }, { "created": "Fri, 2 Aug 2024 03:35:05 GMT", "version": "v2" } ]
2024-08-05
[ [ "Xu", "Long", "" ] ]
Rapid progress in AI technologies has generated considerable interest in their potential to address challenges in every field, and education is no exception. Improving learning outcomes and providing relevant education to all have been dominant themes universally, both in the developed and the developing world, and they have taken on greater significance in the current era of technology-driven personalization.
2401.06366
Minzhao Lyu
Minzhao Lyu and Sharat Chandra Madanapalli and Arun Vishwanath and Vijay Sivaraman
Network Anatomy and Real-Time Measurement of Nvidia GeForce NOW Cloud Gaming
This paper is accepted at Passive and Active Measurement (PAM) conference Mar 2024
M. Lyu, S. C. Madanapalli, A. Vishwanath, and V. Sivaraman, "Network Anatomy and Real-Time Measurement of Nvidia GeForce NOW Cloud Gaming", in Proc. PAM, Virtual Event, Mar 2024
10.1007/978-3-031-56249-5_3
null
cs.NI cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cloud gaming, wherein game graphics is rendered in the cloud and streamed back to the user as real-time video, expands the gaming market to billions of users who do not have gaming consoles or high-power graphics PCs. Companies like Nvidia, Amazon, Sony and Microsoft are investing in building cloud gaming platforms to tap this large unserved market. However, cloud gaming requires the user to have high bandwidth and stable network connectivity - whereas a typical console game needs about 100-200 kbps, a cloud game demands minimum 10-20 Mbps. This makes the Internet Service Provider (ISP) a key player in ensuring the end-user's good gaming experience. In this paper we develop a method to detect Nvidia's GeForce NOW cloud gaming sessions over their network infrastructure, and measure associated user experience. In particular, we envision ISPs taking advantage of our method to provision network capacity at the right time and in the right place to support growth in cloud gaming at the right experience level; as well as identify the role of contextual factors such as user setup (browser vs app) and connectivity type (wired vs wireless) in performance degradation. We first present a detailed anatomy of flow establishment and volumetric profiles of cloud gaming sessions over multiple platforms, followed by a method to detect gameplay and measure key experience aspects such as latency, frame rate and resolution via real-time analysis of network traffic. The insights and methods are also validated in the lab for XBox Cloud Gaming platform. We then implement and deploy our method in a campus network to capture gameplay behaviors and experience measures across various user setups and connectivity types which we believe are valuable for network operators.
[ { "created": "Fri, 12 Jan 2024 04:33:55 GMT", "version": "v1" }, { "created": "Tue, 13 Feb 2024 08:18:34 GMT", "version": "v2" } ]
2024-04-05
[ [ "Lyu", "Minzhao", "" ], [ "Madanapalli", "Sharat Chandra", "" ], [ "Vishwanath", "Arun", "" ], [ "Sivaraman", "Vijay", "" ] ]
Cloud gaming, wherein game graphics is rendered in the cloud and streamed back to the user as real-time video, expands the gaming market to billions of users who do not have gaming consoles or high-power graphics PCs. Companies like Nvidia, Amazon, Sony and Microsoft are investing in building cloud gaming platforms to tap this large unserved market. However, cloud gaming requires the user to have high bandwidth and stable network connectivity - whereas a typical console game needs about 100-200 kbps, a cloud game demands minimum 10-20 Mbps. This makes the Internet Service Provider (ISP) a key player in ensuring the end-user's good gaming experience. In this paper we develop a method to detect Nvidia's GeForce NOW cloud gaming sessions over their network infrastructure, and measure associated user experience. In particular, we envision ISPs taking advantage of our method to provision network capacity at the right time and in the right place to support growth in cloud gaming at the right experience level; as well as identify the role of contextual factors such as user setup (browser vs app) and connectivity type (wired vs wireless) in performance degradation. We first present a detailed anatomy of flow establishment and volumetric profiles of cloud gaming sessions over multiple platforms, followed by a method to detect gameplay and measure key experience aspects such as latency, frame rate and resolution via real-time analysis of network traffic. The insights and methods are also validated in the lab for XBox Cloud Gaming platform. We then implement and deploy our method in a campus network to capture gameplay behaviors and experience measures across various user setups and connectivity types which we believe are valuable for network operators.
2012.07541
Guangyao Zhai
Guangyao Zhai, Xin Kong, Jinhao Cui, Yong Liu, and Zhen Yang
FlowMOT: 3D Multi-Object Tracking by Scene Flow Association
Internship technical report
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most end-to-end Multi-Object Tracking (MOT) methods face the problems of low accuracy and poor generalization ability. Although traditional filter-based methods can achieve better results, they are difficult to endow with optimal hyperparameters and often fail in varying scenarios. To alleviate these drawbacks, we propose a LiDAR-based 3D MOT framework named FlowMOT, which integrates point-wise motion information with a traditional matching algorithm, enhancing the robustness of the motion prediction. We first utilize a scene flow estimation network to obtain implicit motion information between two adjacent frames and calculate the predicted detection for each old tracklet in the previous frame. Then we use the Hungarian algorithm to generate optimal matching relations with an ID propagation strategy to finish the tracking task. Experiments on the KITTI MOT dataset show that our approach outperforms recent end-to-end methods and achieves competitive performance with the state-of-the-art filter-based method. In addition, our method works steadily in various-speed scenarios where filter-based methods may fail.
[ { "created": "Mon, 14 Dec 2020 14:03:48 GMT", "version": "v1" }, { "created": "Tue, 15 Dec 2020 13:18:56 GMT", "version": "v2" }, { "created": "Fri, 5 Mar 2021 10:36:56 GMT", "version": "v3" } ]
2021-03-08
[ [ "Zhai", "Guangyao", "" ], [ "Kong", "Xin", "" ], [ "Cui", "Jinhao", "" ], [ "Liu", "Yong", "" ], [ "Yang", "Zhen", "" ] ]
Most end-to-end Multi-Object Tracking (MOT) methods face the problems of low accuracy and poor generalization ability. Although traditional filter-based methods can achieve better results, they are difficult to endow with optimal hyperparameters and often fail in varying scenarios. To alleviate these drawbacks, we propose a LiDAR-based 3D MOT framework named FlowMOT, which integrates point-wise motion information with a traditional matching algorithm, enhancing the robustness of the motion prediction. We first utilize a scene flow estimation network to obtain implicit motion information between two adjacent frames and calculate the predicted detection for each old tracklet in the previous frame. Then we use the Hungarian algorithm to generate optimal matching relations with an ID propagation strategy to finish the tracking task. Experiments on the KITTI MOT dataset show that our approach outperforms recent end-to-end methods and achieves competitive performance with the state-of-the-art filter-based method. In addition, our method works steadily in various-speed scenarios where filter-based methods may fail.
2307.01197
Frano Raji\v{c}
Frano Raji\v{c}, Lei Ke, Yu-Wing Tai, Chi-Keung Tang, Martin Danelljan, Fisher Yu
Segment Anything Meets Point Tracking
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Segment Anything Model (SAM) has established itself as a powerful zero-shot image segmentation model, enabled by efficient point-centric annotation and prompt-based models. While click and brush interactions are both well explored in interactive image segmentation, the existing methods on videos focus on mask annotation and propagation. This paper presents SAM-PT, a novel method for point-centric interactive video segmentation, empowered by SAM and long-term point tracking. SAM-PT leverages robust and sparse point selection and propagation techniques for mask generation. Compared to traditional object-centric mask propagation strategies, we uniquely use point propagation to exploit local structure information agnostic to object semantics. We highlight the merits of point-based tracking through direct evaluation on the zero-shot open-world Unidentified Video Objects (UVO) benchmark. Our experiments on popular video object segmentation and multi-object segmentation tracking benchmarks, including DAVIS, YouTube-VOS, and BDD100K, suggest that a point-based segmentation tracker yields better zero-shot performance and efficient interactions. We release our code that integrates different point trackers and video segmentation benchmarks at https://github.com/SysCV/sam-pt.
[ { "created": "Mon, 3 Jul 2023 17:58:01 GMT", "version": "v1" }, { "created": "Sun, 3 Dec 2023 23:57:43 GMT", "version": "v2" } ]
2023-12-05
[ [ "Rajič", "Frano", "" ], [ "Ke", "Lei", "" ], [ "Tai", "Yu-Wing", "" ], [ "Tang", "Chi-Keung", "" ], [ "Danelljan", "Martin", "" ], [ "Yu", "Fisher", "" ] ]
The Segment Anything Model (SAM) has established itself as a powerful zero-shot image segmentation model, enabled by efficient point-centric annotation and prompt-based models. While click and brush interactions are both well explored in interactive image segmentation, the existing methods on videos focus on mask annotation and propagation. This paper presents SAM-PT, a novel method for point-centric interactive video segmentation, empowered by SAM and long-term point tracking. SAM-PT leverages robust and sparse point selection and propagation techniques for mask generation. Compared to traditional object-centric mask propagation strategies, we uniquely use point propagation to exploit local structure information agnostic to object semantics. We highlight the merits of point-based tracking through direct evaluation on the zero-shot open-world Unidentified Video Objects (UVO) benchmark. Our experiments on popular video object segmentation and multi-object segmentation tracking benchmarks, including DAVIS, YouTube-VOS, and BDD100K, suggest that a point-based segmentation tracker yields better zero-shot performance and efficient interactions. We release our code that integrates different point trackers and video segmentation benchmarks at https://github.com/SysCV/sam-pt.
2311.01634
Cory Dal Ponte
Cory Dal Ponte, Sathana Dushyanthen and Kayley Lyons
"Close...but not as good as an educator." -- Using ChatGPT to provide formative feedback in large-class collaborative learning
4 pages, 2 figures, 1 table
null
null
null
cs.HC cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Delivering personalised, formative feedback to multiple problem-based learning groups in a short time period can be almost impossible. We employed ChatGPT to provide personalised formative feedback in a one-hour Zoom break-out room activity that taught practicing health professionals how to formulate evaluation plans for digital health initiatives. Learners completed an evaluation survey that included Likert scales and open-ended questions that were analysed. Half of the 44 survey respondents had never used ChatGPT before. Overall, respondents found the feedback favourable, described a wide range of group dynamics, and had adaptive responses to the feedback, yet only three groups used the feedback loop to improve their evaluation plans. Future educators can learn from our experience including engineering prompts, providing instructions on how to use ChatGPT, and scaffolding optimal group interactions with ChatGPT. Future researchers should explore the influence of ChatGPT on group dynamics and derive design principles for the use of ChatGPT in collaborative learning.
[ { "created": "Thu, 2 Nov 2023 23:00:38 GMT", "version": "v1" } ]
2023-11-06
[ [ "Ponte", "Cory Dal", "" ], [ "Dushyanthen", "Sathana", "" ], [ "Lyons", "Kayley", "" ] ]
Delivering personalised, formative feedback to multiple problem-based learning groups in a short time period can be almost impossible. We employed ChatGPT to provide personalised formative feedback in a one-hour Zoom break-out room activity that taught practicing health professionals how to formulate evaluation plans for digital health initiatives. Learners completed an evaluation survey that included Likert scales and open-ended questions that were analysed. Half of the 44 survey respondents had never used ChatGPT before. Overall, respondents found the feedback favourable, described a wide range of group dynamics, and had adaptive responses to the feedback, yet only three groups used the feedback loop to improve their evaluation plans. Future educators can learn from our experience including engineering prompts, providing instructions on how to use ChatGPT, and scaffolding optimal group interactions with ChatGPT. Future researchers should explore the influence of ChatGPT on group dynamics and derive design principles for the use of ChatGPT in collaborative learning.
2010.13952
Farzaneh Khoshnevisan
Farzaneh Khoshnevisan and Min Chi
An Adversarial Domain Separation Framework for Septic Shock Early Prediction Across EHR Systems
to be published in 2020 IEEE International Conference on Big Data
null
10.1109/BigData50022.2020.9378058
null
cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling patient disease progression using Electronic Health Records (EHRs) is critical to assist clinical decision making. While most of prior work has mainly focused on developing effective disease progression models using EHRs collected from an individual medical system, relatively little work has investigated building robust yet generalizable diagnosis models across different systems. In this work, we propose a general domain adaptation (DA) framework that tackles two categories of discrepancies in EHRs collected from different medical systems: one is caused by heterogeneous patient populations (covariate shift) and the other is caused by variations in data collection procedures (systematic bias). Prior research in DA has mainly focused on addressing covariate shift but not systematic bias. In this work, we propose an adversarial domain separation framework that addresses both categories of discrepancies by maintaining one globally-shared invariant latent representation across all systems through an adversarial learning process, while also allocating a domain-specific model for each system to extract local latent representations that cannot and should not be unified across systems. Moreover, our proposed framework is based on a variational recurrent neural network (VRNN) because of its ability to capture complex temporal dependencies and handle missing values in time-series data. We evaluate our framework for early diagnosis of an extremely challenging condition, septic shock, using two real-world EHRs from distinct medical systems in the U.S. The results show that by separating globally-shared from domain-specific representations, our framework significantly improves septic shock early prediction performance in both EHRs and outperforms the current state-of-the-art DA models.
[ { "created": "Mon, 26 Oct 2020 23:41:33 GMT", "version": "v1" } ]
2021-03-23
[ [ "Khoshnevisan", "Farzaneh", "" ], [ "Chi", "Min", "" ] ]
Modeling patient disease progression using Electronic Health Records (EHRs) is critical to assist clinical decision making. While most prior work has focused on developing effective disease progression models using EHRs collected from an individual medical system, relatively little work has investigated building robust yet generalizable diagnosis models across different systems. In this work, we propose a general domain adaptation (DA) framework that tackles two categories of discrepancies in EHRs collected from different medical systems: one is caused by heterogeneous patient populations (covariate shift) and the other is caused by variations in data collection procedures (systematic bias). Prior research in DA has mainly focused on addressing covariate shift but not systematic bias. In this work, we propose an adversarial domain separation framework that addresses both categories of discrepancies by maintaining one globally-shared invariant latent representation across all systems through an adversarial learning process, while also allocating a domain-specific model for each system to extract local latent representations that cannot and should not be unified across systems. Moreover, our proposed framework is based on a variational recurrent neural network (VRNN) because of its ability to capture complex temporal dependencies and handle missing values in time-series data. We evaluate our framework for early diagnosis of an extremely challenging condition, septic shock, using two real-world EHRs from distinct medical systems in the U.S. The results show that by separating globally-shared from domain-specific representations, our framework significantly improves septic shock early prediction performance in both EHRs and outperforms the current state-of-the-art DA models.
0711.3291
EDA Publishing Association
J. Juillard, E. Colinet (LETI), M. Dominguez, Joan Pons, J. Ricart
Resolution Limits for Resonant MEMS Sensors Based on Discrete Relay Feedback Techniques
Submitted on behalf of TIMA Editions (http://irevues.inist.fr/tima-editions)
Dans Symposium on Design, Test, Integration and Packaging of MEMS/MOEMS - DTIP 2006, Stresa, Lago Maggiore : Italie (2006)
null
null
cs.OH
null
This paper is devoted to the analysis of resonant MEMS sensors based on discrete relay feedback techniques. One drawback of such techniques is that some synchronization usually occurs between the discrete part and the continuous part of the system: this results in sensor responses that are very similar to the curves known as devil's staircases, i.e. the frequency does not vary smoothly with the sensor's input. The main contribution of this paper is a theoretical calculation of the resolution of such systems. The resolutions of two existing resonant MEMS architectures are then calculated and these results are discussed.
[ { "created": "Wed, 21 Nov 2007 09:35:07 GMT", "version": "v1" } ]
2007-11-29
[ [ "Juillard", "J.", "", "LETI" ], [ "Colinet", "E.", "", "LETI" ], [ "Dominguez", "M.", "" ], [ "Pons", "Joan", "" ], [ "Ricart", "J.", "" ] ]
This paper is devoted to the analysis of resonant MEMS sensors based on discrete relay feedback techniques. One drawback of such techniques is that some synchronization usually occurs between the discrete part and the continuous part of the system: this results in sensor responses that are very similar to the curves known as devil's staircases, i.e. the frequency does not vary smoothly with the sensor's input. The main contribution of this paper is a theoretical calculation of the resolution of such systems. The resolutions of two existing resonant MEMS architectures are then calculated and these results are discussed.
2108.09200
David Aparicio
Maria In\^es Silva, David Apar\'icio, Beatriz Malveiro, Jo\~ao Tiago Ascens\~ao, Pedro Bizarro
GUDIE: a flexible, user-defined method to extract subgraphs of interest from large graphs
16 pages, 8 figures, accepted at GEM2021
null
null
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
Large, dense, small-world networks often emerge from social phenomena, including financial networks, social media, or epidemiology. As networks grow in importance, it is often necessary to partition them into meaningful units of analysis. In this work, we propose GUDIE, a message-passing algorithm that extracts relevant context around seed nodes based on user-defined criteria. We design GUDIE for rich, labeled graphs, and expansions consider node and edge attributes. Preliminary results indicate that GUDIE expands to insightful areas while avoiding unimportant connections. The resulting subgraphs contain the relevant context for a seed node and can accelerate and extend analysis capabilities in finance and other critical networks.
[ { "created": "Fri, 20 Aug 2021 14:42:13 GMT", "version": "v1" } ]
2021-08-23
[ [ "Silva", "Maria Inês", "" ], [ "Aparício", "David", "" ], [ "Malveiro", "Beatriz", "" ], [ "Ascensão", "João Tiago", "" ], [ "Bizarro", "Pedro", "" ] ]
Large, dense, small-world networks often emerge from social phenomena, including financial networks, social media, or epidemiology. As networks grow in importance, it is often necessary to partition them into meaningful units of analysis. In this work, we propose GUDIE, a message-passing algorithm that extracts relevant context around seed nodes based on user-defined criteria. We design GUDIE for rich, labeled graphs, and expansions consider node and edge attributes. Preliminary results indicate that GUDIE expands to insightful areas while avoiding unimportant connections. The resulting subgraphs contain the relevant context for a seed node and can accelerate and extend analysis capabilities in finance and other critical networks.
2404.06404
Mohammad Namvarpour
M. Namvarpour and A. Razi
Apprentices to Research Assistants: Advancing Research with Large Language Models
null
null
null
null
cs.HC cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Large Language Models (LLMs) have emerged as powerful tools in various research domains. This article examines their potential through a literature review and firsthand experimentation. While LLMs offer benefits like cost-effectiveness and efficiency, challenges such as prompt tuning, biases, and subjectivity must be addressed. The study presents insights from experiments utilizing LLMs for qualitative analysis, highlighting successes and limitations. Additionally, it discusses strategies for mitigating challenges, such as prompt optimization techniques and leveraging human expertise. This study aligns with the 'LLMs as Research Tools' workshop's focus on integrating LLMs into HCI data work critically and ethically. By addressing both opportunities and challenges, our work contributes to the ongoing dialogue on their responsible application in research.
[ { "created": "Tue, 9 Apr 2024 15:53:06 GMT", "version": "v1" } ]
2024-04-10
[ [ "Namvarpour", "M.", "" ], [ "Razi", "A.", "" ] ]
Large Language Models (LLMs) have emerged as powerful tools in various research domains. This article examines their potential through a literature review and firsthand experimentation. While LLMs offer benefits like cost-effectiveness and efficiency, challenges such as prompt tuning, biases, and subjectivity must be addressed. The study presents insights from experiments utilizing LLMs for qualitative analysis, highlighting successes and limitations. Additionally, it discusses strategies for mitigating challenges, such as prompt optimization techniques and leveraging human expertise. This study aligns with the 'LLMs as Research Tools' workshop's focus on integrating LLMs into HCI data work critically and ethically. By addressing both opportunities and challenges, our work contributes to the ongoing dialogue on their responsible application in research.
1704.01006
Gerrit Bagschik
Gerrit Bagschik, Till Menzel, Markus Maurer
Ontology based Scene Creation for the Development of Automated Vehicles
Accepted at the 2018 IEEE Intelligent Vehicles Symposium, 8 pages, 10 figures
null
null
null
cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The introduction of automated vehicles without permanent human supervision demands a functional system description, including functional system boundaries and a comprehensive safety analysis. These inputs to the technical development can be identified and analyzed by a scenario-based approach. Furthermore, to establish an economical test and release process, a large number of scenarios must be identified to obtain meaningful test results. Experts do well at identifying scenarios that are difficult to handle or unlikely to happen. However, experts are unlikely to identify all possible scenarios based on the knowledge they have on hand. Expert knowledge modeled for computer-aided processing may help to provide a wide range of scenarios. This contribution reviews ontologies as knowledge-based systems in the field of automated vehicles, and proposes a generation of traffic scenes in natural language as a basis for a scenario creation.
[ { "created": "Wed, 29 Mar 2017 20:02:39 GMT", "version": "v1" }, { "created": "Thu, 15 Jun 2017 07:22:17 GMT", "version": "v2" }, { "created": "Thu, 4 Jan 2018 11:56:14 GMT", "version": "v3" }, { "created": "Fri, 19 Jan 2018 08:50:19 GMT", "version": "v4" }, { "created": "Mon, 23 Apr 2018 20:25:41 GMT", "version": "v5" } ]
2018-04-25
[ [ "Bagschik", "Gerrit", "" ], [ "Menzel", "Till", "" ], [ "Maurer", "Markus", "" ] ]
The introduction of automated vehicles without permanent human supervision demands a functional system description, including functional system boundaries and a comprehensive safety analysis. These inputs to the technical development can be identified and analyzed by a scenario-based approach. Furthermore, to establish an economical test and release process, a large number of scenarios must be identified to obtain meaningful test results. Experts do well at identifying scenarios that are difficult to handle or unlikely to happen. However, experts are unlikely to identify all possible scenarios based on the knowledge they have on hand. Expert knowledge modeled for computer-aided processing may help to provide a wide range of scenarios. This contribution reviews ontologies as knowledge-based systems in the field of automated vehicles, and proposes a generation of traffic scenes in natural language as a basis for a scenario creation.
2106.07830
Zongyu Dai
Zhiqi Bu, Hua Wang, Zongyu Dai, Qi Long
On the Convergence and Calibration of Deep Learning with Differential Privacy
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differentially private (DP) training preserves data privacy, usually at the cost of slower convergence (and thus lower accuracy), as well as more severe mis-calibration than its non-private counterpart. To analyze the convergence of DP training, we formulate a continuous time analysis through the lens of the neural tangent kernel (NTK), which characterizes the per-sample gradient clipping and the noise addition in DP training, for arbitrary network architectures and loss functions. Interestingly, we show that the noise addition only affects the privacy risk but not the convergence or calibration, whereas the per-sample gradient clipping (under both flat and layerwise clipping styles) only affects the convergence and calibration. Furthermore, we observe that DP models trained with a small clipping norm usually achieve the best accuracy, but are poorly calibrated and thus unreliable. In sharp contrast, DP models trained with a large clipping norm enjoy the same privacy guarantee and similar accuracy, but are significantly more \textit{calibrated}. Our code can be found at \url{https://github.com/woodyx218/opacus_global_clipping}.
[ { "created": "Tue, 15 Jun 2021 01:32:29 GMT", "version": "v1" }, { "created": "Sat, 17 Jul 2021 04:11:06 GMT", "version": "v2" }, { "created": "Sun, 10 Oct 2021 04:41:31 GMT", "version": "v3" }, { "created": "Sat, 29 Jan 2022 05:25:22 GMT", "version": "v4" }, { "created": "Tue, 1 Feb 2022 22:38:40 GMT", "version": "v5" }, { "created": "Mon, 19 Jun 2023 15:13:37 GMT", "version": "v6" } ]
2023-06-21
[ [ "Bu", "Zhiqi", "" ], [ "Wang", "Hua", "" ], [ "Dai", "Zongyu", "" ], [ "Long", "Qi", "" ] ]
Differentially private (DP) training preserves data privacy, usually at the cost of slower convergence (and thus lower accuracy), as well as more severe mis-calibration than its non-private counterpart. To analyze the convergence of DP training, we formulate a continuous time analysis through the lens of the neural tangent kernel (NTK), which characterizes the per-sample gradient clipping and the noise addition in DP training, for arbitrary network architectures and loss functions. Interestingly, we show that the noise addition only affects the privacy risk but not the convergence or calibration, whereas the per-sample gradient clipping (under both flat and layerwise clipping styles) only affects the convergence and calibration. Furthermore, we observe that DP models trained with a small clipping norm usually achieve the best accuracy, but are poorly calibrated and thus unreliable. In sharp contrast, DP models trained with a large clipping norm enjoy the same privacy guarantee and similar accuracy, but are significantly more \textit{calibrated}. Our code can be found at \url{https://github.com/woodyx218/opacus_global_clipping}.
2404.10206
Valdemar \v{S}v\'abensk\'y
Jan Vykopal, Pavel \v{C}eleda, Valdemar \v{S}v\'abensk\'y, Martin Hofbauer, Martin Hor\'ak
Research and Practice of Delivering Tabletop Exercises
Published in ACM ITiCSE 2024 conference proceedings, see https://doi.org/10.1145/3649217.3653642
null
10.1145/3649217.3653642
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Tabletop exercises are used to train personnel in the efficient mitigation and resolution of incidents. They are applied in practice to support the preparedness of organizations and to highlight inefficient processes. Since tabletop exercises train competencies required in the workplace, they have been introduced into computing courses at universities as an innovation, especially within cybersecurity curricula. To help computing educators adopt this innovative method, we survey academic publications that deal with tabletop exercises. From 140 papers we identified and examined, we selected 14 papers for a detailed review. The results show that the existing research deals predominantly with exercises that follow a linear format and exercises that do not systematically collect data about trainees' learning. Computing education researchers can investigate novel approaches to instruction and assessment in the context of tabletop exercises to maximize the impact of this teaching method. Due to the relatively low number of published papers, the potential for future research is immense. Our review provides researchers, tool developers, and educators with an orientation in the area, a synthesis of trends, and implications for further work.
[ { "created": "Tue, 16 Apr 2024 01:12:20 GMT", "version": "v1" } ]
2024-04-17
[ [ "Vykopal", "Jan", "" ], [ "Čeleda", "Pavel", "" ], [ "Švábenský", "Valdemar", "" ], [ "Hofbauer", "Martin", "" ], [ "Horák", "Martin", "" ] ]
Tabletop exercises are used to train personnel in the efficient mitigation and resolution of incidents. They are applied in practice to support the preparedness of organizations and to highlight inefficient processes. Since tabletop exercises train competencies required in the workplace, they have been introduced into computing courses at universities as an innovation, especially within cybersecurity curricula. To help computing educators adopt this innovative method, we survey academic publications that deal with tabletop exercises. From 140 papers we identified and examined, we selected 14 papers for a detailed review. The results show that the existing research deals predominantly with exercises that follow a linear format and exercises that do not systematically collect data about trainees' learning. Computing education researchers can investigate novel approaches to instruction and assessment in the context of tabletop exercises to maximize the impact of this teaching method. Due to the relatively low number of published papers, the potential for future research is immense. Our review provides researchers, tool developers, and educators with an orientation in the area, a synthesis of trends, and implications for further work.
1812.00929
Eric Tzeng
Eric Tzeng, Kaylee Burns, Kate Saenko, Trevor Darrell
SPLAT: Semantic Pixel-Level Adaptation Transforms for Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Domain adaptation of visual detectors is a critical challenge, yet existing methods have overlooked pixel appearance transformations, focusing instead on bootstrapping and/or domain confusion losses. We propose a Semantic Pixel-Level Adaptation Transform (SPLAT) approach to detector adaptation that efficiently generates cross-domain image pairs. Our model uses aligned-pair and/or pseudo-label losses to adapt an object detector to the target domain, and can learn transformations with or without densely labeled data in the source (e.g. semantic segmentation annotations). Without dense labels, as is the case when only detection labels are available in the source, transformations are learned using CycleGAN alignment. Otherwise, when dense labels are available we introduce a more efficient cycle-free method, which exploits pixel-level semantic labels to condition the training of the transformation network. The end task is then trained using detection box labels from the source, potentially including labels inferred on unlabeled source data. We show both that pixel-level transforms outperform prior approaches to detector domain adaptation, and that our cycle-free method outperforms prior models for unconstrained cycle-based learning of generic transformations while running 3.8 times faster. Our combined model improves on prior detection baselines by 12.5 mAP adapting from Sim 10K to Cityscapes, recovering over 50% of the missing performance between the unadapted baseline and the labeled-target upper bound.
[ { "created": "Mon, 3 Dec 2018 17:38:52 GMT", "version": "v1" } ]
2018-12-04
[ [ "Tzeng", "Eric", "" ], [ "Burns", "Kaylee", "" ], [ "Saenko", "Kate", "" ], [ "Darrell", "Trevor", "" ] ]
Domain adaptation of visual detectors is a critical challenge, yet existing methods have overlooked pixel appearance transformations, focusing instead on bootstrapping and/or domain confusion losses. We propose a Semantic Pixel-Level Adaptation Transform (SPLAT) approach to detector adaptation that efficiently generates cross-domain image pairs. Our model uses aligned-pair and/or pseudo-label losses to adapt an object detector to the target domain, and can learn transformations with or without densely labeled data in the source (e.g. semantic segmentation annotations). Without dense labels, as is the case when only detection labels are available in the source, transformations are learned using CycleGAN alignment. Otherwise, when dense labels are available we introduce a more efficient cycle-free method, which exploits pixel-level semantic labels to condition the training of the transformation network. The end task is then trained using detection box labels from the source, potentially including labels inferred on unlabeled source data. We show both that pixel-level transforms outperform prior approaches to detector domain adaptation, and that our cycle-free method outperforms prior models for unconstrained cycle-based learning of generic transformations while running 3.8 times faster. Our combined model improves on prior detection baselines by 12.5 mAP adapting from Sim 10K to Cityscapes, recovering over 50% of the missing performance between the unadapted baseline and the labeled-target upper bound.
1710.02019
Daniel Augot
Daniel Augot (GRACE), Herv\'e Chabanne, Thomas Chenevier, William George (LIX, GRACE), Laurent Lamber
A User-Centric System for Verified Identities on the Bitcoin Blockchain
null
International Workshop on Cryptocurrencies and Blockchain Technology - CBT'17, Sep 2017, Oslo, Norway
null
null
cs.CR cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an identity management scheme built into the Bitcoin blockchain, allowing for identities that are as indelible as the blockchain itself. Moreover, we take advantage of Bitcoin's decentralized nature to facilitate a shared control between users and identity providers, allowing users to directly manage their own identities, fluidly coordinating identities from different providers, even as identity providers can revoke identities and impose controls.
[ { "created": "Thu, 5 Oct 2017 13:48:04 GMT", "version": "v1" } ]
2017-10-06
[ [ "Augot", "Daniel", "", "GRACE" ], [ "Chabanne", "Hervé", "", "LIX, GRACE" ], [ "Chenevier", "Thomas", "", "LIX, GRACE" ], [ "George", "William", "", "LIX, GRACE" ], [ "Lamber", "Laurent", "" ] ]
We present an identity management scheme built into the Bitcoin blockchain, allowing for identities that are as indelible as the blockchain itself. Moreover, we take advantage of Bitcoin's decentralized nature to facilitate a shared control between users and identity providers, allowing users to directly manage their own identities, fluidly coordinating identities from different providers, even as identity providers can revoke identities and impose controls.
1501.02419
Jeffrey Wildman
Jeffrey Wildman, Yusuf Osmanlioglu, Steven Weber, Ali Shokoufandeh
Delay Minimizing User Association in Cellular Networks via Hierarchically Well-Separated Trees
6 pages, 5 figures. Submitted on 2013-10-03 to the 2015 IEEE International Conference on Communications (ICC). Accepted on 2015-01-09 to the 2015 IEEE International Conference on Communications (ICC)
null
10.1109/ICC.2015.7248950
null
cs.IT cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study downlink delay minimization within the context of cellular user association policies that map mobile users to base stations. We note the delay minimum user association problem fits within a broader class of network utility maximization and can be posed as a non-convex quadratic program. This non-convexity motivates a split quadratic objective function that captures the original problem's inherent tradeoff: association with a station that provides the highest signal-to-interference-plus-noise ratio (SINR) vs. a station that is least congested. We find the split-term formulation is amenable to linearization by embedding the base stations in a hierarchically well-separated tree (HST), which offers a linear approximation with constant distortion. We provide a numerical comparison of several problem formulations and find that with appropriate optimization parameter selection, the quadratic reformulation produces association policies with sum delays that are close to that of the original network utility maximization. We also comment on the more difficult problem when idle base stations (those without associated users) are deactivated.
[ { "created": "Sun, 11 Jan 2015 05:40:03 GMT", "version": "v1" } ]
2015-10-08
[ [ "Wildman", "Jeffrey", "" ], [ "Osmanlioglu", "Yusuf", "" ], [ "Weber", "Steven", "" ], [ "Shokoufandeh", "Ali", "" ] ]
We study downlink delay minimization within the context of cellular user association policies that map mobile users to base stations. We note the delay minimum user association problem fits within a broader class of network utility maximization and can be posed as a non-convex quadratic program. This non-convexity motivates a split quadratic objective function that captures the original problem's inherent tradeoff: association with a station that provides the highest signal-to-interference-plus-noise ratio (SINR) vs. a station that is least congested. We find the split-term formulation is amenable to linearization by embedding the base stations in a hierarchically well-separated tree (HST), which offers a linear approximation with constant distortion. We provide a numerical comparison of several problem formulations and find that with appropriate optimization parameter selection, the quadratic reformulation produces association policies with sum delays that are close to that of the original network utility maximization. We also comment on the more difficult problem when idle base stations (those without associated users) are deactivated.
2304.04410
Shaowei Wang
Shaowei Wang, Jin Li, Yuntong Li, Jin Li, Wei Yang, Hongyang Yan
Differentially Private Numerical Vector Analyses in the Local and Shuffle Model
Full version of "Hiding Numerical Vectors in Local Private and Shuffled Messages" (IJCAI 2021)
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Numerical vector aggregation plays a crucial role in privacy-sensitive applications, such as distributed gradient estimation in federated learning and statistical analysis of key-value data. In the context of local differential privacy, this study provides a tight minimax error bound of $O(\frac{ds}{n\epsilon^2})$, where $d$ represents the dimension of the numerical vector and $s$ denotes the number of non-zero entries. By converting the conditional/unconditional numerical mean estimation problem into a frequency estimation problem, we develop an optimal and efficient mechanism called Collision. In contrast, existing methods exhibit sub-optimal error rates of $O(\frac{d^2}{n\epsilon^2})$ or $O(\frac{ds^2}{n\epsilon^2})$. Specifically, for unconditional mean estimation, we leverage the negative correlation between two frequencies in each dimension and propose the CoCo mechanism, which further reduces estimation errors for mean values compared to Collision. Moreover, to surpass the error barrier in local privacy, we examine privacy amplification in the shuffle model for the proposed mechanisms and derive precisely tight amplification bounds. Our experiments validate and compare our mechanisms with existing approaches, demonstrating significant error reductions for frequency estimation and mean estimation on numerical vectors.
[ { "created": "Mon, 10 Apr 2023 06:44:15 GMT", "version": "v1" } ]
2023-04-11
[ [ "Wang", "Shaowei", "" ], [ "Li", "Jin", "" ], [ "Li", "Yuntong", "" ], [ "Li", "Jin", "" ], [ "Yang", "Wei", "" ], [ "Yan", "Hongyang", "" ] ]
Numerical vector aggregation plays a crucial role in privacy-sensitive applications, such as distributed gradient estimation in federated learning and statistical analysis of key-value data. In the context of local differential privacy, this study provides a tight minimax error bound of $O(\frac{ds}{n\epsilon^2})$, where $d$ represents the dimension of the numerical vector and $s$ denotes the number of non-zero entries. By converting the conditional/unconditional numerical mean estimation problem into a frequency estimation problem, we develop an optimal and efficient mechanism called Collision. In contrast, existing methods exhibit sub-optimal error rates of $O(\frac{d^2}{n\epsilon^2})$ or $O(\frac{ds^2}{n\epsilon^2})$. Specifically, for unconditional mean estimation, we leverage the negative correlation between two frequencies in each dimension and propose the CoCo mechanism, which further reduces estimation errors for mean values compared to Collision. Moreover, to surpass the error barrier in local privacy, we examine privacy amplification in the shuffle model for the proposed mechanisms and derive precisely tight amplification bounds. Our experiments validate and compare our mechanisms with existing approaches, demonstrating significant error reductions for frequency estimation and mean estimation on numerical vectors.
2408.04693
Yuchen Xia
Yuchen Xia, Jiho Kim, Yuhan Chen, Haojie Ye, Souvik Kundu, Cong Hao and Nishil Talati
Understanding the Performance and Estimating the Cost of LLM Fine-Tuning
10 pages, conference
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the cost-prohibitive nature of training Large Language Models (LLMs), fine-tuning has emerged as an attractive alternative for specializing LLMs for specific tasks using limited compute resources in a cost-effective manner. In this paper, we characterize sparse Mixture of Experts (MoE) based LLM fine-tuning to understand their accuracy and runtime performance on a single GPU. Our evaluation provides unique insights into the training efficacy of sparse and dense versions of MoE models, as well as their runtime characteristics, including maximum batch size, execution time breakdown, end-to-end throughput, GPU hardware utilization, and load distribution. Our study identifies the optimization of the MoE layer as crucial for further improving the performance of LLM fine-tuning. Using our profiling results, we also develop and validate an analytical model to estimate the cost of LLM fine-tuning on the cloud. This model, based on parameters of the model and GPU architecture, estimates LLM throughput and the cost of training, aiding practitioners in industry and academia to budget the cost of fine-tuning a specific model.
[ { "created": "Thu, 8 Aug 2024 16:26:07 GMT", "version": "v1" } ]
2024-08-15
[ [ "Xia", "Yuchen", "" ], [ "Kim", "Jiho", "" ], [ "Chen", "Yuhan", "" ], [ "Ye", "Haojie", "" ], [ "Kundu", "Souvik", "" ], [ "Hao", "Cong", "" ], [ "Talati", "Nishil", "" ] ]
Due to the cost-prohibitive nature of training Large Language Models (LLMs), fine-tuning has emerged as an attractive alternative for specializing LLMs for specific tasks using limited compute resources in a cost-effective manner. In this paper, we characterize sparse Mixture of Experts (MoE) based LLM fine-tuning to understand their accuracy and runtime performance on a single GPU. Our evaluation provides unique insights into the training efficacy of sparse and dense versions of MoE models, as well as their runtime characteristics, including maximum batch size, execution time breakdown, end-to-end throughput, GPU hardware utilization, and load distribution. Our study identifies the optimization of the MoE layer as crucial for further improving the performance of LLM fine-tuning. Using our profiling results, we also develop and validate an analytical model to estimate the cost of LLM fine-tuning on the cloud. This model, based on parameters of the model and GPU architecture, estimates LLM throughput and the cost of training, aiding practitioners in industry and academia to budget the cost of fine-tuning a specific model.
1604.01833
Jyothi Korra
Jinju Joby P, Jyothi Korra
System for Filtering Messages on Social Media Content
null
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The social networking era has left us with little privacy. The details of social network users are published on social networking sites. Vulnerability has reached new heights due to the overpowering effects of social networking. Sites like Facebook and Twitter have huge sets of users who publish their files, comments, and messages on other users' walls. These messages and comments could be of any nature. Even friends could post a comment that would harm a person's integrity. Thus there has to be a system which monitors the messages and comments that are posted on the walls. If the messages are found to be neutral (do not have any harmful content), then they can be published. If the messages are found to have non-neutral content in them, then these messages would be blocked by the social network manager. The non-neutral messages would be of a sexual, offensive, hateful, or pun-intended nature. Thus the social network manager can classify content as neutral and non-neutral and notify the user if there seem to be messages of non-neutral behavior.
[ { "created": "Wed, 6 Apr 2016 23:46:54 GMT", "version": "v1" } ]
2016-04-08
[ [ "P", "Jinju Joby", "" ], [ "Korra", "Jyothi", "" ] ]
The social networking era has left us with little privacy. The details of social network users are published on social networking sites, and vulnerability has reached new heights due to the overpowering effects of social networking. Sites like Facebook and Twitter have huge sets of users who publish files, comments, and messages on other users' walls. These messages and comments could be of any nature; even friends could post a comment that would harm a person's integrity. Thus there has to be a system that monitors the messages and comments posted on the walls. If a message is found to be neutral (it does not have any harmful content), it can be published. If a message is found to have non-neutral content, it is blocked by the social network manager. Non-neutral messages would be of a sexual, offensive, hateful, or pun-intended nature. Thus the social network manager can classify content as neutral or non-neutral and notify the user if messages of non-neutral nature appear.
1512.05849
Miles Brundage
Miles Brundage
Modeling Progress in AI
AAAI 2016 Workshop on AI, Ethics, and Society
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Participants in recent discussions of AI-related issues ranging from intelligence explosion to technological unemployment have made diverse claims about the nature, pace, and drivers of progress in AI. However, these theories are rarely specified in enough detail to enable systematic evaluation of their assumptions or to extrapolate progress quantitatively, as is often done with some success in other technological domains. After reviewing relevant literatures and justifying the need for more rigorous modeling of AI progress, this paper contributes to that research program by suggesting ways to account for the relationship between hardware speed increases and algorithmic improvements in AI, the role of human inputs in enabling AI capabilities, and the relationships between different sub-fields of AI. It then outlines ways of tailoring AI progress models to generate insights on the specific issue of technological unemployment, and outlines future directions for research on AI progress.
[ { "created": "Fri, 18 Dec 2015 04:17:39 GMT", "version": "v1" } ]
2015-12-21
[ [ "Brundage", "Miles", "" ] ]
Participants in recent discussions of AI-related issues ranging from intelligence explosion to technological unemployment have made diverse claims about the nature, pace, and drivers of progress in AI. However, these theories are rarely specified in enough detail to enable systematic evaluation of their assumptions or to extrapolate progress quantitatively, as is often done with some success in other technological domains. After reviewing relevant literatures and justifying the need for more rigorous modeling of AI progress, this paper contributes to that research program by suggesting ways to account for the relationship between hardware speed increases and algorithmic improvements in AI, the role of human inputs in enabling AI capabilities, and the relationships between different sub-fields of AI. It then outlines ways of tailoring AI progress models to generate insights on the specific issue of technological unemployment, and outlines future directions for research on AI progress.
2104.01350
Masaki Kitayama
Masaki Kitayama, Hitoshi Kiya
Generation of Gradient-Preserving Images allowing HOG Feature Extraction
Accepted for publication in IEEE International Conference on Consumer Electronics - Taiwan, 2021(ICCE-TW 2021)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a method for generating visually protected images, referred to as gradient-preserving images. The protected images allow us to directly extract Histogram-of-Oriented-Gradients (HOG) features for privacy-preserving machine learning. In an experiment, HOG features extracted from gradient-preserving images are applied to a face recognition algorithm to demonstrate the effectiveness of the proposed method.
[ { "created": "Sat, 3 Apr 2021 09:06:58 GMT", "version": "v1" }, { "created": "Sat, 22 May 2021 06:46:37 GMT", "version": "v2" } ]
2021-05-25
[ [ "Kitayama", "Masaki", "" ], [ "Kiya", "Hitoshi", "" ] ]
In this paper, we propose a method for generating visually protected images, referred to as gradient-preserving images. The protected images allow us to directly extract Histogram-of-Oriented-Gradients (HOG) features for privacy-preserving machine learning. In an experiment, HOG features extracted from gradient-preserving images are applied to a face recognition algorithm to demonstrate the effectiveness of the proposed method.
1807.09464
Colas Le Guernic
Julien Duchene (CALID, LAAS-TSF), Eric Alata (LAAS-TSF), Vincent Nicomette (LAAS-TSF), Mohamed Ka\^aniche (LAAS-TSF), Colas Le Guernic (DGA.MI, TAMIS)
Specification-Based Protocol Obfuscation
null
2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), Jun 2018, Luxembourg City, France. IEEE, 2018
10.1109/DSN.2018.00056
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a new obfuscation technique of a communication protocol that is aimed at making the reverse engineering of the protocol more complex. The obfuscation is based on the transformation of protocol message format specification. The obfuscating transformations are applied to the Abstract Syntax Tree (AST) representation of the messages and mainly concern the ordering or aggregation of the AST nodes. The paper also presents the design of a framework that implements the proposed obfuscation technique by automatically generating, from the specification of the message format, a library performing the corresponding transformations. Finally, our framework is applied to two real application protocols (Modbus and HTTP) to illustrate the relevance and efficiency of the proposed approach. Various metrics recorded from the experiments show the significant increase of the complexity of the obfuscated protocol binary compared to the non-obfuscated code. It is also shown that the execution time and memory overheads remain acceptable for a practical deployment of the approach in operation.
[ { "created": "Wed, 25 Jul 2018 07:49:25 GMT", "version": "v1" } ]
2018-07-26
[ [ "Duchene", "Julien", "", "CALID, LAAS-TSF" ], [ "Alata", "Eric", "", "LAAS-TSF" ], [ "Nicomette", "Vincent", "", "LAAS-TSF" ], [ "Kaâniche", "Mohamed", "", "LAAS-TSF" ], [ "Guernic", "Colas Le", "", "DGA.MI, TAMIS" ] ]
This paper proposes a new obfuscation technique for communication protocols that aims to make reverse engineering of the protocol more complex. The obfuscation is based on transformations of the protocol message format specification. The obfuscating transformations are applied to the Abstract Syntax Tree (AST) representation of the messages and mainly concern the ordering or aggregation of the AST nodes. The paper also presents the design of a framework that implements the proposed obfuscation technique by automatically generating, from the specification of the message format, a library performing the corresponding transformations. Finally, our framework is applied to two real application protocols (Modbus and HTTP) to illustrate the relevance and efficiency of the proposed approach. Various metrics recorded from the experiments show a significant increase in the complexity of the obfuscated protocol binary compared to the non-obfuscated code. It is also shown that the execution time and memory overheads remain acceptable for a practical deployment of the approach in operation.
2209.02124
Jimmy Bao
Jimmy Bao
Utilizing Post-Hurricane Satellite Imagery to Identify Flooding Damage with Convolutional Neural Networks
18 pages without figures/references, 12 figures. Patrick Emedom-Nnamdi is the Editor
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Post-hurricane damage assessment is crucial towards managing resource allocations and executing an effective response. Traditionally, this evaluation is performed through field reconnaissance, which is slow, hazardous, and arduous. Instead, in this paper we furthered the idea of implementing deep learning through convolutional neural networks in order to classify post-hurricane satellite imagery of buildings as Flooded/Damaged or Undamaged. The experimentation was conducted employing a dataset containing post-hurricane satellite imagery from the Greater Houston area after Hurricane Harvey in 2017. This paper implemented three convolutional neural network model architectures paired with additional model considerations in order to achieve high accuracies (over 99%), reinforcing the effective use of machine learning in post-hurricane disaster assessment.
[ { "created": "Mon, 5 Sep 2022 20:12:39 GMT", "version": "v1" } ]
2022-09-07
[ [ "Bao", "Jimmy", "" ] ]
Post-hurricane damage assessment is crucial for managing resource allocation and executing an effective response. Traditionally, this evaluation is performed through field reconnaissance, which is slow, hazardous, and arduous. Instead, in this paper we furthered the idea of applying deep learning through convolutional neural networks to classify post-hurricane satellite imagery of buildings as Flooded/Damaged or Undamaged. The experimentation was conducted using a dataset containing post-hurricane satellite imagery from the Greater Houston area after Hurricane Harvey in 2017. This paper implemented three convolutional neural network model architectures paired with additional model considerations to achieve high accuracies (over 99%), reinforcing the effective use of machine learning in post-hurricane disaster assessment.
1503.03553
Teruyoshi Washizawa
Yasuhiro Nakahara and Teruyoshi Washizawa
Accelerating DEM simulations on GPUs by reducing the impact of warp divergences
15 pages, 4 figures
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A way to accelerate DEM calculations on the GPUs is developed. We examined how warp divergences take place in the contact detection and the force calculations taking account of the GPU architecture. Then we showed a strategy to reduce the impact of the warp divergences on the runtime of the DEM force calculations.
[ { "created": "Thu, 12 Mar 2015 01:40:58 GMT", "version": "v1" } ]
2015-03-13
[ [ "Nakahara", "Yasuhiro", "" ], [ "Washizawa", "Teruyoshi", "" ] ]
A way to accelerate DEM calculations on GPUs is developed. We examined how warp divergences arise in the contact detection and the force calculations, taking the GPU architecture into account. We then showed a strategy to reduce the impact of the warp divergences on the runtime of the DEM force calculations.