id              stringlengths   9 .. 10
submitter       stringlengths   1 .. 64
authors         stringlengths   4 .. 20.7k
title           stringlengths   4 .. 246
comments        stringlengths   1 .. 523
journal-ref     stringlengths   4 .. 404
doi             stringlengths   11 .. 153
report-no       stringlengths   2 .. 254
categories      stringlengths   5 .. 98
license         stringclasses   9 values
orig_abstract   stringlengths   14 .. 3.35k
versions        listlengths     1 .. 60
update_date     stringlengths   10 .. 10
authors_parsed  listlengths     1 .. 1.35k
abstract        stringlengths   11 .. 3.34k
2112.01433
Wei Zhang
Wei Zhang, Mingrui Liu, Yu Feng, Xiaodong Cui, Brian Kingsbury, Yuhai Tu
Loss Landscape Dependent Self-Adjusting Learning Rates in Decentralized Stochastic Gradient Descent
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Distributed Deep Learning (DDL) is essential for large-scale Deep Learning (DL) training. Synchronous Stochastic Gradient Descent (SSGD) [1] is the de facto DDL optimization method. Using a sufficiently large batch size is critical to achieving DDL runtime speedup. In a large batch setting, the learning rate must be increased to compensate for the reduced number of parameter updates. However, a large learning rate may harm convergence in SSGD and training could easily diverge. Recently, Decentralized Parallel SGD (DPSGD) has been proposed to improve distributed training speed. In this paper, we find that DPSGD not only has a system-wise run-time benefit but also a significant convergence benefit over SSGD in the large batch setting. Based on a detailed analysis of the DPSGD learning dynamics, we find that DPSGD introduces additional landscape-dependent noise that automatically adjusts the effective learning rate to improve convergence. In addition, we theoretically show that this noise smoothes the loss landscape, hence allowing a larger learning rate. We conduct extensive studies over 18 state-of-the-art DL models/tasks and demonstrate that DPSGD often converges in cases where SSGD diverges for large learning rates in the large batch setting. Our findings are consistent across two different application domains: Computer Vision (CIFAR10 and ImageNet-1K) and Automatic Speech Recognition (SWB300 and SWB2000), and two different types of neural network models: Convolutional Neural Networks and Long Short-Term Memory Recurrent Neural Networks.
[ { "created": "Thu, 2 Dec 2021 17:23:25 GMT", "version": "v1" } ]
2021-12-03
[ [ "Zhang", "Wei", "" ], [ "Liu", "Mingrui", "" ], [ "Feng", "Yu", "" ], [ "Cui", "Xiaodong", "" ], [ "Kingsbury", "Brian", "" ], [ "Tu", "Yuhai", "" ] ]
Distributed Deep Learning (DDL) is essential for large-scale Deep Learning (DL) training. Synchronous Stochastic Gradient Descent (SSGD) [1] is the de facto DDL optimization method. Using a sufficiently large batch size is critical to achieving DDL runtime speedup. In a large batch setting, the learning rate must be increased to compensate for the reduced number of parameter updates. However, a large learning rate may harm convergence in SSGD and training could easily diverge. Recently, Decentralized Parallel SGD (DPSGD) has been proposed to improve distributed training speed. In this paper, we find that DPSGD not only has a system-wise run-time benefit but also a significant convergence benefit over SSGD in the large batch setting. Based on a detailed analysis of the DPSGD learning dynamics, we find that DPSGD introduces additional landscape-dependent noise that automatically adjusts the effective learning rate to improve convergence. In addition, we theoretically show that this noise smoothes the loss landscape, hence allowing a larger learning rate. We conduct extensive studies over 18 state-of-the-art DL models/tasks and demonstrate that DPSGD often converges in cases where SSGD diverges for large learning rates in the large batch setting. Our findings are consistent across two different application domains: Computer Vision (CIFAR10 and ImageNet-1K) and Automatic Speech Recognition (SWB300 and SWB2000), and two different types of neural network models: Convolutional Neural Networks and Long Short-Term Memory Recurrent Neural Networks.
1710.11169
Xiang Ren
Zeqiu Wu, Xiang Ren, Frank F. Xu, Ji Li, Jiawei Han
Indirect Supervision for Relation Extraction using Question-Answer Pairs
9 pages + 1 page reference. Accepted to WSDM 2018
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic relation extraction (RE) for types of interest is of great importance for interpreting massive text corpora in an efficient manner. Traditional RE models have heavily relied on human-annotated corpora for training, which makes labeled data costly to generate and becomes an obstacle when dealing with more relation types. Thus, more RE systems have shifted to be built upon training data automatically acquired by linking to knowledge bases (distant supervision). However, due to the incompleteness of knowledge bases and the context-agnostic labeling, the training data collected via distant supervision (DS) can be very noisy. In recent years, as increasing attention has been brought to tackling question-answering (QA) tasks, user feedback and datasets for such tasks have become more accessible. In this paper, we propose a novel framework, ReQuest, to leverage question-answer pairs as an indirect source of supervision for relation extraction, and study how to use such supervision to reduce noise induced from DS. Our model jointly embeds relation mentions, types, QA entity mention pairs and text features in two low-dimensional spaces (RE and QA), where objects with the same relation types or semantically similar question-answer pairs have similar representations. Shared features connect these two spaces, carrying clearer semantic knowledge from both sources. ReQuest then uses these learned embeddings to estimate the types of test relation mentions. We formulate a global objective function and adopt a novel margin-based QA loss to reduce noise in DS by exploiting semantic evidence from the QA dataset. Our experiments achieve an average 11% improvement in F1 score on two public RE datasets combined with the TREC QA dataset.
[ { "created": "Mon, 30 Oct 2017 18:27:19 GMT", "version": "v1" }, { "created": "Thu, 23 Nov 2017 07:43:31 GMT", "version": "v2" } ]
2017-11-27
[ [ "Wu", "Zeqiu", "" ], [ "Ren", "Xiang", "" ], [ "Xu", "Frank F.", "" ], [ "Li", "Ji", "" ], [ "Han", "Jiawei", "" ] ]
Automatic relation extraction (RE) for types of interest is of great importance for interpreting massive text corpora in an efficient manner. Traditional RE models have heavily relied on human-annotated corpora for training, which makes labeled data costly to generate and becomes an obstacle when dealing with more relation types. Thus, more RE systems have shifted to be built upon training data automatically acquired by linking to knowledge bases (distant supervision). However, due to the incompleteness of knowledge bases and the context-agnostic labeling, the training data collected via distant supervision (DS) can be very noisy. In recent years, as increasing attention has been brought to tackling question-answering (QA) tasks, user feedback and datasets for such tasks have become more accessible. In this paper, we propose a novel framework, ReQuest, to leverage question-answer pairs as an indirect source of supervision for relation extraction, and study how to use such supervision to reduce noise induced from DS. Our model jointly embeds relation mentions, types, QA entity mention pairs and text features in two low-dimensional spaces (RE and QA), where objects with the same relation types or semantically similar question-answer pairs have similar representations. Shared features connect these two spaces, carrying clearer semantic knowledge from both sources. ReQuest then uses these learned embeddings to estimate the types of test relation mentions. We formulate a global objective function and adopt a novel margin-based QA loss to reduce noise in DS by exploiting semantic evidence from the QA dataset. Our experiments achieve an average 11% improvement in F1 score on two public RE datasets combined with the TREC QA dataset.
2108.13002
Yucheng Zhao
Yucheng Zhao, Guangting Wang, Chuanxin Tang, Chong Luo, Wenjun Zeng, Zheng-Jun Zha
A Battle of Network Structures: An Empirical Study of CNN, Transformer, and MLP
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Convolutional neural networks (CNN) are the dominant deep neural network (DNN) architecture for computer vision. Recently, Transformer and multi-layer perceptron (MLP)-based models, such as Vision Transformer and MLP-Mixer, started to lead new trends as they showed promising results in the ImageNet classification task. In this paper, we conduct empirical studies on these DNN structures and try to understand their respective pros and cons. To ensure a fair comparison, we first develop a unified framework called SPACH which adopts separate modules for spatial and channel processing. Our experiments under the SPACH framework reveal that all structures can achieve competitive performance at a moderate scale. However, they demonstrate distinctive behaviors when the network size scales up. Based on our findings, we propose two hybrid models using convolution and Transformer modules. The resulting Hybrid-MS-S+ model achieves 83.9% top-1 accuracy with 63M parameters and 12.3G FLOPs. It is already on par with SOTA models with sophisticated designs. The code and models are publicly available at https://github.com/microsoft/SPACH.
[ { "created": "Mon, 30 Aug 2021 06:09:02 GMT", "version": "v1" }, { "created": "Thu, 25 Nov 2021 08:37:31 GMT", "version": "v2" } ]
2021-11-29
[ [ "Zhao", "Yucheng", "" ], [ "Wang", "Guangting", "" ], [ "Tang", "Chuanxin", "" ], [ "Luo", "Chong", "" ], [ "Zeng", "Wenjun", "" ], [ "Zha", "Zheng-Jun", "" ] ]
Convolutional neural networks (CNN) are the dominant deep neural network (DNN) architecture for computer vision. Recently, Transformer and multi-layer perceptron (MLP)-based models, such as Vision Transformer and MLP-Mixer, started to lead new trends as they showed promising results in the ImageNet classification task. In this paper, we conduct empirical studies on these DNN structures and try to understand their respective pros and cons. To ensure a fair comparison, we first develop a unified framework called SPACH which adopts separate modules for spatial and channel processing. Our experiments under the SPACH framework reveal that all structures can achieve competitive performance at a moderate scale. However, they demonstrate distinctive behaviors when the network size scales up. Based on our findings, we propose two hybrid models using convolution and Transformer modules. The resulting Hybrid-MS-S+ model achieves 83.9% top-1 accuracy with 63M parameters and 12.3G FLOPs. It is already on par with SOTA models with sophisticated designs. The code and models are publicly available at https://github.com/microsoft/SPACH.
2212.02851
Praveen Venkateswaran
Praveen Venkateswaran, Evelyn Duesterwald, Vatche Isahagian
DiSTRICT: Dialogue State Tracking with Retriever Driven In-Context Tuning
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Dialogue State Tracking (DST), a key component of task-oriented conversation systems, represents user intentions by determining the values of pre-defined slots in an ongoing dialogue. Existing approaches use hand-crafted templates and additional slot information to fine-tune and prompt large pre-trained language models and elicit slot values from the dialogue context. Significant manual effort and domain knowledge are required to design effective prompts, limiting the generalizability of these approaches to new domains and tasks. In this work, we propose DiSTRICT, a generalizable in-context tuning approach for DST that retrieves highly relevant training examples for a given dialogue to fine-tune the model without any hand-crafted templates. Experiments with the MultiWOZ benchmark datasets show that DiSTRICT outperforms existing approaches in various zero-shot and few-shot settings using a much smaller model, thereby providing an important advantage for real-world deployments that often have limited resource availability.
[ { "created": "Tue, 6 Dec 2022 09:40:15 GMT", "version": "v1" }, { "created": "Sat, 21 Oct 2023 14:30:15 GMT", "version": "v2" } ]
2023-10-24
[ [ "Venkateswaran", "Praveen", "" ], [ "Duesterwald", "Evelyn", "" ], [ "Isahagian", "Vatche", "" ] ]
Dialogue State Tracking (DST), a key component of task-oriented conversation systems, represents user intentions by determining the values of pre-defined slots in an ongoing dialogue. Existing approaches use hand-crafted templates and additional slot information to fine-tune and prompt large pre-trained language models and elicit slot values from the dialogue context. Significant manual effort and domain knowledge are required to design effective prompts, limiting the generalizability of these approaches to new domains and tasks. In this work, we propose DiSTRICT, a generalizable in-context tuning approach for DST that retrieves highly relevant training examples for a given dialogue to fine-tune the model without any hand-crafted templates. Experiments with the MultiWOZ benchmark datasets show that DiSTRICT outperforms existing approaches in various zero-shot and few-shot settings using a much smaller model, thereby providing an important advantage for real-world deployments that often have limited resource availability.
0901.2207
Ryuhei Mori
Ryuhei Mori and Toshiyuki Tanaka
Performance and Construction of Polar Codes on Symmetric Binary-Input Memoryless Channels
5 pages, 3 figures, submitted to ISIT2009; revised
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Channel polarization is a method of constructing capacity-achieving codes for symmetric binary-input discrete memoryless channels (B-DMCs) [1]. In the original paper, the construction complexity is exponential in the blocklength. In this paper, a new construction method for an arbitrary symmetric binary memoryless channel (B-MC) with linear complexity in the blocklength is proposed. Furthermore, new upper and lower bounds on the block error probability of polar codes are derived for the BEC and an arbitrary symmetric B-MC, respectively.
[ { "created": "Thu, 15 Jan 2009 17:08:36 GMT", "version": "v1" }, { "created": "Sat, 23 May 2009 09:21:04 GMT", "version": "v2" } ]
2009-05-23
[ [ "Mori", "Ryuhei", "" ], [ "Tanaka", "Toshiyuki", "" ] ]
Channel polarization is a method of constructing capacity-achieving codes for symmetric binary-input discrete memoryless channels (B-DMCs) [1]. In the original paper, the construction complexity is exponential in the blocklength. In this paper, a new construction method for an arbitrary symmetric binary memoryless channel (B-MC) with linear complexity in the blocklength is proposed. Furthermore, new upper and lower bounds on the block error probability of polar codes are derived for the BEC and an arbitrary symmetric B-MC, respectively.
2006.04097
Zhiguo Wang
Zhiguo Wang, Liusha Yang, Feng Yin, Ke Lin, Qingjiang Shi, Zhi-Quan Luo
Optimally Combining Classifiers for Semi-Supervised Learning
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers semi-supervised learning for tabular data. It is widely known that XGBoost, based on tree models, works well on heterogeneous features, while the transductive support vector machine can exploit the low-density separation assumption. However, little work has been done to combine them for end-to-end semi-supervised learning. In this paper, we find that these two methods have complementary properties and larger diversity, which motivates us to propose a new semi-supervised learning method that is able to adaptively combine the strengths of XGBoost and the transductive support vector machine. Instead of the majority vote rule, an optimization problem in terms of ensemble weights is established, which helps to obtain more accurate pseudo labels for unlabeled data. The experimental results on the UCI data sets and a real commercial data set demonstrate the superior classification performance of our method over five state-of-the-art algorithms, improving test accuracy by about $3\%-4\%$. The partial code can be found at https://github.com/hav-cam-mit/CTO.
[ { "created": "Sun, 7 Jun 2020 09:28:34 GMT", "version": "v1" } ]
2020-06-09
[ [ "Wang", "Zhiguo", "" ], [ "Yang", "Liusha", "" ], [ "Yin", "Feng", "" ], [ "Lin", "Ke", "" ], [ "Shi", "Qingjiang", "" ], [ "Luo", "Zhi-Quan", "" ] ]
This paper considers semi-supervised learning for tabular data. It is widely known that XGBoost, based on tree models, works well on heterogeneous features, while the transductive support vector machine can exploit the low-density separation assumption. However, little work has been done to combine them for end-to-end semi-supervised learning. In this paper, we find that these two methods have complementary properties and larger diversity, which motivates us to propose a new semi-supervised learning method that is able to adaptively combine the strengths of XGBoost and the transductive support vector machine. Instead of the majority vote rule, an optimization problem in terms of ensemble weights is established, which helps to obtain more accurate pseudo labels for unlabeled data. The experimental results on the UCI data sets and a real commercial data set demonstrate the superior classification performance of our method over five state-of-the-art algorithms, improving test accuracy by about $3\%-4\%$. The partial code can be found at https://github.com/hav-cam-mit/CTO.
1802.09317
Eric Sanchis
Eric Sanchis
A Model of Free Will for Artificial Entities
10th International Conference on Advanced Cognitive Technologies and Applications, (COGNITIVE 2018), February 18-22, Barcelona, Spain
null
null
null
cs.MA cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The impression of free will is the feeling that our choices are imposed neither from within nor from outside. It is the sense that we are the ultimate cause of our acts. In direct opposition to universal determinism, the existence of free will continues to be debated. In this paper, free will is linked to a decisional mechanism: an agent is endowed with free will if, having performed a predictable choice Cp, it can immediately perform another choice Cr in a random way. The intangible feeling of free will is replaced by a decision-making process in which a predictable decision-making step is immediately followed by an unpredictable one.
[ { "created": "Mon, 26 Feb 2018 14:29:03 GMT", "version": "v1" } ]
2018-02-27
[ [ "Sanchis", "Eric", "" ] ]
The impression of free will is the feeling that our choices are imposed neither from within nor from outside. It is the sense that we are the ultimate cause of our acts. In direct opposition to universal determinism, the existence of free will continues to be debated. In this paper, free will is linked to a decisional mechanism: an agent is endowed with free will if, having performed a predictable choice Cp, it can immediately perform another choice Cr in a random way. The intangible feeling of free will is replaced by a decision-making process in which a predictable decision-making step is immediately followed by an unpredictable one.
2005.10884
Chong Xiang
Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, Prateek Mittal
PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking
USENIX Security Symposium 2021; extended technical report
null
null
null
cs.CV cs.CR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Localized adversarial patches aim to induce misclassification in machine learning models by arbitrarily modifying pixels within a restricted region of an image. Such attacks can be realized in the physical world by attaching the adversarial patch to the object to be misclassified, and defending against such attacks is an unsolved/open problem. In this paper, we propose a general defense framework called PatchGuard that can achieve high provable robustness while maintaining high clean accuracy against localized adversarial patches. The cornerstone of PatchGuard involves the use of CNNs with small receptive fields to impose a bound on the number of features corrupted by an adversarial patch. Given a bounded number of corrupted features, the problem of designing an adversarial patch defense reduces to that of designing a secure feature aggregation mechanism. Towards this end, we present our robust masking defense that robustly detects and masks corrupted features to recover the correct prediction. Notably, we can prove the robustness of our defense against any adversary within our threat model. Our extensive evaluation on ImageNet, ImageNette (a 10-class subset of ImageNet), and CIFAR-10 datasets demonstrates that our defense achieves state-of-the-art performance in terms of both provable robust accuracy and clean accuracy.
[ { "created": "Sun, 17 May 2020 03:38:34 GMT", "version": "v1" }, { "created": "Mon, 8 Jun 2020 14:51:03 GMT", "version": "v2" }, { "created": "Sun, 2 Aug 2020 15:39:00 GMT", "version": "v3" }, { "created": "Sun, 18 Oct 2020 18:12:03 GMT", "version": "v4" }, { "created": "Wed, 31 Mar 2021 14:20:39 GMT", "version": "v5" } ]
2021-04-01
[ [ "Xiang", "Chong", "" ], [ "Bhagoji", "Arjun Nitin", "" ], [ "Sehwag", "Vikash", "" ], [ "Mittal", "Prateek", "" ] ]
Localized adversarial patches aim to induce misclassification in machine learning models by arbitrarily modifying pixels within a restricted region of an image. Such attacks can be realized in the physical world by attaching the adversarial patch to the object to be misclassified, and defending against such attacks is an unsolved/open problem. In this paper, we propose a general defense framework called PatchGuard that can achieve high provable robustness while maintaining high clean accuracy against localized adversarial patches. The cornerstone of PatchGuard involves the use of CNNs with small receptive fields to impose a bound on the number of features corrupted by an adversarial patch. Given a bounded number of corrupted features, the problem of designing an adversarial patch defense reduces to that of designing a secure feature aggregation mechanism. Towards this end, we present our robust masking defense that robustly detects and masks corrupted features to recover the correct prediction. Notably, we can prove the robustness of our defense against any adversary within our threat model. Our extensive evaluation on ImageNet, ImageNette (a 10-class subset of ImageNet), and CIFAR-10 datasets demonstrates that our defense achieves state-of-the-art performance in terms of both provable robust accuracy and clean accuracy.
1010.1697
Roberto Amadio
Roberto M. Amadio (PPS), Nicolas Ayache (PPS, INRIA Paris - Rocquencourt), Yann R\'egis-Gianas (PPS, INRIA Paris - Rocquencourt), Ronan Saillard (PPS, INRIA Paris - Rocquencourt)
Certifying cost annotations in compilers
null
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We discuss the problem of building a compiler which can lift in a provably correct way pieces of information on the execution cost of the object code to cost annotations on the source code. To this end, we need a clear and flexible picture of: (i) the meaning of cost annotations, (ii) the method to prove them sound and precise, and (iii) the way such proofs can be composed. We propose a so-called labelling approach to these three questions. As a first step, we examine its application to a toy compiler. This formal study suggests that the labelling approach has good compositionality and scalability properties. In order to provide further evidence for this claim, we report our successful experience in implementing and testing the labelling approach on top of a prototype compiler written in OCaml for (a large fragment of) the C language.
[ { "created": "Fri, 8 Oct 2010 14:13:09 GMT", "version": "v1" } ]
2010-10-11
[ [ "Amadio", "Roberto M.", "", "PPS" ], [ "Ayache", "Nicolas", "", "PPS, INRIA Paris -\n Rocquencourt" ], [ "Régis-Gianas", "Yann", "", "PPS, INRIA Paris - Rocquencourt" ], [ "Saillard", "Ronan", "", "PPS, INRIA Paris - Rocquencourt" ] ]
We discuss the problem of building a compiler which can lift in a provably correct way pieces of information on the execution cost of the object code to cost annotations on the source code. To this end, we need a clear and flexible picture of: (i) the meaning of cost annotations, (ii) the method to prove them sound and precise, and (iii) the way such proofs can be composed. We propose a so-called labelling approach to these three questions. As a first step, we examine its application to a toy compiler. This formal study suggests that the labelling approach has good compositionality and scalability properties. In order to provide further evidence for this claim, we report our successful experience in implementing and testing the labelling approach on top of a prototype compiler written in OCaml for (a large fragment of) the C language.
2407.18925
Xiangxiong Kong
Xiangxiong Kong
Monitoring Time-Varying Changes of Historic Structures Through Photogrammetry-Driven Digital Twinning
null
null
10.5194/isprs-archives-XLVIII-2-2024-181-2024
null
cs.CV cs.CY
http://creativecommons.org/licenses/by/4.0/
Historic structures are important for our society but could be prone to structural deterioration due to long service durations and natural impacts. Monitoring the deterioration of historic structures becomes essential for stakeholders to take appropriate interventions. Existing work in the literature primarily focuses on assessing the structural damage at a given moment instead of evaluating the development of deterioration over time. To address this gap, we propose a novel five-component digital twin framework to monitor time-varying changes in historic structures. A testbed of a casemate in Fort Soledad on the island of Guam was selected to validate our framework. Using this testbed, key implementation steps in our digital twin framework were performed. The findings from this study confirm that our digital twin framework can effectively monitor deterioration over time, which is an urgent need in the cultural heritage preservation community.
[ { "created": "Tue, 9 Jul 2024 22:26:29 GMT", "version": "v1" } ]
2024-07-30
[ [ "Kong", "Xiangxiong", "" ] ]
Historic structures are important for our society but could be prone to structural deterioration due to long service durations and natural impacts. Monitoring the deterioration of historic structures becomes essential for stakeholders to take appropriate interventions. Existing work in the literature primarily focuses on assessing the structural damage at a given moment instead of evaluating the development of deterioration over time. To address this gap, we propose a novel five-component digital twin framework to monitor time-varying changes in historic structures. A testbed of a casemate in Fort Soledad on the island of Guam was selected to validate our framework. Using this testbed, key implementation steps in our digital twin framework were performed. The findings from this study confirm that our digital twin framework can effectively monitor deterioration over time, which is an urgent need in the cultural heritage preservation community.
2404.15488
Jean-Philippe Corbeil
Jean-Philippe Corbeil
IryoNLP at MEDIQA-CORR 2024: Tackling the Medical Error Detection & Correction Task On the Shoulders of Medical Agents
null
null
null
null
cs.CL cs.AI cs.MA
http://creativecommons.org/licenses/by-nc-sa/4.0/
In natural language processing applied to the clinical domain, utilizing large language models has emerged as a promising avenue for error detection and correction on clinical notes, a knowledge-intensive task for which annotated data is scarce. This paper presents MedReAct'N'MedReFlex, which leverages a suite of four LLM-based medical agents. The MedReAct agent initiates the process by observing, analyzing, and taking action, generating trajectories to guide the search to target a potential error in the clinical notes. Subsequently, the MedEval agent employs five evaluators to assess the targeted error and the proposed correction. In cases where MedReAct's actions prove insufficient, the MedReFlex agent intervenes, engaging in reflective analysis and proposing alternative strategies. Finally, the MedFinalParser agent formats the final output, preserving the original style while ensuring the integrity of the error correction process. One core component of our method is our RAG pipeline based on our ClinicalCorp corpora. Among other well-known sources containing clinical guidelines and information, we preprocess and release the open-source MedWiki dataset for clinical RAG application. Our results demonstrate the central role of our RAG approach with ClinicalCorp leveraged through the MedReAct'N'MedReFlex framework. It achieved the ninth rank on the MEDIQA-CORR 2024 final leaderboard.
[ { "created": "Tue, 23 Apr 2024 20:00:37 GMT", "version": "v1" } ]
2024-04-25
[ [ "Corbeil", "Jean-Philippe", "" ] ]
In natural language processing applied to the clinical domain, utilizing large language models has emerged as a promising avenue for error detection and correction on clinical notes, a knowledge-intensive task for which annotated data is scarce. This paper presents MedReAct'N'MedReFlex, which leverages a suite of four LLM-based medical agents. The MedReAct agent initiates the process by observing, analyzing, and taking action, generating trajectories to guide the search to target a potential error in the clinical notes. Subsequently, the MedEval agent employs five evaluators to assess the targeted error and the proposed correction. In cases where MedReAct's actions prove insufficient, the MedReFlex agent intervenes, engaging in reflective analysis and proposing alternative strategies. Finally, the MedFinalParser agent formats the final output, preserving the original style while ensuring the integrity of the error correction process. One core component of our method is our RAG pipeline based on our ClinicalCorp corpora. Among other well-known sources containing clinical guidelines and information, we preprocess and release the open-source MedWiki dataset for clinical RAG application. Our results demonstrate the central role of our RAG approach with ClinicalCorp leveraged through the MedReAct'N'MedReFlex framework. It achieved the ninth rank on the MEDIQA-CORR 2024 final leaderboard.
2406.16843
Arne Hole
Arne Hole
Constructibility, computational complexity and P versus NP
null
null
null
null
cs.CC cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A class of decision problems related to inconsistency proofs for formal theories is used to show that under a constructive interpretation of computational complexity classes, an assumption referred to as the MW thesis implies that one may explicitly construct decision problems in NP which are not algorithmically solvable. In particular, in this interpretation the MW thesis implies P $\neq$ NP. It is argued that MW is a naturally valid, yet also naturally unformalizable principle.
[ { "created": "Mon, 24 Jun 2024 17:48:29 GMT", "version": "v1" } ]
2024-06-25
[ [ "Hole", "Arne", "" ] ]
A class of decision problems related to inconsistency proofs for formal theories is used to show that under a constructive interpretation of computational complexity classes, an assumption referred to as the MW thesis implies that one may explicitly construct decision problems in NP which are not algorithmically solvable. In particular, in this interpretation the MW thesis implies P $\neq$ NP. It is argued that MW is a naturally valid, yet also naturally unformalizable principle.
1609.03947
Li Yang Ku
Li Yang Ku, Erik Learned-Miller, Rod Grupen
Associating Grasp Configurations with Hierarchical Features in Convolutional Neural Networks
8 pages, 9 figures, IROS 2017
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we provide a solution for posturing the anthropomorphic Robonaut-2 hand and arm for grasping based on visual information. A mapping from visual features extracted from a convolutional neural network (CNN) to grasp points is learned. We demonstrate that a CNN pre-trained for image classification can be applied to a grasping task based on a small set of grasping examples. Our approach takes advantage of the hierarchical nature of the CNN by identifying features that capture the hierarchical support relations between filters in different CNN layers and locating their 3D positions by tracing activations backwards in the CNN. When this backward trace terminates in the RGB-D image, important manipulable structures are thereby localized. These features that reside in different layers of the CNN are then associated with controllers that engage different kinematic subchains in the hand/arm system for grasping. A grasping dataset is collected using demonstrated hand/object relationships for Robonaut-2 to evaluate the proposed approach in terms of the precision of the resulting preshape postures. We demonstrate that this approach outperforms baseline approaches in cluttered scenarios on the grasping dataset and a point cloud based approach on a grasping task using Robonaut-2.
[ { "created": "Tue, 13 Sep 2016 17:37:38 GMT", "version": "v1" }, { "created": "Mon, 26 Sep 2016 21:00:20 GMT", "version": "v2" }, { "created": "Fri, 21 Oct 2016 18:57:25 GMT", "version": "v3" }, { "created": "Tue, 21 Mar 2017 17:40:17 GMT", "version": "v4" }, { "created": "Wed, 26 Jul 2017 00:41:01 GMT", "version": "v5" } ]
2017-07-27
[ [ "Ku", "Li Yang", "" ], [ "Learned-Miller", "Erik", "" ], [ "Grupen", "Rod", "" ] ]
In this work, we provide a solution for posturing the anthropomorphic Robonaut-2 hand and arm for grasping based on visual information. A mapping from visual features extracted from a convolutional neural network (CNN) to grasp points is learned. We demonstrate that a CNN pre-trained for image classification can be applied to a grasping task based on a small set of grasping examples. Our approach takes advantage of the hierarchical nature of the CNN by identifying features that capture the hierarchical support relations between filters in different CNN layers and locating their 3D positions by tracing activations backwards in the CNN. When this backward trace terminates in the RGB-D image, important manipulable structures are thereby localized. These features that reside in different layers of the CNN are then associated with controllers that engage different kinematic subchains in the hand/arm system for grasping. A grasping dataset is collected using demonstrated hand/object relationships for Robonaut-2 to evaluate the proposed approach in terms of the precision of the resulting preshape postures. We demonstrate that this approach outperforms baseline approaches in cluttered scenarios on the grasping dataset and a point cloud based approach on a grasping task using Robonaut-2.
1710.07814
Stefano Buzzi
Stefano Buzzi and Alessio Zappone
Downlink Power Control in User-Centric and Cell-Free Massive MIMO Wireless Networks
presented at the 28th Annual IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (IEEE PIMRC 2017), Montreal (CA), October 2017
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, the so-called cell-free Massive MIMO architecture has been introduced, wherein a very large number of distributed access points (APs) simultaneously and jointly serve a much smaller number of mobile stations (MSs). A variant of the cell-free technique is the user-centric approach, wherein each AP just decodes the MSs that it receives with the largest power. This paper considers both the cell-free and user-centric approaches, and, using an interplay of sequential optimization and alternating optimization, derives downlink power-control algorithms aimed at maximizing either the minimum users' SINR (to ensure fairness), or the system sum-rate. Numerical results show the effectiveness of the proposed algorithms, as well as that the user-centric approach generally outperforms the CF one.
[ { "created": "Sat, 21 Oct 2017 15:30:49 GMT", "version": "v1" } ]
2017-10-24
[ [ "Buzzi", "Stefano", "" ], [ "Zappone", "Alessio", "" ] ]
Recently, the so-called cell-free Massive MIMO architecture has been introduced, wherein a very large number of distributed access points (APs) simultaneously and jointly serve a much smaller number of mobile stations (MSs). A variant of the cell-free technique is the user-centric approach, wherein each AP just decodes the MSs that it receives with the largest power. This paper considers both the cell-free and user-centric approaches, and, using an interplay of sequential optimization and alternating optimization, derives downlink power-control algorithms aimed at maximizing either the minimum users' SINR (to ensure fairness), or the system sum-rate. Numerical results show the effectiveness of the proposed algorithms, as well as that the user-centric approach generally outperforms the CF one.
2001.11367
Shan Huang
Shan Huang (1), Xiao Zhou (2), Sang Chin (2) ((1) Boston University Department of Physics, (2) Boston University Department of Computer Science)
Application of Seq2Seq Models on Code Correction
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We apply various seq2seq models to programming language correction tasks on the Juliet Test Suite for C/C++ and Java of the Software Assurance Reference Dataset (SARD), and achieve 75\% (for C/C++) and 56\% (for Java) repair rates on these tasks. We introduce a Pyramid Encoder into these seq2seq models, which greatly increases computational and memory efficiency while maintaining repair rates similar to those of their non-pyramid counterparts. We successfully carry out an error type classification task on ITC benchmark examples (with only 685 code instances) using transfer learning with models pre-trained on the Juliet Test Suite, pointing to a novel way of processing small programming language datasets.
[ { "created": "Tue, 28 Jan 2020 21:57:43 GMT", "version": "v1" }, { "created": "Tue, 4 Aug 2020 20:51:43 GMT", "version": "v2" } ]
2020-08-06
[ [ "Huang", "Shan", "" ], [ "Zhou", "Xiao", "" ], [ "Chin", "Sang", "" ] ]
We apply various seq2seq models to programming language correction tasks on the Juliet Test Suite for C/C++ and Java of the Software Assurance Reference Dataset (SARD), and achieve 75\% (for C/C++) and 56\% (for Java) repair rates on these tasks. We introduce a Pyramid Encoder into these seq2seq models, which greatly increases computational and memory efficiency while maintaining repair rates similar to those of their non-pyramid counterparts. We successfully carry out an error type classification task on ITC benchmark examples (with only 685 code instances) using transfer learning with models pre-trained on the Juliet Test Suite, pointing to a novel way of processing small programming language datasets.
2406.15679
Sherri WeitlHarms
Sherri WeitlHarms
Iterative Service-Learning: A Computing-Based Case-study Applied to Small Rural Organizations
9 pages, 0 figures
Proceedings of the 2024 IEEE Frontiers in Education Conference
null
null
cs.SI cs.HC cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes the iterative use of service learning to develop, review, and improve computing-based artifacts. It is well-known that computing students benefit from service-learning experiences, as do the community partners. It is also well-known that computing artifacts rarely function well long-term without versioning and updates. Service-learning projects are often one-time engagements, completed by single teams of students over the course of a semester course. This limits the benefit for community partners that do not have the expertise or resources to review and update a project on their own. Over several years, teams of undergraduate students in a capstone course created tailored social media plans for numerous small rural organizations. The projects were required to meet client-specific needs, with identified audiences, measurable goals, and strategies and tactics to reach the identified goals. This paper builds on previously reported results for 60 projects conducted over several years. Nine clients were selected to participate in the iterative follow-up process, where new student teams conducted client interviews, reviewed the initial plans, and analyzed metrics from the current strategies and tactics to provide updated, improved artifacts. Using ABET learning objectives as a basis, clients reviewed the student teams and artifacts. This longitudinal study discusses the impact of this intervention on increasing implementation and sustained-use rates of computing artifacts developed through service learning. Both students and clients reported high satisfaction levels, and clients were particularly satisfied with the iterative improvement process. This research demonstrates an innovative practice for creating and maintaining computing artifacts through iterative service learning, while addressing the resource constraints of small organizations.
[ { "created": "Fri, 21 Jun 2024 23:05:13 GMT", "version": "v1" } ]
2024-08-06
[ [ "WeitlHarms", "Sherri", "" ] ]
This paper describes the iterative use of service learning to develop, review, and improve computing-based artifacts. It is well-known that computing students benefit from service-learning experiences, as do the community partners. It is also well-known that computing artifacts rarely function well long-term without versioning and updates. Service-learning projects are often one-time engagements, completed by single teams of students over the course of a semester course. This limits the benefit for community partners that do not have the expertise or resources to review and update a project on their own. Over several years, teams of undergraduate students in a capstone course created tailored social media plans for numerous small rural organizations. The projects were required to meet client-specific needs, with identified audiences, measurable goals, and strategies and tactics to reach the identified goals. This paper builds on previously reported results for 60 projects conducted over several years. Nine clients were selected to participate in the iterative follow-up process, where new student teams conducted client interviews, reviewed the initial plans, and analyzed metrics from the current strategies and tactics to provide updated, improved artifacts. Using ABET learning objectives as a basis, clients reviewed the student teams and artifacts. This longitudinal study discusses the impact of this intervention on increasing implementation and sustained-use rates of computing artifacts developed through service learning. Both students and clients reported high satisfaction levels, and clients were particularly satisfied with the iterative improvement process. This research demonstrates an innovative practice for creating and maintaining computing artifacts through iterative service learning, while addressing the resource constraints of small organizations.
1501.02212
Holger Petersen
Holger Petersen
Efficient Computation by Three Counter Machines
null
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that multiplication can be done in polynomial time on a three counter machine that receives its input as the contents of two counters. The technique is generalized to functions of two variables computable by deterministic Turing machines in linear space.
[ { "created": "Fri, 9 Jan 2015 18:02:01 GMT", "version": "v1" } ]
2015-01-12
[ [ "Petersen", "Holger", "" ] ]
We show that multiplication can be done in polynomial time on a three counter machine that receives its input as the contents of two counters. The technique is generalized to functions of two variables computable by deterministic Turing machines in linear space.
1912.05134
Baosong Yang
Yu Wan and Baosong Yang and Derek F. Wong and Lidia S. Chao and Haihua Du and Ben C.H. Ao
Unsupervised Neural Dialect Translation with Commonality and Diversity Modeling
AAAI 2020
null
10.1609/aaai.v34i05.6448
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a special machine translation task, dialect translation has two main characteristics: 1) lack of a parallel training corpus; and 2) similar grammar on the two sides of the translation. In this paper, we investigate how to exploit the commonality and diversity between dialects to build unsupervised translation models that access only monolingual data. Specifically, we leverage pivot-private embedding, layer coordination, and parameter sharing to sufficiently model commonality and diversity between source and target, ranging from the lexical, through the syntactic, to the semantic level. To examine the effectiveness of the proposed models, we collect a 20-million-sentence monolingual corpus for each of Mandarin and Cantonese, which are the official language and the most widely used dialect in China, respectively. Experimental results reveal that our methods outperform rule-based conversion between simplified and traditional Chinese as well as conventional unsupervised translation models by over 12 BLEU points.
[ { "created": "Wed, 11 Dec 2019 06:21:16 GMT", "version": "v1" } ]
2022-10-20
[ [ "Wan", "Yu", "" ], [ "Yang", "Baosong", "" ], [ "Wong", "Derek F.", "" ], [ "Chao", "Lidia S.", "" ], [ "Du", "Haihua", "" ], [ "Ao", "Ben C. H.", "" ] ]
As a special machine translation task, dialect translation has two main characteristics: 1) lack of a parallel training corpus; and 2) similar grammar on the two sides of the translation. In this paper, we investigate how to exploit the commonality and diversity between dialects to build unsupervised translation models that access only monolingual data. Specifically, we leverage pivot-private embedding, layer coordination, and parameter sharing to sufficiently model commonality and diversity between source and target, ranging from the lexical, through the syntactic, to the semantic level. To examine the effectiveness of the proposed models, we collect a 20-million-sentence monolingual corpus for each of Mandarin and Cantonese, which are the official language and the most widely used dialect in China, respectively. Experimental results reveal that our methods outperform rule-based conversion between simplified and traditional Chinese as well as conventional unsupervised translation models by over 12 BLEU points.
2311.03815
Ning Chen
Ning Chen, Zhipeng Cheng, Xuwei Fan, Bangzhen Huang, Yifeng Zhao, Lianfen Huang, Xiaojiang Du, Mohsen Guizani
Integrated Sensing, Communication, and Computing for Cost-effective Multimodal Federated Perception
null
null
null
null
cs.NI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning (FL) is a classic paradigm of 6G edge intelligence (EI), which alleviates the privacy leaks and high communication pressure caused by traditional centralized data processing in the artificial intelligence of things (AIoT). The implementation of multimodal federated perception (MFP) services involves three sub-processes, including sensing-based multimodal data generation, communication-based model transmission, and computing-based model training, ultimately relying on available underlying multi-domain physical resources such as time, frequency, and computing power. How to reasonably coordinate multi-domain resource scheduling among sensing, communication, and computing is therefore crucial to MFP networks. To address these issues, this paper investigates service-oriented resource management with integrated sensing, communication, and computing (ISCC). With the incentive mechanism of the MFP service market, the resource management problem is redefined as a social welfare maximization problem, where the idea of "expanding resources" and "reducing costs" is used to improve learning performance gain and reduce resource costs. Experimental results demonstrate the effectiveness and robustness of the proposed resource scheduling mechanisms.
[ { "created": "Tue, 7 Nov 2023 08:55:56 GMT", "version": "v1" } ]
2023-11-08
[ [ "Chen", "Ning", "" ], [ "Cheng", "Zhipeng", "" ], [ "Fan", "Xuwei", "" ], [ "Huang", "Bangzhen", "" ], [ "Zhao", "Yifeng", "" ], [ "Huang", "Lianfen", "" ], [ "Du", "Xiaojiang", "" ], [ "Guizani", "Mohsen", "" ] ]
Federated learning (FL) is a classic paradigm of 6G edge intelligence (EI), which alleviates the privacy leaks and high communication pressure caused by traditional centralized data processing in the artificial intelligence of things (AIoT). The implementation of multimodal federated perception (MFP) services involves three sub-processes, including sensing-based multimodal data generation, communication-based model transmission, and computing-based model training, ultimately relying on available underlying multi-domain physical resources such as time, frequency, and computing power. How to reasonably coordinate multi-domain resource scheduling among sensing, communication, and computing is therefore crucial to MFP networks. To address these issues, this paper investigates service-oriented resource management with integrated sensing, communication, and computing (ISCC). With the incentive mechanism of the MFP service market, the resource management problem is redefined as a social welfare maximization problem, where the idea of "expanding resources" and "reducing costs" is used to improve learning performance gain and reduce resource costs. Experimental results demonstrate the effectiveness and robustness of the proposed resource scheduling mechanisms.
2304.04812
Ziyang Li
Ziyang Li, Jiani Huang, Mayur Naik
Scallop: A Language for Neurosymbolic Programming
null
null
null
null
cs.PL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
We present Scallop, a language which combines the benefits of deep learning and logical reasoning. Scallop enables users to write a wide range of neurosymbolic applications and train them in a data- and compute-efficient manner. It achieves these goals through three key features: 1) a flexible symbolic representation that is based on the relational data model; 2) a declarative logic programming language that is based on Datalog and supports recursion, aggregation, and negation; and 3) a framework for automatic and efficient differentiable reasoning that is based on the theory of provenance semirings. We evaluate Scallop on a suite of eight neurosymbolic applications from the literature. Our evaluation demonstrates that Scallop is capable of expressing algorithmic reasoning in diverse and challenging AI tasks, provides a succinct interface for machine learning programmers to integrate logical domain knowledge, and yields solutions that are comparable or superior to state-of-the-art models in terms of accuracy. Furthermore, Scallop's solutions outperform these models in aspects such as runtime and data efficiency, interpretability, and generalizability.
[ { "created": "Mon, 10 Apr 2023 18:46:53 GMT", "version": "v1" } ]
2023-04-12
[ [ "Li", "Ziyang", "" ], [ "Huang", "Jiani", "" ], [ "Naik", "Mayur", "" ] ]
We present Scallop, a language which combines the benefits of deep learning and logical reasoning. Scallop enables users to write a wide range of neurosymbolic applications and train them in a data- and compute-efficient manner. It achieves these goals through three key features: 1) a flexible symbolic representation that is based on the relational data model; 2) a declarative logic programming language that is based on Datalog and supports recursion, aggregation, and negation; and 3) a framework for automatic and efficient differentiable reasoning that is based on the theory of provenance semirings. We evaluate Scallop on a suite of eight neurosymbolic applications from the literature. Our evaluation demonstrates that Scallop is capable of expressing algorithmic reasoning in diverse and challenging AI tasks, provides a succinct interface for machine learning programmers to integrate logical domain knowledge, and yields solutions that are comparable or superior to state-of-the-art models in terms of accuracy. Furthermore, Scallop's solutions outperform these models in aspects such as runtime and data efficiency, interpretability, and generalizability.
2311.13509
Qiyang Zhang
Yijie Chen, Qiyang Zhang, Yiran Zhang, Xiao Ma, Ao Zhou
Energy and Time-Aware Inference Offloading for DNN-based Applications in LEO Satellites
Accepted by ICNP 2023 Workshop
null
null
null
cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
In recent years, Low Earth Orbit (LEO) satellites have witnessed rapid development, with inference based on Deep Neural Network (DNN) models emerging as the prevailing technology for remote sensing satellite image recognition. However, the substantial computation capability and energy demands of DNN models, coupled with the instability of the satellite-ground link, pose significant challenges, burdening satellites with limited power intake and hindering the timely completion of tasks. Existing approaches, such as transmitting all images to the ground for processing or executing DNN models on the satellite, are unable to effectively address this issue. By exploiting the internal hierarchical structure of DNNs and treating each layer as an independent subtask, we propose a satellite-ground collaborative computation partial offloading approach to address this challenge. We formulate the problem of minimizing inference task execution time and onboard energy consumption through offloading as an integer linear programming (ILP) model. The complexity in solving the problem arises from the combinatorial explosion in the discrete solution space. To address this, we have designed an improved optimization algorithm based on branch and bound. Simulation results illustrate that, compared to existing approaches, our algorithm improves performance by 10%-18%.
[ { "created": "Wed, 22 Nov 2023 16:34:17 GMT", "version": "v1" } ]
2023-11-23
[ [ "Chen", "Yijie", "" ], [ "Zhang", "Qiyang", "" ], [ "Zhang", "Yiran", "" ], [ "Ma", "Xiao", "" ], [ "Zhou", "Ao", "" ] ]
In recent years, Low Earth Orbit (LEO) satellites have witnessed rapid development, with inference based on Deep Neural Network (DNN) models emerging as the prevailing technology for remote sensing satellite image recognition. However, the substantial computation capability and energy demands of DNN models, coupled with the instability of the satellite-ground link, pose significant challenges, burdening satellites with limited power intake and hindering the timely completion of tasks. Existing approaches, such as transmitting all images to the ground for processing or executing DNN models on the satellite, are unable to effectively address this issue. By exploiting the internal hierarchical structure of DNNs and treating each layer as an independent subtask, we propose a satellite-ground collaborative computation partial offloading approach to address this challenge. We formulate the problem of minimizing inference task execution time and onboard energy consumption through offloading as an integer linear programming (ILP) model. The complexity in solving the problem arises from the combinatorial explosion in the discrete solution space. To address this, we have designed an improved optimization algorithm based on branch and bound. Simulation results illustrate that, compared to existing approaches, our algorithm improves performance by 10%-18%.
2108.03567
Peihan Liu
Nuh Aydin, Peihan Liu, Bryan Yoshino
A Database of Quantum Codes
arXiv admin note: text overlap with arXiv:2106.12065
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantum error correcting codes (QECCs) are becoming an increasingly important branch of coding theory. For classical block codes, a \href{codetables.de} {comprehensive database of best known codes} exists, available online at \cite{codetables}. The same database contains data on best known quantum codes as well, but only for the binary field. There has been increased interest in quantum codes over larger fields, with many papers reporting such codes in the literature. However, to the best of our knowledge, there is no database of best known quantum codes for most fields. We have established a new database of QECCs that includes codes over $\mathbb{F}_{q^2}$ for $q\leq 29$. We also present several methods of constructing quantum codes from classical codes based on the CSS construction. We have found dozens of new quantum codes that improve on the previously known parameters, and also hundreds of new quantum codes that did not exist in the literature.
[ { "created": "Sun, 8 Aug 2021 04:31:10 GMT", "version": "v1" } ]
2021-08-21
[ [ "Aydin", "Nuh", "" ], [ "Liu", "Peihan", "" ], [ "Yoshino", "Bryan", "" ] ]
Quantum error correcting codes (QECCs) are becoming an increasingly important branch of coding theory. For classical block codes, a \href{codetables.de} {comprehensive database of best known codes} exists, available online at \cite{codetables}. The same database contains data on best known quantum codes as well, but only for the binary field. There has been increased interest in quantum codes over larger fields, with many papers reporting such codes in the literature. However, to the best of our knowledge, there is no database of best known quantum codes for most fields. We have established a new database of QECCs that includes codes over $\mathbb{F}_{q^2}$ for $q\leq 29$. We also present several methods of constructing quantum codes from classical codes based on the CSS construction. We have found dozens of new quantum codes that improve on the previously known parameters, and also hundreds of new quantum codes that did not exist in the literature.
2404.16816
Harman Singh
Harman Singh, Nitish Gupta, Shikhar Bharadwaj, Dinesh Tewari, Partha Talukdar
IndicGenBench: A Multilingual Benchmark to Evaluate Generation Capabilities of LLMs on Indic Languages
ACL 2024
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As large language models (LLMs) see increasing adoption across the globe, it is imperative for LLMs to be representative of the linguistic diversity of the world. India is a linguistically diverse country of 1.4 billion people. To facilitate research on multilingual LLM evaluation, we release IndicGenBench - the largest benchmark for evaluating LLMs on user-facing generation tasks across a diverse set of 29 Indic languages covering 13 scripts and 4 language families. IndicGenBench is composed of diverse generation tasks like cross-lingual summarization, machine translation, and cross-lingual question answering. IndicGenBench extends existing benchmarks to many Indic languages through human curation, providing multi-way parallel evaluation data for many under-represented Indic languages for the first time. We evaluate a wide range of proprietary and open-source LLMs including GPT-3.5, GPT-4, PaLM-2, mT5, Gemma, BLOOM and LLaMA on IndicGenBench in a variety of settings. The largest PaLM-2 model performs the best on most tasks; however, there is a significant performance gap in all languages compared to English, showing that further research is needed for the development of more inclusive multilingual language models. IndicGenBench is released at www.github.com/google-research-datasets/indic-gen-bench
[ { "created": "Thu, 25 Apr 2024 17:57:36 GMT", "version": "v1" }, { "created": "Wed, 7 Aug 2024 19:47:21 GMT", "version": "v2" } ]
2024-08-09
[ [ "Singh", "Harman", "" ], [ "Gupta", "Nitish", "" ], [ "Bharadwaj", "Shikhar", "" ], [ "Tewari", "Dinesh", "" ], [ "Talukdar", "Partha", "" ] ]
As large language models (LLMs) see increasing adoption across the globe, it is imperative for LLMs to be representative of the linguistic diversity of the world. India is a linguistically diverse country of 1.4 billion people. To facilitate research on multilingual LLM evaluation, we release IndicGenBench - the largest benchmark for evaluating LLMs on user-facing generation tasks across a diverse set of 29 Indic languages covering 13 scripts and 4 language families. IndicGenBench is composed of diverse generation tasks like cross-lingual summarization, machine translation, and cross-lingual question answering. IndicGenBench extends existing benchmarks to many Indic languages through human curation, providing multi-way parallel evaluation data for many under-represented Indic languages for the first time. We evaluate a wide range of proprietary and open-source LLMs including GPT-3.5, GPT-4, PaLM-2, mT5, Gemma, BLOOM and LLaMA on IndicGenBench in a variety of settings. The largest PaLM-2 model performs the best on most tasks; however, there is a significant performance gap in all languages compared to English, showing that further research is needed for the development of more inclusive multilingual language models. IndicGenBench is released at www.github.com/google-research-datasets/indic-gen-bench
2406.16793
Yushun Zhang
Yushun Zhang, Congliang Chen, Ziniu Li, Tian Ding, Chenwei Wu, Yinyu Ye, Zhi-Quan Luo, Ruoyu Sun
Adam-mini: Use Fewer Learning Rates To Gain More
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
We propose Adam-mini, an optimizer that achieves on-par or better performance than AdamW with 45% to 50% less memory footprint. Adam-mini reduces memory by cutting down the learning rate resources in Adam (i.e., $1/\sqrt{v}$). We find that $\geq$ 90% of these learning rates in $v$ could be harmlessly removed if we (1) carefully partition the parameters into blocks following our proposed principle on Hessian structure; (2) assign a single but good learning rate to each parameter block. We further find that, for each of these parameter blocks, there exists a single high-quality learning rate that can outperform Adam, provided that sufficient resources are available to search it out. We then provide one cost-effective way to find good learning rates and propose Adam-mini. Empirically, we verify that Adam-mini performs on par or better than AdamW on various language models sized from 125M to 7B for pre-training, supervised fine-tuning, and RLHF. The reduced memory footprint of Adam-mini also alleviates communication overheads among GPUs and CPUs, thereby increasing throughput. For instance, Adam-mini achieves 49.6% higher throughput than AdamW when pre-training Llama2-7B on $2\times$ A800-80GB GPUs, which saves 33% wall-clock time for pre-training.
[ { "created": "Mon, 24 Jun 2024 16:56:41 GMT", "version": "v1" }, { "created": "Tue, 25 Jun 2024 17:45:06 GMT", "version": "v2" }, { "created": "Wed, 26 Jun 2024 13:03:16 GMT", "version": "v3" }, { "created": "Mon, 1 Jul 2024 17:46:19 GMT", "version": "v4" }, { "created": "Wed, 3 Jul 2024 16:38:17 GMT", "version": "v5" } ]
2024-07-04
[ [ "Zhang", "Yushun", "" ], [ "Chen", "Congliang", "" ], [ "Li", "Ziniu", "" ], [ "Ding", "Tian", "" ], [ "Wu", "Chenwei", "" ], [ "Ye", "Yinyu", "" ], [ "Luo", "Zhi-Quan", "" ], [ "Sun", "Ruoyu", "" ] ]
We propose Adam-mini, an optimizer that achieves on-par or better performance than AdamW with 45% to 50% less memory footprint. Adam-mini reduces memory by cutting down the learning rate resources in Adam (i.e., $1/\sqrt{v}$). We find that $\geq$ 90% of these learning rates in $v$ could be harmlessly removed if we (1) carefully partition the parameters into blocks following our proposed principle on Hessian structure; (2) assign a single but good learning rate to each parameter block. We further find that, for each of these parameter blocks, there exists a single high-quality learning rate that can outperform Adam, provided that sufficient resources are available to search it out. We then provide one cost-effective way to find good learning rates and propose Adam-mini. Empirically, we verify that Adam-mini performs on par or better than AdamW on various language models sized from 125M to 7B for pre-training, supervised fine-tuning, and RLHF. The reduced memory footprint of Adam-mini also alleviates communication overheads among GPUs and CPUs, thereby increasing throughput. For instance, Adam-mini achieves 49.6% higher throughput than AdamW when pre-training Llama2-7B on $2\times$ A800-80GB GPUs, which saves 33% wall-clock time for pre-training.
2205.00510
Jussi Karlgren
Jussi Karlgren
Textual Stylistic Variation: Choices, Genres and Individuals
null
null
10.1007/978-3-642-12337-5_6
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This chapter argues for more informed target metrics for the statistical processing of stylistic variation in text collections. Much as operationalised relevance proved a useful goal to strive for in information retrieval, research in textual stylistics, whether application oriented or philologically inclined, needs goals formulated in terms of pertinence, relevance, and utility - notions that agree with reader experience of text. Differences readers are aware of are mostly based on utility - not on textual characteristics per se. Mostly, readers report stylistic differences in terms of genres. Genres, while vague and undefined, are well-established and talked about: very early on, readers learn to distinguish genres. This chapter discusses variation given by genre, and contrasts it to variation occasioned by individual choice.
[ { "created": "Sun, 1 May 2022 16:39:49 GMT", "version": "v1" } ]
2022-05-11
[ [ "Karlgren", "Jussi", "" ] ]
This chapter argues for more informed target metrics for the statistical processing of stylistic variation in text collections. Much as operationalised relevance proved a useful goal to strive for in information retrieval, research in textual stylistics, whether application oriented or philologically inclined, needs goals formulated in terms of pertinence, relevance, and utility - notions that agree with reader experience of text. Differences readers are aware of are mostly based on utility - not on textual characteristics per se. Mostly, readers report stylistic differences in terms of genres. Genres, while vague and undefined, are well-established and talked about: very early on, readers learn to distinguish genres. This chapter discusses variation given by genre, and contrasts it to variation occasioned by individual choice.
1905.04612
Bruce MacLennan
Chengrui Li and Bruce J. MacLennan
Continuous-Time Systems for Solving 0-1 Integer Linear Programming Feasibility Problems
null
null
null
null
cs.DS cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The 0-1 integer linear programming feasibility problem is an important NP-complete problem. This paper proposes a continuous-time dynamical system for solving that problem without getting trapped in non-solution local minima. First, the problem is transformed to an easier form in linear time. Then, we propose an "impulse algorithm" to escape from local traps and show its performance is better than randomization for escaping traps. Second, we present the time-to-solution distribution of the impulse algorithm and compare it with exhaustive search to see its advantages. Third, we show that the fractional size of the basin of attraction of the global minimum is significantly larger than $2^{-N}$, the corresponding discrete probability for exhaustive search. Finally, we conduct a case study to show that the location of the basin is independent of different dimensions. These findings reveal a better way to solve the 0-1 integer linear programming feasibility problem continuously and show that its cost could be less than discrete methods in average cases.
[ { "created": "Sun, 12 May 2019 00:01:07 GMT", "version": "v1" } ]
2019-05-14
[ [ "Li", "Chengrui", "" ], [ "MacLennan", "Bruce J.", "" ] ]
The 0-1 integer linear programming feasibility problem is an important NP-complete problem. This paper proposes a continuous-time dynamical system for solving that problem without getting trapped in non-solution local minima. First, the problem is transformed to an easier form in linear time. Then, we propose an "impulse algorithm" to escape from local traps and show its performance is better than randomization for escaping traps. Second, we present the time-to-solution distribution of the impulse algorithm and compare it with exhaustive search to see its advantages. Third, we show that the fractional size of the basin of attraction of the global minimum is significantly larger than $2^{-N}$, the corresponding discrete probability for exhaustive search. Finally, we conduct a case study to show that the location of the basin is independent of different dimensions. These findings reveal a better way to solve the 0-1 integer linear programming feasibility problem continuously and show that its cost could be less than discrete methods in average cases.
1901.05919
Francesco Kriegel
Francesco Kriegel
The Distributive, Graded Lattice of EL Concept Descriptions and its Neighborhood Relation (Extended Version)
null
null
null
LTCS-Report 18-10
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For the description logic EL, we consider the neighborhood relation which is induced by the subsumption order, and we show that the corresponding lattice of EL concept descriptions is distributive, modular, graded, and metric. In particular, this implies the existence of a rank function as well as the existence of a distance function.
[ { "created": "Thu, 17 Jan 2019 17:35:46 GMT", "version": "v1" } ]
2019-01-18
[ [ "Kriegel", "Francesco", "" ] ]
For the description logic EL, we consider the neighborhood relation which is induced by the subsumption order, and we show that the corresponding lattice of EL concept descriptions is distributive, modular, graded, and metric. In particular, this implies the existence of a rank function as well as the existence of a distance function.
2311.06992
Arnav Kumar
Arnav Kumar, Andr\'es Monroy-Hern\'andez
Ball-AR: Fostering Playful Co-Located Interaction Through Environment-centric Physical Activity with AR
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
We present Ball-AR, an augmented reality (AR) game where two players in the same physical space attempt to hit each other with virtual dodgeballs overlaid on the physical world. Researchers have studied AR's potential for fostering co-located interaction and physical activity; however, they have not investigated the impacts of physical activity and physical environment on user experiences and interaction. We created an AR dodgeball game centered around encouraging physical activity and harnessing the physical environment. We then evaluated the game with five dyads to analyze the impacts of these design choices on the quality of gameplay and interaction between players. We found that physical activity and the shared physical space created memorable experiences and interactions among participants, although participants desired a more augmented and immersive experience.
[ { "created": "Mon, 13 Nov 2023 00:12:14 GMT", "version": "v1" } ]
2023-11-14
[ [ "Kumar", "Arnav", "" ], [ "Monroy-Hernández", "Andrés", "" ] ]
We present Ball-AR, an augmented reality (AR) game where two players in the same physical space attempt to hit each other with virtual dodgeballs overlaid on the physical world. Researchers have studied AR's potential for fostering co-located interaction and physical activity; however, they have not investigated the impacts of physical activity and physical environment on user experiences and interaction. We created an AR dodgeball game centered around encouraging physical activity and harnessing the physical environment. We then evaluated the game with five dyads to analyze the impacts of these design choices on the quality of gameplay and interaction between players. We found that physical activity and the shared physical space created memorable experiences and interactions among participants, although participants desired a more augmented and immersive experience.
2007.09072
Xu Chen
Xin Tang and Xu Chen and Liekang Zeng and Shuai Yu and Lin Chen
Joint Multi-User DNN Partitioning and Computational Resource Allocation for Collaborative Edge Intelligence
null
null
null
null
cs.DC cs.LG cs.NI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile Edge Computing (MEC) has emerged as a promising supporting architecture providing a variety of resources to the network edge, thus acting as an enabler for edge intelligence services empowering massive mobile and Internet of Things (IoT) devices with AI capability. With the assistance of edge servers, user equipments (UEs) are able to run deep neural network (DNN) based AI applications, which are generally resource-hungry and compute-intensive, such that an individual UE can hardly afford them by itself in real time. However, the resources in each individual edge server are typically limited. Therefore, any resource optimization involving edge servers is by nature a resource-constrained optimization problem and needs to be tackled in such a realistic context. Motivated by this observation, we investigate the optimization problem of DNN partitioning (an emerging DNN offloading scheme) in a realistic multi-user resource-constrained condition that is rarely considered in previous works. Despite the extremely large solution space, we reveal several properties of this specific optimization problem of joint multi-UE DNN partitioning and computational resource allocation. We propose an algorithm called Iterative Alternating Optimization (IAO) that can achieve the optimal solution in polynomial time. In addition, we present a rigorous theoretical analysis of our algorithm in terms of time complexity and performance under realistic estimation error. Moreover, we build a prototype that implements our framework and conduct extensive experiments using realistic DNN models, whose results demonstrate its effectiveness and efficiency.
[ { "created": "Wed, 15 Jul 2020 09:40:13 GMT", "version": "v1" } ]
2020-07-20
[ [ "Tang", "Xin", "" ], [ "Chen", "Xu", "" ], [ "Zeng", "Liekang", "" ], [ "Yu", "Shuai", "" ], [ "Chen", "Lin", "" ] ]
Mobile Edge Computing (MEC) has emerged as a promising supporting architecture providing a variety of resources to the network edge, thus acting as an enabler for edge intelligence services empowering massive mobile and Internet of Things (IoT) devices with AI capability. With the assistance of edge servers, user equipments (UEs) are able to run deep neural network (DNN) based AI applications, which are generally resource-hungry and compute-intensive, such that an individual UE can hardly afford them by itself in real time. However, the resources in each individual edge server are typically limited. Therefore, any resource optimization involving edge servers is by nature a resource-constrained optimization problem and needs to be tackled in such a realistic context. Motivated by this observation, we investigate the optimization problem of DNN partitioning (an emerging DNN offloading scheme) in a realistic multi-user resource-constrained condition that is rarely considered in previous works. Despite the extremely large solution space, we reveal several properties of this specific optimization problem of joint multi-UE DNN partitioning and computational resource allocation. We propose an algorithm called Iterative Alternating Optimization (IAO) that can achieve the optimal solution in polynomial time. In addition, we present a rigorous theoretical analysis of our algorithm in terms of time complexity and performance under realistic estimation error. Moreover, we build a prototype that implements our framework and conduct extensive experiments using realistic DNN models, whose results demonstrate its effectiveness and efficiency.
2407.01546
Hongjie Xu
Hongjie Xu, Yunzhuang Shen, Yuan Sun and Xiaodong Li
Machine Learning-Enhanced Ant Colony Optimization for Column Generation
9 pages including references
null
null
null
cs.NE cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Column generation (CG) is a powerful technique for solving optimization problems that involve a large number of variables or columns. This technique begins by solving a smaller problem with a subset of columns and gradually generates additional columns as needed. However, the generation of columns often requires solving difficult subproblems repeatedly, which can be a bottleneck for CG. To address this challenge, we propose a novel method called machine learning enhanced ant colony optimization (MLACO) to efficiently generate multiple high-quality columns from a subproblem. Specifically, we train an ML model to predict the optimal solution of a subproblem, and then integrate this ML prediction into the probabilistic model of ACO to sample multiple high-quality columns. Our experimental results on the bin packing problem with conflicts show that the MLACO method significantly improves the performance of CG compared to several state-of-the-art methods. Furthermore, when our method is incorporated into a Branch-and-Price method, it leads to a significant reduction in solution time.
[ { "created": "Tue, 23 Apr 2024 01:00:09 GMT", "version": "v1" } ]
2024-07-03
[ [ "Xu", "Hongjie", "" ], [ "Shen", "Yunzhuang", "" ], [ "Sun", "Yuan", "" ], [ "Li", "Xiaodong", "" ] ]
Column generation (CG) is a powerful technique for solving optimization problems that involve a large number of variables or columns. This technique begins by solving a smaller problem with a subset of columns and gradually generates additional columns as needed. However, the generation of columns often requires solving difficult subproblems repeatedly, which can be a bottleneck for CG. To address this challenge, we propose a novel method called machine learning enhanced ant colony optimization (MLACO) to efficiently generate multiple high-quality columns from a subproblem. Specifically, we train an ML model to predict the optimal solution of a subproblem, and then integrate this ML prediction into the probabilistic model of ACO to sample multiple high-quality columns. Our experimental results on the bin packing problem with conflicts show that the MLACO method significantly improves the performance of CG compared to several state-of-the-art methods. Furthermore, when our method is incorporated into a Branch-and-Price method, it leads to a significant reduction in solution time.
2108.05170
Sven Mantowsky
Sven Mantowsky, Falk Heuer, Syed Saqib Bukhari, Michael Keckeisen, Georg Schneider
ProAI: An Efficient Embedded AI Hardware for Automotive Applications -- a Benchmark Study
Accepted by IEEE International Conference on Computer Vision (ICCV) 2021
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Development in the field of Single Board Computers (SBCs) has been increasing for several years. They provide a good balance between computing performance and power consumption, which is usually required for mobile platforms, such as applications in vehicles for Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD). However, there is an ever-increasing need for more powerful and efficient SBCs that can run power-intensive Deep Neural Networks (DNNs) in real time and can also satisfy necessary functional safety requirements such as Automotive Safety Integrity Level (ASIL). ProAI is being developed by ZF mainly to run powerful and efficient applications such as multitask DNNs, and on top of that it also has the required safety certification for AD. In this work, we compare and discuss state-of-the-art SBCs on the basis of a power-intensive multitask DNN architecture called Multitask-CenterNet with respect to performance measures such as FPS and power efficiency. As an automotive supercomputer, ProAI delivers an excellent combination of performance and efficiency, managing nearly twice as many FPS per watt as a modern workstation laptop and almost four times as many as the Jetson Nano. Furthermore, it was shown that there is still power in reserve for further and more complex tasks on the ProAI, based on the CPU and GPU utilization during the benchmark.
[ { "created": "Wed, 11 Aug 2021 11:54:05 GMT", "version": "v1" }, { "created": "Thu, 9 Sep 2021 13:23:34 GMT", "version": "v2" } ]
2021-09-10
[ [ "Mantowsky", "Sven", "" ], [ "Heuer", "Falk", "" ], [ "Bukhari", "Syed Saqib", "" ], [ "Keckeisen", "Michael", "" ], [ "Schneider", "Georg", "" ] ]
Development in the field of Single Board Computers (SBCs) has been increasing for several years. They provide a good balance between computing performance and power consumption, which is usually required for mobile platforms, such as applications in vehicles for Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD). However, there is an ever-increasing need for more powerful and efficient SBCs that can run power-intensive Deep Neural Networks (DNNs) in real time and can also satisfy necessary functional safety requirements such as Automotive Safety Integrity Level (ASIL). ProAI is being developed by ZF mainly to run powerful and efficient applications such as multitask DNNs, and on top of that it also has the required safety certification for AD. In this work, we compare and discuss state-of-the-art SBCs on the basis of a power-intensive multitask DNN architecture called Multitask-CenterNet with respect to performance measures such as FPS and power efficiency. As an automotive supercomputer, ProAI delivers an excellent combination of performance and efficiency, managing nearly twice as many FPS per watt as a modern workstation laptop and almost four times as many as the Jetson Nano. Furthermore, it was shown that there is still power in reserve for further and more complex tasks on the ProAI, based on the CPU and GPU utilization during the benchmark.
2112.03009
Ximing Li
Bing Wang, Yue Wang, Ximing Li, Jihong Ouyang
Weakly Supervised Prototype Topic Model with Discriminative Seed Words: Modifying the Category Prior by Self-exploring Supervised Signals
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Dataless text classification, i.e., a new paradigm of weakly supervised learning, refers to the task of learning with unlabeled documents and a few predefined representative words of categories, known as seed words. Recent generative dataless methods construct document-specific category priors by using seed word occurrences only; however, such category priors often contain very limited and even noisy supervised signals. To remedy this problem, in this paper we propose a novel formulation of the category prior. First, for each document, we consider its label membership degree by not only counting seed word occurrences, but also using a novel prototype scheme, which captures pseudo-nearest neighboring categories. Second, for each label, we consider its frequency prior knowledge of the corpus, which is also discriminative knowledge for classification. By incorporating the proposed category prior into the previous generative dataless method, we suggest a novel generative dataless method, namely the Weakly Supervised Prototype Topic Model (WSPTM). The experimental results on real-world datasets demonstrate that WSPTM outperforms the existing baseline methods.
[ { "created": "Sat, 20 Nov 2021 00:00:56 GMT", "version": "v1" } ]
2021-12-07
[ [ "Wang", "Bing", "" ], [ "Wang", "Yue", "" ], [ "Li", "Ximing", "" ], [ "Ouyang", "Jihong", "" ] ]
Dataless text classification, i.e., a new paradigm of weakly supervised learning, refers to the task of learning with unlabeled documents and a few predefined representative words of categories, known as seed words. Recent generative dataless methods construct document-specific category priors by using seed word occurrences only; however, such category priors often contain very limited and even noisy supervised signals. To remedy this problem, in this paper we propose a novel formulation of the category prior. First, for each document, we consider its label membership degree by not only counting seed word occurrences, but also using a novel prototype scheme, which captures pseudo-nearest neighboring categories. Second, for each label, we consider its frequency prior knowledge of the corpus, which is also discriminative knowledge for classification. By incorporating the proposed category prior into the previous generative dataless method, we suggest a novel generative dataless method, namely the Weakly Supervised Prototype Topic Model (WSPTM). The experimental results on real-world datasets demonstrate that WSPTM outperforms the existing baseline methods.
2310.12334
Weiyi Wu
Weiyi Wu, Chongyang Gao, Joseph DiPalma, Soroush Vosoughi, Saeed Hassanpour
Improving Representation Learning for Histopathologic Images with Cluster Constraints
Accepted by ICCV2023
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 21404-21414
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in whole-slide image (WSI) scanners and computational capabilities have significantly propelled the application of artificial intelligence in histopathology slide analysis. While these strides are promising, current supervised learning approaches for WSI analysis come with the challenge of exhaustively labeling high-resolution slides - a process that is both labor-intensive and time-consuming. In contrast, self-supervised learning (SSL) pretraining strategies are emerging as a viable alternative, given that they don't rely on explicit data annotations. These SSL strategies are quickly bridging the performance disparity with their supervised counterparts. In this context, we introduce an SSL framework. This framework aims for transferable representation learning and semantically meaningful clustering by synergizing invariance loss and clustering loss in WSI analysis. Notably, our approach outperforms common SSL methods in downstream classification and clustering tasks, as evidenced by tests on the Camelyon16 and a pancreatic cancer dataset.
[ { "created": "Wed, 18 Oct 2023 21:20:44 GMT", "version": "v1" }, { "created": "Tue, 14 Nov 2023 12:04:24 GMT", "version": "v2" } ]
2023-11-15
[ [ "Wu", "Weiyi", "" ], [ "Gao", "Chongyang", "" ], [ "DiPalma", "Joseph", "" ], [ "Vosoughi", "Soroush", "" ], [ "Hassanpour", "Saeed", "" ] ]
Recent advances in whole-slide image (WSI) scanners and computational capabilities have significantly propelled the application of artificial intelligence in histopathology slide analysis. While these strides are promising, current supervised learning approaches for WSI analysis come with the challenge of exhaustively labeling high-resolution slides - a process that is both labor-intensive and time-consuming. In contrast, self-supervised learning (SSL) pretraining strategies are emerging as a viable alternative, given that they don't rely on explicit data annotations. These SSL strategies are quickly bridging the performance disparity with their supervised counterparts. In this context, we introduce an SSL framework. This framework aims for transferable representation learning and semantically meaningful clustering by synergizing invariance loss and clustering loss in WSI analysis. Notably, our approach outperforms common SSL methods in downstream classification and clustering tasks, as evidenced by tests on the Camelyon16 and a pancreatic cancer dataset.
2202.03055
Anastasiia Grishina
Anastasiia Grishina
Enabling Automatic Repair of Source Code Vulnerabilities Using Data-Driven Methods
Accepted for the ICSE '22 Doctoral Symposium
null
10.1145/3510454.3517063
null
cs.SE cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
Users around the world rely on software-intensive systems in their day-to-day activities. These systems regularly contain bugs and security vulnerabilities. To facilitate bug fixing, data-driven models of automatic program repair use pairs of buggy and fixed code to learn transformations that fix errors in code. However, automatic repair of security vulnerabilities remains under-explored. In this work, we propose ways to improve code representations for vulnerability repair from three perspectives: input data type, data-driven models, and downstream tasks. The expected results of this work are improved code representations for automatic program repair and, specifically, fixing security vulnerabilities.
[ { "created": "Mon, 7 Feb 2022 10:47:37 GMT", "version": "v1" } ]
2022-02-08
[ [ "Grishina", "Anastasiia", "" ] ]
Users around the world rely on software-intensive systems in their day-to-day activities. These systems regularly contain bugs and security vulnerabilities. To facilitate bug fixing, data-driven models of automatic program repair use pairs of buggy and fixed code to learn transformations that fix errors in code. However, automatic repair of security vulnerabilities remains under-explored. In this work, we propose ways to improve code representations for vulnerability repair from three perspectives: input data type, data-driven models, and downstream tasks. The expected results of this work are improved code representations for automatic program repair and, specifically, fixing security vulnerabilities.
cs/9809065
Chunlei Liu
Sonia Fahmy, Raj Jain, Rohit Goyal, Bobby Vandalore, Shivkumar Kalyanaraman, Sastri Kota, and Pradeep Samudra
Feedback Consolidation Algorithms for ABR Point-to-Multipoint Connections in ATM Networks
Proceedings of IEEE INFOCOM 1998, March 1998, volume 3, pp. 1004-1013
null
10.1109/INFCOM.1998.662910
null
cs.NI
null
ABR traffic management for point-to-multipoint connections controls the source rate to the minimum rate supported by all the branches of the multicast tree. A number of algorithms have been developed for extending ABR congestion avoidance algorithms to perform feedback consolidation at the branch points. This paper discusses various design options and implementation alternatives for the consolidation algorithms, and proposes a number of new algorithms. The performance of the proposed algorithms and the previous algorithms is compared under a variety of conditions. Results indicate that the algorithms we propose eliminate the consolidation noise (caused if the feedback is returned before all branches respond), while exhibiting a fast transient response.
[ { "created": "Wed, 23 Sep 1998 16:22:42 GMT", "version": "v1" } ]
2016-11-15
[ [ "Fahmy", "Sonia", "" ], [ "Jain", "Raj", "" ], [ "Goyal", "Rohit", "" ], [ "Vandalore", "Bobby", "" ], [ "Kalyanaraman", "Shivkumar", "" ], [ "Kota", "Sastri", "" ], [ "Samudra", "Pradeep", "" ] ]
ABR traffic management for point-to-multipoint connections controls the source rate to the minimum rate supported by all the branches of the multicast tree. A number of algorithms have been developed for extending ABR congestion avoidance algorithms to perform feedback consolidation at the branch points. This paper discusses various design options and implementation alternatives for the consolidation algorithms, and proposes a number of new algorithms. The performance of the proposed algorithms and the previous algorithms is compared under a variety of conditions. Results indicate that the algorithms we propose eliminate the consolidation noise (caused if the feedback is returned before all branches respond), while exhibiting a fast transient response.
2401.13410
Shuyi Wang
Shuyi Wang, Bing Liu, Guido Zuccon
How to Forget Clients in Federated Online Learning to Rank?
Accepted in ECIR 2024
null
null
null
cs.CR cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data protection legislation like the European Union's General Data Protection Regulation (GDPR) establishes the \textit{right to be forgotten}: a user (client) can request contributions made using their data to be removed from learned models. In this paper, we study how to remove the contributions made by a client participating in a Federated Online Learning to Rank (FOLTR) system. In a FOLTR system, a ranker is learned by aggregating local updates to the global ranking model. Local updates are learned in an online manner at the client level using queries and implicit interactions that have occurred within that specific client. By doing so, each client's local data is not shared with other clients or with a centralised search service, while at the same time clients can benefit from an effective global ranking model learned from contributions of each client in the federation. In this paper, we study an effective and efficient unlearning method that can remove a client's contribution without compromising the overall ranker effectiveness and without needing to retrain the global ranker from scratch. A key challenge is how to measure whether the model has unlearned the contributions from the client $c^*$ that has requested removal. For this, we instruct $c^*$ to perform a poisoning attack (adding noise to this client's updates) and then we measure whether the impact of the attack is lessened when the unlearning process has taken place. Through experiments on four datasets, we demonstrate the effectiveness and efficiency of the unlearning strategy under different combinations of parameter settings.
[ { "created": "Wed, 24 Jan 2024 12:11:41 GMT", "version": "v1" } ]
2024-01-25
[ [ "Wang", "Shuyi", "" ], [ "Liu", "Bing", "" ], [ "Zuccon", "Guido", "" ] ]
Data protection legislation like the European Union's General Data Protection Regulation (GDPR) establishes the \textit{right to be forgotten}: a user (client) can request contributions made using their data to be removed from learned models. In this paper, we study how to remove the contributions made by a client participating in a Federated Online Learning to Rank (FOLTR) system. In a FOLTR system, a ranker is learned by aggregating local updates to the global ranking model. Local updates are learned in an online manner at the client level using queries and implicit interactions that have occurred within that specific client. By doing so, each client's local data is not shared with other clients or with a centralised search service, while at the same time clients can benefit from an effective global ranking model learned from contributions of each client in the federation. In this paper, we study an effective and efficient unlearning method that can remove a client's contribution without compromising the overall ranker effectiveness and without needing to retrain the global ranker from scratch. A key challenge is how to measure whether the model has unlearned the contributions from the client $c^*$ that has requested removal. For this, we instruct $c^*$ to perform a poisoning attack (adding noise to this client's updates) and then we measure whether the impact of the attack is lessened when the unlearning process has taken place. Through experiments on four datasets, we demonstrate the effectiveness and efficiency of the unlearning strategy under different combinations of parameter settings.
2202.05451
Jia Huei Tan
Jia Huei Tan, Ying Hua Tan, Chee Seng Chan, Joon Huang Chuah
ACORT: A Compact Object Relation Transformer for Parameter Efficient Image Captioning
Neurocomputing; In Press
null
null
null
cs.CV cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent research that applies Transformer-based architectures to image captioning has resulted in state-of-the-art image captioning performance, capitalising on the success of Transformers on natural language tasks. Unfortunately, though these models work well, one major flaw is their large model sizes. To this end, we present three parameter reduction methods for image captioning Transformers: Radix Encoding, cross-layer parameter sharing, and attention parameter sharing. By combining these methods, our proposed ACORT models have 3.7x to 21.6x fewer parameters than the baseline model without compromising test performance. Results on the MS-COCO dataset demonstrate that our ACORT models are competitive against baselines and SOTA approaches, with CIDEr score >=126. Finally, we present qualitative results and ablation studies to demonstrate the efficacy of the proposed changes further. Code and pre-trained models are publicly available at https://github.com/jiahuei/sparse-image-captioning.
[ { "created": "Fri, 11 Feb 2022 05:10:28 GMT", "version": "v1" } ]
2022-02-14
[ [ "Tan", "Jia Huei", "" ], [ "Tan", "Ying Hua", "" ], [ "Chan", "Chee Seng", "" ], [ "Chuah", "Joon Huang", "" ] ]
Recent research that applies Transformer-based architectures to image captioning has resulted in state-of-the-art image captioning performance, capitalising on the success of Transformers on natural language tasks. Unfortunately, though these models work well, one major flaw is their large model sizes. To this end, we present three parameter reduction methods for image captioning Transformers: Radix Encoding, cross-layer parameter sharing, and attention parameter sharing. By combining these methods, our proposed ACORT models have 3.7x to 21.6x fewer parameters than the baseline model without compromising test performance. Results on the MS-COCO dataset demonstrate that our ACORT models are competitive against baselines and SOTA approaches, with CIDEr score >=126. Finally, we present qualitative results and ablation studies to demonstrate the efficacy of the proposed changes further. Code and pre-trained models are publicly available at https://github.com/jiahuei/sparse-image-captioning.
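The Radix Encoding idea can be illustrated with a small sketch: a vocabulary index is rewritten as fixed-length base-R digits, so an embedding table needs on the order of R rows per digit position rather than |V| rows. The function names and round-trip demo below are illustrative, not ACORT's actual implementation:

```python
def radix_encode(token_id, radix, num_digits):
    """Represent a vocabulary index as fixed-length base-`radix` digits
    (most significant digit first)."""
    digits = []
    for _ in range(num_digits):
        digits.append(token_id % radix)
        token_id //= radix
    return digits[::-1]

def radix_decode(digits, radix):
    """Recover the original vocabulary index from its digits."""
    value = 0
    for d in digits:
        value = value * radix + d
    return value
```

For example, with radix 32 a 30k-word vocabulary fits in 3 digits, so the token embedding shrinks from 30k rows to 32 rows shared across positions.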
1208.2900
Lu Yang
Lu Yang and Wei Zhang
On Achievable Degrees of Freedom for MIMO X Channels
18 pages, 6 figures; minor revisions in Abstract and Introduction of version 1
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the achievable degrees of freedom (DoF) of MIMO X channels for constant channel coefficients with $M_t$ antennas at transmitter $t$ and $N_r$ antennas at receiver $r$ ($t,r=1,2$) is studied. A spatial interference alignment and cancellation scheme is proposed to achieve the maximum DoF of the MIMO X channels. The scenario of $M_1\geq M_2\geq N_1\geq N_2$ is first considered and divided into 3 cases: $3N_2<M_1+M_2<2N_1+N_2$ (Case $A$), $M_1+M_2\geq2N_1+N_2$ (Case $B$), and $M_1+M_2\leq3N_2$ (Case $C$). With the proposed scheme, it is shown that in Case $A$, the outer bound $\frac{M_1+M_2+N_2}{2}$ is achievable; in Case $B$, the achievable DoF equals the outer bound $N_1+N_2$ if $M_2>N_1$, otherwise it is 1/2 or 1 less than the outer bound; in Case $C$, the achievable DoF is equal to the outer bound $2/3(M_1+M_2)$ if $(3N_2-M_1-M_2)\bmod 3=0$, and it is 1/3 or 1/6 less than the outer bound if $(3N_2-M_1-M_2)\bmod 3=1 \text{ or } 2$. In the scenario of $M_t\leq N_r$, the exact symmetrical results of DoF can be obtained.
[ { "created": "Tue, 14 Aug 2012 15:34:33 GMT", "version": "v1" }, { "created": "Fri, 24 Aug 2012 02:28:53 GMT", "version": "v2" } ]
2012-08-27
[ [ "Yang", "Lu", "" ], [ "Zhang", "Wei", "" ] ]
In this paper, the achievable degrees of freedom (DoF) of MIMO X channels for constant channel coefficients with $M_t$ antennas at transmitter $t$ and $N_r$ antennas at receiver $r$ ($t,r=1,2$) is studied. A spatial interference alignment and cancellation scheme is proposed to achieve the maximum DoF of the MIMO X channels. The scenario of $M_1\geq M_2\geq N_1\geq N_2$ is first considered and divided into 3 cases: $3N_2<M_1+M_2<2N_1+N_2$ (Case $A$), $M_1+M_2\geq2N_1+N_2$ (Case $B$), and $M_1+M_2\leq3N_2$ (Case $C$). With the proposed scheme, it is shown that in Case $A$, the outer bound $\frac{M_1+M_2+N_2}{2}$ is achievable; in Case $B$, the achievable DoF equals the outer bound $N_1+N_2$ if $M_2>N_1$, otherwise it is 1/2 or 1 less than the outer bound; in Case $C$, the achievable DoF is equal to the outer bound $2/3(M_1+M_2)$ if $(3N_2-M_1-M_2)\bmod 3=0$, and it is 1/3 or 1/6 less than the outer bound if $(3N_2-M_1-M_2)\bmod 3=1 \text{ or } 2$. In the scenario of $M_t\leq N_r$, the exact symmetrical results of DoF can be obtained.
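A concrete instance of Case A, with hypothetical antenna counts chosen to satisfy the case condition, shows how the outer bound evaluates:

```latex
% Example for Case A: M_1 = 4, M_2 = 3, N_1 = 3, N_2 = 2
% (satisfies M_1 \geq M_2 \geq N_1 \geq N_2).
% Case condition 3N_2 < M_1 + M_2 < 2N_1 + N_2 holds: 6 < 7 < 8.
\[
  \mathrm{DoF} \;=\; \frac{M_1 + M_2 + N_2}{2}
             \;=\; \frac{4 + 3 + 2}{2} \;=\; \frac{9}{2},
\]
% i.e. 4.5 total degrees of freedom are achievable, matching the outer bound.
```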
1302.6312
Shams Zawoad
Shams Zawoad, Ragib Hasan
Cloud Forensics: A Meta-Study of Challenges, Approaches, and Open Problems
null
null
null
null
cs.DC cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, cloud computing has become popular as a cost-effective and efficient computing paradigm. Unfortunately, today's cloud computing architectures are not designed for security and forensics. To date, very little research has been done to develop the theory and practice of cloud forensics. Many factors complicate forensic investigations in a cloud environment. First, the storage system is no longer local. Therefore, even with a subpoena, law enforcement agents cannot confiscate the suspect's computer and get access to the suspect's files. Second, each cloud server contains files from many users. Hence, it is not feasible to seize servers from a data center without violating the privacy of many other users. Third, even if the data belonging to a particular suspect is identified, separating it from other users' data is difficult. Moreover, other than the cloud provider's word, there is usually no evidence that links a given data file to a particular suspect. Because of such challenges, clouds cannot be used to store healthcare, business, or national security related data, which require audit and regulatory compliance. In this paper, we systematically examine the cloud forensics problem and explore the challenges and issues in cloud forensics. We then discuss existing research projects and finally, we highlight the open problems and future directions in the cloud forensics research area. We posit that our systematic approach towards understanding the nature and challenges of cloud forensics will allow us to examine possible secure solution approaches, leading to increased trust in and adoption of cloud computing, especially in business, healthcare, and national security. This in turn will lead to lower cost and long-term benefit to our society as a whole.
[ { "created": "Tue, 26 Feb 2013 04:55:53 GMT", "version": "v1" } ]
2013-02-27
[ [ "Zawoad", "Shams", "" ], [ "Hasan", "Ragib", "" ] ]
In recent years, cloud computing has become popular as a cost-effective and efficient computing paradigm. Unfortunately, today's cloud computing architectures are not designed for security and forensics. To date, very little research has been done to develop the theory and practice of cloud forensics. Many factors complicate forensic investigations in a cloud environment. First, the storage system is no longer local. Therefore, even with a subpoena, law enforcement agents cannot confiscate the suspect's computer and get access to the suspect's files. Second, each cloud server contains files from many users. Hence, it is not feasible to seize servers from a data center without violating the privacy of many other users. Third, even if the data belonging to a particular suspect is identified, separating it from other users' data is difficult. Moreover, other than the cloud provider's word, there is usually no evidence that links a given data file to a particular suspect. Because of such challenges, clouds cannot be used to store healthcare, business, or national security related data, which require audit and regulatory compliance. In this paper, we systematically examine the cloud forensics problem and explore the challenges and issues in cloud forensics. We then discuss existing research projects and finally, we highlight the open problems and future directions in the cloud forensics research area. We posit that our systematic approach towards understanding the nature and challenges of cloud forensics will allow us to examine possible secure solution approaches, leading to increased trust in and adoption of cloud computing, especially in business, healthcare, and national security. This in turn will lead to lower cost and long-term benefit to our society as a whole.
2407.08388
Geoff Keeling
Geoff Keeling, Winnie Street
On the attribution of confidence to large language models
22 pages, 0 figures
null
null
null
cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Credences are mental states corresponding to degrees of confidence in propositions. Attribution of credences to Large Language Models (LLMs) is commonplace in the empirical literature on LLM evaluation. Yet the theoretical basis for LLM credence attribution is unclear. We defend three claims. First, our semantic claim is that LLM credence attributions are (at least in general) correctly interpreted literally, as expressing truth-apt beliefs on the part of scientists that purport to describe facts about LLM credences. Second, our metaphysical claim is that the existence of LLM credences is at least plausible, although current evidence is inconclusive. Third, our epistemic claim is that LLM credence attributions made in the empirical literature on LLM evaluation are subject to non-trivial sceptical concerns. It is a distinct possibility that even if LLMs have credences, LLM credence attributions are generally false because the experimental techniques used to assess LLM credences are not truth-tracking.
[ { "created": "Thu, 11 Jul 2024 10:51:06 GMT", "version": "v1" } ]
2024-07-12
[ [ "Keeling", "Geoff", "" ], [ "Street", "Winnie", "" ] ]
Credences are mental states corresponding to degrees of confidence in propositions. Attribution of credences to Large Language Models (LLMs) is commonplace in the empirical literature on LLM evaluation. Yet the theoretical basis for LLM credence attribution is unclear. We defend three claims. First, our semantic claim is that LLM credence attributions are (at least in general) correctly interpreted literally, as expressing truth-apt beliefs on the part of scientists that purport to describe facts about LLM credences. Second, our metaphysical claim is that the existence of LLM credences is at least plausible, although current evidence is inconclusive. Third, our epistemic claim is that LLM credence attributions made in the empirical literature on LLM evaluation are subject to non-trivial sceptical concerns. It is a distinct possibility that even if LLMs have credences, LLM credence attributions are generally false because the experimental techniques used to assess LLM credences are not truth-tracking.
2301.12830
Timo Koch
Timo Koch and Dennis Gl\"aser and Anett Seeland and Sarbani Roy and Katharina Schulze and Kilian Weishaupt and David Boehringer and Sibylle Hermann and Bernd Flemisch
A sustainable infrastructure concept for improved accessibility, reusability, and archival of research software
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Research software is an integral part of most research today and it is widely accepted that research software artifacts should be accessible and reproducible. However, the sustainable archival of research software artifacts is an ongoing effort. We identify research software artifacts as snapshots of the current state of research and an integral part of a sustainable cycle of software development, research, and publication. We develop requirements and recommendations to improve the archival, access, and reuse of research software artifacts based on installable, configurable, extensible research software, and sustainable public open-access infrastructure. The described goal is to enable the reuse and exploration of research software beyond published research results, in parallel with reproducibility efforts, and in line with the FAIR principles for data and software. Research software artifacts can be reused in varying scenarios. To this end, we design a multi-modal representation concept supporting multiple reuse scenarios. We identify types of research software artifacts that can be viewed as different modes of the same software-based research result, for example, installation-free configurable browser-based apps to containerized environments, descriptions in journal publications and software documentation, or source code with installation instructions. We discuss how the sustainability and reuse of research software are enhanced or enabled by a suitable archive infrastructure. Finally, using the example of a pilot project at the University of Stuttgart, Germany -- a collaborative effort between research software developers and infrastructure providers -- we outline practical challenges and experiences.
[ { "created": "Fri, 27 Jan 2023 00:16:23 GMT", "version": "v1" } ]
2023-01-31
[ [ "Koch", "Timo", "" ], [ "Gläser", "Dennis", "" ], [ "Seeland", "Anett", "" ], [ "Roy", "Sarbani", "" ], [ "Schulze", "Katharina", "" ], [ "Weishaupt", "Kilian", "" ], [ "Boehringer", "David", "" ], [ "Hermann", "Sibylle", "" ], [ "Flemisch", "Bernd", "" ] ]
Research software is an integral part of most research today and it is widely accepted that research software artifacts should be accessible and reproducible. However, the sustainable archival of research software artifacts is an ongoing effort. We identify research software artifacts as snapshots of the current state of research and an integral part of a sustainable cycle of software development, research, and publication. We develop requirements and recommendations to improve the archival, access, and reuse of research software artifacts based on installable, configurable, extensible research software, and sustainable public open-access infrastructure. The described goal is to enable the reuse and exploration of research software beyond published research results, in parallel with reproducibility efforts, and in line with the FAIR principles for data and software. Research software artifacts can be reused in varying scenarios. To this end, we design a multi-modal representation concept supporting multiple reuse scenarios. We identify types of research software artifacts that can be viewed as different modes of the same software-based research result, for example, installation-free configurable browser-based apps to containerized environments, descriptions in journal publications and software documentation, or source code with installation instructions. We discuss how the sustainability and reuse of research software are enhanced or enabled by a suitable archive infrastructure. Finally, using the example of a pilot project at the University of Stuttgart, Germany -- a collaborative effort between research software developers and infrastructure providers -- we outline practical challenges and experiences.
1905.01749
Max Noormohammadpour
Mohammad Noormohammadpour, Srikanth Kandula, Cauligi S. Raghavendra, Sriram Rao
Efficient Inter-Datacenter Bulk Transfers with Mixed Completion Time Objectives
Accepted to Elsevier Computer Networks
null
null
null
cs.DC cs.NI cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bulk transfers from one to multiple datacenters can have many different completion time objectives ranging from quickly replicating some $k$ copies to minimizing the time by which the last destination receives a full replica. We design an SDN-style wide-area traffic scheduler that optimizes different completion time objectives for various requests. The scheduler builds, for each bulk transfer, one or more multicast forwarding trees which preferentially use lightly loaded network links. Multiple multicast trees are used per bulk transfer to insulate destinations that have higher available bandwidth and can hence finish quickly from congested destinations. These decisions--how many trees to construct and which receivers to serve using a given tree--result from an optimization problem that minimizes a weighted sum of transfers' completion time objectives and their bandwidth consumption. Results from simulations and emulations on Mininet show that our scheduler, Iris, can improve different completion time objectives by about $2.5\times$.
[ { "created": "Sun, 5 May 2019 21:22:44 GMT", "version": "v1" }, { "created": "Tue, 6 Aug 2019 06:02:51 GMT", "version": "v2" }, { "created": "Sun, 15 Sep 2019 16:14:54 GMT", "version": "v3" } ]
2019-09-17
[ [ "Noormohammadpour", "Mohammad", "" ], [ "Kandula", "Srikanth", "" ], [ "Raghavendra", "Cauligi S.", "" ], [ "Rao", "Sriram", "" ] ]
Bulk transfers from one to multiple datacenters can have many different completion time objectives ranging from quickly replicating some $k$ copies to minimizing the time by which the last destination receives a full replica. We design an SDN-style wide-area traffic scheduler that optimizes different completion time objectives for various requests. The scheduler builds, for each bulk transfer, one or more multicast forwarding trees which preferentially use lightly loaded network links. Multiple multicast trees are used per bulk transfer to insulate destinations that have higher available bandwidth and can hence finish quickly from congested destinations. These decisions--how many trees to construct and which receivers to serve using a given tree--result from an optimization problem that minimizes a weighted sum of transfers' completion time objectives and their bandwidth consumption. Results from simulations and emulations on Mininet show that our scheduler, Iris, can improve different completion time objectives by about $2.5\times$.
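Iris chooses its trees by solving a weighted-sum optimization; as a much simpler illustration of "preferentially use lightly loaded network links", the following Prim-style sketch grows a forwarding tree that greedily attaches each node over its least-loaded usable link (node names and loads are hypothetical, and real multicast trees would only need to span the receivers):

```python
import heapq

def light_load_tree(links, source):
    """Grow a spanning tree from `source`, always attaching the next
    node via the lightest-loaded frontier link.
    `links` maps undirected edges (u, v) -> current load."""
    adj = {}
    for (u, v), load in links.items():
        adj.setdefault(u, []).append((load, v))
        adj.setdefault(v, []).append((load, u))
    tree, seen = [], {source}
    heap = [(load, source, v) for load, v in adj[source]]
    heapq.heapify(heap)
    while heap:
        load, u, v = heapq.heappop(heap)
        if v in seen:
            continue  # already attached via a lighter link
        seen.add(v)
        tree.append((u, v))
        for nload, w in adj[v]:
            if w not in seen:
                heapq.heappush(heap, (nload, v, w))
    return tree
```

Here a congested direct link ("a", "c") is bypassed in favor of two lightly loaded hops, mirroring the scheduler's preference.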
1808.06032
Yufei Wang
Yufei Wang, Zhe Lin, Xiaohui Shen, Jianming Zhang, Scott Cohen
Concept Mask: Large-Scale Segmentation from Semantic Concepts
Accepted to ECCV18
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing works on semantic segmentation typically consider a small number of labels, ranging from tens to a few hundred. With a large number of labels, training and evaluation of such a task become extremely challenging due to correlation between labels and lack of datasets with complete annotations. We formulate semantic segmentation as a problem of image segmentation given a semantic concept, and propose a novel system which can potentially handle an unlimited number of concepts, including objects, parts, stuff, and attributes. We achieve this using a weakly and semi-supervised framework leveraging multiple datasets with different levels of supervision. We first train a deep neural network on a 6M stock image dataset with only image-level labels to learn visual-semantic embedding on 18K concepts. Then, we refine and extend the embedding network to predict an attention map, using a curated dataset with bounding box annotations on 750 concepts. Finally, we train an attention-driven class agnostic segmentation network using an 80-category fully annotated dataset. We perform extensive experiments to validate that the proposed system performs competitively to the state of the art on fully supervised concepts, and is capable of producing accurate segmentations for weakly learned and unseen concepts.
[ { "created": "Sat, 18 Aug 2018 02:26:03 GMT", "version": "v1" } ]
2018-08-21
[ [ "Wang", "Yufei", "" ], [ "Lin", "Zhe", "" ], [ "Shen", "Xiaohui", "" ], [ "Zhang", "Jianming", "" ], [ "Cohen", "Scott", "" ] ]
Existing works on semantic segmentation typically consider a small number of labels, ranging from tens to a few hundred. With a large number of labels, training and evaluation of such a task become extremely challenging due to correlation between labels and lack of datasets with complete annotations. We formulate semantic segmentation as a problem of image segmentation given a semantic concept, and propose a novel system which can potentially handle an unlimited number of concepts, including objects, parts, stuff, and attributes. We achieve this using a weakly and semi-supervised framework leveraging multiple datasets with different levels of supervision. We first train a deep neural network on a 6M stock image dataset with only image-level labels to learn visual-semantic embedding on 18K concepts. Then, we refine and extend the embedding network to predict an attention map, using a curated dataset with bounding box annotations on 750 concepts. Finally, we train an attention-driven class agnostic segmentation network using an 80-category fully annotated dataset. We perform extensive experiments to validate that the proposed system performs competitively to the state of the art on fully supervised concepts, and is capable of producing accurate segmentations for weakly learned and unseen concepts.
2010.01080
Moritz Wolf
Moritz Wolf, Dana Ruiter, Ashwin Geet D'Sa, Liane Reiners, Jan Alexandersson, Dietrich Klakow
HUMAN: Hierarchical Universal Modular ANnotator
7 pages, 4 figures, EMNLP - Demonstrations 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many real-world phenomena are complex and cannot be captured by single-task annotations. This creates a need for subsequent annotations, with interdependent questions and answers describing the nature of the subject at hand. Even when a phenomenon is easily captured by a single task, the high specialisation of most annotation tools can result in having to switch to another tool if the task only slightly changes. We introduce HUMAN, a novel web-based annotation tool that addresses the above problems by a) covering a variety of annotation tasks on both textual and image data, and b) using an internal deterministic state machine, allowing the researcher to chain different annotation tasks in an interdependent manner. Further, the modular nature of the tool makes it easy to define new annotation tasks and integrate machine learning algorithms, e.g., for active learning. HUMAN comes with an easy-to-use graphical user interface that simplifies the annotation task and management.
[ { "created": "Fri, 2 Oct 2020 16:20:30 GMT", "version": "v1" } ]
2020-10-05
[ [ "Wolf", "Moritz", "" ], [ "Ruiter", "Dana", "" ], [ "D'Sa", "Ashwin Geet", "" ], [ "Reiners", "Liane", "" ], [ "Alexandersson", "Jan", "" ], [ "Klakow", "Dietrich", "" ] ]
Many real-world phenomena are complex and cannot be captured by single-task annotations. This creates a need for subsequent annotations, with interdependent questions and answers describing the nature of the subject at hand. Even when a phenomenon is easily captured by a single task, the high specialisation of most annotation tools can result in having to switch to another tool if the task only slightly changes. We introduce HUMAN, a novel web-based annotation tool that addresses the above problems by a) covering a variety of annotation tasks on both textual and image data, and b) using an internal deterministic state machine, allowing the researcher to chain different annotation tasks in an interdependent manner. Further, the modular nature of the tool makes it easy to define new annotation tasks and integrate machine learning algorithms, e.g., for active learning. HUMAN comes with an easy-to-use graphical user interface that simplifies the annotation task and management.
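Chaining interdependent annotation tasks through a deterministic state machine can be sketched as follows; the schema format and question names are illustrative guesses, not HUMAN's actual configuration language:

```python
def run_annotation_chain(states, start, answers):
    """Walk a deterministic state machine where each state is an
    annotation question and the annotator's answer selects the next
    state.  `states` maps state -> {answer: next_state}."""
    path, state = [start], start
    for answer in answers:
        state = states[state][answer]  # deterministic transition
        path.append(state)
    return path
```

For example, an answer of "yes" to a first question can route the annotator into a follow-up question that a "no" answer would skip entirely.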
1407.6915
Brad Rees
Rostislav Tsiomenko and Bradley S. Rees
Accelerating Fast Fourier Transforms Using Hadoop and CUDA
6 pages. Was submitted to ICDCS 2013 but not accepted
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been considerable research into improving Fast Fourier Transform (FFT) performance through parallelization and optimization for specialized hardware. However, even with those advancements, processing of very large files, over 1TB in size, remains prohibitively slow. Analysts performing signal processing are forced to wait hours or days for results, which disrupts their workflow and decreases productivity. In this paper we present a unique approach that not only parallelizes the workload over multi-cores, but distributes the problem over a cluster of graphics processing unit (GPU)-equipped servers. By utilizing Hadoop and CUDA, we can take advantage of inexpensive servers while still exceeding the processing power of a dedicated supercomputer, as demonstrated in our results using Amazon EC2.
[ { "created": "Fri, 25 Jul 2014 14:25:35 GMT", "version": "v1" } ]
2014-07-28
[ [ "Tsiomenko", "Rostislav", "" ], [ "Rees", "Bradley S.", "" ] ]
There has been considerable research into improving Fast Fourier Transform (FFT) performance through parallelization and optimization for specialized hardware. However, even with those advancements, processing of very large files, over 1TB in size, remains prohibitively slow. Analysts performing signal processing are forced to wait hours or days for results, which disrupts their workflow and decreases productivity. In this paper we present a unique approach that not only parallelizes the workload over multi-cores, but distributes the problem over a cluster of graphics processing unit (GPU)-equipped servers. By utilizing Hadoop and CUDA, we can take advantage of inexpensive servers while still exceeding the processing power of a dedicated supercomputer, as demonstrated in our results using Amazon EC2.
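The decomposition that makes a huge FFT distributable across mappers and GPUs is the classic Cooley-Tukey four-step split. This NumPy sketch (not the paper's Hadoop/CUDA code) shows how a length n1*n2 transform factors into many small, independent FFTs plus a twiddle-factor multiplication, each batch of which could be shipped to a separate worker:

```python
import numpy as np

def distributed_fft(x, n1, n2):
    """Four-step (Cooley-Tukey) split of a length n1*n2 FFT.
    Each column FFT, and later each row FFT, is independent and
    could run on a separate node or GPU."""
    a = x.reshape(n1, n2)                  # a[j1, j2] = x[j1*n2 + j2]
    a = np.fft.fft(a, axis=0)              # n2 independent length-n1 FFTs
    k1 = np.arange(n1).reshape(-1, 1)
    j2 = np.arange(n2).reshape(1, -1)
    a = a * np.exp(-2j * np.pi * k1 * j2 / (n1 * n2))  # twiddle factors
    a = np.fft.fft(a, axis=1)              # n1 independent length-n2 FFTs
    return a.T.ravel()                      # X[k2*n1 + k1]
```

The output matches a direct full-length FFT, which is what lets a cluster reassemble partial results into the exact transform.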
2301.10127
Erik Wallin
Erik Wallin, Lennart Svensson, Fredrik Kahl, Lars Hammarstrand
Improving Open-Set Semi-Supervised Learning with Self-Supervision
WACV2024
null
null
null
cs.LG cs.CV stat.ML
http://creativecommons.org/licenses/by/4.0/
Open-set semi-supervised learning (OSSL) embodies a practical scenario within semi-supervised learning, wherein the unlabeled training set encompasses classes absent from the labeled set. Many existing OSSL methods assume that these out-of-distribution data are harmful and put effort into excluding data belonging to unknown classes from the training objective. In contrast, we propose an OSSL framework that facilitates learning from all unlabeled data through self-supervision. Additionally, we utilize an energy-based score to accurately recognize data belonging to the known classes, making our method well-suited for handling uncurated data in deployment. We show through extensive experimental evaluations that our method yields state-of-the-art results on many of the evaluated benchmark problems in terms of closed-set accuracy and open-set recognition when compared with existing methods for OSSL. Our code is available at https://github.com/walline/ssl-tf2-sefoss.
[ { "created": "Tue, 24 Jan 2023 16:46:37 GMT", "version": "v1" }, { "created": "Wed, 8 Mar 2023 20:02:28 GMT", "version": "v2" }, { "created": "Wed, 29 Nov 2023 23:02:49 GMT", "version": "v3" } ]
2023-12-04
[ [ "Wallin", "Erik", "" ], [ "Svensson", "Lennart", "" ], [ "Kahl", "Fredrik", "" ], [ "Hammarstrand", "Lars", "" ] ]
Open-set semi-supervised learning (OSSL) embodies a practical scenario within semi-supervised learning, wherein the unlabeled training set encompasses classes absent from the labeled set. Many existing OSSL methods assume that these out-of-distribution data are harmful and put effort into excluding data belonging to unknown classes from the training objective. In contrast, we propose an OSSL framework that facilitates learning from all unlabeled data through self-supervision. Additionally, we utilize an energy-based score to accurately recognize data belonging to the known classes, making our method well-suited for handling uncurated data in deployment. We show through extensive experimental evaluations that our method yields state-of-the-art results on many of the evaluated benchmark problems in terms of closed-set accuracy and open-set recognition when compared with existing methods for OSSL. Our code is available at https://github.com/walline/ssl-tf2-sefoss.
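An "energy-based score" of the kind mentioned is typically the free energy of a sample's logits. The sketch below assumes the standard E(x) = -T * logsumexp(logits/T) definition, which may differ in detail from the paper's scoring function:

```python
import math

def energy_score(logits, temperature=1.0):
    """Free-energy score of a classifier's logits; lower energy
    suggests the sample belongs to a known (in-distribution) class."""
    m = max(l / temperature for l in logits)  # stabilize the log-sum-exp
    s = sum(math.exp(l / temperature - m) for l in logits)
    return -temperature * (m + math.log(s))
```

A sample the classifier is confident about (one large logit) gets markedly lower energy than one with flat logits, which is the signal used to recognize known-class data.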
2111.09908
Corban Rivera
Corban G. Rivera, David A Handelman
Visual Goal-Directed Meta-Learning with Contextual Planning Networks
null
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
The goal of meta-learning is to generalize to new tasks and goals as quickly as possible. Ideally, we would like approaches that generalize to new goals and tasks on the first attempt. Toward that end, we introduce contextual planning networks (CPN). Tasks are represented as goal images and used to condition the approach. We evaluate CPN along with several other approaches adapted for zero-shot goal-directed meta-learning. We evaluate these approaches across 24 distinct manipulation tasks using Metaworld benchmark tasks. We found that CPN outperformed several approaches and baselines on one task and was competitive with existing approaches on others. We demonstrate the approach on a physical platform on Jenga tasks using a Kinova Jaco robotic arm.
[ { "created": "Thu, 18 Nov 2021 19:11:01 GMT", "version": "v1" } ]
2021-11-22
[ [ "Rivera", "Corban G.", "" ], [ "Handelman", "David A", "" ] ]
The goal of meta-learning is to generalize to new tasks and goals as quickly as possible. Ideally, we would like approaches that generalize to new goals and tasks on the first attempt. Toward that end, we introduce contextual planning networks (CPN). Tasks are represented as goal images and used to condition the approach. We evaluate CPN along with several other approaches adapted for zero-shot goal-directed meta-learning. We evaluate these approaches across 24 distinct manipulation tasks using Metaworld benchmark tasks. We found that CPN outperformed several approaches and baselines on one task and was competitive with existing approaches on others. We demonstrate the approach on a physical platform on Jenga tasks using a Kinova Jaco robotic arm.
1405.4733
Ocan Sankur
Jean-Fran\c{c}ois Raskin, Ocan Sankur
Multiple-Environment Markov Decision Processes
null
null
null
null
cs.LO cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Multi-Environment Markov Decision Processes (MEMDPs), which are MDPs with a set of probabilistic transition functions. The goal in an MEMDP is to synthesize a single controller with guaranteed performance against all environments even though the environment is unknown a priori. While MEMDPs can be seen as a special class of partially observable MDPs, we show that several verification problems that are undecidable for partially observable MDPs are decidable for MEMDPs and sometimes even have efficient solutions.
[ { "created": "Mon, 19 May 2014 14:21:15 GMT", "version": "v1" }, { "created": "Wed, 3 Dec 2014 10:42:46 GMT", "version": "v2" } ]
2014-12-04
[ [ "Raskin", "Jean-François", "" ], [ "Sankur", "Ocan", "" ] ]
We introduce Multi-Environment Markov Decision Processes (MEMDPs), which are MDPs with a set of probabilistic transition functions. The goal in an MEMDP is to synthesize a single controller with guaranteed performance against all environments even though the environment is unknown a priori. While MEMDPs can be seen as a special class of partially observable MDPs, we show that several verification problems that are undecidable for partially observable MDPs are decidable for MEMDPs and sometimes even have efficient solutions.
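The MEMDP objective, one controller judged against several candidate transition functions, can be sketched with plain policy evaluation; the data layout (lists of per-state action dictionaries) is illustrative, not the paper's formalism:

```python
def policy_value(policy, transitions, rewards, gamma=0.9, iters=200):
    """Expected discounted reward of a fixed memoryless policy in ONE
    environment, by iterative policy evaluation.
    transitions[s][action] maps successor state -> probability."""
    n = len(rewards)
    v = [0.0] * n
    for _ in range(iters):
        v = [rewards[s] + gamma * sum(p * v[t]
             for t, p in transitions[s][policy[s]].items())
             for s in range(n)]
    return v

def guaranteed_value(policy, environments, rewards, start=0):
    """MEMDP-style guarantee: the controller's worst-case value over
    every possible transition function."""
    return min(policy_value(policy, env, rewards)[start]
               for env in environments)
```

A policy that does well in one environment but poorly in another receives the poor environment's value, capturing the "guaranteed against all environments" requirement.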
1410.7582
Kas{\i}m Sinan Y{\i}ld{\i}r{\i}m
Kas{\i}m Sinan Y{\i}ld{\i}r{\i}m and \"Onder G\"urcan
Adaptive Synchronization of Robotic Sensor Networks
First International Workshop on Robotic Sensor Networks part of Cyber-Physical Systems Week, Berlin, Germany, 14 April 2014
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The main focus of recent time synchronization research is developing power-efficient synchronization methods that meet pre-defined accuracy requirements. However, an aspect that has often been overlooked is the high dynamics of the network topology due to the mobility of the nodes. Employing existing flooding-based and peer-to-peer synchronization methods, are networked robots still able to adapt and self-adjust their logical clocks under mobile network dynamics? In this paper, we present the application and the evaluation of the existing synchronization methods on robotic sensor networks. We show through simulations that Adaptive Value Tracking synchronization is robust and efficient under mobility. Hence, reducing the time synchronization problem in robotic sensor networks to a dynamic value-search problem is preferable to the existing synchronization methods in the literature.
[ { "created": "Tue, 28 Oct 2014 11:02:23 GMT", "version": "v1" } ]
2014-10-29
[ [ "Yıldırım", "Kasım Sinan", "" ], [ "Gürcan", "Önder", "" ] ]
The main focus of recent time synchronization research is developing power-efficient synchronization methods that meet pre-defined accuracy requirements. However, an aspect that has often been overlooked is the high dynamics of the network topology due to the mobility of the nodes. Employing existing flooding-based and peer-to-peer synchronization methods, are networked robots still able to adapt and self-adjust their logical clocks under mobile network dynamics? In this paper, we present the application and the evaluation of the existing synchronization methods on robotic sensor networks. We show through simulations that Adaptive Value Tracking synchronization is robust and efficient under mobility. Hence, reducing the time synchronization problem in robotic sensor networks to a dynamic value-search problem is preferable to the existing synchronization methods in the literature.
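The "dynamic value searching" idea behind Adaptive Value Tracking can be sketched very roughly: an estimate is nudged in the direction of binary feedback, with a step size that grows while the feedback direction is stable and shrinks when it reverses. This is a simplified caricature, not the actual AVT algorithm or its parameters:

```python
def avt_step(estimate, delta, feedback, last_feedback, factor=2.0):
    """One simplified adaptive-value-tracking update: move `estimate`
    in the feedback direction ("up"/"down"); widen the step while the
    direction is stable, narrow it on a reversal."""
    delta = delta * factor if feedback == last_feedback else delta / factor
    estimate += delta if feedback == "up" else -delta
    return estimate, delta
```

Driving this loop with feedback derived from a neighbor's clock would pull a node's logical-clock estimate toward its neighbor, even as the target drifts.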
1812.11950
Menglei Zhang
Menglei Zhang, Zhou Liu, Lei Yu
Image Super-Resolution via RL-CSC: When Residual Learning Meets Convolutional Sparse Coding
10 pages, 8 figures, 4 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a simple yet effective model for Single Image Super-Resolution (SISR), by combining the merits of Residual Learning and Convolutional Sparse Coding (RL-CSC). Our model is inspired by the Learned Iterative Shrinkage-Threshold Algorithm (LISTA). We extend LISTA to its convolutional version and build the main part of our model by strictly following the convolutional form, which improves the network's interpretability. Specifically, the convolutional sparse codings of input feature maps are learned in a recursive manner, and high-frequency information can be recovered from these CSCs. More importantly, residual learning is applied to alleviate the training difficulty when the network goes deeper. Extensive experiments on benchmark datasets demonstrate the effectiveness of our method. RL-CSC (30 layers) outperforms several recent state-of-the-art methods, e.g., DRRN (52 layers) and MemNet (80 layers), in both accuracy and visual quality. Codes and more results are available at https://github.com/axzml/RL-CSC.
[ { "created": "Mon, 31 Dec 2018 18:44:26 GMT", "version": "v1" } ]
2019-01-01
[ [ "Zhang", "Menglei", "" ], [ "Liu", "Zhou", "" ], [ "Yu", "Lei", "" ] ]
We propose a simple yet effective model for Single Image Super-Resolution (SISR), by combining the merits of Residual Learning and Convolutional Sparse Coding (RL-CSC). Our model is inspired by the Learned Iterative Shrinkage-Threshold Algorithm (LISTA). We extend LISTA to its convolutional version and build the main part of our model by strictly following the convolutional form, which improves the network's interpretability. Specifically, the convolutional sparse codings of input feature maps are learned in a recursive manner, and high-frequency information can be recovered from these CSCs. More importantly, residual learning is applied to alleviate the training difficulty when the network goes deeper. Extensive experiments on benchmark datasets demonstrate the effectiveness of our method. RL-CSC (30 layers) outperforms several recent state-of-the-art methods, e.g., DRRN (52 layers) and MemNet (80 layers), in both accuracy and visual quality. Codes and more results are available at https://github.com/axzml/RL-CSC.
1901.11358
Ansari Saleh Ahmar
Harryanto, Muchriana Muchran, Ansari Saleh Ahmar
Application of TAM model to the use of information technology
null
International Journal of Engineering & Technology (UAE), 7 (2.9) (2018) 37-40
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
The purpose of this research is to examine the application of a modified TAM model, with experience entered as a moderating variable, to assess one's intention to use technology, especially internet banking. Data were obtained through the distribution of questionnaires to customers. The study population consists of bank customers registered as users of internet banking services. The sample was selected using a simple random sampling technique. Hypotheses were tested using the Partial Least Square (PLS) method through the AMOS program. The results showed that of the five proposed hypotheses, two were significant and three were not. Perceived ease of use is significantly related to perceived usefulness. Perceived usefulness is not significantly related to intention to use. Perceived ease of use is significantly related to intention to use moderated by experience and not significantly correlated with intention to use moderated by experience.
[ { "created": "Sun, 13 Jan 2019 22:54:30 GMT", "version": "v1" } ]
2019-02-01
[ [ "Harryanto", "", "" ], [ "Muchran", "Muchriana", "" ], [ "Ahmar", "Ansari Saleh", "" ] ]
The purpose of this research is to examine the application of a modified TAM model, with experience entered as a moderating variable, to assess one's intention to use technology, especially internet banking. Data were obtained through the distribution of questionnaires to customers. The study population consists of bank customers registered as users of internet banking services. The sample was selected using a simple random sampling technique. Hypotheses were tested using the Partial Least Square (PLS) method through the AMOS program. The results showed that of the five proposed hypotheses, two were significant and three were not. Perceived ease of use is significantly related to perceived usefulness. Perceived usefulness is not significantly related to intention to use. Perceived ease of use is significantly related to intention to use moderated by experience and not significantly correlated with intention to use moderated by experience.
2403.01700
Haoxu Wang
Haoxu Wang, Ming Cheng, Qiang Fu, Ming Li
Robust Wake Word Spotting With Frame-Level Cross-Modal Attention Based Audio-Visual Conformer
Accepted by ICASSP 2024
null
null
null
cs.SD cs.MM eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, neural network-based Wake Word Spotting has achieved good performance on clean audio samples but struggles in noisy environments. Audio-Visual Wake Word Spotting (AVWWS) has received considerable attention because visual lip movement information is not affected by complex acoustic scenes. Previous works usually use simple addition or concatenation for multi-modal fusion; the inter-modal correlation remains relatively under-explored. In this paper, we propose a novel module called Frame-Level Cross-Modal Attention (FLCMA) to improve the performance of AVWWS systems. This module can help model multi-modal information at the frame level through synchronous lip movements and speech signals. We train the end-to-end FLCMA-based Audio-Visual Conformer and further improve the performance by fine-tuning pre-trained uni-modal models for the AVWWS task. The proposed system achieves a new state-of-the-art result (4.57% WWS score) on the far-field MISP dataset.
[ { "created": "Mon, 4 Mar 2024 03:25:42 GMT", "version": "v1" } ]
2024-03-05
[ [ "Wang", "Haoxu", "" ], [ "Cheng", "Ming", "" ], [ "Fu", "Qiang", "" ], [ "Li", "Ming", "" ] ]
In recent years, neural network-based Wake Word Spotting has achieved good performance on clean audio samples but struggles in noisy environments. Audio-Visual Wake Word Spotting (AVWWS) has received considerable attention because visual lip movement information is not affected by complex acoustic scenes. Previous works usually use simple addition or concatenation for multi-modal fusion; the inter-modal correlation remains relatively under-explored. In this paper, we propose a novel module called Frame-Level Cross-Modal Attention (FLCMA) to improve the performance of AVWWS systems. This module can help model multi-modal information at the frame level through synchronous lip movements and speech signals. We train the end-to-end FLCMA-based Audio-Visual Conformer and further improve the performance by fine-tuning pre-trained uni-modal models for the AVWWS task. The proposed system achieves a new state-of-the-art result (4.57% WWS score) on the far-field MISP dataset.
2305.07865
Nian Guo
Nian Guo, Shansuo Liang, Wei Han
Power Allocation for the Base Matrix of Spatially Coupled Sparse Regression Codes
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate power allocation for the base matrix of a spatially coupled sparse regression code (SC-SPARC) for reliable communications over an additive white Gaussian noise channel. A conventional SC-SPARC allocates power uniformly to the non-zero entries of its base matrix. Yet, to achieve the channel capacity with uniform power allocation, the coupling width and the coupling length of the base matrix must satisfy regularity conditions and tend to infinity as the rate approaches the capacity. For a base matrix with a pair of finite and arbitrarily chosen coupling width and coupling length, we propose a novel power allocation policy, termed V-power allocation. V-power allocation assigns more power to the outer columns of the base matrix to jumpstart the decoding process and less power to the inner columns, resembling the shape of the letter V. We show that V-power allocation outperforms uniform power allocation since it ensures successful decoding for a wider range of signal-to-noise ratios given a code rate in the limit of large blocklength. In the finite blocklength regime, we show by simulations that power allocations imitating the shape of the letter V improve the error performance of an SC-SPARC.
[ { "created": "Sat, 13 May 2023 08:38:25 GMT", "version": "v1" } ]
2023-05-16
[ [ "Guo", "Nian", "" ], [ "Liang", "Shansuo", "" ], [ "Han", "Wei", "" ] ]
We investigate power allocation for the base matrix of a spatially coupled sparse regression code (SC-SPARC) for reliable communications over an additive white Gaussian noise channel. A conventional SC-SPARC allocates power uniformly to the non-zero entries of its base matrix. Yet, to achieve the channel capacity with uniform power allocation, the coupling width and the coupling length of the base matrix must satisfy regularity conditions and tend to infinity as the rate approaches the capacity. For a base matrix with a pair of finite and arbitrarily chosen coupling width and coupling length, we propose a novel power allocation policy, termed V-power allocation. V-power allocation assigns more power to the outer columns of the base matrix to jumpstart the decoding process and less power to the inner columns, resembling the shape of the letter V. We show that V-power allocation outperforms uniform power allocation since it ensures successful decoding for a wider range of signal-to-noise ratios given a code rate in the limit of large blocklength. In the finite blocklength regime, we show by simulations that power allocations imitating the shape of the letter V improve the error performance of an SC-SPARC.
2012.11188
Tom Tseng
Tom Tseng, Laxman Dhulipala, Julian Shun
Parallel Index-Based Structural Graph Clustering and Its Approximation
null
null
10.1145/3448016.3457278
null
cs.DB cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
SCAN (Structural Clustering Algorithm for Networks) is a well-studied, widely used graph clustering algorithm. For large graphs, however, sequential SCAN variants are prohibitively slow, and parallel SCAN variants do not effectively share work among queries with different SCAN parameter settings. Since users of SCAN often explore many parameter settings to find good clusterings, it is worthwhile to precompute an index that speeds up queries. This paper presents a practical and provably efficient parallel index-based SCAN algorithm based on GS*-Index, a recent sequential algorithm. Our parallel algorithm improves upon the asymptotic work of the sequential algorithm by using integer sorting. It is also highly parallel, achieving logarithmic span (parallel time) for both index construction and clustering queries. Furthermore, we apply locality-sensitive hashing (LSH) to design a novel approximate SCAN algorithm and prove guarantees for its clustering behavior. We present an experimental evaluation of our algorithms on large real-world graphs. On a 48-core machine with two-way hyper-threading, our parallel index construction achieves 50--151$\times$ speedup over the construction of GS*-Index. In fact, even on a single thread, our index construction algorithm is faster than GS*-Index. Our parallel index query implementation achieves 5--32$\times$ speedup over GS*-Index queries across a range of SCAN parameter values, and our implementation is always faster than ppSCAN, a state-of-the-art parallel SCAN algorithm. Moreover, our experiments show that applying LSH results in faster index construction while maintaining good clustering quality.
[ { "created": "Mon, 21 Dec 2020 09:07:44 GMT", "version": "v1" }, { "created": "Tue, 30 Mar 2021 21:51:50 GMT", "version": "v2" } ]
2021-04-01
[ [ "Tseng", "Tom", "" ], [ "Dhulipala", "Laxman", "" ], [ "Shun", "Julian", "" ] ]
SCAN (Structural Clustering Algorithm for Networks) is a well-studied, widely used graph clustering algorithm. For large graphs, however, sequential SCAN variants are prohibitively slow, and parallel SCAN variants do not effectively share work among queries with different SCAN parameter settings. Since users of SCAN often explore many parameter settings to find good clusterings, it is worthwhile to precompute an index that speeds up queries. This paper presents a practical and provably efficient parallel index-based SCAN algorithm based on GS*-Index, a recent sequential algorithm. Our parallel algorithm improves upon the asymptotic work of the sequential algorithm by using integer sorting. It is also highly parallel, achieving logarithmic span (parallel time) for both index construction and clustering queries. Furthermore, we apply locality-sensitive hashing (LSH) to design a novel approximate SCAN algorithm and prove guarantees for its clustering behavior. We present an experimental evaluation of our algorithms on large real-world graphs. On a 48-core machine with two-way hyper-threading, our parallel index construction achieves 50--151$\times$ speedup over the construction of GS*-Index. In fact, even on a single thread, our index construction algorithm is faster than GS*-Index. Our parallel index query implementation achieves 5--32$\times$ speedup over GS*-Index queries across a range of SCAN parameter values, and our implementation is always faster than ppSCAN, a state-of-the-art parallel SCAN algorithm. Moreover, our experiments show that applying LSH results in faster index construction while maintaining good clustering quality.
2104.02461
Sanjeev Saxena
Waseem Akram and Sanjeev Saxena
Sorted Range Reporting and Range Minima Queries
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given an array A[1: n] of n elements drawn from an ordered set, the sorted range selection problem is to build a data structure that can be used to answer the following type of queries efficiently: Given a pair of indices i, j $ (1\le i\le j \le n)$, and a positive integer k, report the k smallest elements from the sub-array A[i: j] in order. Brodal et al. (Brodal, G.S., Fagerberg, R., Greve, M., and L{\'o}pez-Ortiz, A., Online sorted range reporting. Algorithms and Computation (2009) pp. 173--182) introduced the problem and gave an optimal solution. After O(n log n) time for preprocessing, the query time is O(k). The space used is O(n). In this paper, we propose the only other possible optimal trade-off for the problem. We present a linear space solution to the problem that takes O(k log k) time to answer a range selection query. The preprocessing time is O(n). Moreover, the proposed algorithm reports the output elements one by one in non-decreasing order. Our solution is simple and practical. We also describe an extremely simple method for range minima queries (most of whose parts are known) which takes almost (but not exactly) linear time. We believe that this method may be, in practice, faster and easier to implement in most cases.
[ { "created": "Tue, 6 Apr 2021 12:39:28 GMT", "version": "v1" }, { "created": "Fri, 16 Dec 2022 07:17:24 GMT", "version": "v2" }, { "created": "Wed, 23 Aug 2023 08:27:22 GMT", "version": "v3" }, { "created": "Tue, 19 Sep 2023 09:55:20 GMT", "version": "v4" } ]
2023-09-20
[ [ "Akram", "Waseem", "" ], [ "Saxena", "Sanjeev", "" ] ]
Given an array A[1: n] of n elements drawn from an ordered set, the sorted range selection problem is to build a data structure that can be used to answer the following type of queries efficiently: Given a pair of indices i, j $ (1\le i\le j \le n)$, and a positive integer k, report the k smallest elements from the sub-array A[i: j] in order. Brodal et al. (Brodal, G.S., Fagerberg, R., Greve, M., and L{\'o}pez-Ortiz, A., Online sorted range reporting. Algorithms and Computation (2009) pp. 173--182) introduced the problem and gave an optimal solution. After O(n log n) time for preprocessing, the query time is O(k). The space used is O(n). In this paper, we propose the only other possible optimal trade-off for the problem. We present a linear space solution to the problem that takes O(k log k) time to answer a range selection query. The preprocessing time is O(n). Moreover, the proposed algorithm reports the output elements one by one in non-decreasing order. Our solution is simple and practical. We also describe an extremely simple method for range minima queries (most of whose parts are known) which takes almost (but not exactly) linear time. We believe that this method may be, in practice, faster and easier to implement in most cases.
1307.0849
James Yang
James Yifei Yang and Bruce Hajek
Single Video Performance Analysis for Video-on-Demand Systems
null
null
null
null
cs.NI cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the content placement problem for cache delivery video-on-demand systems under static random network topologies with fixed heavy-tailed video demand. The performance measure is the amount of server load; we wish to minimize the total download rate for all users from the server and maximize the rate from caches. Our approach reduces the analysis for multiple videos to consideration of decoupled systems with a single video each. For each placement policy, insights gained from the single video analysis carry back to the original multiple video content placement problem. Finally, we propose a hybrid placement technique that achieves near optimal performance with low complexity.
[ { "created": "Tue, 2 Jul 2013 20:56:37 GMT", "version": "v1" } ]
2013-07-04
[ [ "Yang", "James Yifei", "" ], [ "Hajek", "Bruce", "" ] ]
We study the content placement problem for cache delivery video-on-demand systems under static random network topologies with fixed heavy-tailed video demand. The performance measure is the amount of server load; we wish to minimize the total download rate for all users from the server and maximize the rate from caches. Our approach reduces the analysis for multiple videos to consideration of decoupled systems with a single video each. For each placement policy, insights gained from the single video analysis carry back to the original multiple video content placement problem. Finally, we propose a hybrid placement technique that achieves near optimal performance with low complexity.
1103.5580
Saheeb Kayani Engr.
Saheeb Ahmed Kayani
Designing a Miniature Wheel Arrangement for Mobile Robot Platforms
Final published version, hardcopy available from technical library of NUST College of E&ME, Rawalpindi, Pakistan on request
null
null
DME-RR-02
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this research report, details of the design of a miniature wheel arrangement are presented. This miniature wheel arrangement is essentially a direction control mechanism intended for use on a mobile robot platform or base. The design is a specific one, employing a stepper motor as the actuator, and as described can only be used on a certain type of wheeled robot. However, as a basic steering control element, more than one of these miniature wheel arrangements can be grouped together to implement more elaborate and intelligent direction control schemes on varying configurations of wheeled mobile robot platforms.
[ { "created": "Tue, 29 Mar 2011 09:35:49 GMT", "version": "v1" }, { "created": "Wed, 30 Mar 2011 09:32:01 GMT", "version": "v2" }, { "created": "Sat, 2 Apr 2011 10:20:28 GMT", "version": "v3" } ]
2011-04-05
[ [ "Kayani", "Saheeb Ahmed", "" ] ]
In this research report, details of the design of a miniature wheel arrangement are presented. This miniature wheel arrangement is essentially a direction control mechanism intended for use on a mobile robot platform or base. The design is a specific one, employing a stepper motor as the actuator, and as described can only be used on a certain type of wheeled robot. However, as a basic steering control element, more than one of these miniature wheel arrangements can be grouped together to implement more elaborate and intelligent direction control schemes on varying configurations of wheeled mobile robot platforms.
2011.00310
Artem Kramov
S. D. Pogorilyy and A. A. Kramov
Method of the coherence evaluation of Ukrainian text
16 pages, in Ukrainian, 5 figures
Data Recording, Storage & Processing. 2018. Vol. 20, Issue 4. P. 64-75
10.35681/1560-9189.2018
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the growing role of SEO technologies, it is necessary to perform an automated analysis of an article's quality. Such an approach helps both to return the most intelligible pages for the user's query and to raise websites' positions toward the top of query results. Automated assessment of coherence is one part of the complex analysis of a text. In this article, the main methods for measuring text coherence in the Ukrainian language are analyzed. The expediency of using the semantic similarity graph method in comparison with other methods is explained. An improvement of that method by pre-training the neural network for vector representations of sentences is suggested. An experimental examination of the original method and its modifications is performed. Training and examination procedures are carried out on a corpus of Ukrainian texts previously retrieved from abstracts and full texts of Ukrainian scientific articles. The testing procedure is implemented by performing two typical tasks for text coherence assessment: the document discrimination task and the insertion task. Based on this analysis, the most effective combination of the method's modification and its parameter for measuring text coherence is identified.
[ { "created": "Sat, 31 Oct 2020 16:48:55 GMT", "version": "v1" } ]
2020-11-03
[ [ "Pogorilyy", "S. D.", "" ], [ "Kramov", "A. A.", "" ] ]
Due to the growing role of SEO technologies, it is necessary to perform an automated analysis of an article's quality. Such an approach helps both to return the most intelligible pages for the user's query and to raise websites' positions toward the top of query results. Automated assessment of coherence is one part of the complex analysis of a text. In this article, the main methods for measuring text coherence in the Ukrainian language are analyzed. The expediency of using the semantic similarity graph method in comparison with other methods is explained. An improvement of that method by pre-training the neural network for vector representations of sentences is suggested. An experimental examination of the original method and its modifications is performed. Training and examination procedures are carried out on a corpus of Ukrainian texts previously retrieved from abstracts and full texts of Ukrainian scientific articles. The testing procedure is implemented by performing two typical tasks for text coherence assessment: the document discrimination task and the insertion task. Based on this analysis, the most effective combination of the method's modification and its parameter for measuring text coherence is identified.
1709.01190
Yiqiu Wang
Yiqiu Wang, Anshumali Shrivastava, Jonathan Wang, Junghee Ryu
FLASH: Randomized Algorithms Accelerated over CPU-GPU for Ultra-High Dimensional Similarity Search
null
null
null
null
cs.DS cs.DB cs.DC cs.IR cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present FLASH (\textbf{F}ast \textbf{L}SH \textbf{A}lgorithm for \textbf{S}imilarity search accelerated with \textbf{H}PC), a similarity search system for ultra-high dimensional datasets on a single machine, that does not require similarity computations and is tailored for high-performance computing platforms. By leveraging an LSH-style randomized indexing procedure and combining it with several principled techniques, such as reservoir sampling, recent advances in one-pass minwise hashing, and count-based estimations, we reduce the computational and parallelization costs of similarity search, while retaining sound theoretical guarantees. We evaluate FLASH on several real, high-dimensional datasets from different domains, including text, malicious URL, click-through prediction, social networks, etc. Our experiments shed new light on the difficulties associated with datasets having several million dimensions. Current state-of-the-art implementations either fail on the presented scale or are orders of magnitude slower than FLASH. FLASH is capable of computing an approximate k-NN graph, from scratch, over the full webspam dataset (1.3 billion nonzeros) in less than 10 seconds. Computing a full k-NN graph in less than 10 seconds on the webspam dataset, using brute-force ($n^2D$), will require at least 20 teraflops. We provide CPU and GPU implementations of FLASH for replicability of our results.
[ { "created": "Mon, 4 Sep 2017 23:09:19 GMT", "version": "v1" }, { "created": "Tue, 3 Jul 2018 07:09:23 GMT", "version": "v2" } ]
2018-07-04
[ [ "Wang", "Yiqiu", "" ], [ "Shrivastava", "Anshumali", "" ], [ "Wang", "Jonathan", "" ], [ "Ryu", "Junghee", "" ] ]
We present FLASH (\textbf{F}ast \textbf{L}SH \textbf{A}lgorithm for \textbf{S}imilarity search accelerated with \textbf{H}PC), a similarity search system for ultra-high dimensional datasets on a single machine, that does not require similarity computations and is tailored for high-performance computing platforms. By leveraging an LSH-style randomized indexing procedure and combining it with several principled techniques, such as reservoir sampling, recent advances in one-pass minwise hashing, and count-based estimations, we reduce the computational and parallelization costs of similarity search, while retaining sound theoretical guarantees. We evaluate FLASH on several real, high-dimensional datasets from different domains, including text, malicious URL, click-through prediction, social networks, etc. Our experiments shed new light on the difficulties associated with datasets having several million dimensions. Current state-of-the-art implementations either fail on the presented scale or are orders of magnitude slower than FLASH. FLASH is capable of computing an approximate k-NN graph, from scratch, over the full webspam dataset (1.3 billion nonzeros) in less than 10 seconds. Computing a full k-NN graph in less than 10 seconds on the webspam dataset, using brute-force ($n^2D$), will require at least 20 teraflops. We provide CPU and GPU implementations of FLASH for replicability of our results.
0804.2535
Aleksander Wojdyga
Aleksander Wojdyga
Short proofs of strong normalization
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents simple, syntactic strong normalization proofs for the simply-typed lambda-calculus and the polymorphic lambda-calculus (system F) with the full set of logical connectives, and all the permutative reductions. The normalization proofs use translations of terms and types to systems for which the strong normalization property is known.
[ { "created": "Wed, 16 Apr 2008 07:09:59 GMT", "version": "v1" } ]
2008-04-17
[ [ "Wojdyga", "Aleksander", "" ] ]
This paper presents simple, syntactic strong normalization proofs for the simply-typed lambda-calculus and the polymorphic lambda-calculus (system F) with the full set of logical connectives, and all the permutative reductions. The normalization proofs use translations of terms and types to systems for which the strong normalization property is known.
1810.12611
Asifullah Khan
Aqsa Saeed Qureshi, Asifullah Khan
Adaptive Transfer Learning in Deep Neural Networks: Wind Power Prediction using Knowledge Transfer from Region to Region and Between Different Task Domains
28 pages, 21 figures, and 11 tables
null
10.1111/coin.12236
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transfer Learning (TL) in Deep Neural Networks is gaining importance because, in most applications, the labeling of data is costly and time-consuming. Additionally, TL also provides an effective weight initialization strategy for Deep Neural Networks. This paper introduces the idea of Adaptive Transfer Learning in Deep Neural Networks (ATL-DNN) for wind power prediction. Specifically, we show in the case of wind power prediction that the TL of a Deep Neural Network system can be adaptively modified where training on a different wind farm is concerned. The proposed ATL-DNN technique is tested for short-term wind power prediction, where continuously arriving information has to be exploited. Adaptive TL not only helps in providing good weight initialization, but is also helpful in utilizing the incoming data for effective learning. Additionally, the proposed ATL-DNN technique is shown to transfer knowledge between different task domains (wind power to wind speed prediction) and from one region to another. The simulation results show that the proposed ATL-DNN technique achieves average values of 0.0637, 0.0986, and 0.0984 for the Mean-Absolute-Error, Root-Mean-Squared-Error, and Standard-Deviation-Error, respectively.
[ { "created": "Tue, 30 Oct 2018 09:47:32 GMT", "version": "v1" }, { "created": "Thu, 20 Dec 2018 11:27:20 GMT", "version": "v2" } ]
2019-08-20
[ [ "Qureshi", "Aqsa Saeed", "" ], [ "Khan", "Asifullah", "" ] ]
Transfer Learning (TL) in Deep Neural Networks is gaining importance because, in most applications, the labeling of data is costly and time-consuming. Additionally, TL also provides an effective weight initialization strategy for Deep Neural Networks. This paper introduces the idea of Adaptive Transfer Learning in Deep Neural Networks (ATL-DNN) for wind power prediction. Specifically, we show in the case of wind power prediction that the TL of a Deep Neural Network system can be adaptively modified where training on a different wind farm is concerned. The proposed ATL-DNN technique is tested for short-term wind power prediction, where continuously arriving information has to be exploited. Adaptive TL not only helps in providing good weight initialization, but is also helpful in utilizing the incoming data for effective learning. Additionally, the proposed ATL-DNN technique is shown to transfer knowledge between different task domains (wind power to wind speed prediction) and from one region to another. The simulation results show that the proposed ATL-DNN technique achieves average values of 0.0637, 0.0986, and 0.0984 for the Mean-Absolute-Error, Root-Mean-Squared-Error, and Standard-Deviation-Error, respectively.
2007.14896
Anirudh Paranjothi
Anirudh Paranjothi, Mohammad S. Khan, Sherali Zeadally
Survey on Congestion Detection and Control in Connected Vehicles
null
null
10.1016/j.adhoc.2020.102277
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamic nature of vehicular ad hoc networks (VANETs), induced by frequent topology changes and node mobility, imposes critical challenges for vehicular communications. Aggravated by the high volume of information dissemination among vehicles over limited bandwidth, the topological dynamics of a VANET cause congestion in the communication channel, which is the primary cause of problems such as message drop, delay, and degraded quality of service. To mitigate these problems, congestion detection and control techniques need to be incorporated in a vehicular network. Congestion control approaches can be either open-loop or closed-loop, based on pre-congestion or post-congestion strategies. We present a general architecture of vehicular communication in urban and highway environments as well as a state-of-the-art survey of recent congestion detection and control techniques. We also identify the drawbacks of existing approaches and classify them according to different hierarchical schemes. Through an extensive literature review, we recommend solution approaches and future directions for handling congestion in vehicular communications.
[ { "created": "Wed, 29 Jul 2020 15:13:31 GMT", "version": "v1" } ]
2020-07-30
[ [ "Paranjothi", "Anirudh", "" ], [ "Khan", "Mohammad S.", "" ], [ "Zeadally", "Sherali", "" ] ]
The dynamic nature of vehicular ad hoc networks (VANETs), induced by frequent topology changes and node mobility, imposes critical challenges for vehicular communications. Aggravated by the high volume of information dissemination among vehicles over limited bandwidth, the topological dynamics of VANETs cause congestion in the communication channel, which is the primary cause of problems such as message drop, delay, and degraded quality of service. To mitigate these problems, congestion detection and control techniques need to be incorporated in a vehicular network. Congestion control approaches can be either open-loop or closed-loop, based on pre-congestion or post-congestion strategies. We present a general architecture of vehicular communication in urban and highway environments as well as a state-of-the-art survey of recent congestion detection and control techniques. We also identify the drawbacks of existing approaches and classify them according to different hierarchical schemes. Through an extensive literature review, we recommend solution approaches and future directions for handling congestion in vehicular communications.
2004.09811
Yuanxin Ye
Bai Zhu, Yuanxin Ye, Chao Yang, Liang Zhou, Huiyu Liu, Yungang Cao
Fast and Robust Registration of Aerial Images and LiDAR Data Based on Structural Features and 3D Phase Correlation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Co-registration of aerial imagery and Light Detection and Ranging (LiDAR) data is quite challenging because the different imaging mechanisms cause significant geometric and radiometric distortions between such data. To tackle this problem, this paper proposes an automatic registration method based on structural features and three-dimensional (3D) phase correlation. In the proposed method, the LiDAR point cloud data is first transformed into an intensity map, which is used as the reference image. Then, we employ the FAST operator to extract uniformly distributed interest points in the aerial image by a partition strategy and perform a local geometric correction using the collinearity equation to eliminate scale and rotation differences between the images. Subsequently, a robust structural feature descriptor is built based on dense gradient features, and 3D phase correlation is used to detect control points (CPs) between the aerial images and the LiDAR data in the frequency domain, where the image matching is accelerated by the 3D Fast Fourier Transform (FFT). Finally, the obtained CPs are employed to correct the exterior orientation elements, achieving co-registration of the aerial images and LiDAR data. Experiments with two datasets of aerial images and LiDAR data show that the proposed method is much faster and more robust than state-of-the-art methods.
[ { "created": "Tue, 21 Apr 2020 08:19:56 GMT", "version": "v1" } ]
2020-04-22
[ [ "Zhu", "Bai", "" ], [ "Ye", "Yuanxin", "" ], [ "Yang", "Chao", "" ], [ "Zhou", "Liang", "" ], [ "Liu", "Huiyu", "" ], [ "Cao", "Yungang", "" ] ]
Co-registration of aerial imagery and Light Detection and Ranging (LiDAR) data is quite challenging because the different imaging mechanisms cause significant geometric and radiometric distortions between such data. To tackle this problem, this paper proposes an automatic registration method based on structural features and three-dimensional (3D) phase correlation. In the proposed method, the LiDAR point cloud data is first transformed into an intensity map, which is used as the reference image. Then, we employ the FAST operator to extract uniformly distributed interest points in the aerial image by a partition strategy and perform a local geometric correction using the collinearity equation to eliminate scale and rotation differences between the images. Subsequently, a robust structural feature descriptor is built based on dense gradient features, and 3D phase correlation is used to detect control points (CPs) between the aerial images and the LiDAR data in the frequency domain, where the image matching is accelerated by the 3D Fast Fourier Transform (FFT). Finally, the obtained CPs are employed to correct the exterior orientation elements, achieving co-registration of the aerial images and LiDAR data. Experiments with two datasets of aerial images and LiDAR data show that the proposed method is much faster and more robust than state-of-the-art methods.
2012.06279
Oliver Struckmeier
Oliver Struckmeier, Kshitij Tiwari, Ville Kyrki
Autoencoding Slow Representations for Semi-supervised Data Efficient Regression
11 pages, 6 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The slowness principle is a concept inspired by the visual cortex of the brain. It postulates that the underlying generative factors of a quickly varying sensory signal change on a slower time scale. Unsupervised learning of intermediate representations utilizing abundant unlabeled sensory data can be leveraged to perform data-efficient supervised downstream regression. In this paper, we propose a general formulation of slowness for unsupervised representation learning, adding a slowness regularization term to the evidence lower bound of the beta-VAE to encourage temporal similarity in observation and latent space. Within this framework we compare existing slowness regularization terms, such as the L1 and L2 losses used in existing end-to-end methods and the SlowVAE, and propose a new term based on Brownian motion. We empirically evaluate these slowness regularization terms with respect to their downstream task performance and data efficiency. We find that slow representations lead to equal or better downstream task performance and data efficiency in different experimental domains when compared to representations without slowness regularization. Finally, we discuss how the Frechet Inception Distance (FID), traditionally used to assess the generative capabilities of GANs, can serve as a measure to predict the performance of a pre-trained autoencoder model in a supervised downstream task and accelerate hyperparameter search.
[ { "created": "Fri, 11 Dec 2020 12:19:45 GMT", "version": "v1" }, { "created": "Sun, 4 Jul 2021 18:43:55 GMT", "version": "v2" } ]
2021-07-06
[ [ "Struckmeier", "Oliver", "" ], [ "Tiwari", "Kshitij", "" ], [ "Kyrki", "Ville", "" ] ]
The slowness principle is a concept inspired by the visual cortex of the brain. It postulates that the underlying generative factors of a quickly varying sensory signal change on a slower time scale. Unsupervised learning of intermediate representations utilizing abundant unlabeled sensory data can be leveraged to perform data-efficient supervised downstream regression. In this paper, we propose a general formulation of slowness for unsupervised representation learning, adding a slowness regularization term to the evidence lower bound of the beta-VAE to encourage temporal similarity in observation and latent space. Within this framework we compare existing slowness regularization terms, such as the L1 and L2 losses used in existing end-to-end methods and the SlowVAE, and propose a new term based on Brownian motion. We empirically evaluate these slowness regularization terms with respect to their downstream task performance and data efficiency. We find that slow representations lead to equal or better downstream task performance and data efficiency in different experimental domains when compared to representations without slowness regularization. Finally, we discuss how the Frechet Inception Distance (FID), traditionally used to assess the generative capabilities of GANs, can serve as a measure to predict the performance of a pre-trained autoencoder model in a supervised downstream task and accelerate hyperparameter search.
2405.01306
Zhenhan Huang
Zhenhan Huang, Tejaswini Pedapati, Pin-Yu Chen, Chunhen Jiang, Jianxi Gao
Graph is all you need? Lightweight data-agnostic neural architecture search without training
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Neural architecture search (NAS) enables the automatic design of neural network models. However, training the candidates generated by the search algorithm for performance evaluation incurs considerable computational overhead. Our method, dubbed nasgraph, remarkably reduces the computational costs by converting neural architectures to graphs and using the average degree, a graph measure, as a proxy in lieu of the evaluation metric. Our training-free NAS method is data-agnostic and lightweight. It can find the best architecture among 200 randomly sampled architectures from NAS-Bench-201 in 217 CPU seconds. Moreover, our method achieves competitive performance on various datasets, including the NAS-Bench-101, NAS-Bench-201, and NDS search spaces. We also demonstrate that nasgraph generalizes to more challenging tasks on Micro TransNAS-Bench-101.
[ { "created": "Thu, 2 May 2024 14:12:58 GMT", "version": "v1" } ]
2024-05-03
[ [ "Huang", "Zhenhan", "" ], [ "Pedapati", "Tejaswini", "" ], [ "Chen", "Pin-Yu", "" ], [ "Jiang", "Chunhen", "" ], [ "Gao", "Jianxi", "" ] ]
Neural architecture search (NAS) enables the automatic design of neural network models. However, training the candidates generated by the search algorithm for performance evaluation incurs considerable computational overhead. Our method, dubbed nasgraph, remarkably reduces the computational costs by converting neural architectures to graphs and using the average degree, a graph measure, as a proxy in lieu of the evaluation metric. Our training-free NAS method is data-agnostic and lightweight. It can find the best architecture among 200 randomly sampled architectures from NAS-Bench-201 in 217 CPU seconds. Moreover, our method achieves competitive performance on various datasets, including the NAS-Bench-101, NAS-Bench-201, and NDS search spaces. We also demonstrate that nasgraph generalizes to more challenging tasks on Micro TransNAS-Bench-101.
1910.06960
Ahmed Alkhateeb
Yu Zhang, Muhammad Alrabeiah, and Ahmed Alkhateeb
Deep Learning for Massive MIMO with 1-Bit ADCs: When More Antennas Need Fewer Pilots
Accepted in IEEE Wireless Communications Letters; 5 pages; 3 figures
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers uplink massive MIMO systems with 1-bit analog-to-digital converters (ADCs) and develops a deep-learning based channel estimation framework. In this framework, the prior channel estimation observations and deep neural network models are leveraged to learn the non-trivial mapping from quantized received measurements to channels. For that, we derive the sufficient length and structure of the pilot sequence to guarantee the existence of this mapping function. This leads to the interesting, and \textit{counter-intuitive}, observation that when more antennas are employed by the massive MIMO base station, our proposed deep learning approach achieves better channel estimation performance, for the same pilot sequence length. Equivalently, for the same channel estimation performance, this means that when more antennas are employed, fewer pilots are required. This observation is also analytically proved for some special channel models. Simulation results confirm our observations and show that more antennas lead to better channel estimation both in terms of the normalized mean squared error and the achievable signal-to-noise ratio per antenna.
[ { "created": "Tue, 15 Oct 2019 17:55:19 GMT", "version": "v1" }, { "created": "Thu, 7 May 2020 23:21:32 GMT", "version": "v2" } ]
2020-05-11
[ [ "Zhang", "Yu", "" ], [ "Alrabeiah", "Muhammad", "" ], [ "Alkhateeb", "Ahmed", "" ] ]
This paper considers uplink massive MIMO systems with 1-bit analog-to-digital converters (ADCs) and develops a deep-learning based channel estimation framework. In this framework, the prior channel estimation observations and deep neural network models are leveraged to learn the non-trivial mapping from quantized received measurements to channels. For that, we derive the sufficient length and structure of the pilot sequence to guarantee the existence of this mapping function. This leads to the interesting, and \textit{counter-intuitive}, observation that when more antennas are employed by the massive MIMO base station, our proposed deep learning approach achieves better channel estimation performance, for the same pilot sequence length. Equivalently, for the same channel estimation performance, this means that when more antennas are employed, fewer pilots are required. This observation is also analytically proved for some special channel models. Simulation results confirm our observations and show that more antennas lead to better channel estimation both in terms of the normalized mean squared error and the achievable signal-to-noise ratio per antenna.
2102.11127
Xueli Yu
Xueli Yu, Weizhi Xu, Zeyu Cui, Shu Wu, Liang Wang
Graph-based Hierarchical Relevance Matching Signals for Ad-hoc Retrieval
To appear at WWW 2021
null
10.1145/3442381.3450115
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
The ad-hoc retrieval task is to rank related documents given a query and a document collection. A series of deep learning-based approaches have been proposed to solve this problem and have gained much attention. However, we argue that they are inherently based on local word sequences, ignoring the subtle long-distance, document-level word relationships. To solve this problem, we explicitly model document-level word relationships through a graph structure, capturing the subtle information via graph neural networks. In addition, due to the complexity and scale of document collections, it is worthwhile to explore hierarchical matching signals of different granularities at a more general level. Therefore, we propose a Graph-based Hierarchical Relevance Matching model (GHRM) for ad-hoc retrieval, by which we can capture the subtle and general hierarchical matching signals simultaneously. We validate the effectiveness of GHRM on two representative ad-hoc retrieval benchmarks; the comprehensive experiments and results demonstrate its superiority over state-of-the-art methods.
[ { "created": "Mon, 22 Feb 2021 15:57:08 GMT", "version": "v1" } ]
2021-02-23
[ [ "Yu", "Xueli", "" ], [ "Xu", "Weizhi", "" ], [ "Cui", "Zeyu", "" ], [ "Wu", "Shu", "" ], [ "Wang", "Liang", "" ] ]
The ad-hoc retrieval task is to rank related documents given a query and a document collection. A series of deep learning-based approaches have been proposed to solve this problem and have gained much attention. However, we argue that they are inherently based on local word sequences, ignoring the subtle long-distance, document-level word relationships. To solve this problem, we explicitly model document-level word relationships through a graph structure, capturing the subtle information via graph neural networks. In addition, due to the complexity and scale of document collections, it is worthwhile to explore hierarchical matching signals of different granularities at a more general level. Therefore, we propose a Graph-based Hierarchical Relevance Matching model (GHRM) for ad-hoc retrieval, by which we can capture the subtle and general hierarchical matching signals simultaneously. We validate the effectiveness of GHRM on two representative ad-hoc retrieval benchmarks; the comprehensive experiments and results demonstrate its superiority over state-of-the-art methods.
2404.01735
Yunshan Ma
Yunshan Ma, Yingzhi He, Wenjun Zhong, Xiang Wang, Roger Zimmermann, Tat-Seng Chua
CIRP: Cross-Item Relational Pre-training for Multimodal Product Bundling
arXiv preprint, 10 pages, 4 figures, 6 tables
null
null
null
cs.IR cs.MM
http://creativecommons.org/licenses/by/4.0/
Product bundling has been a prevailing marketing strategy that is beneficial in the online shopping scenario. Effective product bundling methods depend on high-quality item representations, which need to capture both individual items' semantics and cross-item relations. However, previous item representation learning methods, based on either feature fusion or graph learning, suffer from inadequate cross-modal alignment and struggle to capture cross-item relations for cold-start items. Multimodal pre-trained models are a potential solution given their promising performance on various multimodal downstream tasks. However, cross-item relations have been under-explored in current multimodal pre-trained models. To bridge this gap, we propose a novel and simple framework, Cross-Item Relational Pre-training (CIRP), for item representation learning in product bundling. Specifically, we employ a multimodal encoder to generate image and text representations. Then we leverage both a cross-item contrastive loss (CIC) and an individual item's image-text contrastive loss (ITC) as the pre-training objectives. Our method seeks to integrate cross-item relation modeling capability into the multimodal encoder while preserving the in-depth aligned multimodal semantics. Therefore, even for cold-start items that have no relations, their representations are still relation-aware. Furthermore, to eliminate potential noise and reduce computational cost, we harness a relation pruning module to remove noisy and redundant relations. We apply the item representations extracted by CIRP to the product bundling model ItemKNN, and experiments on three e-commerce datasets demonstrate that CIRP outperforms various leading representation learning methods.
[ { "created": "Tue, 2 Apr 2024 08:50:55 GMT", "version": "v1" } ]
2024-04-03
[ [ "Ma", "Yunshan", "" ], [ "He", "Yingzhi", "" ], [ "Zhong", "Wenjun", "" ], [ "Wang", "Xiang", "" ], [ "Zimmermann", "Roger", "" ], [ "Chua", "Tat-Seng", "" ] ]
Product bundling has been a prevailing marketing strategy that is beneficial in the online shopping scenario. Effective product bundling methods depend on high-quality item representations, which need to capture both individual items' semantics and cross-item relations. However, previous item representation learning methods, based on either feature fusion or graph learning, suffer from inadequate cross-modal alignment and struggle to capture cross-item relations for cold-start items. Multimodal pre-trained models are a potential solution given their promising performance on various multimodal downstream tasks. However, cross-item relations have been under-explored in current multimodal pre-trained models. To bridge this gap, we propose a novel and simple framework, Cross-Item Relational Pre-training (CIRP), for item representation learning in product bundling. Specifically, we employ a multimodal encoder to generate image and text representations. Then we leverage both a cross-item contrastive loss (CIC) and an individual item's image-text contrastive loss (ITC) as the pre-training objectives. Our method seeks to integrate cross-item relation modeling capability into the multimodal encoder while preserving the in-depth aligned multimodal semantics. Therefore, even for cold-start items that have no relations, their representations are still relation-aware. Furthermore, to eliminate potential noise and reduce computational cost, we harness a relation pruning module to remove noisy and redundant relations. We apply the item representations extracted by CIRP to the product bundling model ItemKNN, and experiments on three e-commerce datasets demonstrate that CIRP outperforms various leading representation learning methods.
2406.08009
Yinan Deng
Yinan Deng, Jiahui Wang, Jingyu Zhao, Jianyu Dou, Yi Yang, and Yufeng Yue
OpenObj: Open-Vocabulary Object-Level Neural Radiance Fields with Fine-Grained Understanding
8 pages, 7 figures. Project URL: https://openobj.github.io/
null
null
null
cs.CV cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, there has been a surge of interest in open-vocabulary 3D scene reconstruction facilitated by visual language models (VLMs), which showcase remarkable capabilities in open-set retrieval. However, existing methods face some limitations: they either focus on learning point-wise features, resulting in blurry semantic understanding, or solely tackle object-level reconstruction, thereby overlooking the intricate details of the object's interior. To address these challenges, we introduce OpenObj, an innovative approach to build open-vocabulary object-level Neural Radiance Fields (NeRF) with fine-grained understanding. In essence, OpenObj establishes a robust framework for efficient and watertight scene modeling and comprehension at the object level. Moreover, we incorporate part-level features into the neural fields, enabling a nuanced representation of object interiors. This approach captures object-level instances while maintaining a fine-grained understanding. The results on multiple datasets demonstrate that OpenObj achieves superior performance in zero-shot semantic segmentation and retrieval tasks. Additionally, OpenObj supports real-world robotics tasks at multiple scales, including global movement and local manipulation.
[ { "created": "Wed, 12 Jun 2024 08:59:33 GMT", "version": "v1" } ]
2024-06-13
[ [ "Deng", "Yinan", "" ], [ "Wang", "Jiahui", "" ], [ "Zhao", "Jingyu", "" ], [ "Dou", "Jianyu", "" ], [ "Yang", "Yi", "" ], [ "Yue", "Yufeng", "" ] ]
In recent years, there has been a surge of interest in open-vocabulary 3D scene reconstruction facilitated by visual language models (VLMs), which showcase remarkable capabilities in open-set retrieval. However, existing methods face some limitations: they either focus on learning point-wise features, resulting in blurry semantic understanding, or solely tackle object-level reconstruction, thereby overlooking the intricate details of the object's interior. To address these challenges, we introduce OpenObj, an innovative approach to build open-vocabulary object-level Neural Radiance Fields (NeRF) with fine-grained understanding. In essence, OpenObj establishes a robust framework for efficient and watertight scene modeling and comprehension at the object level. Moreover, we incorporate part-level features into the neural fields, enabling a nuanced representation of object interiors. This approach captures object-level instances while maintaining a fine-grained understanding. The results on multiple datasets demonstrate that OpenObj achieves superior performance in zero-shot semantic segmentation and retrieval tasks. Additionally, OpenObj supports real-world robotics tasks at multiple scales, including global movement and local manipulation.
2403.02227
Yongzhao Wang
Ariyan Bighashdel, Yongzhao Wang, Stephen McAleer, Rahul Savani, Frans A. Oliehoek
Policy Space Response Oracles: A Survey
Ariyan Bighashdel and Yongzhao Wang contributed equally
The 33rd International Joint Conference on Artificial Intelligence, 2024
null
null
cs.GT cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Game theory provides a mathematical way to study the interaction between multiple decision makers. However, classical game-theoretic analysis is limited in scalability due to the large number of strategies, precluding direct application to more complex scenarios. This survey provides a comprehensive overview of a framework for large games, known as Policy Space Response Oracles (PSRO), which holds promise to improve scalability by focusing attention on sufficient subsets of strategies. We first motivate PSRO and provide historical context. We then focus on the strategy exploration problem for PSRO: the challenge of assembling effective subsets of strategies that still represent the original game well with minimum computational cost. We survey current research directions for enhancing the efficiency of PSRO, and explore the applications of PSRO across various domains. We conclude by discussing open questions and future research.
[ { "created": "Mon, 4 Mar 2024 17:15:09 GMT", "version": "v1" }, { "created": "Mon, 27 May 2024 16:49:18 GMT", "version": "v2" } ]
2024-05-28
[ [ "Bighashdel", "Ariyan", "" ], [ "Wang", "Yongzhao", "" ], [ "McAleer", "Stephen", "" ], [ "Savani", "Rahul", "" ], [ "Oliehoek", "Frans A.", "" ] ]
Game theory provides a mathematical way to study the interaction between multiple decision makers. However, classical game-theoretic analysis is limited in scalability due to the large number of strategies, precluding direct application to more complex scenarios. This survey provides a comprehensive overview of a framework for large games, known as Policy Space Response Oracles (PSRO), which holds promise to improve scalability by focusing attention on sufficient subsets of strategies. We first motivate PSRO and provide historical context. We then focus on the strategy exploration problem for PSRO: the challenge of assembling effective subsets of strategies that still represent the original game well with minimum computational cost. We survey current research directions for enhancing the efficiency of PSRO, and explore the applications of PSRO across various domains. We conclude by discussing open questions and future research.
2110.04353
Sheena Panthaplackel
Sheena Panthaplackel, Junyi Jessy Li, Milos Gligoric, Raymond J. Mooney
Learning to Describe Solutions for Bug Reports Based on Developer Discussions
Accepted in Findings of ACL 2022
null
null
null
cs.CL cs.SE
http://creativecommons.org/licenses/by/4.0/
When a software bug is reported, developers engage in a discussion to collaboratively resolve it. While the solution is likely formulated within the discussion, it is often buried in a large amount of text, making it difficult to comprehend and delaying its implementation. To expedite bug resolution, we propose generating a concise natural language description of the solution by synthesizing relevant content within the discussion, which encompasses both natural language and source code. We build a corpus for this task using a novel technique for obtaining noisy supervision from repository changes linked to bug reports, with which we establish benchmarks. We also design two systems for generating a description during an ongoing discussion by classifying when sufficient context for performing the task emerges in real-time. With automated and human evaluation, we find this task to form an ideal testbed for complex reasoning in long, bimodal dialogue context.
[ { "created": "Fri, 8 Oct 2021 19:39:55 GMT", "version": "v1" }, { "created": "Wed, 30 Mar 2022 17:56:43 GMT", "version": "v2" } ]
2022-03-31
[ [ "Panthaplackel", "Sheena", "" ], [ "Li", "Junyi Jessy", "" ], [ "Gligoric", "Milos", "" ], [ "Mooney", "Raymond J.", "" ] ]
When a software bug is reported, developers engage in a discussion to collaboratively resolve it. While the solution is likely formulated within the discussion, it is often buried in a large amount of text, making it difficult to comprehend and delaying its implementation. To expedite bug resolution, we propose generating a concise natural language description of the solution by synthesizing relevant content within the discussion, which encompasses both natural language and source code. We build a corpus for this task using a novel technique for obtaining noisy supervision from repository changes linked to bug reports, with which we establish benchmarks. We also design two systems for generating a description during an ongoing discussion by classifying when sufficient context for performing the task emerges in real-time. With automated and human evaluation, we find this task to form an ideal testbed for complex reasoning in long, bimodal dialogue context.
cs/9905006
Stephen F. Bush
Stephen F. Bush
The Design and Analysis of Virtual Network Configuration for a Wireless Mobile ATM Network
PhD Thesis
null
null
null
cs.NI
null
This research concentrates on the design and analysis of an algorithm referred to as Virtual Network Configuration (VNC), which uses predicted future states of a system for faster network configuration and management. VNC is applied to the configuration of a wireless mobile ATM network. VNC is built on techniques from parallel discrete event simulation merged with constraints from real-time systems and applied to mobile ATM configuration and handoff. Configuration in a mobile network is a dynamic and continuous process. Factors such as load, distance, capacity, and topology are all constantly changing in a mobile environment. The VNC algorithm anticipates configuration changes and speeds the reconfiguration process by pre-computing and caching results. VNC propagates local prediction results throughout the VNC-enhanced system. The Global Positioning System is an enabling technology for the use of VNC in mobile networks because it provides location information and accurate time for each node. This research has resulted in well-defined structures for the encapsulation of physical processes within Logical Processes and a generic library for enhancing a system with VNC. Enhancing an existing system with VNC is straightforward, assuming the existing physical processes do not have side effects. The benefit of prediction is gained at the cost of additional traffic and processing. This research includes an analysis of VNC and suggestions for optimization of the VNC algorithm and its parameters.
[ { "created": "Tue, 11 May 1999 20:05:36 GMT", "version": "v1" } ]
2009-09-25
[ [ "Bush", "Stephen F.", "" ] ]
This research concentrates on the design and analysis of an algorithm referred to as Virtual Network Configuration (VNC), which uses predicted future states of a system for faster network configuration and management. VNC is applied to the configuration of a wireless mobile ATM network. VNC is built on techniques from parallel discrete event simulation merged with constraints from real-time systems and applied to mobile ATM configuration and handoff. Configuration in a mobile network is a dynamic and continuous process. Factors such as load, distance, capacity, and topology are all constantly changing in a mobile environment. The VNC algorithm anticipates configuration changes and speeds the reconfiguration process by pre-computing and caching results. VNC propagates local prediction results throughout the VNC-enhanced system. The Global Positioning System is an enabling technology for the use of VNC in mobile networks because it provides location information and accurate time for each node. This research has resulted in well-defined structures for the encapsulation of physical processes within Logical Processes and a generic library for enhancing a system with VNC. Enhancing an existing system with VNC is straightforward, assuming the existing physical processes do not have side effects. The benefit of prediction is gained at the cost of additional traffic and processing. This research includes an analysis of VNC and suggestions for optimization of the VNC algorithm and its parameters.
1912.05723
Karthikeyan K
Karthikeyan K, Shubham Kumar Bharti, Piyush Rai
On the relationship between multitask neural networks and multitask Gaussian Processes
19 pages, 4 figures
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the effectiveness of multitask deep neural networks (MTDNNs), there is limited theoretical understanding of how information is shared across different tasks in an MTDNN. In this work, we establish a formal connection between MTDNNs with infinitely-wide hidden layers and multitask Gaussian Processes (GPs). We derive multitask GP kernels corresponding to both single-layer and deep multitask Bayesian neural networks (MTBNNs) and show that information among different tasks is shared primarily due to correlation across the last-layer weights of the MTBNN and shared hyper-parameters, which is contrary to the popular hypothesis that information is shared because of shared intermediate-layer weights. Our construction enables using multitask GPs to perform efficient Bayesian inference for the equivalent MTDNN with infinitely-wide hidden layers. Prior work on the connection between deep neural networks and GPs for single-task settings can be seen as special cases of our construction. We also present an adaptive multitask neural network architecture that corresponds to a multitask GP with more flexible kernels, such as Linear Model of Coregionalization (LMC) and Cross-Coregionalization (CC) kernels. We provide experimental results to further illustrate these ideas on synthetic and real datasets.
[ { "created": "Thu, 12 Dec 2019 01:51:35 GMT", "version": "v1" } ]
2019-12-13
[ [ "K", "Karthikeyan", "" ], [ "Bharti", "Shubham Kumar", "" ], [ "Rai", "Piyush", "" ] ]
Despite the effectiveness of multitask deep neural networks (MTDNN), there is limited theoretical understanding of how information is shared across different tasks in an MTDNN. In this work, we establish a formal connection between MTDNNs with infinitely-wide hidden layers and multitask Gaussian Processes (GP). We derive multitask GP kernels corresponding to both single-layer and deep multitask Bayesian neural networks (MTBNN) and show that information among different tasks is shared primarily due to correlation across the last-layer weights of the MTBNN and shared hyper-parameters, contrary to the popular hypothesis that information is shared because of shared intermediate-layer weights. Our construction enables using multitask GPs to perform efficient Bayesian inference for the equivalent MTDNN with infinitely-wide hidden layers. Prior work on the connection between deep neural networks and GPs for single-task settings can be seen as a special case of our construction. We also present an adaptive multitask neural network architecture that corresponds to a multitask GP with more flexible kernels, such as Linear Model of Coregionalization (LMC) and Cross-Coregionalization (CC) kernels. We provide experimental results to further illustrate these ideas on synthetic and real datasets.
2403.01596
Morteza Sadeghi
Morteza Sadeghi, Abdolreza Torabi
Optimizing Near Field Computation in the MLFMA Algorithm with Data Redundancy and Performance Modeling on a Single GPU
null
null
null
null
cs.DC cs.PF physics.comp-ph
http://creativecommons.org/licenses/by/4.0/
The Multilevel Fast Multipole Algorithm (MLFMA) has known applications in scientific modeling in the fields of telecommunications, physics, mechanics, and chemistry. Accelerating the far-field calculation using GPUs and GPU clusters for large-scale problems has been studied for more than a decade. The acceleration of the Near Field Computation (P2P operator), however, has been less of a concern because it does not face the distributed-processing challenges that the far field does. This article proposes a modification of the P2P algorithm and uses performance models to determine its optimality criteria. By modeling the speedup, we found that making threads independent by introducing redundancy in the data makes the algorithm nearly 13 times faster than the non-redundant mode for lower-density (higher-frequency) problems.
[ { "created": "Sun, 3 Mar 2024 19:33:44 GMT", "version": "v1" } ]
2024-03-05
[ [ "Sadeghi", "Morteza", "" ], [ "Torabi", "Abdolreza", "" ] ]
The Multilevel Fast Multipole Algorithm (MLFMA) has known applications in scientific modeling in the fields of telecommunications, physics, mechanics, and chemistry. Accelerating the far-field calculation using GPUs and GPU clusters for large-scale problems has been studied for more than a decade. The acceleration of the Near Field Computation (P2P operator), however, has been less of a concern because it does not face the distributed-processing challenges that the far field does. This article proposes a modification of the P2P algorithm and uses performance models to determine its optimality criteria. By modeling the speedup, we found that making threads independent by introducing redundancy in the data makes the algorithm nearly 13 times faster than the non-redundant mode for lower-density (higher-frequency) problems.
2002.08992
Lu\'is Felipe Ign\'acio Cunha Cunha
Alexandre Abreu, Lu\'is Cunha, Celina de Figueiredo, Franklin Marquezino, Daniel Posner, Renato Portugal
Total tessellation cover and quantum walk
null
null
null
null
cs.DM cs.CC math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose the total staggered quantum walk model and the total tessellation cover of a graph. This model uses the concept of total tessellation cover to describe the motion of the walker who is allowed to hop both to vertices and edges of the graph, in contrast with previous models in which the walker hops either to vertices or edges. We establish bounds on $T_t(G)$, which is the smallest number of tessellations required in a total tessellation cover of $G$. We highlight two of these lower bounds $T_t(G) \geq \omega(G)$ and $T_t(G)\geq is(G)+1$, where $\omega(G)$ is the size of a maximum clique and $is(G)$ is the number of edges of a maximum induced star subgraph. Using these bounds, we define the good total tessellable graphs with either $T_t(G)=\omega(G)$ or $T_t(G)=is(G)+1$. The $k$-total tessellability problem aims to decide whether a given graph $G$ has $T_t(G) \leq k$. We show that $k$-total tessellability is in $\mathcal{P}$ for good total tessellable graphs. We establish the $\mathcal{NP}$-completeness of the following problems when restricted to the following classes: ($is(G)+1$)-total tessellability for graphs with $\omega(G) = 2$; $\omega(G)$-total tessellability for graphs $G$ with $is(G)+1 = 3$; $k$-total tessellability for graphs $G$ with $\max\{\omega(G), is(G)+1\}$ far from $k$; and $4$-total tessellability for graphs $G$ with $\omega(G) = is(G)+1 = 4$. As a consequence, we establish hardness results for bipartite graphs, line graphs of triangle-free graphs, universal graphs, planar graphs, and $(2,1)$-chordal graphs.
[ { "created": "Thu, 20 Feb 2020 19:53:35 GMT", "version": "v1" } ]
2020-02-24
[ [ "Abreu", "Alexandre", "" ], [ "Cunha", "Luís", "" ], [ "de Figueiredo", "Celina", "" ], [ "Marquezino", "Franklin", "" ], [ "Posner", "Daniel", "" ], [ "Portugal", "Renato", "" ] ]
We propose the total staggered quantum walk model and the total tessellation cover of a graph. This model uses the concept of total tessellation cover to describe the motion of the walker who is allowed to hop both to vertices and edges of the graph, in contrast with previous models in which the walker hops either to vertices or edges. We establish bounds on $T_t(G)$, which is the smallest number of tessellations required in a total tessellation cover of $G$. We highlight two of these lower bounds $T_t(G) \geq \omega(G)$ and $T_t(G)\geq is(G)+1$, where $\omega(G)$ is the size of a maximum clique and $is(G)$ is the number of edges of a maximum induced star subgraph. Using these bounds, we define the good total tessellable graphs with either $T_t(G)=\omega(G)$ or $T_t(G)=is(G)+1$. The $k$-total tessellability problem aims to decide whether a given graph $G$ has $T_t(G) \leq k$. We show that $k$-total tessellability is in $\mathcal{P}$ for good total tessellable graphs. We establish the $\mathcal{NP}$-completeness of the following problems when restricted to the following classes: ($is(G)+1$)-total tessellability for graphs with $\omega(G) = 2$; $\omega(G)$-total tessellability for graphs $G$ with $is(G)+1 = 3$; $k$-total tessellability for graphs $G$ with $\max\{\omega(G), is(G)+1\}$ far from $k$; and $4$-total tessellability for graphs $G$ with $\omega(G) = is(G)+1 = 4$. As a consequence, we establish hardness results for bipartite graphs, line graphs of triangle-free graphs, universal graphs, planar graphs, and $(2,1)$-chordal graphs.
1511.06385
Chunchuan Lv Mr.
Chunchuan Lyu, Kaizhu Huang, Hai-Ning Liang
A Unified Gradient Regularization Family for Adversarial Examples
The paper has been presented at ICDM 2015
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversarial examples are augmented data points generated by imperceptible perturbation of input samples. They have recently drawn much attention within the machine learning and data mining community. Being difficult to distinguish from real examples, such adversarial examples can change the predictions of many of the best learning models, including state-of-the-art deep learning models. Recent attempts have been made to build robust models that take adversarial examples into account. However, these methods can either lead to performance drops or lack mathematical motivation. In this paper, we propose a unified framework to build robust machine learning models against adversarial examples. More specifically, using the unified framework, we develop a family of gradient regularization methods that effectively penalize the gradient of the loss function w.r.t. the inputs. Our proposed framework is appealing in that it offers a unified view for dealing with adversarial examples, and it incorporates another recently proposed perturbation-based approach as a special case. In addition, we present visual effects that reveal semantic meaning in those perturbations, which supports our regularization method and provides another explanation for the generalizability of adversarial examples. By applying this technique to Maxout networks, we conduct a series of experiments and achieve encouraging results on two benchmark datasets. In particular, we attain the best accuracy on MNIST data (without data augmentation) and competitive performance on CIFAR-10 data.
[ { "created": "Thu, 19 Nov 2015 21:14:43 GMT", "version": "v1" } ]
2016-03-03
[ [ "Lyu", "Chunchuan", "" ], [ "Huang", "Kaizhu", "" ], [ "Liang", "Hai-Ning", "" ] ]
Adversarial examples are augmented data points generated by imperceptible perturbation of input samples. They have recently drawn much attention within the machine learning and data mining community. Being difficult to distinguish from real examples, such adversarial examples can change the predictions of many of the best learning models, including state-of-the-art deep learning models. Recent attempts have been made to build robust models that take adversarial examples into account. However, these methods can either lead to performance drops or lack mathematical motivation. In this paper, we propose a unified framework to build robust machine learning models against adversarial examples. More specifically, using the unified framework, we develop a family of gradient regularization methods that effectively penalize the gradient of the loss function w.r.t. the inputs. Our proposed framework is appealing in that it offers a unified view for dealing with adversarial examples, and it incorporates another recently proposed perturbation-based approach as a special case. In addition, we present visual effects that reveal semantic meaning in those perturbations, which supports our regularization method and provides another explanation for the generalizability of adversarial examples. By applying this technique to Maxout networks, we conduct a series of experiments and achieve encouraging results on two benchmark datasets. In particular, we attain the best accuracy on MNIST data (without data augmentation) and competitive performance on CIFAR-10 data.
1705.02828
Thijs Laarhoven
Thijs Laarhoven
Faster tuple lattice sieving using spherical locality-sensitive filters
12 pages + references, 2 figures. Subsumed/merged into Cryptology ePrint Archive 2017/228, available at https://ia.cr/2017/1228
null
null
null
cs.DS cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To overcome the large memory requirement of classical lattice sieving algorithms for solving hard lattice problems, Bai-Laarhoven-Stehl\'{e} [ANTS 2016] studied tuple lattice sieving, where tuples instead of pairs of lattice vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017] recently improved upon their results for arbitrary tuple sizes, for example showing that a triple sieve can solve the shortest vector problem (SVP) in dimension $d$ in time $2^{0.3717d + o(d)}$, using a technique similar to locality-sensitive hashing for finding nearest neighbors. In this work, we generalize the spherical locality-sensitive filters of Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near neighbor searching on dense data sets, and we apply these techniques to tuple lattice sieving to obtain even better time complexities. For instance, our triple sieve heuristically solves SVP in time $2^{0.3588d + o(d)}$. For practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this shows that a triple sieve uses less space and less time than the current best near-linear space double sieve.
[ { "created": "Mon, 8 May 2017 11:24:10 GMT", "version": "v1" }, { "created": "Fri, 23 Feb 2018 14:13:35 GMT", "version": "v2" } ]
2018-02-26
[ [ "Laarhoven", "Thijs", "" ] ]
To overcome the large memory requirement of classical lattice sieving algorithms for solving hard lattice problems, Bai-Laarhoven-Stehl\'{e} [ANTS 2016] studied tuple lattice sieving, where tuples instead of pairs of lattice vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017] recently improved upon their results for arbitrary tuple sizes, for example showing that a triple sieve can solve the shortest vector problem (SVP) in dimension $d$ in time $2^{0.3717d + o(d)}$, using a technique similar to locality-sensitive hashing for finding nearest neighbors. In this work, we generalize the spherical locality-sensitive filters of Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near neighbor searching on dense data sets, and we apply these techniques to tuple lattice sieving to obtain even better time complexities. For instance, our triple sieve heuristically solves SVP in time $2^{0.3588d + o(d)}$. For practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this shows that a triple sieve uses less space and less time than the current best near-linear space double sieve.
2303.15768
Jaeseong Lee
Jaeseong Lee, Taewoo Kim, Sunghyun Park, Younggun Lee, Jaegul Choo
RobustSwap: A Simple yet Robust Face Swapping Model against Attribute Leakage
21 pages
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Face swapping aims to inject a source image's identity (i.e., facial features) into a target image while strictly preserving the target's identity-irrelevant attributes. However, we observed that previous approaches still suffer from source attribute leakage, where the source image's attributes interfere with the target image's. In this paper, we analyze the latent space of StyleGAN and find an adequate combination of the latents geared for the face swapping task. Based on these findings, we develop a simple yet robust face swapping model, RobustSwap, which is resistant to potential source attribute leakage. Moreover, we exploit the coordination of 3DMM's implicit and explicit information as guidance to incorporate the structure of the source image and the precise pose of the target image. Despite being trained solely on an image dataset without identity labels, our model can generate high-fidelity and temporally consistent videos. Through extensive qualitative and quantitative evaluations, we demonstrate that our method shows significant improvements over previous face swapping models in synthesizing both images and videos. The project page is available at https://robustswap.github.io/
[ { "created": "Tue, 28 Mar 2023 07:03:31 GMT", "version": "v1" } ]
2023-03-29
[ [ "Lee", "Jaeseong", "" ], [ "Kim", "Taewoo", "" ], [ "Park", "Sunghyun", "" ], [ "Lee", "Younggun", "" ], [ "Choo", "Jaegul", "" ] ]
Face swapping aims to inject a source image's identity (i.e., facial features) into a target image while strictly preserving the target's identity-irrelevant attributes. However, we observed that previous approaches still suffer from source attribute leakage, where the source image's attributes interfere with the target image's. In this paper, we analyze the latent space of StyleGAN and find an adequate combination of the latents geared for the face swapping task. Based on these findings, we develop a simple yet robust face swapping model, RobustSwap, which is resistant to potential source attribute leakage. Moreover, we exploit the coordination of 3DMM's implicit and explicit information as guidance to incorporate the structure of the source image and the precise pose of the target image. Despite being trained solely on an image dataset without identity labels, our model can generate high-fidelity and temporally consistent videos. Through extensive qualitative and quantitative evaluations, we demonstrate that our method shows significant improvements over previous face swapping models in synthesizing both images and videos. The project page is available at https://robustswap.github.io/
2211.06319
Marek Kwiek
Marek Kwiek and Wojciech Roszka
The Young and the Old, the Fast and the Slow: A Large-Scale Study of Productivity Classes and Rank Advancement
16 pages, 2 tables, 4 figures. There is an Online Supplementary Material here: https://www.tandfonline.com/doi/full/10.1080/03075079.2023.2288172
Studies in Higher Education
10.1080/03075079.2023.2288172
null
cs.CY cs.DL
http://creativecommons.org/licenses/by/4.0/
We examined a large population of Polish science, technology, engineering, mathematics and medicine (STEMM) scientists (N = 16,083) to study rank advancement and productivity. We used two previously neglected time dimensions - promotion age and promotion speed - to construct individual biographical profiles and publication profiles. We used a classificatory approach and the new methodological approach of journal prestige-normalized productivity. All scientists were allocated to different productivity, promotion age, and promotion speed classes (top 20%, middle 60%, and bottom 20%). The patterns were consistent across all disciplines: scientists in young promotion age classes (and fast promotion speed classes) in the past were currently the most productive. In contrast, scientists in old promotion age classes (and slow promotion speed classes) in the past were currently the least productive. In the three largest disciplines, the young-old promotion age productivity differential for associate professors was 100-200% (150-200% for full professors); and the fast-slow promotion speed productivity differential for associate professors was 80-150% (100-170% for full professors). Our results were confirmed by a regression analysis in which we found odds ratio estimates of membership in top productivity classes. We combined data collected from the national register of all Polish scientists and scholars (N = 99,935) and publication metadata on all Polish articles indexed in Scopus (N = 935,167).
[ { "created": "Thu, 3 Nov 2022 11:40:18 GMT", "version": "v1" }, { "created": "Fri, 3 Nov 2023 10:06:35 GMT", "version": "v2" }, { "created": "Sat, 9 Dec 2023 11:46:49 GMT", "version": "v3" } ]
2023-12-19
[ [ "Kwiek", "Marek", "" ], [ "Roszka", "Wojciech", "" ] ]
We examined a large population of Polish science, technology, engineering, mathematics and medicine (STEMM) scientists (N = 16,083) to study rank advancement and productivity. We used two previously neglected time dimensions - promotion age and promotion speed - to construct individual biographical profiles and publication profiles. We used a classificatory approach and the new methodological approach of journal prestige-normalized productivity. All scientists were allocated to different productivity, promotion age, and promotion speed classes (top 20%, middle 60%, and bottom 20%). The patterns were consistent across all disciplines: scientists in young promotion age classes (and fast promotion speed classes) in the past were currently the most productive. In contrast, scientists in old promotion age classes (and slow promotion speed classes) in the past were currently the least productive. In the three largest disciplines, the young-old promotion age productivity differential for associate professors was 100-200% (150-200% for full professors); and the fast-slow promotion speed productivity differential for associate professors was 80-150% (100-170% for full professors). Our results were confirmed by a regression analysis in which we found odds ratio estimates of membership in top productivity classes. We combined data collected from the national register of all Polish scientists and scholars (N = 99,935) and publication metadata on all Polish articles indexed in Scopus (N = 935,167).
2210.09693
Chaoli Zhang
Chaoli Zhang and Tian Zhou and Qingsong Wen and Liang Sun
TFAD: A Decomposition Time Series Anomaly Detection Architecture with Time-Frequency Analysis
Accepted by the ACM International Conference on Information and Knowledge Management (CIKM 2022)
CIKM 2022
10.1145/3511808.3557470
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time series anomaly detection is a challenging problem due to the complex temporal dependencies and limited labeled data. Although some algorithms, including both traditional and deep models, have been proposed, most of them mainly focus on time-domain modeling and do not fully utilize the information in the frequency domain of the time series data. In this paper, we propose a Time-Frequency analysis based time series Anomaly Detection model, or TFAD for short, to exploit both the time and frequency domains for performance improvement. Besides, we incorporate time series decomposition and data augmentation mechanisms into the designed time-frequency architecture to further boost performance and interpretability. Empirical studies on widely used benchmark datasets show that our approach achieves state-of-the-art performance in univariate and multivariate time series anomaly detection tasks. Code is provided at https://github.com/DAMO-DI-ML/CIKM22-TFAD.
[ { "created": "Tue, 18 Oct 2022 09:08:57 GMT", "version": "v1" }, { "created": "Sat, 25 Mar 2023 06:26:36 GMT", "version": "v2" } ]
2023-03-28
[ [ "Zhang", "Chaoli", "" ], [ "Zhou", "Tian", "" ], [ "Wen", "Qingsong", "" ], [ "Sun", "Liang", "" ] ]
Time series anomaly detection is a challenging problem due to the complex temporal dependencies and limited labeled data. Although some algorithms, including both traditional and deep models, have been proposed, most of them mainly focus on time-domain modeling and do not fully utilize the information in the frequency domain of the time series data. In this paper, we propose a Time-Frequency analysis based time series Anomaly Detection model, or TFAD for short, to exploit both the time and frequency domains for performance improvement. Besides, we incorporate time series decomposition and data augmentation mechanisms into the designed time-frequency architecture to further boost performance and interpretability. Empirical studies on widely used benchmark datasets show that our approach achieves state-of-the-art performance in univariate and multivariate time series anomaly detection tasks. Code is provided at https://github.com/DAMO-DI-ML/CIKM22-TFAD.
1907.03956
Changjoo Nam
Changjoo Nam, Jinhwi Lee, Younggil Cho, Jeongho Lee, Dong Hwan Kim, ChangHwan Kim
Planning for target retrieval using a robotic manipulator in cluttered and occluded environments
8 pages, 14 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents planning algorithms for a robotic manipulator with a fixed base to grasp a target object in cluttered environments. We consider a configuration of objects in a confined space with such high density that no collision-free path to the target exists. The robot must relocate some objects to retrieve the target while avoiding collisions. For fast completion of the retrieval task, the robot needs to compute a plan optimizing an appropriate objective value directly related to the execution time of the relocation plan. We propose planning algorithms that aim to minimize the number of objects to be relocated. This objective is appropriate for the object retrieval task because grasping and releasing objects often dominate the total running time. In addition to the algorithm working in fully known and static environments, we propose algorithms that can deal with uncertain and dynamic situations incurred by occluded views. The proposed algorithms are shown to be complete and to run in polynomial time. Our methods reduce the total running time significantly compared to a baseline method (e.g., a 25.1% reduction in a known static environment with 10 objects).
[ { "created": "Tue, 9 Jul 2019 03:18:51 GMT", "version": "v1" } ]
2022-02-09
[ [ "Nam", "Changjoo", "" ], [ "Lee", "Jinhwi", "" ], [ "Cho", "Younggil", "" ], [ "Lee", "Jeongho", "" ], [ "Kim", "Dong Hwan", "" ], [ "Kim", "ChangHwan", "" ] ]
This paper presents planning algorithms for a robotic manipulator with a fixed base to grasp a target object in cluttered environments. We consider a configuration of objects in a confined space with such high density that no collision-free path to the target exists. The robot must relocate some objects to retrieve the target while avoiding collisions. For fast completion of the retrieval task, the robot needs to compute a plan optimizing an appropriate objective value directly related to the execution time of the relocation plan. We propose planning algorithms that aim to minimize the number of objects to be relocated. This objective is appropriate for the object retrieval task because grasping and releasing objects often dominate the total running time. In addition to the algorithm working in fully known and static environments, we propose algorithms that can deal with uncertain and dynamic situations incurred by occluded views. The proposed algorithms are shown to be complete and to run in polynomial time. Our methods reduce the total running time significantly compared to a baseline method (e.g., a 25.1% reduction in a known static environment with 10 objects).
2002.05651
Peter Henderson
Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, Joelle Pineau
Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning
Published in JMLR: https://jmlr.org/papers/v21/20-312.html
null
null
null
cs.CY cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Accurate reporting of energy and carbon usage is essential for understanding the potential climate impacts of machine learning research. We introduce a framework that makes this easier by providing a simple interface for tracking realtime energy consumption and carbon emissions, as well as generating standardized online appendices. Utilizing this framework, we create a leaderboard for energy efficient reinforcement learning algorithms to incentivize responsible research in this area as an example for other areas of machine learning. Finally, based on case studies using our framework, we propose strategies for mitigation of carbon emissions and reduction of energy consumption. By making accounting easier, we hope to further the sustainable development of machine learning experiments and spur more research into energy efficient algorithms.
[ { "created": "Fri, 31 Jan 2020 05:12:59 GMT", "version": "v1" }, { "created": "Tue, 29 Nov 2022 08:53:47 GMT", "version": "v2" } ]
2022-11-30
[ [ "Henderson", "Peter", "" ], [ "Hu", "Jieru", "" ], [ "Romoff", "Joshua", "" ], [ "Brunskill", "Emma", "" ], [ "Jurafsky", "Dan", "" ], [ "Pineau", "Joelle", "" ] ]
Accurate reporting of energy and carbon usage is essential for understanding the potential climate impacts of machine learning research. We introduce a framework that makes this easier by providing a simple interface for tracking realtime energy consumption and carbon emissions, as well as generating standardized online appendices. Utilizing this framework, we create a leaderboard for energy efficient reinforcement learning algorithms to incentivize responsible research in this area as an example for other areas of machine learning. Finally, based on case studies using our framework, we propose strategies for mitigation of carbon emissions and reduction of energy consumption. By making accounting easier, we hope to further the sustainable development of machine learning experiments and spur more research into energy efficient algorithms.
2307.08974
Giovanni Cacciamani
Giovanni E. Cacciamani, Michael B. Eppler, Conner Ganjavi, Asli Pekan, Brett Biedermann, Gary S. Collins, Inderbir S. Gill
Development of the ChatGPT, Generative Artificial Intelligence and Natural Large Language Models for Accountable Reporting and Use (CANGARU) Guidelines
20 pages, 1 figure, protocol
null
null
up-23-00306
cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
The swift progress and ubiquitous adoption of Generative AI (GAI), Generative Pre-trained Transformers (GPTs), and large language models (LLMs) like ChatGPT, have spurred queries about their ethical application, use, and disclosure in scholarly research and scientific productions. A few publishers and journals have recently created their own sets of rules; however, the absence of a unified approach may lead to a 'Babel Tower Effect,' potentially resulting in confusion rather than desired standardization. In response to this, we present the ChatGPT, Generative Artificial Intelligence, and Natural Large Language Models for Accountable Reporting and Use Guidelines (CANGARU) initiative, with the aim of fostering a cross-disciplinary global inclusive consensus on the ethical use, disclosure, and proper reporting of GAI/GPT/LLM technologies in academia. The present protocol consists of four distinct parts: a) an ongoing systematic review of GAI/GPT/LLM applications to understand the linked ideas, findings, and reporting standards in scholarly research, and to formulate guidelines for its use and disclosure, b) a bibliometric analysis of existing author guidelines in journals that mention GAI/GPT/LLM, with the goal of evaluating existing guidelines, analyzing the disparity in their recommendations, and identifying common rules that can be brought into the Delphi consensus process, c) a Delphi survey to establish agreement on the items for the guidelines, ensuring principled GAI/GPT/LLM use, disclosure, and reporting in academia, and d) the subsequent development and dissemination of the finalized guidelines and their supplementary explanation and elaboration documents.
[ { "created": "Tue, 18 Jul 2023 05:12:52 GMT", "version": "v1" } ]
2023-07-19
[ [ "Cacciamani", "Giovanni E.", "" ], [ "Eppler", "Michael B.", "" ], [ "Ganjavi", "Conner", "" ], [ "Pekan", "Asli", "" ], [ "Biedermann", "Brett", "" ], [ "Collins", "Gary S.", "" ], [ "Gill", "Inderbir S.", "" ] ]
The swift progress and ubiquitous adoption of Generative AI (GAI), Generative Pre-trained Transformers (GPTs), and large language models (LLMs) like ChatGPT, have spurred queries about their ethical application, use, and disclosure in scholarly research and scientific productions. A few publishers and journals have recently created their own sets of rules; however, the absence of a unified approach may lead to a 'Babel Tower Effect,' potentially resulting in confusion rather than desired standardization. In response to this, we present the ChatGPT, Generative Artificial Intelligence, and Natural Large Language Models for Accountable Reporting and Use Guidelines (CANGARU) initiative, with the aim of fostering a cross-disciplinary global inclusive consensus on the ethical use, disclosure, and proper reporting of GAI/GPT/LLM technologies in academia. The present protocol consists of four distinct parts: a) an ongoing systematic review of GAI/GPT/LLM applications to understand the linked ideas, findings, and reporting standards in scholarly research, and to formulate guidelines for its use and disclosure, b) a bibliometric analysis of existing author guidelines in journals that mention GAI/GPT/LLM, with the goal of evaluating existing guidelines, analyzing the disparity in their recommendations, and identifying common rules that can be brought into the Delphi consensus process, c) a Delphi survey to establish agreement on the items for the guidelines, ensuring principled GAI/GPT/LLM use, disclosure, and reporting in academia, and d) the subsequent development and dissemination of the finalized guidelines and their supplementary explanation and elaboration documents.
2111.09872
Dimitrios Miltiadou
Dimitrios Miltiadou (1), Stamatis Pitsios (1), Dimitrios Spyropoulos (1), Dimitrios Alexandrou (1), Fenareti Lampathaki (2), Domenico Messina (3), Konstantinos Perakis (1) ((1) UBITECH, (2) Suite5, (3) ENGINEERING Ingegneria Informatica S.p.A.)
A big data intelligence marketplace and secure analytics experimentation platform for the aviation industry
null
LNICST (2021), volume 371, pp 48-62
10.1007/978-3-030-72802-1_4
null
cs.CR cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
The unprecedented volume, diversity and richness of aviation data that can be acquired, generated, stored, and managed provide unique capabilities for the aviation-related industries and hold value that remains to be unlocked with the adoption of innovative Big Data Analytics technologies. Despite the large efforts and investments in research and innovation, Big Data technologies introduce a number of challenges to their adopters. Besides the effective storage of and access to the underlying big data, efficient data integration and data interoperability should be considered, while at the same time multiple data sources should be effectively combined by performing data exchange and data sharing between the different stakeholders. However, this reveals additional challenges for the crucial preservation of the information security of the collected data, the trusted and secure data exchange and data sharing, as well as the robust data access control. The current paper introduces the ICARUS big data-enabled platform, a multi-sided platform that offers a novel aviation data and intelligence marketplace accompanied by a trusted and secure analytics workspace. It holistically handles the complete big data lifecycle, from data collection, data curation and data exploration to the data integration and data analysis of data originating from heterogeneous data sources with different velocity, variety and volume, in a trusted and secure manner.
[ { "created": "Thu, 18 Nov 2021 18:51:40 GMT", "version": "v1" } ]
2021-11-19
[ [ "Miltiadou", "Dimitrios", "" ], [ "Pitsios", "Stamatis", "" ], [ "Spyropoulos", "Dimitrios", "" ], [ "Alexandrou", "Dimitrios", "" ], [ "Lampathaki", "Fenareti", "" ], [ "Messina", "Domenico", "" ], [ "Perakis", "Konstantinos", "" ] ]
The unprecedented volume, diversity and richness of aviation data that can be acquired, generated, stored, and managed provide unique capabilities for the aviation-related industries and hold value that remains to be unlocked with the adoption of innovative Big Data Analytics technologies. Despite the large efforts and investments in research and innovation, Big Data technologies introduce a number of challenges to their adopters. Besides the effective storage of and access to the underlying big data, efficient data integration and data interoperability should be considered, while at the same time multiple data sources should be effectively combined by performing data exchange and data sharing between the different stakeholders. However, this reveals additional challenges for the crucial preservation of the information security of the collected data, the trusted and secure data exchange and data sharing, as well as the robust data access control. The current paper introduces the ICARUS big data-enabled platform, a multi-sided platform that offers a novel aviation data and intelligence marketplace accompanied by a trusted and secure analytics workspace. It holistically handles the complete big data lifecycle, from data collection, data curation and data exploration to the data integration and data analysis of data originating from heterogeneous data sources with different velocity, variety and volume, in a trusted and secure manner.
2312.14980
Changhoon Lee
Jun Park and Changhoon Lee
TPTNet: A Data-Driven Temperature Prediction Model Based on Turbulent Potential Temperature
null
null
null
null
cs.LG physics.ao-ph physics.flu-dyn
http://creativecommons.org/licenses/by/4.0/
A data-driven model for predicting the surface temperature using neural networks was proposed to alleviate the computational burden of numerical weather prediction (NWP). Our model, named TPTNet, uses only the 2m temperature measured at the weather stations of the South Korean Peninsula as input to predict the local temperature at finite forecast hours. The turbulent fluctuation component of the temperature was extracted from the station measurements by separating the climatology component accounting for the yearly and daily variations. The effect of station altitude was then compensated for by introducing a potential temperature. The resulting turbulent potential temperature data at irregularly distributed stations were used as input for predicting the turbulent potential temperature at forecast hours through three trained networks based on a convolutional neural network (CNN), a Swin Transformer, and a graph neural network (GNN). The prediction performance of our network was compared with that of persistence and NWP, confirming that our model outperformed NWP for up to 12 forecast hours.
[ { "created": "Fri, 22 Dec 2023 01:02:27 GMT", "version": "v1" } ]
2023-12-27
[ [ "Park", "Jun", "" ], [ "Lee", "Changhoon", "" ] ]
A data-driven model for predicting the surface temperature using neural networks was proposed to alleviate the computational burden of numerical weather prediction (NWP). Our model, named TPTNet, uses only the 2m temperature measured at the weather stations of the South Korean Peninsula as input to predict the local temperature at finite forecast hours. The turbulent fluctuation component of the temperature was extracted from the station measurements by separating the climatology component accounting for the yearly and daily variations. The effect of station altitude was then compensated for by introducing a potential temperature. The resulting turbulent potential temperature data at irregularly distributed stations were used as input for predicting the turbulent potential temperature at forecast hours through three trained networks based on a convolutional neural network (CNN), a Swin Transformer, and a graph neural network (GNN). The prediction performance of our network was compared with that of persistence and NWP, confirming that our model outperformed NWP for up to 12 forecast hours.
2103.16553
Antoine Miech
Antoine Miech, Jean-Baptiste Alayrac, Ivan Laptev, Josef Sivic, Andrew Zisserman
Thinking Fast and Slow: Efficient Text-to-Visual Retrieval with Transformers
Accepted to CVPR 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our objective is language-based search of large-scale image and video datasets. For this task, the approach that consists of independently mapping text and vision to a joint embedding space, a.k.a. dual encoders, is attractive as retrieval scales and is efficient for billions of images using approximate nearest neighbour search. An alternative approach of using vision-text transformers with cross-attention gives considerable improvements in accuracy over the joint embeddings, but is often inapplicable in practice for large-scale retrieval given the cost of the cross-attention mechanisms required for each sample at test time. This work combines the best of both worlds. We make the following three contributions. First, we equip transformer-based models with a new fine-grained cross-attention architecture, providing significant improvements in retrieval accuracy whilst preserving scalability. Second, we introduce a generic approach for combining a Fast dual encoder model with our Slow but accurate transformer-based model via distillation and re-ranking. Finally, we validate our approach on the Flickr30K image dataset where we show an increase in inference speed by several orders of magnitude while having results competitive to the state of the art. We also extend our method to the video domain, improving the state of the art on the VATEX dataset.
[ { "created": "Tue, 30 Mar 2021 17:57:08 GMT", "version": "v1" } ]
2021-03-31
[ [ "Miech", "Antoine", "" ], [ "Alayrac", "Jean-Baptiste", "" ], [ "Laptev", "Ivan", "" ], [ "Sivic", "Josef", "" ], [ "Zisserman", "Andrew", "" ] ]
Our objective is language-based search of large-scale image and video datasets. For this task, the approach that consists of independently mapping text and vision to a joint embedding space, a.k.a. dual encoders, is attractive as retrieval scales and is efficient for billions of images using approximate nearest neighbour search. An alternative approach of using vision-text transformers with cross-attention gives considerable improvements in accuracy over the joint embeddings, but is often inapplicable in practice for large-scale retrieval given the cost of the cross-attention mechanisms required for each sample at test time. This work combines the best of both worlds. We make the following three contributions. First, we equip transformer-based models with a new fine-grained cross-attention architecture, providing significant improvements in retrieval accuracy whilst preserving scalability. Second, we introduce a generic approach for combining a Fast dual encoder model with our Slow but accurate transformer-based model via distillation and re-ranking. Finally, we validate our approach on the Flickr30K image dataset where we show an increase in inference speed by several orders of magnitude while having results competitive to the state of the art. We also extend our method to the video domain, improving the state of the art on the VATEX dataset.
2204.05490
Le Yu
Le Yu, Zihang Liu, Leilei Sun, Bowen Du, Chuanren Liu, Weifeng Lv
Continuous-Time User Preference Modelling for Temporal Sets Prediction
Accepted by the TKDE journal
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a sequence of sets, where each set has a timestamp and contains an arbitrary number of elements, temporal sets prediction aims to predict the elements in the subsequent set. Previous studies for temporal sets prediction mainly focus on the modelling of elements and implicitly represent each user's preference based on his/her interacted elements. However, user preferences are often continuously evolving and the evolutionary trend cannot be fully captured with the indirect learning paradigm of user preferences. To this end, we propose a continuous-time user preference modelling framework for temporal sets prediction, which explicitly models the evolving preference of each user by maintaining a memory bank to store the states of all the users and elements. Specifically, we first construct a universal sequence by arranging all the user-set interactions in a non-descending temporal order, and then chronologically learn from each user-set interaction. For each interaction, we continuously update the memories of the related user and elements based on their currently encoded messages and past memories. Moreover, we present a personalized user behavior learning module to discover user-specific characteristics based on each user's historical sequence, which aggregates the previously interacted elements from dual perspectives according to the user and elements. Finally, we develop a set-batch algorithm to improve the model efficiency, which can create time-consistent batches in advance and achieve 3.5x and 3.0x speedups in the training and evaluation process on average. Experiments on four real-world datasets demonstrate the superiority of our approach over state-of-the-art methods under both transductive and inductive settings. The good interpretability of our method is also shown.
[ { "created": "Tue, 12 Apr 2022 02:49:27 GMT", "version": "v1" }, { "created": "Wed, 13 Apr 2022 03:06:27 GMT", "version": "v2" }, { "created": "Thu, 14 Apr 2022 09:53:13 GMT", "version": "v3" }, { "created": "Fri, 17 Jun 2022 13:03:46 GMT", "version": "v4" }, { "created": "Wed, 13 Jul 2022 05:49:40 GMT", "version": "v5" }, { "created": "Sun, 31 Jul 2022 05:06:34 GMT", "version": "v6" }, { "created": "Mon, 28 Aug 2023 05:03:48 GMT", "version": "v7" } ]
2023-08-29
[ [ "Yu", "Le", "" ], [ "Liu", "Zihang", "" ], [ "Sun", "Leilei", "" ], [ "Du", "Bowen", "" ], [ "Liu", "Chuanren", "" ], [ "Lv", "Weifeng", "" ] ]
Given a sequence of sets, where each set has a timestamp and contains an arbitrary number of elements, temporal sets prediction aims to predict the elements in the subsequent set. Previous studies for temporal sets prediction mainly focus on the modelling of elements and implicitly represent each user's preference based on his/her interacted elements. However, user preferences are often continuously evolving and the evolutionary trend cannot be fully captured with the indirect learning paradigm of user preferences. To this end, we propose a continuous-time user preference modelling framework for temporal sets prediction, which explicitly models the evolving preference of each user by maintaining a memory bank to store the states of all the users and elements. Specifically, we first construct a universal sequence by arranging all the user-set interactions in a non-descending temporal order, and then chronologically learn from each user-set interaction. For each interaction, we continuously update the memories of the related user and elements based on their currently encoded messages and past memories. Moreover, we present a personalized user behavior learning module to discover user-specific characteristics based on each user's historical sequence, which aggregates the previously interacted elements from dual perspectives according to the user and elements. Finally, we develop a set-batch algorithm to improve the model efficiency, which can create time-consistent batches in advance and achieve 3.5x and 3.0x speedups in the training and evaluation process on average. Experiments on four real-world datasets demonstrate the superiority of our approach over state-of-the-art methods under both transductive and inductive settings. The good interpretability of our method is also shown.
2310.08876
Maximilian Strobel
Maximilian Strobel, Stephan Schoenfeldt, Jonas Daugalas
Gesture Recognition for FMCW Radar on the Edge
4 pages, 5 figures, accepted in 2024 IEEE Topical Conference on Wireless Sensors and Sensor Networks (WiSNeT)
2024 IEEE Topical Conference on Wireless Sensors and Sensor Networks (WiSNeT), San Antonio, TX, USA, 2024, pp. 45-48
10.1109/WiSNeT59910.2024.10438579
null
cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a lightweight gesture recognition system based on 60 GHz frequency modulated continuous wave (FMCW) radar. We show that gestures can be characterized efficiently by a set of five features, and propose a slim radar processing algorithm to extract these features. In contrast to previous approaches, we avoid heavy 2D processing, i.e. range-Doppler imaging, and instead perform an early target detection - this allows us to port the system to fully embedded platforms with tight constraints on memory, compute and power consumption. A recurrent neural network (RNN) based architecture exploits these features to jointly detect and classify five different gestures. The proposed system recognizes gestures with an F1 score of 98.4% on our hold-out test dataset; it runs on an Arm Cortex-M4 microcontroller, requiring less than 280 kB of flash memory and 120 kB of RAM, and consuming 75 mW of power.
[ { "created": "Fri, 13 Oct 2023 06:03:07 GMT", "version": "v1" }, { "created": "Fri, 26 Jan 2024 04:17:09 GMT", "version": "v2" } ]
2024-02-29
[ [ "Strobel", "Maximilian", "" ], [ "Schoenfeldt", "Stephan", "" ], [ "Daugalas", "Jonas", "" ] ]
This paper introduces a lightweight gesture recognition system based on 60 GHz frequency modulated continuous wave (FMCW) radar. We show that gestures can be characterized efficiently by a set of five features, and propose a slim radar processing algorithm to extract these features. In contrast to previous approaches, we avoid heavy 2D processing, i.e. range-Doppler imaging, and instead perform an early target detection - this allows us to port the system to fully embedded platforms with tight constraints on memory, compute and power consumption. A recurrent neural network (RNN) based architecture exploits these features to jointly detect and classify five different gestures. The proposed system recognizes gestures with an F1 score of 98.4% on our hold-out test dataset; it runs on an Arm Cortex-M4 microcontroller, requiring less than 280 kB of flash memory and 120 kB of RAM, and consuming 75 mW of power.
1702.02130
Delia Fano Yela
Delia Fano Yela, Sebastian Ewert, Derry FitzGerald, Mark Sandler
On the Importance of Temporal Context in Proximity Kernels: A Vocal Separation Case Study
2017 AES International Conference on Semantic Audio
Proceedings of the AES International Conference on Semantic Audio, Erlangen, Germany, pp. 13-20, 2017
null
null
cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Musical source separation methods exploit source-specific spectral characteristics to facilitate the decomposition process. Kernel Additive Modelling (KAM) models a source by applying robust statistics to time-frequency bins as specified by a source-specific kernel, a function defining similarity between bins. Kernels in existing approaches are typically defined using metrics between single time frames. In the presence of noise and other sound sources, however, information from a single frame turns out to be unreliable, and often incorrect frames are selected as similar. In this paper, we incorporate a temporal context into the kernel to provide additional information stabilizing the similarity search. Evaluated in the context of vocal separation, our simple extension led to a considerable improvement in separation quality compared to previous kernels.
[ { "created": "Tue, 7 Feb 2017 18:41:31 GMT", "version": "v1" }, { "created": "Tue, 11 Apr 2017 12:23:37 GMT", "version": "v2" } ]
2017-11-01
[ [ "Yela", "Delia Fano", "" ], [ "Ewert", "Sebastian", "" ], [ "FitzGerald", "Derry", "" ], [ "Sandler", "Mark", "" ] ]
Musical source separation methods exploit source-specific spectral characteristics to facilitate the decomposition process. Kernel Additive Modelling (KAM) models a source by applying robust statistics to time-frequency bins as specified by a source-specific kernel, a function defining similarity between bins. Kernels in existing approaches are typically defined using metrics between single time frames. In the presence of noise and other sound sources, however, information from a single frame turns out to be unreliable, and often incorrect frames are selected as similar. In this paper, we incorporate a temporal context into the kernel to provide additional information stabilizing the similarity search. Evaluated in the context of vocal separation, our simple extension led to a considerable improvement in separation quality compared to previous kernels.
1111.6084
Gianvito Summa
Angela Bonifati, Gianvito Summa, Esther Pacitti, Fady Draidi
Semantic Query Reformulation in Social PDMS
29 pages, 8 figures, query rewriting in PDMS
null
null
null
cs.DB cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider social peer-to-peer data management systems (PDMS), where each peer maintains both semantic mappings between its schema and some acquaintances, and social links with peer friends. In this context, reformulating a query from a peer's schema into other peers' schemas is a hard problem, as it may generate as many rewritings as the set of mappings from that peer to the outside and so on transitively, eventually traversing the entire network. However, not all the obtained rewritings are relevant to a given query. In this paper, we address this problem by inspecting semantic mappings and social links to find only relevant rewritings. We propose a new notion of 'relevance' of a query with respect to a mapping, and, based on this notion, a new semantic query reformulation approach for social PDMS, which achieves great accuracy and flexibility. To find the most interesting mappings rapidly, we combine several techniques: (i) social links are expressed as FOAF (Friend of a Friend) links to characterize peer friendship, and compact mapping summaries are used to obtain mapping descriptions; (ii) local semantic views are special views that contain information about external mappings; and (iii) gossiping techniques improve the search for relevant mappings. Our experimental evaluation, based on a prototype on top of PeerSim and a simulated network, demonstrates that our solution yields greater recall compared to traditional query translation approaches proposed in the literature.
[ { "created": "Fri, 25 Nov 2011 19:02:10 GMT", "version": "v1" } ]
2011-11-28
[ [ "Bonifati", "Angela", "" ], [ "Summa", "Gianvito", "" ], [ "Pacitti", "Esther", "" ], [ "Draidi", "Fady", "" ] ]
We consider social peer-to-peer data management systems (PDMS), where each peer maintains both semantic mappings between its schema and some acquaintances, and social links with peer friends. In this context, reformulating a query from a peer's schema into other peers' schemas is a hard problem, as it may generate as many rewritings as the set of mappings from that peer to the outside and so on transitively, eventually traversing the entire network. However, not all the obtained rewritings are relevant to a given query. In this paper, we address this problem by inspecting semantic mappings and social links to find only relevant rewritings. We propose a new notion of 'relevance' of a query with respect to a mapping, and, based on this notion, a new semantic query reformulation approach for social PDMS, which achieves great accuracy and flexibility. To find the most interesting mappings rapidly, we combine several techniques: (i) social links are expressed as FOAF (Friend of a Friend) links to characterize peer friendship, and compact mapping summaries are used to obtain mapping descriptions; (ii) local semantic views are special views that contain information about external mappings; and (iii) gossiping techniques improve the search for relevant mappings. Our experimental evaluation, based on a prototype on top of PeerSim and a simulated network, demonstrates that our solution yields greater recall compared to traditional query translation approaches proposed in the literature.
2006.13022
Feng Liu
Li Zhong, Zhen Fang, Feng Liu, Bo Yuan, Guangquan Zhang, Jie Lu
Bridging the Theoretical Bound and Deep Algorithms for Open Set Domain Adaptation
null
null
null
null
cs.LG cs.CV stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
In unsupervised open set domain adaptation (UOSDA), the target domain contains unknown classes that are not observed in the source domain. Researchers in this area aim to train a classifier to accurately: 1) recognize unknown target data (data with unknown classes) and 2) classify other target data. To achieve this aim, a previous study has proven an upper bound of the target-domain risk, and the open set difference, as an important term in the upper bound, is used to measure the risk on unknown target data. By minimizing the upper bound, a shallow classifier can be trained to achieve the aim. However, if the classifier is very flexible (e.g., deep neural networks (DNNs)), the open set difference will converge to a negative value when minimizing the upper bound, which causes an issue where most target data are recognized as unknown data. To address this issue, we propose a new upper bound of target-domain risk for UOSDA, which includes four terms: source-domain risk, $\epsilon$-open set difference ($\Delta_\epsilon$), a distributional discrepancy between domains, and a constant. Compared to the open set difference, $\Delta_\epsilon$ is more robust against the issue when it is being minimized, and thus we are able to use very flexible classifiers (i.e., DNNs). Then, we propose a new principle-guided deep UOSDA method that trains DNNs via minimizing the new upper bound. Specifically, source-domain risk and $\Delta_\epsilon$ are minimized by gradient descent, and the distributional discrepancy is minimized via a novel open-set conditional adversarial training strategy. Finally, compared to existing shallow and deep UOSDA methods, our method shows the state-of-the-art performance on several benchmark datasets, including digit recognition (MNIST, SVHN, USPS), object recognition (Office-31, Office-Home), and face recognition (PIE).
[ { "created": "Tue, 23 Jun 2020 14:01:06 GMT", "version": "v1" } ]
2020-06-24
[ [ "Zhong", "Li", "" ], [ "Fang", "Zhen", "" ], [ "Liu", "Feng", "" ], [ "Yuan", "Bo", "" ], [ "Zhang", "Guangquan", "" ], [ "Lu", "Jie", "" ] ]
In unsupervised open set domain adaptation (UOSDA), the target domain contains unknown classes that are not observed in the source domain. Researchers in this area aim to train a classifier to accurately: 1) recognize unknown target data (data with unknown classes) and 2) classify other target data. To achieve this aim, a previous study has proven an upper bound of the target-domain risk, and the open set difference, as an important term in the upper bound, is used to measure the risk on unknown target data. By minimizing the upper bound, a shallow classifier can be trained to achieve the aim. However, if the classifier is very flexible (e.g., deep neural networks (DNNs)), the open set difference will converge to a negative value when minimizing the upper bound, which causes an issue where most target data are recognized as unknown data. To address this issue, we propose a new upper bound of target-domain risk for UOSDA, which includes four terms: source-domain risk, $\epsilon$-open set difference ($\Delta_\epsilon$), a distributional discrepancy between domains, and a constant. Compared to the open set difference, $\Delta_\epsilon$ is more robust against the issue when it is being minimized, and thus we are able to use very flexible classifiers (i.e., DNNs). Then, we propose a new principle-guided deep UOSDA method that trains DNNs via minimizing the new upper bound. Specifically, source-domain risk and $\Delta_\epsilon$ are minimized by gradient descent, and the distributional discrepancy is minimized via a novel open-set conditional adversarial training strategy. Finally, compared to existing shallow and deep UOSDA methods, our method shows the state-of-the-art performance on several benchmark datasets, including digit recognition (MNIST, SVHN, USPS), object recognition (Office-31, Office-Home), and face recognition (PIE).
2108.13465
Bo Li
Bo Li, Xinyang Jiang, Donglin Bai, Yuge Zhang, Ningxin Zheng, Xuanyi Dong, Lu Liu, Yuqing Yang, Dongsheng Li
Full-Cycle Energy Consumption Benchmark for Low-Carbon Computer Vision
ArXiv Preprint
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
The energy consumption of deep learning models is increasing at a breathtaking rate, which raises concerns due to potential negative effects on carbon neutrality in the context of global warming and climate change. With the progress of efficient deep learning techniques, e.g., model compression, researchers can obtain efficient models with fewer parameters and smaller latency. However, most of the existing efficient deep learning methods do not explicitly consider energy consumption as a key performance indicator. Furthermore, existing methods mostly focus on the inference costs of the resulting efficient models, but neglect the notable energy consumption throughout the entire life cycle of the algorithm. In this paper, we present the first large-scale energy consumption benchmark for efficient computer vision models, where a new metric is proposed to explicitly evaluate the full-cycle energy consumption under different levels of model usage intensity. The benchmark can provide insights for low carbon emissions when selecting efficient deep learning algorithms in different model usage scenarios.
[ { "created": "Mon, 30 Aug 2021 18:22:36 GMT", "version": "v1" }, { "created": "Tue, 12 Oct 2021 02:23:42 GMT", "version": "v2" } ]
2021-10-13
[ [ "Li", "Bo", "" ], [ "Jiang", "Xinyang", "" ], [ "Bai", "Donglin", "" ], [ "Zhang", "Yuge", "" ], [ "Zheng", "Ningxin", "" ], [ "Dong", "Xuanyi", "" ], [ "Liu", "Lu", "" ], [ "Yang", "Yuqing", "" ], [ "Li", "Dongsheng", "" ] ]
The energy consumption of deep learning models is increasing at a breathtaking rate, which raises concerns due to potential negative effects on carbon neutrality in the context of global warming and climate change. With the progress of efficient deep learning techniques, e.g., model compression, researchers can obtain efficient models with fewer parameters and smaller latency. However, most of the existing efficient deep learning methods do not explicitly consider energy consumption as a key performance indicator. Furthermore, existing methods mostly focus on the inference costs of the resulting efficient models, but neglect the notable energy consumption throughout the entire life cycle of the algorithm. In this paper, we present the first large-scale energy consumption benchmark for efficient computer vision models, where a new metric is proposed to explicitly evaluate the full-cycle energy consumption under different levels of model usage intensity. The benchmark can provide insights for low carbon emissions when selecting efficient deep learning algorithms in different model usage scenarios.
2301.03944
Yunbo Lyu
Yunbo Lyu, Thanh Le-Cong, Hong Jin Kang, Ratnadira Widyasari, Zhipeng Zhao, Xuan-Bach D. Le, Ming Li, David Lo
CHRONOS: Time-Aware Zero-Shot Identification of Libraries from Vulnerability Reports
Accepted to the Technical Track of ICSE 2023
null
null
null
cs.SE cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tools that alert developers about library vulnerabilities depend on accurate, up-to-date vulnerability databases which are maintained by security researchers. These databases record the libraries related to each vulnerability. However, the vulnerability reports may not explicitly list every library and human analysis is required to determine all the relevant libraries. Human analysis may be slow and expensive, which motivates the need for automated approaches. Researchers and practitioners have proposed to automatically identify libraries from vulnerability reports using extreme multi-label learning (XML). While state-of-the-art XML techniques showed promising performance, their experiment settings do not practically fit what happens in reality. Previous studies randomly split the vulnerability reports data for training and testing their models without considering the chronological order of the reports. This may unduly train the models on chronologically newer reports while testing the models on chronologically older ones. However, in practice, one often receives chronologically new reports, which may be related to previously unseen libraries. Under this practical setting, we observe that the performance of current XML techniques declines substantially, e.g., F1 decreased from 0.7 to 0.28 under experiments without and with consideration of chronological order of vulnerability reports. We propose a practical library identification approach, namely CHRONOS, based on zero-shot learning. The novelty of CHRONOS is three-fold. First, CHRONOS fits into the practical pipeline by considering the chronological order of vulnerability reports. Second, CHRONOS enriches the data of the vulnerability descriptions and labels using a carefully designed data enhancement step. Third, CHRONOS exploits the temporal ordering of the vulnerability reports using a cache to prioritize prediction of...
[ { "created": "Tue, 10 Jan 2023 12:57:10 GMT", "version": "v1" }, { "created": "Sat, 4 Feb 2023 12:48:51 GMT", "version": "v2" }, { "created": "Tue, 14 Mar 2023 07:29:49 GMT", "version": "v3" }, { "created": "Sat, 29 Jul 2023 04:33:44 GMT", "version": "v4" } ]
2023-08-01
[ [ "Lyu", "Yunbo", "" ], [ "Le-Cong", "Thanh", "" ], [ "Kang", "Hong Jin", "" ], [ "Widyasari", "Ratnadira", "" ], [ "Zhao", "Zhipeng", "" ], [ "Le", "Xuan-Bach D.", "" ], [ "Li", "Ming", "" ], [ "Lo", "David", "" ] ]
Tools that alert developers about library vulnerabilities depend on accurate, up-to-date vulnerability databases which are maintained by security researchers. These databases record the libraries related to each vulnerability. However, the vulnerability reports may not explicitly list every library, and human analysis is required to determine all the relevant libraries. Human analysis may be slow and expensive, which motivates the need for automated approaches. Researchers and practitioners have proposed to automatically identify libraries from vulnerability reports using extreme multi-label learning (XML). While state-of-the-art XML techniques show promising performance, their experimental settings do not reflect how such tools are used in practice. Previous studies randomly split the vulnerability report data for training and testing their models without considering the chronological order of the reports. This may unduly train the models on chronologically newer reports while testing them on chronologically older ones. In practice, however, one often receives chronologically new reports, which may be related to previously unseen libraries. Under this practical setting, we observe that the performance of current XML techniques declines substantially, e.g., F1 decreases from 0.7 to 0.28 when the random split is replaced by a chronological one. We propose a practical library identification approach, namely CHRONOS, based on zero-shot learning. The novelty of CHRONOS is three-fold. First, CHRONOS fits into the practical pipeline by considering the chronological order of vulnerability reports. Second, CHRONOS enriches the data of the vulnerability descriptions and labels using a carefully designed data enhancement step. Third, CHRONOS exploits the temporal ordering of the vulnerability reports using a cache to prioritize prediction of...
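The practical evaluation setting CHRONOS adopts, training only on reports that predate the test reports, can be sketched as follows (the report tuples and library names below are hypothetical illustrations, not the paper's actual data schema):

```python
from datetime import date

# Hypothetical vulnerability reports: (publication date, description, affected libraries).
reports = [
    (date(2019, 3, 1), "XXE in XML parser", ["libfoo"]),
    (date(2020, 6, 5), "Deserialization flaw", ["libbar"]),
    (date(2021, 1, 9), "Path traversal", ["libfoo", "libbaz"]),
    (date(2022, 8, 2), "Prototype pollution", ["libqux"]),
]

def chronological_split(reports, train_ratio=0.75):
    """Train on older reports, test on newer ones. Unlike a random split,
    no chronologically later report can leak into the training set."""
    ordered = sorted(reports, key=lambda r: r[0])
    cut = int(len(ordered) * train_ratio)
    return ordered[:cut], ordered[cut:]

train, test = chronological_split(reports)
# Every training report predates every test report.
assert max(r[0] for r in train) <= min(r[0] for r in test)
```

Under a chronological split, test reports may mention libraries never seen during training, which is what motivates the zero-shot formulation.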
2302.10512
Pengxiang Jin
Shenglin Zhang, Pengxiang Jin, Zihan Lin, Yongqian Sun, Bicheng Zhang, Sibo Xia, Zhengdan Li, Zhenyu Zhong, Minghua Ma, Wa Jin, Dai Zhang, Zhenyu Zhu, Dan Pei
Robust Failure Diagnosis of Microservice System through Multimodal Data
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Automatic failure diagnosis is crucial for large microservice systems. Currently, most failure diagnosis methods rely solely on single-modal data (i.e., using either metrics, logs, or traces). In this study, we conduct an empirical study using real-world failure cases to show that combining these sources of data (multimodal data) leads to a more accurate diagnosis. However, effectively representing these data and addressing imbalanced failures remain challenging. To tackle these issues, we propose DiagFusion, a robust failure diagnosis approach that uses multimodal data. It leverages embedding techniques and data augmentation to represent the multimodal data of service instances, combines deployment data and traces to build a dependency graph, and uses a graph neural network to localize the root cause instance and determine the failure type. Our evaluations using real-world datasets show that DiagFusion outperforms existing methods in terms of root cause instance localization (improving by 20.9% to 368%) and failure type determination (improving by 11.0% to 169%).
[ { "created": "Tue, 21 Feb 2023 08:28:28 GMT", "version": "v1" }, { "created": "Wed, 31 May 2023 14:53:17 GMT", "version": "v2" } ]
2023-06-01
[ [ "Zhang", "Shenglin", "" ], [ "Jin", "Pengxiang", "" ], [ "Lin", "Zihan", "" ], [ "Sun", "Yongqian", "" ], [ "Zhang", "Bicheng", "" ], [ "Xia", "Sibo", "" ], [ "Li", "Zhengdan", "" ], [ "Zhong", "Zhenyu", "" ], [ "Ma", "Minghua", "" ], [ "Jin", "Wa", "" ], [ "Zhang", "Dai", "" ], [ "Zhu", "Zhenyu", "" ], [ "Pei", "Dan", "" ] ]
Automatic failure diagnosis is crucial for large microservice systems. Currently, most failure diagnosis methods rely solely on single-modal data (i.e., using either metrics, logs, or traces). In this study, we conduct an empirical study using real-world failure cases to show that combining these sources of data (multimodal data) leads to a more accurate diagnosis. However, effectively representing these data and addressing imbalanced failures remain challenging. To tackle these issues, we propose DiagFusion, a robust failure diagnosis approach that uses multimodal data. It leverages embedding techniques and data augmentation to represent the multimodal data of service instances, combines deployment data and traces to build a dependency graph, and uses a graph neural network to localize the root cause instance and determine the failure type. Our evaluations using real-world datasets show that DiagFusion outperforms existing methods in terms of root cause instance localization (improving by 20.9% to 368%) and failure type determination (improving by 11.0% to 169%).
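The DiagFusion pipeline described above (embeddings per service instance, a dependency graph from traces and deployment data, then a graph neural network) can be illustrated with a minimal hand-rolled propagation step. The instance names, vectors, and edges are invented, and a single averaging step stands in for the actual GNN:

```python
import numpy as np

# Hypothetical service instances with unified event embeddings (metrics, logs,
# and traces would each contribute to these vectors in the real pipeline).
embeddings = {
    "frontend": np.array([0.1, 0.9]),
    "cart":     np.array([0.8, 0.2]),
    "payment":  np.array([0.7, 0.3]),
}

# Dependency graph from deployment data and traces: caller -> callees.
edges = {"frontend": ["cart", "payment"], "cart": ["payment"], "payment": []}

def message_pass(embeddings, edges):
    """One GNN-style propagation step: each node averages its own embedding
    with those of its downstream dependencies, so failure evidence flows
    along the dependency graph."""
    out = {}
    for node, vec in embeddings.items():
        neigh = [embeddings[n] for n in edges[node]]
        out[node] = np.mean([vec] + neigh, axis=0) if neigh else vec
    return out

updated = message_pass(embeddings, edges)
```

A classifier over the propagated node representations would then localize the root cause instance and predict the failure type.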
1407.3969
Anna Maria Massone
Giorgio Ricca, Mauro C. Beltrametti, Anna Maria Massone
An iterative approach to Hough transform without re-voting
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many bone shapes in the human skeleton are characterized by profiles that can be associated with equations of algebraic curves. By fixing the parameters in the curve equation, by means of a classical pattern recognition procedure such as the Hough transform, it is then possible to associate an equation with a specific bone profile. However, most skeleton districts are more accurately described by piecewise defined curves. This paper employs an iterative approach to the Hough transform without re-voting to provide an efficient procedure for describing the profile of a bone in the human skeleton as a collection of different but continuously attached curves.
[ { "created": "Tue, 15 Jul 2014 12:56:35 GMT", "version": "v1" } ]
2014-07-16
[ [ "Ricca", "Giorgio", "" ], [ "Beltrametti", "Mauro C.", "" ], [ "Massone", "Anna Maria", "" ] ]
Many bone shapes in the human skeleton are characterized by profiles that can be associated with equations of algebraic curves. By fixing the parameters in the curve equation, by means of a classical pattern recognition procedure such as the Hough transform, it is then possible to associate an equation with a specific bone profile. However, most skeleton districts are more accurately described by piecewise defined curves. This paper employs an iterative approach to the Hough transform without re-voting to provide an efficient procedure for describing the profile of a bone in the human skeleton as a collection of different but continuously attached curves.
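The no-re-voting idea can be sketched for straight lines: vote once, and when a curve is detected, subtract its inlier points' votes from the accumulator instead of re-voting the remaining points. This is a simplified line-only illustration under assumed grid sizes, not the authors' algorithm for general algebraic curves:

```python
import numpy as np

RHO_MAX, RHO_RES = 200.0, 1.0
THETAS = np.linspace(0.0, np.pi, 180, endpoint=False)

def hough_votes(points):
    """Vote each point into a (theta, rho) accumulator, remembering every
    point's bin indices so its votes can later be subtracted."""
    acc = np.zeros((len(THETAS), int(2 * RHO_MAX / RHO_RES)), dtype=int)
    bins = []
    for x, y in points:
        idx = ((x * np.cos(THETAS) + y * np.sin(THETAS) + RHO_MAX) / RHO_RES).astype(int)
        acc[np.arange(len(THETAS)), idx] += 1
        bins.append(idx)
    return acc, bins

def iterative_hough(points, n_curves=2, tol=1.5):
    """Detect n_curves lines: take the accumulator peak, then remove the
    inliers' votes directly from the accumulator (no re-voting pass over
    the remaining points)."""
    acc, bins = hough_votes(points)
    pts = np.asarray(points, dtype=float)
    active = np.ones(len(pts), dtype=bool)
    found = []
    for _ in range(n_curves):
        ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
        theta, rho = THETAS[ti], ri * RHO_RES - RHO_MAX
        found.append((theta, rho))
        # Inliers of the detected line among still-active points.
        dist = np.abs(pts[:, 0] * np.cos(theta) + pts[:, 1] * np.sin(theta) - rho)
        inliers = active & (dist < tol)
        for i in np.where(inliers)[0]:
            acc[np.arange(len(THETAS)), bins[i]] -= 1
        active &= ~inliers
    return found

# Two synthetic segments: the horizontal line y = 10 and the vertical line x = 5.
pts = [(x, 10) for x in range(30)] + [(5, y) for y in range(20)]
lines = iterative_hough(pts)
```

Because each point's bin indices are stored at voting time, removing a detected curve costs one subtraction per inlier rather than a full re-voting pass over all surviving points.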
2112.10727
Li Duan
Li Duan, Lewis Boyd, Gerardo Aragon-Camarasa
Learning Physics Properties of Fabrics and Garments with a Physics Similarity Neural Network
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose to predict the physics parameters of real fabrics and garments by learning their physics similarity to simulated fabrics via a Physics Similarity Network (PhySNet). For this, we estimate wind speeds generated by an electric fan and the area weight to predict the bending stiffness of simulated and real fabrics and garments. We found that PhySNet coupled with a Bayesian optimiser can predict physics parameters and improve the state-of-the-art by 34% for real fabrics and 68% for real garments.
[ { "created": "Mon, 20 Dec 2021 18:19:12 GMT", "version": "v1" } ]
2021-12-21
[ [ "Duan", "Li", "" ], [ "Boyd", "Lewis", "" ], [ "Aragon-Camarasa", "Gerardo", "" ] ]
In this paper, we propose to predict the physics parameters of real fabrics and garments by learning their physics similarity to simulated fabrics via a Physics Similarity Network (PhySNet). For this, we estimate wind speeds generated by an electric fan and the area weight to predict the bending stiffness of simulated and real fabrics and garments. We found that PhySNet coupled with a Bayesian optimiser can predict physics parameters and improve the state-of-the-art by 34% for real fabrics and 68% for real garments.
2108.04751
Jean-Claude Belfiore
Jean-Claude Belfiore, Daniel Bennequin and Xavier Giraud
Logical Information Cells I
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In this study we explore the spontaneous emergence of visible intelligible reasoning in simple artificial networks, and we connect this experimental observation with a notion of semantic information. We start with the reproduction of a DNN model of natural neurons in monkeys, studied by Neromyliotis and Moschovakis in 2017 and 2018, to explain how "motor equivalent neurons", coding only for the action of pointing, are supplemented by other neurons for specifying the actor of the action: the eye E, the hand H, or the eye and the hand together EH. Inner neurons appear that perform a logical work, forming intermediary propositions, for instance E V EH. We then remarked that, when adding a second hidden layer and choosing a symmetric metric for learning, the activities of the neurons become almost quantized and more informative. Using the work of Carnap and Bar-Hillel (1952), we define a measure of the logical value for collections of such cells. The logical score grows with the depth of the layer, i.e. the information on the output decision increases, which confirms a kind of bottleneck principle. We then study slightly more complex tasks, a priori involving predicate logic, and compare the logic with the measured weights. This shows, for groups of neurons, a neat correlation between the logical score and the size of the weights, and exhibits a form of sparsity between the layers. The most spectacular result concerns the triples which can conclude under all conditions: when applying their weight matrices to their logical matrix, we recover the classification. This shows that the weights precisely perform the proofs.
[ { "created": "Tue, 10 Aug 2021 15:31:26 GMT", "version": "v1" } ]
2021-08-11
[ [ "Belfiore", "Jean-Claude", "" ], [ "Bennequin", "Daniel", "" ], [ "Giraud", "Xavier", "" ] ]
In this study we explore the spontaneous emergence of visible intelligible reasoning in simple artificial networks, and we connect this experimental observation with a notion of semantic information. We start with the reproduction of a DNN model of natural neurons in monkeys, studied by Neromyliotis and Moschovakis in 2017 and 2018, to explain how "motor equivalent neurons", coding only for the action of pointing, are supplemented by other neurons for specifying the actor of the action: the eye E, the hand H, or the eye and the hand together EH. Inner neurons appear that perform a logical work, forming intermediary propositions, for instance E V EH. We then remarked that, when adding a second hidden layer and choosing a symmetric metric for learning, the activities of the neurons become almost quantized and more informative. Using the work of Carnap and Bar-Hillel (1952), we define a measure of the logical value for collections of such cells. The logical score grows with the depth of the layer, i.e. the information on the output decision increases, which confirms a kind of bottleneck principle. We then study slightly more complex tasks, a priori involving predicate logic, and compare the logic with the measured weights. This shows, for groups of neurons, a neat correlation between the logical score and the size of the weights, and exhibits a form of sparsity between the layers. The most spectacular result concerns the triples which can conclude under all conditions: when applying their weight matrices to their logical matrix, we recover the classification. This shows that the weights precisely perform the proofs.
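The Carnap and Bar-Hillel (1952) measure referred to above assigns a proposition the fraction of possible worlds it excludes. A minimal sketch over two atomic propositions (the E/H naming merely echoes the eye/hand example; the paper's actual score over cell collections is richer):

```python
from itertools import product

def models(prop, n_atoms):
    """Truth assignments over n_atoms atomic propositions that satisfy prop."""
    return [w for w in product([False, True], repeat=n_atoms) if prop(w)]

def cont(prop, n_atoms):
    """Carnap/Bar-Hillel content measure: fraction of possible worlds excluded."""
    return 1 - len(models(prop, n_atoms)) / 2 ** n_atoms

# Two atoms: E ("eye acts") and H ("hand acts").
E = lambda w: w[0]
E_or_H = lambda w: w[0] or w[1]
E_and_H = lambda w: w[0] and w[1]

# A disjunction is less informative than an atom, which is less informative
# than a conjunction:
assert cont(E_or_H, 2) < cont(E, 2) < cont(E_and_H, 2)
```

Under this measure, a tautology excludes nothing (content 0), and stronger statements score higher, which is why the logical score can grow with layer depth as activities sharpen.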
1811.04752
Brandon Malone
Brandon Malone, Alberto Garcia-Duran, and Mathias Niepert
Learning Representations of Missing Data for Predicting Patient Outcomes
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extracting actionable insight from Electronic Health Records (EHRs) poses several challenges for traditional machine learning approaches. Patients are often missing data relative to each other; the data comes in a variety of modalities, such as multivariate time series, free text, and categorical demographic information; important relationships among patients can be difficult to detect; and many others. In this work, we propose a novel approach to address these first three challenges using a representation learning scheme based on message passing. We show that our proposed approach is competitive with or outperforms the state of the art for predicting in-hospital mortality (binary classification), the length of hospital visits (regression) and the discharge destination (multiclass classification).
[ { "created": "Mon, 12 Nov 2018 14:51:41 GMT", "version": "v1" } ]
2018-11-13
[ [ "Malone", "Brandon", "" ], [ "Garcia-Duran", "Alberto", "" ], [ "Niepert", "Mathias", "" ] ]
Extracting actionable insight from Electronic Health Records (EHRs) poses several challenges for traditional machine learning approaches. Patients are often missing data relative to each other; the data comes in a variety of modalities, such as multivariate time series, free text, and categorical demographic information; important relationships among patients can be difficult to detect; and many others. In this work, we propose a novel approach to address these first three challenges using a representation learning scheme based on message passing. We show that our proposed approach is competitive with or outperforms the state of the art for predicting in-hospital mortality (binary classification), the length of hospital visits (regression) and the discharge destination (multiclass classification).
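One way to read the message-passing idea for unevenly missing data: each patient aggregates only the modality embeddings that actually exist, so absent modalities simply send no message and no imputation step is needed. The modality names, dimensions, and mean aggregation below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

# Hypothetical patients with heterogeneous, partially missing modalities.
patients = {
    "p1": {"vitals": np.array([0.2, 0.4]), "notes": np.array([0.9, 0.1])},
    "p2": {"vitals": np.array([0.6, 0.6])},   # notes missing
    "p3": {"notes": np.array([0.3, 0.7])},    # vitals missing
}

def patient_representation(modalities):
    """One message-passing step from modality nodes to the patient node:
    average whichever modality embeddings are present."""
    return np.mean(list(modalities.values()), axis=0)

reps = {pid: patient_representation(m) for pid, m in patients.items()}
```

The fixed-size representations can then feed a shared head for mortality (binary), length of stay (regression), or discharge destination (multiclass).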
2105.03109
Marius Hobbhahn
Marius Hobbhahn, Philipp Hennig
Laplace Matching for fast Approximate Inference in Latent Gaussian Models
Added experiments and clarifications; Currently under review at JMLR
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Bayesian inference on non-Gaussian data is often non-analytic and requires computationally expensive approximations such as sampling or variational inference. We propose an approximate inference framework primarily designed to be computationally cheap while still achieving high approximation quality. The concept, which we call Laplace Matching, involves closed-form, approximate, bi-directional transformations between the parameter spaces of exponential families. These are constructed from Laplace approximations under custom-designed basis transformations. The mappings can then be leveraged to effectively turn a latent Gaussian distribution into an approximate conjugate prior to a rich class of observable variables. This allows us to train latent Gaussian models such as Gaussian Processes on non-Gaussian data at nearly no additional cost. The method can be thought of as a pre-processing step which can be implemented in <5 lines of code and runs in less than a second. Furthermore, Laplace Matching yields a simple way to group similar data points together, e.g. to produce inducing points for GPs. We empirically evaluate the method with experiments for four different exponential distributions, namely the Beta, Gamma, Dirichlet and inverse Wishart, showing approximation quality comparable to state-of-the-art approximate inference techniques at a drastic reduction in computational cost.
[ { "created": "Fri, 7 May 2021 08:25:17 GMT", "version": "v1" }, { "created": "Tue, 11 Oct 2022 10:20:07 GMT", "version": "v2" } ]
2022-10-12
[ [ "Hobbhahn", "Marius", "" ], [ "Hennig", "Philipp", "" ] ]
Bayesian inference on non-Gaussian data is often non-analytic and requires computationally expensive approximations such as sampling or variational inference. We propose an approximate inference framework primarily designed to be computationally cheap while still achieving high approximation quality. The concept, which we call Laplace Matching, involves closed-form, approximate, bi-directional transformations between the parameter spaces of exponential families. These are constructed from Laplace approximations under custom-designed basis transformations. The mappings can then be leveraged to effectively turn a latent Gaussian distribution into an approximate conjugate prior to a rich class of observable variables. This allows us to train latent Gaussian models such as Gaussian Processes on non-Gaussian data at nearly no additional cost. The method can be thought of as a pre-processing step which can be implemented in <5 lines of code and runs in less than a second. Furthermore, Laplace Matching yields a simple way to group similar data points together, e.g. to produce inducing points for GPs. We empirically evaluate the method with experiments for four different exponential distributions, namely the Beta, Gamma, Dirichlet and inverse Wishart, showing approximation quality comparable to state-of-the-art approximate inference techniques at a drastic reduction in computational cost.
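For one of the four families studied, the Gamma distribution, a closed-form bi-directional transformation of this kind can be written in a few lines. Working in the log basis z = log(x), the transformed density of Gamma(shape a, rate b) is proportional to exp(a*z - b*exp(z)), which peaks at z* = log(a/b) with curvature -a. The sketch below is derived independently under the shape/rate parametrization, not taken from the paper's code:

```python
import numpy as np

def gamma_to_gaussian(a, b):
    """Laplace approximation of Gamma(shape=a, rate=b) in the log basis:
    the transformed log-density a*z - b*exp(z) has mode log(a/b) and
    second derivative -a there, giving N(log(a/b), 1/a)."""
    return np.log(a / b), 1.0 / a

def gaussian_to_gamma(mu, var):
    """Inverse mapping: recover Gamma parameters from the Gaussian."""
    a = 1.0 / var
    b = a / np.exp(mu)
    return a, b

mu, var = gamma_to_gaussian(4.0, 2.0)   # -> (log 2, 0.25)
a, b = gaussian_to_gamma(mu, var)       # round-trips to (4.0, 2.0)
```

Both directions are constant-time formula evaluations, which is what makes the pre-processing step essentially free compared with sampling or variational inference.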
2205.11472
Benjamin Schiller
Benjamin Schiller, Johannes Daxenberger, Iryna Gurevych
Diversity Over Size: On the Effect of Sample and Topic Sizes for Argument Mining Datasets
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The task of Argument Mining, that is, extracting argumentative sentences for a specific topic from large document sources, is inherently difficult for machine learning models and humans alike, as large Argument Mining datasets are rare and recognizing argumentative sentences requires expert knowledge. The task becomes even more difficult if it also involves stance detection of retrieved arguments. Given the cost and complexity of creating suitably large Argument Mining datasets, we ask whether ever-growing datasets are necessary for acceptable performance. Our findings show that, when using carefully composed training samples and a model pretrained on related tasks, we can reach 95% of the maximum performance while reducing the training sample size by at least 85%. This gain is consistent across three Argument Mining tasks on three different datasets. We also publish a new dataset for future benchmarking.
[ { "created": "Mon, 23 May 2022 17:14:32 GMT", "version": "v1" }, { "created": "Sat, 15 Jul 2023 14:39:15 GMT", "version": "v2" } ]
2023-07-18
[ [ "Schiller", "Benjamin", "" ], [ "Daxenberger", "Johannes", "" ], [ "Gurevych", "Iryna", "" ] ]
The task of Argument Mining, that is, extracting argumentative sentences for a specific topic from large document sources, is inherently difficult for machine learning models and humans alike, as large Argument Mining datasets are rare and recognizing argumentative sentences requires expert knowledge. The task becomes even more difficult if it also involves stance detection of retrieved arguments. Given the cost and complexity of creating suitably large Argument Mining datasets, we ask whether ever-growing datasets are necessary for acceptable performance. Our findings show that, when using carefully composed training samples and a model pretrained on related tasks, we can reach 95% of the maximum performance while reducing the training sample size by at least 85%. This gain is consistent across three Argument Mining tasks on three different datasets. We also publish a new dataset for future benchmarking.
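One plausible reading of "carefully composed training samples" is topic-balanced subsampling, drawing the same number of sentences from every topic instead of sampling uniformly. The helper below is a hypothetical illustration of that idea, not the authors' actual selection procedure:

```python
import random
from collections import defaultdict

def diverse_subsample(examples, per_topic, seed=0):
    """Compose a small but topic-balanced training set from (topic, sentence)
    pairs: take at most per_topic sentences from each topic."""
    rng = random.Random(seed)
    by_topic = defaultdict(list)
    for topic, sentence in examples:
        by_topic[topic].append(sentence)
    subset = []
    for topic, sents in sorted(by_topic.items()):
        chosen = rng.sample(sents, min(per_topic, len(sents)))
        subset.extend((topic, s) for s in chosen)
    return subset

# Toy corpus: one topic heavily over-represented.
toy = [("nuclear energy", f"sent {i}") for i in range(10)] \
    + [("school uniforms", f"sent {i}") for i in range(50)]
subset = diverse_subsample(toy, per_topic=5)
```

Balancing across topics keeps the subset diverse even when the raw corpus is heavily skewed, which is the kind of composition that could preserve performance at a fraction of the sample size.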