field            type            range
id               stringlengths   9–10
submitter        stringlengths   1–64
authors          stringlengths   4–20.7k
title            stringlengths   4–246
comments         stringlengths   1–523
journal-ref      stringlengths   4–404
doi              stringlengths   11–153
report-no        stringlengths   2–254
categories       stringlengths   5–98
license          stringclasses   9 values
orig_abstract    stringlengths   14–3.35k
versions         listlengths     1–60
update_date      stringlengths   10–10
authors_parsed   listlengths     1–1.35k
abstract         stringlengths   11–3.34k
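A minimal sketch of validating one record against the schema above. The length bounds are copied from the table (k-suffixed values expanded approximately); the sample record below is illustrative, not taken verbatim from the dataset.

```python
# Length bounds per field, copied from the schema above (approximate for
# k-suffixed values). `license` (a class field) and the list-typed fields
# are omitted for simplicity.
BOUNDS = {
    "id": (9, 10), "submitter": (1, 64), "authors": (4, 20700),
    "title": (4, 246), "comments": (1, 523), "journal-ref": (4, 404),
    "doi": (11, 153), "report-no": (2, 254), "categories": (5, 98),
    "orig_abstract": (14, 3350), "abstract": (11, 3340),
    "update_date": (10, 10),
}

def check_record(rec):
    """Return the fields whose string length falls outside the schema bounds.
    Fields that are None (rendered as 'null' in the dump) are skipped."""
    bad = []
    for field, (lo, hi) in BOUNDS.items():
        value = rec.get(field)
        if value is None:
            continue
        if not (lo <= len(value) <= hi):
            bad.append(field)
    return bad

# Illustrative record fragment (field values follow the dump's conventions).
sample = {
    "id": "2005.04944",
    "submitter": "Carsten Schneider",
    "categories": "cs.SC",
    "journal-ref": None,
    "update_date": "2021-01-27",
}
print(check_record(sample))  # → []
```

A record that violates a bound (e.g. an `id` shorter than 9 characters) would be reported by name, which is enough to spot malformed rows when iterating over a dump like this one.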
2005.04944
Carsten Schneider
Sergei A. Abramov and Manuel Bronstein and Marko Petkov\v{s}ek and Carsten Schneider
On Rational and Hypergeometric Solutions of Linear Ordinary Difference Equations in $\Pi\mathbf\Sigma^*$-field extensions
Various typos have been removed and the presentation has been improved
null
null
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a complete algorithm that computes all hypergeometric solutions of homogeneous linear difference equations and rational solutions of parameterized linear difference equations in the setting of $\Pi\Sigma^*$-fields. More generally, we provide a flexible framework for a big class of difference fields that is built by a tower of $\Pi\Sigma^*$-field extensions over a difference field that satisfies certain algorithmic properties. As a consequence one can compute all solutions in terms of indefinite nested sums and products that arise within the components of a parameterized linear difference equation, and one can find all hypergeometric solutions that are defined over the arising sums and products of a homogeneous linear difference equation.
[ { "created": "Mon, 11 May 2020 09:15:31 GMT", "version": "v1" }, { "created": "Mon, 25 Jan 2021 19:36:53 GMT", "version": "v2" } ]
2021-01-27
[ [ "Abramov", "Sergei A.", "" ], [ "Bronstein", "Manuel", "" ], [ "Petkovšek", "Marko", "" ], [ "Schneider", "Carsten", "" ] ]
We present a complete algorithm that computes all hypergeometric solutions of homogeneous linear difference equations and rational solutions of parameterized linear difference equations in the setting of $\Pi\Sigma^*$-fields. More generally, we provide a flexible framework for a big class of difference fields that is built by a tower of $\Pi\Sigma^*$-field extensions over a difference field that satisfies certain algorithmic properties. As a consequence one can compute all solutions in terms of indefinite nested sums and products that arise within the components of a parameterized linear difference equation, and one can find all hypergeometric solutions that are defined over the arising sums and products of a homogeneous linear difference equation.
2112.05665
Zhuangzhuang Dai
Zhuangzhuang Dai, Muhamad Risqi U. Saputra, Chris Xiaoxuan Lu, Andrew Markham, and Niki Trigoni
Deep Odometry Systems on Edge with EKF-LoRa Backend for Real-Time Positioning in Adverse Environment
null
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ubiquitous positioning for pedestrians in adverse environments has been a long-standing challenge. Despite dramatic progress made by deep learning, multi-sensor deep odometry systems still pose a high computational cost and suffer from cumulative drift over time. Thanks to the increasing computational power of edge devices, we propose a novel ubiquitous positioning solution that integrates state-of-the-art deep odometry models on edge with an EKF (Extended Kalman Filter)-LoRa backend. We carefully compare and select three sensor modalities, i.e., an Inertial Measurement Unit (IMU), a millimetre-wave (mmWave) radar, and a thermal infrared camera, and realise their deep odometry inference engines, which run in real time. We propose a pipeline for deploying deep odometry that considers accuracy, complexity, and the edge platform. We design a LoRa link for positional data backhaul and for projecting the aggregated positions of deep odometry into the global frame. We find that a simple EKF-based fusion module is sufficient for generic positioning calibration, with over 34% accuracy gains against any standalone deep odometry system. Extensive tests in different environments validate the efficiency and efficacy of our proposed positioning system.
[ { "created": "Fri, 10 Dec 2021 16:53:13 GMT", "version": "v1" } ]
2021-12-13
[ [ "Dai", "Zhuangzhuang", "" ], [ "Saputra", "Muhamad Risqi U.", "" ], [ "Lu", "Chris Xiaoxuan", "" ], [ "Markham", "Andrew", "" ], [ "Trigoni", "Niki", "" ] ]
Ubiquitous positioning for pedestrians in adverse environments has been a long-standing challenge. Despite dramatic progress made by deep learning, multi-sensor deep odometry systems still pose a high computational cost and suffer from cumulative drift over time. Thanks to the increasing computational power of edge devices, we propose a novel ubiquitous positioning solution that integrates state-of-the-art deep odometry models on edge with an EKF (Extended Kalman Filter)-LoRa backend. We carefully compare and select three sensor modalities, i.e., an Inertial Measurement Unit (IMU), a millimetre-wave (mmWave) radar, and a thermal infrared camera, and realise their deep odometry inference engines, which run in real time. We propose a pipeline for deploying deep odometry that considers accuracy, complexity, and the edge platform. We design a LoRa link for positional data backhaul and for projecting the aggregated positions of deep odometry into the global frame. We find that a simple EKF-based fusion module is sufficient for generic positioning calibration, with over 34% accuracy gains against any standalone deep odometry system. Extensive tests in different environments validate the efficiency and efficacy of our proposed positioning system.
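The abstract above does not spell out the EKF-LoRa fusion, so the following is only a generic one-dimensional sketch of the underlying idea: dead-reckoned odometry accumulates drift, and an occasional absolute position fix (standing in here for a LoRa-side correction) is fused in with a scalar Kalman update to bound that drift.

```python
# Hypothetical 1-D Kalman fusion sketch (not the paper's EKF-LoRa backend):
# state x is position, p is its variance.

def predict(x, p, dx, q):
    """Propagate the state with an odometry increment dx and process noise q."""
    return x + dx, p + q

def update(x, p, z, r):
    """Fuse an absolute position measurement z with noise variance r."""
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1.0 - k) * p

x, p = 0.0, 1.0
truth = 0.0
for step in range(1, 11):
    truth += 1.0
    x, p = predict(x, p, 1.1, 0.5)       # odometry overestimates: drift builds up
    if step % 5 == 0:                    # sparse absolute fixes
        x, p = update(x, p, truth, 0.1)

print(abs(x - truth) < 0.5)  # → True: the fixes bound the accumulated drift
```

Without the two fixes the drift after ten steps would be 1.0; with them the final error stays well under 0.1, which is the qualitative behaviour the abstract's "simple EKF based fusion module" claim relies on.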
2302.02237
Yujin Han
Yujin Han, Mingwenchan Xu, Leying Guan
Conformalized Semi-supervised Random Forest for Classification and Abnormality Detection
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Random Forests classifier, a widely utilized off-the-shelf classification tool, assumes, like other standard classifiers, that training and test samples come from the same distribution. However, in safety-critical scenarios like medical diagnosis and network attack detection, discrepancies between the training and test sets, including the potential presence of novel outlier samples not seen during training, can pose significant challenges. To address this problem, we introduce the Conformalized Semi-Supervised Random Forest (CSForest), which couples the conformalization technique Jackknife+aB with semi-supervised tree ensembles to construct a set-valued prediction $C(x)$. Instead of optimizing over the training distribution, CSForest employs unlabeled test samples to enhance accuracy and flags unseen outliers by generating an empty set. Theoretically, we establish that CSForest covers the true labels of previously observed inlier classes under arbitrary label shift in the test data. We compare CSForest with state-of-the-art methods using synthetic examples and various real-world datasets, under different types of distribution changes in the test domain. Our results highlight CSForest's effective prediction of inliers and its ability to detect outlier samples unique to the test data. In addition, CSForest performs consistently well as the sizes of the training and test sets vary. Code for CSForest is available at https://github.com/yujinhan98/CSForest.
[ { "created": "Sat, 4 Feb 2023 20:53:07 GMT", "version": "v1" }, { "created": "Thu, 29 Feb 2024 11:49:45 GMT", "version": "v2" } ]
2024-03-01
[ [ "Han", "Yujin", "" ], [ "Xu", "Mingwenchan", "" ], [ "Guan", "Leying", "" ] ]
The Random Forests classifier, a widely utilized off-the-shelf classification tool, assumes, like other standard classifiers, that training and test samples come from the same distribution. However, in safety-critical scenarios like medical diagnosis and network attack detection, discrepancies between the training and test sets, including the potential presence of novel outlier samples not seen during training, can pose significant challenges. To address this problem, we introduce the Conformalized Semi-Supervised Random Forest (CSForest), which couples the conformalization technique Jackknife+aB with semi-supervised tree ensembles to construct a set-valued prediction $C(x)$. Instead of optimizing over the training distribution, CSForest employs unlabeled test samples to enhance accuracy and flags unseen outliers by generating an empty set. Theoretically, we establish that CSForest covers the true labels of previously observed inlier classes under arbitrary label shift in the test data. We compare CSForest with state-of-the-art methods using synthetic examples and various real-world datasets, under different types of distribution changes in the test domain. Our results highlight CSForest's effective prediction of inliers and its ability to detect outlier samples unique to the test data. In addition, CSForest performs consistently well as the sizes of the training and test sets vary. Code for CSForest is available at https://github.com/yujinhan98/CSForest.
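The set-valued prediction with an empty set as an outlier flag can be illustrated with plain split-conformal prediction. This is a generic sketch, not the paper's Jackknife+aB coupling or the CSForest ensemble: per-class calibration scores give a threshold, and a test point whose nonconformity score exceeds every class's threshold yields the empty set.

```python
import math

def conformal_threshold(cal_scores, alpha):
    """Split-conformal quantile: the smallest score t such that at least
    ceil((n+1)(1-alpha)) of the n calibration scores are <= t."""
    s = sorted(cal_scores)
    k = math.ceil((len(s) + 1) * (1 - alpha))
    return s[min(k, len(s)) - 1]

def prediction_set(test_scores, thresholds):
    """Keep every class whose nonconformity score is within its threshold.
    An empty result flags the point as unlike anything seen in calibration."""
    return {c for c, s in test_scores.items() if s <= thresholds[c]}

# Toy nonconformity scores (e.g. distance to each class centre).
cal = {"a": [0.1, 0.2, 0.3, 0.4, 0.5], "b": [0.2, 0.3, 0.3, 0.4, 0.6]}
thr = {c: conformal_threshold(v, alpha=0.2) for c, v in cal.items()}

print(prediction_set({"a": 0.25, "b": 0.9}, thr))  # → {'a'}: inlier of class a
print(prediction_set({"a": 5.0, "b": 4.0}, thr))   # → set(): novel outlier
```

The empty-set behaviour is exactly the abstraction the abstract describes: coverage is guaranteed per inlier class, while a point far from every calibration class is rejected rather than forced into a label.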
2002.02758
Parth Shah
Parth Shah, Vishvajit Bakrola
Neural Machine Translation System of Indic Languages -- An Attention based Approach
null
2019 Second International Conference on Advanced Computational and Communication Paradigms (ICACCP), Gangtok, India, 2019, pp. 1-5
10.1109/ICACCP.2019.8882969
null
cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural machine translation (NMT) is a recent and effective technique that has led to remarkable improvements over conventional machine translation techniques. The proposed neural machine translation model, developed for the Gujarati language, consists of an encoder-decoder with an attention mechanism. In India, almost all languages originated from their ancestral language, Sanskrit, and they share inevitable similarities, including lexical and named-entity similarity. Translating into Indic languages has always been a challenging task. In this paper, we present a neural machine translation (NMT) system that can efficiently translate Indic languages such as Hindi and Gujarati, which together cover more than 58.49 percent of the total speakers in the country. We evaluate the performance of our NMT model with automatic evaluation metrics such as BLEU, perplexity, and TER. A comparison of our network with Google Translate is also presented, where our network outperformed it by a margin of 6 BLEU points on English-Gujarati translation.
[ { "created": "Sun, 2 Feb 2020 07:15:18 GMT", "version": "v1" } ]
2020-02-10
[ [ "Shah", "Parth", "" ], [ "Bakrola", "Vishvajit", "" ] ]
Neural machine translation (NMT) is a recent and effective technique that has led to remarkable improvements over conventional machine translation techniques. The proposed neural machine translation model, developed for the Gujarati language, consists of an encoder-decoder with an attention mechanism. In India, almost all languages originated from their ancestral language, Sanskrit, and they share inevitable similarities, including lexical and named-entity similarity. Translating into Indic languages has always been a challenging task. In this paper, we present a neural machine translation (NMT) system that can efficiently translate Indic languages such as Hindi and Gujarati, which together cover more than 58.49 percent of the total speakers in the country. We evaluate the performance of our NMT model with automatic evaluation metrics such as BLEU, perplexity, and TER. A comparison of our network with Google Translate is also presented, where our network outperformed it by a margin of 6 BLEU points on English-Gujarati translation.
1611.01170
Wei Xie
Wei Xie, Yang Wang, Steven M. Boker, Donald E. Brown
PrivLogit: Efficient Privacy-preserving Logistic Regression by Tailoring Numerical Optimizers
24 pages, 4 figures. Work done and circulated since 2015
null
null
null
cs.LG cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Safeguarding privacy in machine learning is highly desirable, especially in collaborative studies across many organizations. Privacy-preserving distributed machine learning (based on cryptography) is a popular approach to this problem. However, existing cryptographic protocols still incur excess computational overhead. Here, we make the novel observation that this is partially due to the naive adoption of mainstream numerical optimization (e.g., Newton's method) and a failure to tailor it for secure computing. This work presents a contrasting perspective: customizing numerical optimization specifically for secure settings. We propose a seemingly less-favorable optimization method that can in fact significantly accelerate privacy-preserving logistic regression. Leveraging this new method, we propose two new secure protocols for conducting logistic regression in a privacy-preserving and distributed manner. Extensive theoretical and empirical evaluations demonstrate the competitive performance of our two secure proposals without compromising accuracy or privacy: with speedups of up to 2.3x and 8.1x, respectively, over the state of the art, and even faster as data scales up. Such drastic speedup comes on top of, and in addition to, performance improvements from existing (and future) state-of-the-art cryptography. Our work provides a new way toward efficient and practical privacy-preserving logistic regression for the large-scale studies that are common in modern science.
[ { "created": "Thu, 3 Nov 2016 20:04:29 GMT", "version": "v1" } ]
2016-11-07
[ [ "Xie", "Wei", "" ], [ "Wang", "Yang", "" ], [ "Boker", "Steven M.", "" ], [ "Brown", "Donald E.", "" ] ]
Safeguarding privacy in machine learning is highly desirable, especially in collaborative studies across many organizations. Privacy-preserving distributed machine learning (based on cryptography) is a popular approach to this problem. However, existing cryptographic protocols still incur excess computational overhead. Here, we make the novel observation that this is partially due to the naive adoption of mainstream numerical optimization (e.g., Newton's method) and a failure to tailor it for secure computing. This work presents a contrasting perspective: customizing numerical optimization specifically for secure settings. We propose a seemingly less-favorable optimization method that can in fact significantly accelerate privacy-preserving logistic regression. Leveraging this new method, we propose two new secure protocols for conducting logistic regression in a privacy-preserving and distributed manner. Extensive theoretical and empirical evaluations demonstrate the competitive performance of our two secure proposals without compromising accuracy or privacy: with speedups of up to 2.3x and 8.1x, respectively, over the state of the art, and even faster as data scales up. Such drastic speedup comes on top of, and in addition to, performance improvements from existing (and future) state-of-the-art cryptography. Our work provides a new way toward efficient and practical privacy-preserving logistic regression for the large-scale studies that are common in modern science.
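The abstract does not specify the optimizer, so the sketch below only illustrates the general "tailor the optimizer for secure computing" idea it gestures at: standard Newton's method needs a fresh Hessian every iteration (expensive under secure computation), whereas the classical fixed curvature bound H = XᵀX/4 for logistic loss can be computed and inverted once, leaving each iteration as a cheap, fixed-shape gradient step. One feature keeps the linear algebra scalar.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_fixed_hessian(xs, ys, iters=100):
    """Logistic regression via a fixed-Hessian (MM-style) update.

    The curvature bound h = sum(x^2) / 4 is computed once; every iteration
    is then just a gradient step scaled by 1/h -- the kind of update shape
    a secure protocol could precompute, at the cost of more iterations.
    This is a generic illustration, not the paper's protocol.
    """
    h = sum(x * x for x in xs) / 4.0
    beta = 0.0
    for _ in range(iters):
        grad = sum((y - sigmoid(beta * x)) * x for x, y in zip(xs, ys))
        beta += grad / h
    return beta

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
beta = fit_fixed_hessian(xs, ys)
print(beta > 0)  # → True: a positive weight separates the two classes
```

The fixed bound dominates the true logistic Hessian, so each step is guaranteed not to overshoot; the trade of "cheaper iterations for more of them" is precisely the kind of seemingly less-favorable choice the abstract argues pays off in secure settings.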
1505.00904
Bernhard Rumpe
Jan Oliver Ringert, Alexander Roth, Bernhard Rumpe, Andreas Wortmann
Code Generator Composition for Model-Driven Engineering of Robotics Component & Connector Systems
12 pages, 4 figures, In: Proceedings of the 1st International Workshop on Model-Driven Robot Software Engineering (MORSE 2014), York, Great Britain, Volume 1319 of CEUR Workshop Proceedings, 2014
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Engineering software for robotics applications requires multi-domain and application-specific solutions. Model-driven engineering and modeling-language integration provide means for developing specialized, yet reusable, models of robotics software architectures. Code generators transform these platform-independent models into executable code specific to robotic platforms. Generative software engineering for multi-domain applications requires not only the integration of modeling languages but also the integration of validation mechanisms and code generators. In this paper we sketch a conceptual model for code generator composition and show an instantiation of this model in the MontiArcAutomaton framework. MontiArcAutomaton allows modeling software architectures as component-and-connector models with different component behavior modeling languages. Effective means for code generator integration are a necessity for the post-hoc integration of application-specific languages in model-based robotics software engineering.
[ { "created": "Tue, 5 May 2015 07:30:58 GMT", "version": "v1" } ]
2015-05-06
[ [ "Ringert", "Jan Oliver", "" ], [ "Roth", "Alexander", "" ], [ "Rumpe", "Bernhard", "" ], [ "Wortmann", "Andreas", "" ] ]
Engineering software for robotics applications requires multi-domain and application-specific solutions. Model-driven engineering and modeling-language integration provide means for developing specialized, yet reusable, models of robotics software architectures. Code generators transform these platform-independent models into executable code specific to robotic platforms. Generative software engineering for multi-domain applications requires not only the integration of modeling languages but also the integration of validation mechanisms and code generators. In this paper we sketch a conceptual model for code generator composition and show an instantiation of this model in the MontiArcAutomaton framework. MontiArcAutomaton allows modeling software architectures as component-and-connector models with different component behavior modeling languages. Effective means for code generator integration are a necessity for the post-hoc integration of application-specific languages in model-based robotics software engineering.
1412.2122
V\'ictor Ponce-L\'opez
V\'ictor Ponce-L\'opez, Sergio Escalera, Marc P\'erez, Oriol Jan\'es, Xavier Bar\'o
Non-Verbal Communication Analysis in Victim-Offender Mediations
Please, find the supplementary video material at: http://sunai.uoc.edu/~vponcel/video/VOMSessionSample.mp4
null
null
null
cs.HC cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a non-invasive ambient intelligence framework for the semi-automatic analysis of non-verbal communication applied to the restorative justice field. In particular, we propose the use of computer vision and social signal processing technologies in real scenarios of Victim-Offender Mediations, applying feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues from the fields of psychology and observational methodology. We test our methodology on data captured in real world Victim-Offender Mediation sessions in Catalonia in collaboration with the regional government. We define the ground truth based on expert opinions when annotating the observed social responses. Using different state-of-the-art binary classification approaches, our system achieves recognition accuracies of 86% when predicting satisfaction, and 79% when predicting both agreement and receptivity. Applying a regression strategy, we obtain a mean deviation for the predictions between 0.5 and 0.7 in the range [1-5] for the computed social signals.
[ { "created": "Tue, 25 Nov 2014 12:56:43 GMT", "version": "v1" }, { "created": "Mon, 19 Jan 2015 18:12:48 GMT", "version": "v2" } ]
2016-02-22
[ [ "Ponce-López", "Víctor", "" ], [ "Escalera", "Sergio", "" ], [ "Pérez", "Marc", "" ], [ "Janés", "Oriol", "" ], [ "Baró", "Xavier", "" ] ]
In this paper we present a non-invasive ambient intelligence framework for the semi-automatic analysis of non-verbal communication applied to the restorative justice field. In particular, we propose the use of computer vision and social signal processing technologies in real scenarios of Victim-Offender Mediations, applying feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues from the fields of psychology and observational methodology. We test our methodology on data captured in real world Victim-Offender Mediation sessions in Catalonia in collaboration with the regional government. We define the ground truth based on expert opinions when annotating the observed social responses. Using different state-of-the-art binary classification approaches, our system achieves recognition accuracies of 86% when predicting satisfaction, and 79% when predicting both agreement and receptivity. Applying a regression strategy, we obtain a mean deviation for the predictions between 0.5 and 0.7 in the range [1-5] for the computed social signals.
1902.10296
Aaron Steven White
Shaorong Yan, Aaron Steven White
A Framework for Decoding Event-Related Potentials from Text
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel framework for modeling event-related potentials (ERPs) collected during reading that couples pre-trained convolutional decoders with a language model. Using this framework, we compare the abilities of a variety of existing and novel sentence processing models to reconstruct ERPs. We find that modern contextual word embeddings underperform surprisal-based models but that, combined, the two outperform either on its own.
[ { "created": "Wed, 27 Feb 2019 01:43:48 GMT", "version": "v1" }, { "created": "Tue, 2 Apr 2019 17:29:41 GMT", "version": "v2" } ]
2019-04-03
[ [ "Yan", "Shaorong", "" ], [ "White", "Aaron Steven", "" ] ]
We propose a novel framework for modeling event-related potentials (ERPs) collected during reading that couples pre-trained convolutional decoders with a language model. Using this framework, we compare the abilities of a variety of existing and novel sentence processing models to reconstruct ERPs. We find that modern contextual word embeddings underperform surprisal-based models but that, combined, the two outperform either on its own.
2310.04975
Youquan Xian
Peng Liu, Youquan Xian, Chuanjian Yao, Peng Wang, Li-e Wang, Xianxian Li
A Trustworthy and Consistent Blockchain Oracle Scheme for Industrial Internet of Things
Rejected after the third round of review of IEEE Internet of Things Journal
null
null
null
cs.CR cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Blockchain provides decentralization and trustlessness for the Industrial Internet of Things (IIoT), which expands the application scenarios of IIoT. To address the problem that the blockchain cannot actively obtain off-chain data, the blockchain oracle has been proposed as a bridge between the blockchain and external data. However, existing oracle schemes struggle with the low quality of service caused by frequent data changes and heterogeneous devices in IIoT, and current oracle node selection schemes struggle to balance security and quality of service. To tackle these problems, this paper proposes a secure and reliable oracle scheme that can obtain high-quality off-chain data. Specifically, we first design an oracle node selection algorithm based on a Verifiable Random Function (VRF) and a reputation mechanism to securely select high-quality nodes. Second, we propose a data filtering algorithm based on a sliding window to further improve the consistency of the collected data. We verify the security of the proposed scheme through a security analysis. The experimental results show that the proposed scheme can effectively improve the service quality of the oracle.
[ { "created": "Sun, 8 Oct 2023 02:44:29 GMT", "version": "v1" } ]
2023-10-10
[ [ "Liu", "Peng", "" ], [ "Xian", "Youquan", "" ], [ "Yao", "Chuanjian", "" ], [ "Wang", "Peng", "" ], [ "Wang", "Li-e", "" ], [ "Li", "Xianxian", "" ] ]
Blockchain provides decentralization and trustlessness for the Industrial Internet of Things (IIoT), which expands the application scenarios of IIoT. To address the problem that the blockchain cannot actively obtain off-chain data, the blockchain oracle has been proposed as a bridge between the blockchain and external data. However, existing oracle schemes struggle with the low quality of service caused by frequent data changes and heterogeneous devices in IIoT, and current oracle node selection schemes struggle to balance security and quality of service. To tackle these problems, this paper proposes a secure and reliable oracle scheme that can obtain high-quality off-chain data. Specifically, we first design an oracle node selection algorithm based on a Verifiable Random Function (VRF) and a reputation mechanism to securely select high-quality nodes. Second, we propose a data filtering algorithm based on a sliding window to further improve the consistency of the collected data. We verify the security of the proposed scheme through a security analysis. The experimental results show that the proposed scheme can effectively improve the service quality of the oracle.
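The abstract only names "a data filtering algorithm based on a sliding window" without detail, so the following is a guess at the flavour rather than the paper's algorithm: a sliding-window median that keeps the reported value consistent when individual oracle nodes return spikes.

```python
from collections import deque

def sliding_median(stream, window=3):
    """Yield the median of the last `window` values for each incoming value.
    A single outlier in the window cannot move the median, so isolated
    spikes from misbehaving nodes are suppressed."""
    buf = deque(maxlen=window)
    out = []
    for v in stream:
        buf.append(v)
        s = sorted(buf)
        n = len(s)
        mid = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
        out.append(mid)
    return out

# A spike from one misbehaving node (99.0) is filtered out.
print(sliding_median([10.0, 12.0, 99.0, 10.0, 11.0]))
# → [10.0, 11.0, 12.0, 12.0, 11.0]
```

Any single bad reading is outvoted by its neighbours in the window; a persistent fault (several consecutive spikes) would still get through, which is presumably where the reputation mechanism described alongside it comes in.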
2012.03923
Nathaniel Harms
Eric Blais, Renato Ferreira Pinto Jr., Nathaniel Harms
VC Dimension and Distribution-Free Sample-Based Testing
44 pages
null
null
null
cs.LG cs.CC cs.DS
http://creativecommons.org/licenses/by/4.0/
We consider the problem of determining which classes of functions can be tested more efficiently than they can be learned, in the distribution-free sample-based model that corresponds to the standard PAC learning setting. Our main result shows that while VC dimension by itself does not always provide tight bounds on the number of samples required to test a class of functions in this model, it can be combined with a closely-related variant that we call "lower VC" (or LVC) dimension to obtain strong lower bounds on this sample complexity. We use this result to obtain strong and in many cases nearly optimal lower bounds on the sample complexity for testing unions of intervals, halfspaces, intersections of halfspaces, polynomial threshold functions, and decision trees. Conversely, we show that two natural classes of functions, juntas and monotone functions, can be tested with a number of samples that is polynomially smaller than the number of samples required for PAC learning. Finally, we also use the connection between VC dimension and property testing to establish new lower bounds for testing radius clusterability and testing feasibility of linear constraint systems.
[ { "created": "Mon, 7 Dec 2020 18:50:46 GMT", "version": "v1" } ]
2020-12-08
[ [ "Blais", "Eric", "" ], [ "Pinto", "Renato Ferreira", "Jr." ], [ "Harms", "Nathaniel", "" ] ]
We consider the problem of determining which classes of functions can be tested more efficiently than they can be learned, in the distribution-free sample-based model that corresponds to the standard PAC learning setting. Our main result shows that while VC dimension by itself does not always provide tight bounds on the number of samples required to test a class of functions in this model, it can be combined with a closely-related variant that we call "lower VC" (or LVC) dimension to obtain strong lower bounds on this sample complexity. We use this result to obtain strong and in many cases nearly optimal lower bounds on the sample complexity for testing unions of intervals, halfspaces, intersections of halfspaces, polynomial threshold functions, and decision trees. Conversely, we show that two natural classes of functions, juntas and monotone functions, can be tested with a number of samples that is polynomially smaller than the number of samples required for PAC learning. Finally, we also use the connection between VC dimension and property testing to establish new lower bounds for testing radius clusterability and testing feasibility of linear constraint systems.
1607.03607
Gabriel Martins Dias
Gabriel Martins Dias, Cintia Borges Margi, Filipe C. P. de Oliveira, Boris Bellalta
Cloud Empowered Self-Managing WSNs
12 pages, 4200 words, 4 figures, 2 tables, submitted to "IEEE Communications Magazine" special issue on the Internet of Things
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless Sensor Networks (WSNs) are composed of low-powered and resource-constrained wireless sensor nodes that are not capable of performing high-complexity algorithms. Integrating these networks into the Internet of Things (IoT) facilitates their real-time optimization based on remote data visualization and analysis. This work describes the design and implementation of a scalable system architecture that integrates WSNs and cloud services to work autonomously in an IoT environment. The implementation relies on Software-Defined Networking features to simplify WSN management and exploits data analytics tools to execute a reinforcement learning algorithm that makes decisions based on the environment's evolution. It can automatically configure wireless sensor nodes to measure and transmit the temperature only during periods when the environment changes more often. Without any human intervention, the system reduced the number of transmissions by nearly 85%, showing the potential of this mechanism to extend WSN lifetime without compromising data quality. Beyond similar use cases, such autonomic WSN management could enable a new business model that offers sensing tasks as a service, which is also introduced in this work.
[ { "created": "Wed, 13 Jul 2016 07:01:26 GMT", "version": "v1" } ]
2016-07-14
[ [ "Dias", "Gabriel Martins", "" ], [ "Margi", "Cintia Borges", "" ], [ "de Oliveira", "Filipe C. P.", "" ], [ "Bellalta", "Boris", "" ] ]
Wireless Sensor Networks (WSNs) are composed of low-powered and resource-constrained wireless sensor nodes that are not capable of performing high-complexity algorithms. Integrating these networks into the Internet of Things (IoT) facilitates their real-time optimization based on remote data visualization and analysis. This work describes the design and implementation of a scalable system architecture that integrates WSNs and cloud services to work autonomously in an IoT environment. The implementation relies on Software-Defined Networking features to simplify WSN management and exploits data analytics tools to execute a reinforcement learning algorithm that makes decisions based on the environment's evolution. It can automatically configure wireless sensor nodes to measure and transmit the temperature only during periods when the environment changes more often. Without any human intervention, the system reduced the number of transmissions by nearly 85%, showing the potential of this mechanism to extend WSN lifetime without compromising data quality. Beyond similar use cases, such autonomic WSN management could enable a new business model that offers sensing tasks as a service, which is also introduced in this work.
1610.03677
Suchet Bargoti
Suchet Bargoti and James Underwood
Deep Fruit Detection in Orchards
Submitted to the IEEE International Conference on Robotics and Automation 2017
null
null
null
cs.RO cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An accurate and reliable image based fruit detection system is critical for supporting higher level agriculture tasks such as yield mapping and robotic harvesting. This paper presents the use of a state-of-the-art object detection framework, Faster R-CNN, in the context of fruit detection in orchards, including mangoes, almonds and apples. Ablation studies are presented to better understand the practical deployment of the detection network, including how much training data is required to capture variability in the dataset. Data augmentation techniques are shown to yield significant performance gains, resulting in a greater than two-fold reduction in the number of training images required. In contrast, transferring knowledge between orchards contributed to negligible performance gain over initialising the Deep Convolutional Neural Network directly from ImageNet features. Finally, to operate over orchard data containing between 100-1000 fruit per image, a tiling approach is introduced for the Faster R-CNN framework. The study has resulted in the best yet detection performance for these orchards relative to previous works, with an F1-score of >0.9 achieved for apples and mangoes.
[ { "created": "Wed, 12 Oct 2016 11:40:24 GMT", "version": "v1" }, { "created": "Mon, 18 Sep 2017 01:03:55 GMT", "version": "v2" } ]
2017-09-19
[ [ "Bargoti", "Suchet", "" ], [ "Underwood", "James", "" ] ]
An accurate and reliable image based fruit detection system is critical for supporting higher level agriculture tasks such as yield mapping and robotic harvesting. This paper presents the use of a state-of-the-art object detection framework, Faster R-CNN, in the context of fruit detection in orchards, including mangoes, almonds and apples. Ablation studies are presented to better understand the practical deployment of the detection network, including how much training data is required to capture variability in the dataset. Data augmentation techniques are shown to yield significant performance gains, resulting in a greater than two-fold reduction in the number of training images required. In contrast, transferring knowledge between orchards contributed to negligible performance gain over initialising the Deep Convolutional Neural Network directly from ImageNet features. Finally, to operate over orchard data containing between 100-1000 fruit per image, a tiling approach is introduced for the Faster R-CNN framework. The study has resulted in the best yet detection performance for these orchards relative to previous works, with an F1-score of >0.9 achieved for apples and mangoes.
1606.04593
Guy Kloss
Guy Kloss
Strongvelope Multi-Party Encrypted Messaging Protocol design document
design whitepaper
null
null
null
cs.CR
http://creativecommons.org/licenses/by-sa/4.0/
In this document we describe the design of a multi-party messaging encryption protocol "Strongvelope". We hope that it will prove useful to people interested in understanding the inner workings of this protocol as well as cryptography and security experts to review the underlying concepts and assumptions. In this design paper we are outlining the perspective of chat message protection through the Strongvelope module. This is different from the product (the Mega chat) and the transport means which it will be used with. Aspects of the chat product and transport are only referred to where appropriate, but are not subject to discussion in this document.
[ { "created": "Tue, 14 Jun 2016 23:38:41 GMT", "version": "v1" } ]
2016-06-16
[ [ "Kloss", "Guy", "" ] ]
In this document we describe the design of a multi-party messaging encryption protocol "Strongvelope". We hope that it will prove useful to people interested in understanding the inner workings of this protocol as well as cryptography and security experts to review the underlying concepts and assumptions. In this design paper we are outlining the perspective of chat message protection through the Strongvelope module. This is different from the product (the Mega chat) and the transport means which it will be used with. Aspects of the chat product and transport are only referred to where appropriate, but are not subject to discussion in this document.
1604.08239
Sam Royston
Sam Royston, Connor DeFanti and Ken Perlin
A Collaborative Untethered Virtual Reality Environment for Interactive Social Network Visualization
null
null
null
null
cs.HC cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing prevalence of Virtual Reality technologies as a platform for gaming and video playback warrants research into how to best apply the current state of the art to challenges in data visualization. Many current VR systems are noncollaborative, while data analysis and visualization is often a multi-person process. Our goal in this paper is to address the technical and user experience challenges that arise when creating VR environments for collaborative data visualization. We focus on the integration of multiple tracking systems and the new interaction paradigms that this integration can enable, along with visual design considerations that apply specifically to collaborative network visualization in virtual reality. We demonstrate a system for collaborative interaction with large 3D layouts of Twitter friend/follow networks. The system is built by combining a 'Holojam' architecture (multiple GearVR Headsets within an OptiTrack motion capture stage) and Perception Neuron motion suits, to offer an untethered, full-room multi-person visualization experience.
[ { "created": "Wed, 27 Apr 2016 20:54:37 GMT", "version": "v1" } ]
2016-04-29
[ [ "Royston", "Sam", "" ], [ "DeFanti", "Connor", "" ], [ "Perlin", "Ken", "" ] ]
The increasing prevalence of Virtual Reality technologies as a platform for gaming and video playback warrants research into how to best apply the current state of the art to challenges in data visualization. Many current VR systems are noncollaborative, while data analysis and visualization is often a multi-person process. Our goal in this paper is to address the technical and user experience challenges that arise when creating VR environments for collaborative data visualization. We focus on the integration of multiple tracking systems and the new interaction paradigms that this integration can enable, along with visual design considerations that apply specifically to collaborative network visualization in virtual reality. We demonstrate a system for collaborative interaction with large 3D layouts of Twitter friend/follow networks. The system is built by combining a 'Holojam' architecture (multiple GearVR Headsets within an OptiTrack motion capture stage) and Perception Neuron motion suits, to offer an untethered, full-room multi-person visualization experience.
1511.04121
Cornelia Haisjackl
Jakob Pinggera, Marco Furtner, Markus Martini, Pierre Sachse, Katharina Reiter, Stefan Zugal, Barbara Weber
Investigating the Process of Process Modeling with Eye Movement Analysis
arXiv admin note: text overlap with arXiv:1511.04057
Proc. ER-BPM'12, pp. 438-450, 2013
10.1007/978-3-642-36285-9_46
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research on quality issues of business process models has recently begun to explore the process of creating process models by analyzing the modeler's interactions with the modeling environment. In this paper we aim to complement previous insights on the modeler's modeling behavior with data gathered by tracking the modeler's eye movements when engaged in the act of modeling. We present preliminary results and outline directions for future research to triangulate toward a more comprehensive understanding of the process of process modeling. We believe that combining different views on the process of process modeling constitutes another building block in understanding this process that will ultimately enable us to support modelers in creating better process models.
[ { "created": "Wed, 11 Nov 2015 17:55:01 GMT", "version": "v1" } ]
2015-11-16
[ [ "Pinggera", "Jakob", "" ], [ "Furtner", "Marco", "" ], [ "Martini", "Markus", "" ], [ "Sachse", "Pierre", "" ], [ "Reiter", "Katharina", "" ], [ "Zugal", "Stefan", "" ], [ "Weber", "Barbara", "" ] ]
Research on quality issues of business process models has recently begun to explore the process of creating process models by analyzing the modeler's interactions with the modeling environment. In this paper we aim to complement previous insights on the modeler's modeling behavior with data gathered by tracking the modeler's eye movements when engaged in the act of modeling. We present preliminary results and outline directions for future research to triangulate toward a more comprehensive understanding of the process of process modeling. We believe that combining different views on the process of process modeling constitutes another building block in understanding this process that will ultimately enable us to support modelers in creating better process models.
1911.08150
Masahito Hayashi
Masahito Hayashi and Angeles Vazquez-Castro
Two-Way Physical Layer Security Protocol for Gaussian Channels
null
IEEE Transactions on Communications, vol. 68, Issue 5, 3068 - 3078 (2020)
10.1109/TCOMM.2020.2973618
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a two-way protocol of physical layer security using the method of privacy amplification against eavesdroppers. First we justify our proposed protocol by analyzing the physical layer security provided by the classic wiretap channel model (i.e. one-way protocol). In Gaussian channels, the classic one-way protocol requires Eve's channel to be degraded w.r.t. Bob's channel. However, this channel degradation condition depends on Eve's location and whether Eve's receiving antenna is more powerful than Bob's. To overcome this limitation, we introduce a two-way protocol inspired by IEEE TIT (1993) that eliminates the channel degradation condition. In the proposed two-way protocol, in the first phase, over a Gaussian channel, Bob sends randomness to Alice, which is partially leaked to Eve. Then, in the second phase, Alice transmits information to Bob over a public noiseless channel. We derive the secrecy capacity of the two-way protocol when the channel to Eve is also Gaussian. We show that the capacity of the two-way protocol is always positive. We present numerical values of the capacities illustrating the gains obtained by our proposed protocol. We apply our result to simple yet realistic models of satellite communication channels.
[ { "created": "Tue, 19 Nov 2019 08:18:52 GMT", "version": "v1" } ]
2022-04-26
[ [ "Hayashi", "Masahito", "" ], [ "Vazquez-Castro", "Angeles", "" ] ]
In this paper we propose a two-way protocol of physical layer security using the method of privacy amplification against eavesdroppers. First we justify our proposed protocol by analyzing the physical layer security provided by the classic wiretap channel model (i.e. one-way protocol). In Gaussian channels, the classic one-way protocol requires Eve's channel to be degraded w.r.t. Bob's channel. However, this channel degradation condition depends on Eve's location and whether Eve's receiving antenna is more powerful than Bob's. To overcome this limitation, we introduce a two-way protocol inspired by IEEE TIT (1993) that eliminates the channel degradation condition. In the proposed two-way protocol, in the first phase, over a Gaussian channel, Bob sends randomness to Alice, which is partially leaked to Eve. Then, in the second phase, Alice transmits information to Bob over a public noiseless channel. We derive the secrecy capacity of the two-way protocol when the channel to Eve is also Gaussian. We show that the capacity of the two-way protocol is always positive. We present numerical values of the capacities illustrating the gains obtained by our proposed protocol. We apply our result to simple yet realistic models of satellite communication channels.
2006.11476
Yuan Yao
Yuan Yao, Chang Liu, Dezhao Luo, Yu Zhou, Qixiang Ye
Video Playback Rate Perception for Self-supervised Spatio-Temporal Representation Learning
CVPR 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In self-supervised spatio-temporal representation learning, the temporal resolution and long-short term characteristics are not yet fully explored, which limits representation capabilities of learned models. In this paper, we propose a novel self-supervised method, referred to as video Playback Rate Perception (PRP), to learn spatio-temporal representation in a simple-yet-effective way. PRP is rooted in a dilated sampling strategy, which produces self-supervision signals about video playback rates for representation model learning. PRP is implemented with a feature encoder, a classification module, and a reconstructing decoder, to achieve spatio-temporal semantic retention in a collaborative discrimination-generation manner. The discriminative perception model follows a feature encoder to prefer perceiving low temporal resolution and long-term representation by classifying fast-forward rates. The generative perception model acts as a feature decoder to focus on comprehending high temporal resolution and short-term representation by introducing a motion-attention mechanism. PRP is applied on typical video target tasks including action recognition and video retrieval. Experiments show that PRP outperforms state-of-the-art self-supervised models with significant margins. Code is available at github.com/yuanyao366/PRP
[ { "created": "Sat, 20 Jun 2020 02:26:07 GMT", "version": "v1" } ]
2020-06-23
[ [ "Yao", "Yuan", "" ], [ "Liu", "Chang", "" ], [ "Luo", "Dezhao", "" ], [ "Zhou", "Yu", "" ], [ "Ye", "Qixiang", "" ] ]
In self-supervised spatio-temporal representation learning, the temporal resolution and long-short term characteristics are not yet fully explored, which limits representation capabilities of learned models. In this paper, we propose a novel self-supervised method, referred to as video Playback Rate Perception (PRP), to learn spatio-temporal representation in a simple-yet-effective way. PRP is rooted in a dilated sampling strategy, which produces self-supervision signals about video playback rates for representation model learning. PRP is implemented with a feature encoder, a classification module, and a reconstructing decoder, to achieve spatio-temporal semantic retention in a collaborative discrimination-generation manner. The discriminative perception model follows a feature encoder to prefer perceiving low temporal resolution and long-term representation by classifying fast-forward rates. The generative perception model acts as a feature decoder to focus on comprehending high temporal resolution and short-term representation by introducing a motion-attention mechanism. PRP is applied on typical video target tasks including action recognition and video retrieval. Experiments show that PRP outperforms state-of-the-art self-supervised models with significant margins. Code is available at github.com/yuanyao366/PRP
2110.08221
Matthew Leinhauser
Matthew Leinhauser, Ren\'e Widera, Sergei Bastrakov, Alexander Debus, Michael Bussmann, Sunita Chandrasekaran
Metrics and Design of an Instruction Roofline Model for AMD GPUs
14 pages, 7 figures, 2 tables, 4 equations, explains how to create an instruction roofline model for an AMD GPU as of Oct. 2021
null
null
null
cs.DC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Due to the recent announcement of the Frontier supercomputer, many scientific application developers are working to make their applications compatible with AMD architectures (CPU-GPU), which means moving away from the traditional CPU and NVIDIA-GPU systems. Due to the current limitations of profiling tools for AMD GPUs, this shift leaves a void in how to measure application performance on AMD GPUs. In this paper, we design an instruction roofline model for AMD GPUs using AMD's ROCProfiler and a benchmarking tool, BabelStream (the HIP implementation), as a way to measure an application's performance in instructions and memory transactions on new AMD hardware. Specifically, we create instruction roofline models for a case study scientific application, PIConGPU, an open source particle-in-cell (PIC) simulations application used for plasma and laser-plasma physics on the NVIDIA V100, AMD Radeon Instinct MI60, and AMD Instinct MI100 GPUs. When looking at the performance of multiple kernels of interest in PIConGPU we find that although the AMD MI100 GPU achieves a similar, or better, execution time compared to the NVIDIA V100 GPU, profiling tool differences make comparing performance of these two architectures hard. When looking at execution time, GIPS, and instruction intensity, the AMD MI60 achieves the worst performance out of the three GPUs used in this work.
[ { "created": "Fri, 15 Oct 2021 17:32:59 GMT", "version": "v1" }, { "created": "Wed, 10 Nov 2021 15:57:28 GMT", "version": "v2" } ]
2021-11-11
[ [ "Leinhauser", "Matthew", "" ], [ "Widera", "René", "" ], [ "Bastrakov", "Sergei", "" ], [ "Debus", "Alexander", "" ], [ "Bussmann", "Michael", "" ], [ "Chandrasekaran", "Sunita", "" ] ]
Due to the recent announcement of the Frontier supercomputer, many scientific application developers are working to make their applications compatible with AMD architectures (CPU-GPU), which means moving away from the traditional CPU and NVIDIA-GPU systems. Due to the current limitations of profiling tools for AMD GPUs, this shift leaves a void in how to measure application performance on AMD GPUs. In this paper, we design an instruction roofline model for AMD GPUs using AMD's ROCProfiler and a benchmarking tool, BabelStream (the HIP implementation), as a way to measure an application's performance in instructions and memory transactions on new AMD hardware. Specifically, we create instruction roofline models for a case study scientific application, PIConGPU, an open source particle-in-cell (PIC) simulations application used for plasma and laser-plasma physics on the NVIDIA V100, AMD Radeon Instinct MI60, and AMD Instinct MI100 GPUs. When looking at the performance of multiple kernels of interest in PIConGPU we find that although the AMD MI100 GPU achieves a similar, or better, execution time compared to the NVIDIA V100 GPU, profiling tool differences make comparing performance of these two architectures hard. When looking at execution time, GIPS, and instruction intensity, the AMD MI60 achieves the worst performance out of the three GPUs used in this work.
1002.3183
Vitaly Feldman
Vitaly Feldman
A Complete Characterization of Statistical Query Learning with Applications to Evolvability
Simplified Lemma 3.8 and it's applications
Proceedings of the 44th IEEE Symposium on Foundations of Computer Science, pp 375-384, 2009
null
null
cs.CC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The statistical query (SQ) learning model of Kearns (1993) is a natural restriction of the PAC learning model in which a learning algorithm is allowed to obtain estimates of statistical properties of the examples but cannot see the examples themselves. We describe a new and simple characterization of the query complexity of learning in the SQ learning model. Unlike the previously known bounds on SQ learning, our characterization preserves the accuracy and the efficiency of learning. The preservation of accuracy implies that our characterization gives the first characterization of SQ learning in the agnostic learning framework. The preservation of efficiency is achieved using a new boosting technique and allows us to derive a new approach to the design of evolutionary algorithms in Valiant's (2006) model of evolvability. We use this approach to demonstrate the existence of a large class of monotone evolutionary learning algorithms based on square loss performance estimation. These results differ significantly from the few known evolutionary algorithms and give evidence that evolvability in Valiant's model is a more versatile phenomenon than there had been previous reason to suspect.
[ { "created": "Tue, 16 Feb 2010 22:35:39 GMT", "version": "v1" }, { "created": "Tue, 21 Feb 2012 00:43:31 GMT", "version": "v2" }, { "created": "Mon, 25 Nov 2013 04:57:18 GMT", "version": "v3" } ]
2013-11-26
[ [ "Feldman", "Vitaly", "" ] ]
The statistical query (SQ) learning model of Kearns (1993) is a natural restriction of the PAC learning model in which a learning algorithm is allowed to obtain estimates of statistical properties of the examples but cannot see the examples themselves. We describe a new and simple characterization of the query complexity of learning in the SQ learning model. Unlike the previously known bounds on SQ learning, our characterization preserves the accuracy and the efficiency of learning. The preservation of accuracy implies that our characterization gives the first characterization of SQ learning in the agnostic learning framework. The preservation of efficiency is achieved using a new boosting technique and allows us to derive a new approach to the design of evolutionary algorithms in Valiant's (2006) model of evolvability. We use this approach to demonstrate the existence of a large class of monotone evolutionary learning algorithms based on square loss performance estimation. These results differ significantly from the few known evolutionary algorithms and give evidence that evolvability in Valiant's model is a more versatile phenomenon than there had been previous reason to suspect.
2109.12391
Xiangyu Yue
Xiangyu Yue, Zangwei Zheng, Colorado Reed, Hari Prasanna Das, Kurt Keutzer, Alberto Sangiovanni Vincentelli
Multi-source Few-shot Domain Adaptation
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-source Domain Adaptation (MDA) aims to transfer predictive models from multiple, fully-labeled source domains to an unlabeled target domain. However, in many applications, relevant labeled source datasets may not be available, and collecting source labels can be as expensive as labeling the target data itself. In this paper, we investigate Multi-source Few-shot Domain Adaptation (MFDA): a new domain adaptation scenario with limited multi-source labels and unlabeled target data. As we show, existing methods often fail to learn discriminative features for both source and target domains in the MFDA setting. Therefore, we propose a novel framework, termed Multi-Source Few-shot Adaptation Network (MSFAN), which can be trained end-to-end in a non-adversarial manner. MSFAN operates by first using a type of prototypical, multi-domain, self-supervised learning to learn features that are not only domain-invariant but also class-discriminative. Second, MSFAN uses a small, labeled support set to enforce feature consistency and domain invariance across domains. Finally, prototypes from multiple sources are leveraged to learn better classifiers. Compared with state-of-the-art MDA methods, MSFAN improves the mean classification accuracy over different domain pairs on MFDA by 20.2%, 9.4%, and 16.2% on Office, Office-Home, and DomainNet, respectively.
[ { "created": "Sat, 25 Sep 2021 15:54:01 GMT", "version": "v1" } ]
2021-09-28
[ [ "Yue", "Xiangyu", "" ], [ "Zheng", "Zangwei", "" ], [ "Reed", "Colorado", "" ], [ "Das", "Hari Prasanna", "" ], [ "Keutzer", "Kurt", "" ], [ "Vincentelli", "Alberto Sangiovanni", "" ] ]
Multi-source Domain Adaptation (MDA) aims to transfer predictive models from multiple, fully-labeled source domains to an unlabeled target domain. However, in many applications, relevant labeled source datasets may not be available, and collecting source labels can be as expensive as labeling the target data itself. In this paper, we investigate Multi-source Few-shot Domain Adaptation (MFDA): a new domain adaptation scenario with limited multi-source labels and unlabeled target data. As we show, existing methods often fail to learn discriminative features for both source and target domains in the MFDA setting. Therefore, we propose a novel framework, termed Multi-Source Few-shot Adaptation Network (MSFAN), which can be trained end-to-end in a non-adversarial manner. MSFAN operates by first using a type of prototypical, multi-domain, self-supervised learning to learn features that are not only domain-invariant but also class-discriminative. Second, MSFAN uses a small, labeled support set to enforce feature consistency and domain invariance across domains. Finally, prototypes from multiple sources are leveraged to learn better classifiers. Compared with state-of-the-art MDA methods, MSFAN improves the mean classification accuracy over different domain pairs on MFDA by 20.2%, 9.4%, and 16.2% on Office, Office-Home, and DomainNet, respectively.
2203.15071
Elizabeth Daly
Elizabeth M. Daly, Massimiliano Mattetti, \"Oznur Alkan, Rahul Nair
User Driven Model Adjustment via Boolean Rule Explanations
null
null
null
null
cs.AI cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
AI solutions are heavily dependent on the quality and accuracy of the input training data; however, the training data may not always fully reflect the most up-to-date policy landscape or may be missing business logic. The advances in explainability have opened the possibility of allowing users to interact with interpretable explanations of ML predictions in order to inject modifications or constraints that more accurately reflect current realities of the system. In this paper, we present a solution which leverages the predictive power of ML models while allowing the user to specify modifications to decision boundaries. Our interactive overlay approach achieves this goal without requiring model retraining, making it appropriate for systems that need to apply instant changes to their decision making. We demonstrate that user feedback rules can be layered with the ML predictions to provide immediate changes which in turn supports learning with less data.
[ { "created": "Mon, 28 Mar 2022 20:27:02 GMT", "version": "v1" } ]
2022-03-30
[ [ "Daly", "Elizabeth M.", "" ], [ "Mattetti", "Massimiliano", "" ], [ "Alkan", "Öznur", "" ], [ "Nair", "Rahul", "" ] ]
AI solutions are heavily dependent on the quality and accuracy of the input training data; however, the training data may not always fully reflect the most up-to-date policy landscape or may be missing business logic. The advances in explainability have opened the possibility of allowing users to interact with interpretable explanations of ML predictions in order to inject modifications or constraints that more accurately reflect current realities of the system. In this paper, we present a solution which leverages the predictive power of ML models while allowing the user to specify modifications to decision boundaries. Our interactive overlay approach achieves this goal without requiring model retraining, making it appropriate for systems that need to apply instant changes to their decision making. We demonstrate that user feedback rules can be layered with the ML predictions to provide immediate changes which in turn supports learning with less data.
1904.11746
Meead Saberi
Ziyuan Gu, Sajjad Shafiei, Zhiyuan Liu, Meead Saberi
Optimal distance- and time-dependent area-based pricing with the Network Fundamental Diagram
39 pages, 13 figures
Transportation Research Part C: Emerging Technologies 95, 1-28 (2018)
10.1016/j.trc.2018.07.004
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given the efficiency and equity concerns of a cordon toll, this paper proposes a few alternative distance-dependent area-based pricing models for a large-scale dynamic traffic network. We use the Network Fundamental Diagram (NFD) to monitor the network traffic state over time and consider different trip lengths in the toll calculation. The first model is a distance toll that is linearly related to the distance traveled within the cordon. The second model is an improved joint distance and time toll (JDTT) whereby users are charged jointly in proportion to the distance traveled and time spent within the cordon. The third model is a further improved joint distance and delay toll (JDDT) which replaces the time toll in the JDTT with a delay toll component. To solve the optimal toll level problem, we develop a simulation-based optimization (SBO) framework. Specifically, we propose a simultaneous approach and a sequential approach, respectively, based on the proportional-integral (PI) feedback controller to iteratively adjust the JDTT and JDDT, and use a calibrated large-scale simulation-based dynamic traffic assignment (DTA) model of Melbourne, Australia to evaluate the network performance under different pricing scenarios. While the framework is developed for static pricing, we show that it can be easily extended to solve time-dependent pricing by using multiple PI controllers. Results show that although the distance toll keeps the network from entering the congested regime of the NFD, it naturally drives users into the shortest paths within the cordon resulting in an uneven distribution of congestion. This is reflected by a large clockwise hysteresis loop in the NFD. In contrast, both the JDTT and JDDT reduce the size of the hysteresis loop while achieving the same control objective.
[ { "created": "Fri, 26 Apr 2019 10:09:07 GMT", "version": "v1" } ]
2020-09-24
[ [ "Gu", "Ziyuan", "" ], [ "Shafiei", "Sajjad", "" ], [ "Liu", "Zhiyuan", "" ], [ "Saberi", "Meead", "" ] ]
Given the efficiency and equity concerns of a cordon toll, this paper proposes a few alternative distance-dependent area-based pricing models for a large-scale dynamic traffic network. We use the Network Fundamental Diagram (NFD) to monitor the network traffic state over time and consider different trip lengths in the toll calculation. The first model is a distance toll that is linearly related to the distance traveled within the cordon. The second model is an improved joint distance and time toll (JDTT) whereby users are charged jointly in proportion to the distance traveled and time spent within the cordon. The third model is a further improved joint distance and delay toll (JDDT) which replaces the time toll in the JDTT with a delay toll component. To solve the optimal toll level problem, we develop a simulation-based optimization (SBO) framework. Specifically, we propose a simultaneous approach and a sequential approach, respectively, based on the proportional-integral (PI) feedback controller to iteratively adjust the JDTT and JDDT, and use a calibrated large-scale simulation-based dynamic traffic assignment (DTA) model of Melbourne, Australia to evaluate the network performance under different pricing scenarios. While the framework is developed for static pricing, we show that it can be easily extended to solve time-dependent pricing by using multiple PI controllers. Results show that although the distance toll keeps the network from entering the congested regime of the NFD, it naturally drives users into the shortest paths within the cordon resulting in an uneven distribution of congestion. This is reflected by a large clockwise hysteresis loop in the NFD. In contrast, both the JDTT and JDDT reduce the size of the hysteresis loop while achieving the same control objective.
2205.08812
Hanh T. M. Tran
Hanh Thi Minh Tran, David Hogg
Anomaly detection using prediction error with Spatio-Temporal Convolutional LSTM
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we propose a novel method for video anomaly detection motivated by an existing architecture for sequence-to-sequence prediction and reconstruction using a spatio-temporal convolutional Long Short-Term Memory (convLSTM). As in previous work on anomaly detection, anomalies arise as spatially localised failures in reconstruction or prediction. In experiments with five benchmark datasets, we show that using prediction gives superior performance to using reconstruction. We also compare performance with different length input/output sequences. Overall, our results using prediction are comparable with the state of the art on the benchmark datasets.
[ { "created": "Wed, 18 May 2022 09:25:53 GMT", "version": "v1" } ]
2022-05-19
[ [ "Tran", "Hanh Thi Minh", "" ], [ "Hogg", "David", "" ] ]
In this paper, we propose a novel method for video anomaly detection motivated by an existing architecture for sequence-to-sequence prediction and reconstruction using a spatio-temporal convolutional Long Short-Term Memory (convLSTM). As in previous work on anomaly detection, anomalies arise as spatially localised failures in reconstruction or prediction. In experiments with five benchmark datasets, we show that using prediction gives superior performance to using reconstruction. We also compare performance with different length input/output sequences. Overall, our results using prediction are comparable with the state of the art on the benchmark datasets.
2209.11135
Sacha L\'evy
Sacha L\'evy, Farimah Poursafaei, Kellin Pelrine, Reihaneh Rabbany
Active Keyword Selection to Track Evolving Topics on Twitter
10 pages, 3 figures
null
null
null
cs.SI cs.IR
http://creativecommons.org/licenses/by-nc-sa/4.0/
How can we study social interactions on evolving topics at a mass scale? Over the past decade, researchers from diverse fields such as economics, political science, and public health have often done this by querying Twitter's public API endpoints with hand-picked topical keywords to search or stream discussions. However, despite the API's accessibility, it remains difficult to select and update keywords to collect high-quality data relevant to topics of interest. In this paper, we propose an active learning method for rapidly refining query keywords to increase both the yielded topic relevance and dataset size. We leverage a large open-source COVID-19 Twitter dataset to illustrate the applicability of our method in tracking Tweets around the key sub-topics of Vaccine, Mask, and Lockdown. Our experiments show that our method achieves an average topic-related keyword recall 2x higher than baselines. We open-source our code along with a web interface for keyword selection to make data collection from Twitter more systematic for researchers.
[ { "created": "Thu, 22 Sep 2022 16:25:58 GMT", "version": "v1" } ]
2022-09-23
[ [ "Lévy", "Sacha", "" ], [ "Poursafaei", "Farimah", "" ], [ "Pelrine", "Kellin", "" ], [ "Rabbany", "Reihaneh", "" ] ]
How can we study social interactions on evolving topics at a mass scale? Over the past decade, researchers from diverse fields such as economics, political science, and public health have often done this by querying Twitter's public API endpoints with hand-picked topical keywords to search or stream discussions. However, despite the API's accessibility, it remains difficult to select and update keywords to collect high-quality data relevant to topics of interest. In this paper, we propose an active learning method for rapidly refining query keywords to increase both the yielded topic relevance and dataset size. We leverage a large open-source COVID-19 Twitter dataset to illustrate the applicability of our method in tracking Tweets around the key sub-topics of Vaccine, Mask, and Lockdown. Our experiments show that our method achieves an average topic-related keyword recall 2x higher than baselines. We open-source our code along with a web interface for keyword selection to make data collection from Twitter more systematic for researchers.
2308.09886
Amani Abusafia
Amani Abusafia, Abdallah Lakhdari, Athman Bouguettaya
Flow-Based Energy Services Composition
14 pages, 19 figures. Accepted to IEEE Transactions on Services Computing (IEEE TSC)
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
We propose a novel spatio-temporal service composition framework for crowdsourcing multiple IoT energy services to cater to multiple energy requests. We define a new energy service model to leverage the wearable-based energy and wireless power transfer technologies. We reformulate the problem of spatio-temporal service composition to provision multiple energy requests as a matching problem. We leverage the fragmented nature of energy to offer partial services to maximize the utilization of energy services. We propose EnergyFlowComp, a modified Maximum Flow matching algorithm that efficiently provisions IoT energy services to accommodate multiple energy requests. Moreover, we propose PartialFlowComp, an extension of the EnergyFlowComp approach that considers the partial-temporal overlap between services and requests in provisioning. We conduct an extensive set of experiments to assess the effectiveness and efficiency of the proposed framework.
[ { "created": "Sat, 19 Aug 2023 02:40:43 GMT", "version": "v1" } ]
2023-08-22
[ [ "Abusafia", "Amani", "" ], [ "Lakhdari", "Abdallah", "" ], [ "Bouguettaya", "Athman", "" ] ]
We propose a novel spatio-temporal service composition framework for crowdsourcing multiple IoT energy services to cater to multiple energy requests. We define a new energy service model to leverage the wearable-based energy and wireless power transfer technologies. We reformulate the problem of spatio-temporal service composition to provision multiple energy requests as a matching problem. We leverage the fragmented nature of energy to offer partial services to maximize the utilization of energy services. We propose EnergyFlowComp, a modified Maximum Flow matching algorithm that efficiently provisions IoT energy services to accommodate multiple energy requests. Moreover, we propose PartialFlowComp, an extension of the EnergyFlowComp approach that considers the partial-temporal overlap between services and requests in provisioning. We conduct an extensive set of experiments to assess the effectiveness and efficiency of the proposed framework.
2312.12441
Tuan T Nguyen
Neetu Sigger, Tuan Thanh Nguyen, Gianluca Tozzi, Quoc-Tuan Vien, Sinh Van Nguyen
DiffSpectralNet : Unveiling the Potential of Diffusion Models for Hyperspectral Image Classification
18 pages
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Hyperspectral images (HSI) have become popular for analysing remotely sensed images in multiple domains such as agriculture and medicine. However, existing models struggle with the complex relationships and characteristics of spectral-spatial data due to the multi-band nature and redundancy of hyperspectral data. To address this limitation, we propose a new network called DiffSpectralNet, which combines diffusion and transformer techniques. Our approach involves a two-step process. First, we use an unsupervised learning framework based on the diffusion model to extract both high-level and low-level spectral-spatial features. The diffusion method is capable of extracting diverse and meaningful spectral-spatial features, leading to improved HSI classification. Then, we employ a pretrained denoising U-Net to extract intermediate hierarchical features for classification. Finally, we use a supervised transformer-based classifier to perform the HSI classification. Through comprehensive experiments on HSI datasets, we evaluate the classification performance of DiffSpectralNet. The results demonstrate that our framework significantly outperforms existing approaches, achieving state-of-the-art performance.
[ { "created": "Sun, 29 Oct 2023 15:26:37 GMT", "version": "v1" } ]
2023-12-21
[ [ "Sigger", "Neetu", "" ], [ "Nguyen", "Tuan Thanh", "" ], [ "Tozzi", "Gianluca", "" ], [ "Vien", "Quoc-Tuan", "" ], [ "Van Nguyen", "Sinh", "" ] ]
Hyperspectral images (HSI) have become popular for analysing remotely sensed images in multiple domains such as agriculture and medicine. However, existing models struggle with the complex relationships and characteristics of spectral-spatial data due to the multi-band nature and redundancy of hyperspectral data. To address this limitation, we propose a new network called DiffSpectralNet, which combines diffusion and transformer techniques. Our approach involves a two-step process. First, we use an unsupervised learning framework based on the diffusion model to extract both high-level and low-level spectral-spatial features. The diffusion method is capable of extracting diverse and meaningful spectral-spatial features, leading to improved HSI classification. Then, we employ a pretrained denoising U-Net to extract intermediate hierarchical features for classification. Finally, we use a supervised transformer-based classifier to perform the HSI classification. Through comprehensive experiments on HSI datasets, we evaluate the classification performance of DiffSpectralNet. The results demonstrate that our framework significantly outperforms existing approaches, achieving state-of-the-art performance.
1906.06812
Ruggiero Seccia Mr
Francesco Foglino, Matteo Leonetti, Simone Sagratella, Ruggiero Seccia
A gray-box approach for curriculum learning
10 pages, 1 figure
Optimization of Complex Systems: Theory, Models, Algorithms and Applications, 2020, pp 720-729
10.1007/978-3-030-21803-4_72
null
cs.LG cs.AI math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Curriculum learning is often employed in deep reinforcement learning to let the agent progress more quickly towards better behaviors. Numerical methods for curriculum learning in the literature provide only initial heuristic solutions, with little to no guarantee on their quality. We define a new gray-box function that, by incorporating a suitable scheduling problem, can be effectively used to reformulate the curriculum learning problem. We propose different efficient numerical methods to address this gray-box reformulation. Preliminary numerical results on a benchmark task in the curriculum learning literature show the viability of the proposed approach.
[ { "created": "Mon, 17 Jun 2019 01:27:49 GMT", "version": "v1" } ]
2019-06-18
[ [ "Foglino", "Francesco", "" ], [ "Leonetti", "Matteo", "" ], [ "Sagratella", "Simone", "" ], [ "Seccia", "Ruggiero", "" ] ]
Curriculum learning is often employed in deep reinforcement learning to let the agent progress more quickly towards better behaviors. Numerical methods for curriculum learning in the literature provide only initial heuristic solutions, with little to no guarantee on their quality. We define a new gray-box function that, by incorporating a suitable scheduling problem, can be effectively used to reformulate the curriculum learning problem. We propose different efficient numerical methods to address this gray-box reformulation. Preliminary numerical results on a benchmark task in the curriculum learning literature show the viability of the proposed approach.
2206.05833
Mani Kumar Tellamekala
Mani Kumar Tellamekala, Shahin Amiriparian, Bj\"orn W. Schuller, Elisabeth Andr\'e, Timo Giesbrecht, Michel Valstar
COLD Fusion: Calibrated and Ordinal Latent Distribution Fusion for Uncertainty-Aware Multimodal Emotion Recognition
Accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence
null
null
null
cs.CV cs.HC cs.MM
http://creativecommons.org/licenses/by-sa/4.0/
Automatically recognising apparent emotions from face and voice is hard, in part because of various sources of uncertainty, including in the input data and the labels used in a machine learning framework. This paper introduces an uncertainty-aware audiovisual fusion approach that quantifies modality-wise uncertainty towards emotion prediction. To this end, we propose a novel fusion framework in which we first learn latent distributions over audiovisual temporal context vectors separately, and then constrain the variance vectors of unimodal latent distributions so that they represent the amount of information each modality provides w.r.t. emotion recognition. In particular, we impose Calibration and Ordinal Ranking constraints on the variance vectors of audiovisual latent distributions. When well-calibrated, modality-wise uncertainty scores indicate how much their corresponding predictions may differ from the ground truth labels. Well-ranked uncertainty scores allow the ordinal ranking of different frames across the modalities. To jointly impose both these constraints, we propose a softmax distributional matching loss. In both classification and regression settings, we compare our uncertainty-aware fusion model with standard model-agnostic fusion baselines. Our evaluation on two emotion recognition corpora, AVEC 2019 CES and IEMOCAP, shows that audiovisual emotion recognition can considerably benefit from well-calibrated and well-ranked latent uncertainty measures.
[ { "created": "Sun, 12 Jun 2022 20:25:21 GMT", "version": "v1" }, { "created": "Mon, 16 Oct 2023 20:29:02 GMT", "version": "v2" } ]
2023-10-18
[ [ "Tellamekala", "Mani Kumar", "" ], [ "Amiriparian", "Shahin", "" ], [ "Schuller", "Björn W.", "" ], [ "André", "Elisabeth", "" ], [ "Giesbrecht", "Timo", "" ], [ "Valstar", "Michel", "" ] ]
Automatically recognising apparent emotions from face and voice is hard, in part because of various sources of uncertainty, including in the input data and the labels used in a machine learning framework. This paper introduces an uncertainty-aware audiovisual fusion approach that quantifies modality-wise uncertainty towards emotion prediction. To this end, we propose a novel fusion framework in which we first learn latent distributions over audiovisual temporal context vectors separately, and then constrain the variance vectors of unimodal latent distributions so that they represent the amount of information each modality provides w.r.t. emotion recognition. In particular, we impose Calibration and Ordinal Ranking constraints on the variance vectors of audiovisual latent distributions. When well-calibrated, modality-wise uncertainty scores indicate how much their corresponding predictions may differ from the ground truth labels. Well-ranked uncertainty scores allow the ordinal ranking of different frames across the modalities. To jointly impose both these constraints, we propose a softmax distributional matching loss. In both classification and regression settings, we compare our uncertainty-aware fusion model with standard model-agnostic fusion baselines. Our evaluation on two emotion recognition corpora, AVEC 2019 CES and IEMOCAP, shows that audiovisual emotion recognition can considerably benefit from well-calibrated and well-ranked latent uncertainty measures.
cs/0304031
Douglas A. Galbi
Douglas A. Galbi
Transforming the Structure of Network Interconnection and Transport
null
CommLaw Conspectus, v. 8, n. 2 (Summer 2000) pp. 203-18
null
null
cs.CY
null
Vibrant development of a network-based economy requires separating investment in highly location-specific local access technology from the development of standardized, geography-independent, wide-area network services. Thus far, interconnection arrangements and associated regulations have been too closely tied to the idiosyncratic geographic structure of individual operators' networks. A key industry challenge is to foster the development of a wide-area lattice of common geographic points of interconnection. Sound regulatory and antitrust policy can help address this industry need.
[ { "created": "Tue, 22 Apr 2003 20:18:43 GMT", "version": "v1" } ]
2007-05-23
[ [ "Galbi", "Douglas A.", "" ] ]
Vibrant development of a network-based economy requires separating investment in highly location-specific local access technology from the development of standardized, geography-independent, wide-area network services. Thus far, interconnection arrangements and associated regulations have been too closely tied to the idiosyncratic geographic structure of individual operators' networks. A key industry challenge is to foster the development of a wide-area lattice of common geographic points of interconnection. Sound regulatory and antitrust policy can help address this industry need.
1711.09762
Yehia Abd Alrahman
Yehia Abd Alrahman and Rocco De Nicola and Michele Loreti
A Behavioural Theory for Interactions in Collective-Adaptive Systems
30 pages, preprint submitted to Elsevier. arXiv admin note: text overlap with arXiv:1711.06092 and arXiv:1602.05635
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a process calculus, named AbC, to study the behavioural theory of interactions in collective-adaptive systems by relying on attribute-based communication. An AbC system consists of a set of parallel components each of which is equipped with a set of attributes. Communication takes place in an implicit multicast fashion, and interaction among components is dynamically established by taking into account "connections" as determined by predicates over their attributes. The structural operational semantics of AbC is based on Labeled Transition Systems that are also used to define bisimilarity between components. Labeled bisimilarity is in full agreement with a barbed congruence, defined by simple basic observables and context closure. The introduced equivalence is used to study the expressiveness of AbC in terms of encoding broadcast channel-based interactions and to establish formal relationships between system descriptions at different levels of abstraction.
[ { "created": "Thu, 23 Nov 2017 14:32:07 GMT", "version": "v1" }, { "created": "Sun, 29 Jul 2018 12:42:39 GMT", "version": "v2" } ]
2018-07-31
[ [ "Alrahman", "Yehia Abd", "" ], [ "De Nicola", "Rocco", "" ], [ "Loreti", "Michele", "" ] ]
We propose a process calculus, named AbC, to study the behavioural theory of interactions in collective-adaptive systems by relying on attribute-based communication. An AbC system consists of a set of parallel components each of which is equipped with a set of attributes. Communication takes place in an implicit multicast fashion, and interaction among components is dynamically established by taking into account "connections" as determined by predicates over their attributes. The structural operational semantics of AbC is based on Labeled Transition Systems that are also used to define bisimilarity between components. Labeled bisimilarity is in full agreement with a barbed congruence, defined by simple basic observables and context closure. The introduced equivalence is used to study the expressiveness of AbC in terms of encoding broadcast channel-based interactions and to establish formal relationships between system descriptions at different levels of abstraction.
1903.07026
Hussien Al-Hmood Dr
Hussien Al-Hmood and H.S. Al-Raweshidy
Effective rate analysis over Fluctuating Beckmann fading channels
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The effective rate of the Fluctuating Beckmann (FB) fading channel is analysed. The moment generating function (MGF) of the instantaneous signal-to-noise ratio (SNR) is first used to derive the effective rate for arbitrary values of the fading parameters in terms of the extended generalised bivariate Meijer's $G$-function (EGBMGF). For integer values of the multipath and shadowing severity fading parameters, the probability density function (PDF) of the instantaneous SNR is employed. To this end, a simple, exact, and mathematically tractable analytic expression is obtained. Monte Carlo simulations and numerical results are presented to verify the validity of our analysis.
[ { "created": "Sun, 17 Mar 2019 04:52:23 GMT", "version": "v1" } ]
2019-03-19
[ [ "Al-Hmood", "Hussien", "" ], [ "Al-Raweshidy", "H. S.", "" ] ]
The effective rate of the Fluctuating Beckmann (FB) fading channel is analysed. The moment generating function (MGF) of the instantaneous signal-to-noise ratio (SNR) is first used to derive the effective rate for arbitrary values of the fading parameters in terms of the extended generalised bivariate Meijer's $G$-function (EGBMGF). For integer values of the multipath and shadowing severity fading parameters, the probability density function (PDF) of the instantaneous SNR is employed. To this end, a simple, exact, and mathematically tractable analytic expression is obtained. Monte Carlo simulations and numerical results are presented to verify the validity of our analysis.
2205.15195
Shimin Zhang
Shimin Zhang, Ziteng Wang, Yukai Ju, Yihui Fu, Yueyue Na, Qiang Fu, Lei Xie
Personalized Acoustic Echo Cancellation for Full-duplex Communications
submitted to INTERSPEECH 22
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) have shown promising results for acoustic echo cancellation (AEC). However, DNN-based AEC models let through all near-end speakers, including interfering speech. In light of recent studies on personalized speech enhancement, we investigate the feasibility of personalized acoustic echo cancellation (PAEC) in this paper for full-duplex communications, where background noise and interfering speakers may coexist with acoustic echoes. Specifically, we first propose a novel backbone neural network, termed the gated temporal convolutional neural network (GTCNN), that outperforms state-of-the-art AEC models in performance. Speaker embeddings such as d-vectors are further adopted as auxiliary information to guide the GTCNN to focus on the target speaker. A special case in PAEC is that speech snippets of both parties on the call are enrolled. Experimental results show that auxiliary information from either the near-end speaker or the far-end speaker can improve the DNN-based AEC performance. Nevertheless, there is still much room for improvement in the utilization of the finite-dimensional speaker embeddings.
[ { "created": "Mon, 30 May 2022 15:47:12 GMT", "version": "v1" }, { "created": "Thu, 30 Jun 2022 02:50:28 GMT", "version": "v2" } ]
2022-07-01
[ [ "Zhang", "Shimin", "" ], [ "Wang", "Ziteng", "" ], [ "Ju", "Yukai", "" ], [ "Fu", "Yihui", "" ], [ "Na", "Yueyue", "" ], [ "Fu", "Qiang", "" ], [ "Xie", "Lei", "" ] ]
Deep neural networks (DNNs) have shown promising results for acoustic echo cancellation (AEC). However, DNN-based AEC models let through all near-end speakers, including interfering speech. In light of recent studies on personalized speech enhancement, we investigate the feasibility of personalized acoustic echo cancellation (PAEC) in this paper for full-duplex communications, where background noise and interfering speakers may coexist with acoustic echoes. Specifically, we first propose a novel backbone neural network, termed the gated temporal convolutional neural network (GTCNN), that outperforms state-of-the-art AEC models in performance. Speaker embeddings such as d-vectors are further adopted as auxiliary information to guide the GTCNN to focus on the target speaker. A special case in PAEC is that speech snippets of both parties on the call are enrolled. Experimental results show that auxiliary information from either the near-end speaker or the far-end speaker can improve the DNN-based AEC performance. Nevertheless, there is still much room for improvement in the utilization of the finite-dimensional speaker embeddings.
1902.03738
Yuehe Zhu
Yue-he Zhu and Ya-zhong Luo
Fast Evaluation of Low-Thrust Transfers via Deep Neural Networks
null
null
null
null
cs.LG astro-ph.IM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The design of low-thrust-based multitarget interplanetary missions requires a method to quickly and accurately evaluate the low-thrust transfer between any two visiting targets. Complete evaluation of the low-thrust transfer includes not only the estimation of the optimal fuel consumption but also the judgment of transfer feasibility. In this paper, a deep neural network (DNN)-based method is proposed for quickly evaluating low-thrust transfer. An efficient database generation method is developed for obtaining both the infeasible and optimal transfers. A classification DNN and a regression DNN are trained based on the infeasible and optimal transfers to judge the transfer feasibility and estimate the optimal fuel consumption, respectively. The simulation results show that the well-trained DNNs are capable of quickly determining the transfer feasibility with a correct rate of greater than 98% and approximating the optimal transfer fuel consumption with a relative estimation error of less than 0.4%. The tests on two asteroid chains further show the superiority of the DNN-based method for application to the design of low-thrust-based multitarget interplanetary missions.
[ { "created": "Mon, 11 Feb 2019 05:43:16 GMT", "version": "v1" } ]
2019-02-12
[ [ "Zhu", "Yue-he", "" ], [ "Luo", "Ya-zhong", "" ] ]
The design of low-thrust-based multitarget interplanetary missions requires a method to quickly and accurately evaluate the low-thrust transfer between any two visiting targets. Complete evaluation of the low-thrust transfer includes not only the estimation of the optimal fuel consumption but also the judgment of transfer feasibility. In this paper, a deep neural network (DNN)-based method is proposed for quickly evaluating low-thrust transfer. An efficient database generation method is developed for obtaining both the infeasible and optimal transfers. A classification DNN and a regression DNN are trained based on the infeasible and optimal transfers to judge the transfer feasibility and estimate the optimal fuel consumption, respectively. The simulation results show that the well-trained DNNs are capable of quickly determining the transfer feasibility with a correct rate of greater than 98% and approximating the optimal transfer fuel consumption with a relative estimation error of less than 0.4%. The tests on two asteroid chains further show the superiority of the DNN-based method for application to the design of low-thrust-based multitarget interplanetary missions.
2405.18060
Mehrimah Amiripour
Mehrimah Amirpour, Reza Azmi
PRFashion24: A Dataset for Sentiment Analysis of Fashion Products Reviews in Persian
8 pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
The PRFashion24 dataset is a comprehensive Persian dataset collected from various online fashion stores, spanning April 2020 to March 2024. With 767,272 reviews, it is the first dataset of its kind that encompasses diverse categories within the fashion industry in the Persian language. The goal of this study is to harness deep learning techniques, specifically Long Short-Term Memory (LSTM) networks and a combination of Bidirectional LSTM and Convolutional Neural Network (BiLSTM-CNN), to analyze and reveal sentiments towards online fashion shopping. The LSTM model yielded an accuracy of 81.23%, while the BiLSTM-CNN model reached 82.89%. This research aims not only to introduce a diverse dataset in the field of fashion but also to enhance the public's understanding of opinions on online fashion shopping, which predominantly reflect a positive sentiment. Upon publication, both the optimized models and the PRFashion24 dataset will be available on GitHub.
[ { "created": "Tue, 28 May 2024 11:19:13 GMT", "version": "v1" } ]
2024-05-29
[ [ "Amirpour", "Mehrimah", "" ], [ "Azmi", "Reza", "" ] ]
The PRFashion24 dataset is a comprehensive Persian dataset collected from various online fashion stores, spanning April 2020 to March 2024. With 767,272 reviews, it is the first dataset of its kind that encompasses diverse categories within the fashion industry in the Persian language. The goal of this study is to harness deep learning techniques, specifically Long Short-Term Memory (LSTM) networks and a combination of Bidirectional LSTM and Convolutional Neural Network (BiLSTM-CNN), to analyze and reveal sentiments towards online fashion shopping. The LSTM model yielded an accuracy of 81.23%, while the BiLSTM-CNN model reached 82.89%. This research aims not only to introduce a diverse dataset in the field of fashion but also to enhance the public's understanding of opinions on online fashion shopping, which predominantly reflect a positive sentiment. Upon publication, both the optimized models and the PRFashion24 dataset will be available on GitHub.
2404.15687
Zhaoyang Chu
Zhaoyang Chu, Yao Wan, Qian Li, Yang Wu, Hongyu Zhang, Yulei Sui, Guandong Xu, Hai Jin
Graph Neural Networks for Vulnerability Detection: A Counterfactual Explanation
This paper was accepted in the proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2024)
null
10.1145/3650212.3652136
null
cs.SE cs.AI cs.CR
http://creativecommons.org/licenses/by/4.0/
Vulnerability detection is crucial for ensuring the security and reliability of software systems. Recently, Graph Neural Networks (GNNs) have emerged as a prominent code embedding approach for vulnerability detection, owing to their ability to capture the underlying semantic structure of source code. However, GNNs face significant challenges in explainability due to their inherently black-box nature. To this end, several factual reasoning-based explainers have been proposed. These explainers provide explanations for the predictions made by GNNs by analyzing the key features that contribute to the outcomes. We argue that these factual reasoning-based explanations cannot answer critical what-if questions: What would happen to the GNN's decision if we were to alter the code graph into alternative structures? Inspired by advancements of counterfactual reasoning in artificial intelligence, we propose CFExplainer, a novel counterfactual explainer for GNN-based vulnerability detection. Unlike factual reasoning-based explainers, CFExplainer seeks the minimal perturbation to the input code graph that leads to a change in the prediction, thereby addressing the what-if questions for vulnerability detection. We term this perturbation a counterfactual explanation, which can pinpoint the root causes of the detected vulnerability and furnish valuable insights for developers to undertake appropriate actions for fixing the vulnerability. Extensive experiments on four GNN-based vulnerability detection models demonstrate the effectiveness of CFExplainer over existing state-of-the-art factual reasoning-based explainers.
[ { "created": "Wed, 24 Apr 2024 06:52:53 GMT", "version": "v1" }, { "created": "Mon, 15 Jul 2024 14:05:49 GMT", "version": "v2" } ]
2024-07-16
[ [ "Chu", "Zhaoyang", "" ], [ "Wan", "Yao", "" ], [ "Li", "Qian", "" ], [ "Wu", "Yang", "" ], [ "Zhang", "Hongyu", "" ], [ "Sui", "Yulei", "" ], [ "Xu", "Guandong", "" ], [ "Jin", "Hai", "" ] ]
Vulnerability detection is crucial for ensuring the security and reliability of software systems. Recently, Graph Neural Networks (GNNs) have emerged as a prominent code embedding approach for vulnerability detection, owing to their ability to capture the underlying semantic structure of source code. However, GNNs face significant challenges in explainability due to their inherently black-box nature. To this end, several factual reasoning-based explainers have been proposed. These explainers provide explanations for the predictions made by GNNs by analyzing the key features that contribute to the outcomes. We argue that these factual reasoning-based explanations cannot answer critical what-if questions: What would happen to the GNN's decision if we were to alter the code graph into alternative structures? Inspired by advancements of counterfactual reasoning in artificial intelligence, we propose CFExplainer, a novel counterfactual explainer for GNN-based vulnerability detection. Unlike factual reasoning-based explainers, CFExplainer seeks the minimal perturbation to the input code graph that leads to a change in the prediction, thereby addressing the what-if questions for vulnerability detection. We term this perturbation a counterfactual explanation, which can pinpoint the root causes of the detected vulnerability and furnish valuable insights for developers to undertake appropriate actions for fixing the vulnerability. Extensive experiments on four GNN-based vulnerability detection models demonstrate the effectiveness of CFExplainer over existing state-of-the-art factual reasoning-based explainers.
2302.14286
Jianing Wang
Jianing Wang, Nuo Chen, Qiushi Sun, Wenkang Huang, Chengyu Wang, Ming Gao
HugNLP: A Unified and Comprehensive Library for Natural Language Processing
8 Pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In this paper, we introduce HugNLP, a unified and comprehensive library for natural language processing (NLP) with the prevalent backend of HuggingFace Transformers, which is designed for NLP researchers to easily utilize off-the-shelf algorithms and develop novel methods with user-defined models and tasks in real-world scenarios. HugNLP consists of a hierarchical structure including models, processors and applications that unifies the learning process of pre-trained language models (PLMs) on different NLP tasks. Additionally, we present some featured NLP applications to show the effectiveness of HugNLP, such as knowledge-enhanced PLMs, universal information extraction, low-resource mining, and code understanding and generation, etc. The source code will be released on GitHub (https://github.com/wjn1996/HugNLP).
[ { "created": "Tue, 28 Feb 2023 03:38:26 GMT", "version": "v1" } ]
2023-03-01
[ [ "Wang", "Jianing", "" ], [ "Chen", "Nuo", "" ], [ "Sun", "Qiushi", "" ], [ "Huang", "Wenkang", "" ], [ "Wang", "Chengyu", "" ], [ "Gao", "Ming", "" ] ]
In this paper, we introduce HugNLP, a unified and comprehensive library for natural language processing (NLP) with the prevalent backend of HuggingFace Transformers, which is designed for NLP researchers to easily utilize off-the-shelf algorithms and develop novel methods with user-defined models and tasks in real-world scenarios. HugNLP consists of a hierarchical structure including models, processors and applications that unifies the learning process of pre-trained language models (PLMs) on different NLP tasks. Additionally, we present some featured NLP applications to show the effectiveness of HugNLP, such as knowledge-enhanced PLMs, universal information extraction, low-resource mining, and code understanding and generation, etc. The source code will be released on GitHub (https://github.com/wjn1996/HugNLP).
2311.16171
Omkar Shelke
Omkar Shelke and Pranavi Pathakota and Anandsingh Chauhan and Harshad Khadilkar and Hardik Meisheri and Balaraman Ravindran
Multi-Agent Learning of Efficient Fulfilment and Routing Strategies in E-Commerce
null
null
null
null
cs.AI cs.LG cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an integrated algorithmic framework for minimising product delivery costs in e-commerce (known as the cost-to-serve or C2S). One of the major challenges in e-commerce is the large volume of spatio-temporally diverse orders from multiple customers, each of which has to be fulfilled from one of several warehouses using a fleet of vehicles. This results in two levels of decision-making: (i) selection of a fulfillment node for each order (including the option of deferral to a future time), and then (ii) routing of vehicles (each of which can carry multiple orders originating from the same warehouse). We propose an approach that combines graph neural networks and reinforcement learning to train the node selection and vehicle routing agents. We include real-world constraints such as warehouse inventory capacity, vehicle characteristics such as travel times, service times, carrying capacity, and customer constraints including time windows for delivery. The complexity of this problem arises from the fact that outcomes (rewards) are driven both by the fulfillment node mapping as well as the routing algorithms, and are spatio-temporally distributed. Our experiments show that this algorithmic pipeline outperforms pure heuristic policies.
[ { "created": "Mon, 20 Nov 2023 10:32:28 GMT", "version": "v1" } ]
2023-11-29
[ [ "Shelke", "Omkar", "" ], [ "Pathakota", "Pranavi", "" ], [ "Chauhan", "Anandsingh", "" ], [ "Khadilkar", "Harshad", "" ], [ "Meisheri", "Hardik", "" ], [ "Ravindran", "Balaraman", "" ] ]
This paper presents an integrated algorithmic framework for minimising product delivery costs in e-commerce (known as the cost-to-serve or C2S). One of the major challenges in e-commerce is the large volume of spatio-temporally diverse orders from multiple customers, each of which has to be fulfilled from one of several warehouses using a fleet of vehicles. This results in two levels of decision-making: (i) selection of a fulfillment node for each order (including the option of deferral to a future time), and then (ii) routing of vehicles (each of which can carry multiple orders originating from the same warehouse). We propose an approach that combines graph neural networks and reinforcement learning to train the node selection and vehicle routing agents. We include real-world constraints such as warehouse inventory capacity, vehicle characteristics such as travel times, service times, carrying capacity, and customer constraints including time windows for delivery. The complexity of this problem arises from the fact that outcomes (rewards) are driven both by the fulfillment node mapping as well as the routing algorithms, and are spatio-temporally distributed. Our experiments show that this algorithmic pipeline outperforms pure heuristic policies.
2003.13909
Meng Hua
Meng Hua, Qingqing Wu, Derrick Wing Kwan Ng, Jun Zhao, Luxi Yang
Intelligent Reflecting Surface-Aided Joint Processing Coordinated Multipoint Transmission
This is preprint version submitted to IEEE journal for possible publication
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates intelligent reflecting surface (IRS)-aided multicell wireless networks, where an IRS is deployed to assist the joint processing coordinated multipoint (JP-CoMP) transmission from multiple base stations (BSs) to multiple cell-edge users. By taking into account the fairness among cell-edge users, we aim at maximizing the minimum achievable rate of cell-edge users by jointly optimizing the transmit beamforming at the BSs and the phase shifts at the IRS. As a compromise approach, we transform the non-convex max-min problem into an equivalent form based on the mean-square error method, which facilitates the design of an efficient suboptimal iterative algorithm. In addition, we investigate two scenarios, namely the single-user system and the multiuser system. For the former scenario, the optimal transmit beamforming is obtained based on the dual subgradient method, while the phase shift matrix is optimized based on the Majorization-Minimization method. For the latter scenario, the transmit beamforming matrix and phase shift matrix are obtained by the second-order cone programming and semidefinite relaxation techniques, respectively. Numerical results demonstrate the significant performance improvement achieved by deploying an IRS. Furthermore, the proposed JP-CoMP design significantly outperforms the conventional coordinated scheduling/coordinated beamforming coordinated multipoint (CS/CB-CoMP) design in terms of max-min rate.
[ { "created": "Tue, 31 Mar 2020 01:58:09 GMT", "version": "v1" }, { "created": "Wed, 1 Apr 2020 01:12:33 GMT", "version": "v2" }, { "created": "Fri, 3 Apr 2020 00:57:47 GMT", "version": "v3" }, { "created": "Sun, 29 Nov 2020 02:58:46 GMT", "version": "v4" } ]
2020-12-01
[ [ "Hua", "Meng", "" ], [ "Wu", "Qingqing", "" ], [ "Ng", "Derrick Wing Kwan", "" ], [ "Zhao", "Jun", "" ], [ "Yang", "Luxi", "" ] ]
This paper investigates intelligent reflecting surface (IRS)-aided multicell wireless networks, where an IRS is deployed to assist the joint processing coordinated multipoint (JP-CoMP) transmission from multiple base stations (BSs) to multiple cell-edge users. By taking into account the fairness among cell-edge users, we aim at maximizing the minimum achievable rate of cell-edge users by jointly optimizing the transmit beamforming at the BSs and the phase shifts at the IRS. As a compromise approach, we transform the non-convex max-min problem into an equivalent form based on the mean-square error method, which facilitates the design of an efficient suboptimal iterative algorithm. In addition, we investigate two scenarios, namely the single-user system and the multiuser system. For the former scenario, the optimal transmit beamforming is obtained based on the dual subgradient method, while the phase shift matrix is optimized based on the Majorization-Minimization method. For the latter scenario, the transmit beamforming matrix and phase shift matrix are obtained by the second-order cone programming and semidefinite relaxation techniques, respectively. Numerical results demonstrate the significant performance improvement achieved by deploying an IRS. Furthermore, the proposed JP-CoMP design significantly outperforms the conventional coordinated scheduling/coordinated beamforming coordinated multipoint (CS/CB-CoMP) design in terms of max-min rate.
2012.00413
Zhengyan Zhang
Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun
CPM: A Large-scale Generative Chinese Pre-trained Language Model
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3, with 175 billion parameters and 570GB training data, drew a lot of attention due to the capacity of few-shot (even zero-shot) learning. However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus of GPT-3 is primarily English, and the parameters are not publicly available. In this technical report, we release the Chinese Pre-trained Language Model (CPM) with generative pre-training on large-scale Chinese training data. To the best of our knowledge, CPM, with 2.6 billion parameters and 100GB Chinese training data, is the largest Chinese pre-trained language model, which could facilitate several downstream Chinese NLP tasks, such as conversation, essay generation, cloze test, and language understanding. Extensive experiments demonstrate that CPM achieves strong performance on many NLP tasks in the settings of few-shot (even zero-shot) learning. The code and parameters are available at https://github.com/TsinghuaAI/CPM-Generate.
[ { "created": "Tue, 1 Dec 2020 11:32:56 GMT", "version": "v1" } ]
2020-12-02
[ [ "Zhang", "Zhengyan", "" ], [ "Han", "Xu", "" ], [ "Zhou", "Hao", "" ], [ "Ke", "Pei", "" ], [ "Gu", "Yuxian", "" ], [ "Ye", "Deming", "" ], [ "Qin", "Yujia", "" ], [ "Su", "Yusheng", "" ], [ "Ji", "Haozhe", "" ], [ "Guan", "Jian", "" ], [ "Qi", "Fanchao", "" ], [ "Wang", "Xiaozhi", "" ], [ "Zheng", "Yanan", "" ], [ "Zeng", "Guoyang", "" ], [ "Cao", "Huanqi", "" ], [ "Chen", "Shengqi", "" ], [ "Li", "Daixuan", "" ], [ "Sun", "Zhenbo", "" ], [ "Liu", "Zhiyuan", "" ], [ "Huang", "Minlie", "" ], [ "Han", "Wentao", "" ], [ "Tang", "Jie", "" ], [ "Li", "Juanzi", "" ], [ "Zhu", "Xiaoyan", "" ], [ "Sun", "Maosong", "" ] ]
Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3, with 175 billion parameters and 570GB training data, drew a lot of attention due to the capacity of few-shot (even zero-shot) learning. However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus of GPT-3 is primarily English, and the parameters are not publicly available. In this technical report, we release the Chinese Pre-trained Language Model (CPM) with generative pre-training on large-scale Chinese training data. To the best of our knowledge, CPM, with 2.6 billion parameters and 100GB Chinese training data, is the largest Chinese pre-trained language model, which could facilitate several downstream Chinese NLP tasks, such as conversation, essay generation, cloze test, and language understanding. Extensive experiments demonstrate that CPM achieves strong performance on many NLP tasks in the settings of few-shot (even zero-shot) learning. The code and parameters are available at https://github.com/TsinghuaAI/CPM-Generate.
1805.02744
Junjie Wang
Junjie Wang and Ye Yang and Rahul Krishna and Tim Menzies and Qing Wang
Effective Automated Decision Support for Managing Crowdtesting
12 pages
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Crowdtesting has grown to be an effective alternative to traditional testing, especially in mobile apps. However, crowdtesting is hard to manage in nature. Given the complexity of mobile applications and the unpredictability of the distributed, parallel crowdtesting process, it is difficult to estimate (a) the remaining number of bugs as yet undetected or (b) the required cost to find those bugs. Experience-based decisions may result in an ineffective crowdtesting process. This paper aims at exploring automated decision support to effectively manage the crowdtesting process. The proposed ISENSE applies an incremental sampling technique to process crowdtesting reports arriving in chronological order, organizes them into fixed-size groups as dynamic inputs, and predicts two test completion indicators in an incremental manner. The two indicators are: 1) the total number of bugs, predicted with a Capture-ReCapture (CRC) model, and 2) the required test cost for achieving certain test objectives, predicted with an AutoRegressive Integrated Moving Average (ARIMA) model. We assess ISENSE using 46,434 reports of 218 crowdtesting tasks from one of the largest crowdtesting platforms in China. Its effectiveness is demonstrated through two applications for automating crowdtesting management, i.e. automation of task closing decisions, and semi-automation of task closing trade-off analysis. The results show that decision automation using ISENSE will provide managers with greater opportunities to achieve cost-effectiveness gains in crowdtesting. Specifically, a median of 100% of bugs can be detected with 30% of the cost saved based on the automated close prediction.
[ { "created": "Mon, 7 May 2018 21:07:42 GMT", "version": "v1" } ]
2018-05-09
[ [ "Wang", "Junjie", "" ], [ "Yang", "Ye", "" ], [ "Krishna", "Rahul", "" ], [ "Menzies", "Tim", "" ], [ "Wang", "Qing", "" ] ]
Crowdtesting has grown to be an effective alternative to traditional testing, especially in mobile apps. However, crowdtesting is hard to manage in nature. Given the complexity of mobile applications and the unpredictability of the distributed, parallel crowdtesting process, it is difficult to estimate (a) the remaining number of bugs as yet undetected or (b) the required cost to find those bugs. Experience-based decisions may result in an ineffective crowdtesting process. This paper aims at exploring automated decision support to effectively manage the crowdtesting process. The proposed ISENSE applies an incremental sampling technique to process crowdtesting reports arriving in chronological order, organizes them into fixed-size groups as dynamic inputs, and predicts two test completion indicators in an incremental manner. The two indicators are: 1) the total number of bugs, predicted with a Capture-ReCapture (CRC) model, and 2) the required test cost for achieving certain test objectives, predicted with an AutoRegressive Integrated Moving Average (ARIMA) model. We assess ISENSE using 46,434 reports of 218 crowdtesting tasks from one of the largest crowdtesting platforms in China. Its effectiveness is demonstrated through two applications for automating crowdtesting management, i.e. automation of task closing decisions, and semi-automation of task closing trade-off analysis. The results show that decision automation using ISENSE will provide managers with greater opportunities to achieve cost-effectiveness gains in crowdtesting. Specifically, a median of 100% of bugs can be detected with 30% of the cost saved based on the automated close prediction.
2301.08951
Chengmin Gao
Chengmin Gao and Bin Li
Time-Conditioned Generative Modeling of Object-Centric Representations for Video Decomposition and Prediction
null
Proceedings of the 39th Conference on Uncertainty in Artificial Intelligence (UAI-23), pp.613-623, 2023
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When perceiving the world from multiple viewpoints, humans have the ability to reason about the complete objects in a compositional manner even when an object is completely occluded from certain viewpoints. Meanwhile, humans are able to imagine novel views after observing multiple viewpoints. Recent remarkable advances in multi-view object-centric learning still leave some unresolved problems: 1) The shapes of partially or completely occluded objects cannot be well reconstructed. 2) The novel viewpoint prediction depends on expensive viewpoint annotations rather than implicit rules in view representations. In this paper, we introduce a time-conditioned generative model for videos. To reconstruct the complete shape of an object accurately, we enhance the disentanglement between the latent representations of objects and views, where the latent representations of time-conditioned views are jointly inferred with a Transformer and then are input to a sequential extension of Slot Attention to learn object-centric representations. In addition, Gaussian processes are employed as priors of view latent variables for video generation and novel-view prediction without viewpoint annotations. Experiments on multiple datasets demonstrate that the proposed model can perform object-centric video decomposition, reconstruct the complete shapes of occluded objects, and make novel-view predictions.
[ { "created": "Sat, 21 Jan 2023 13:39:39 GMT", "version": "v1" }, { "created": "Fri, 27 Jan 2023 06:01:24 GMT", "version": "v2" }, { "created": "Wed, 7 Jun 2023 15:54:03 GMT", "version": "v3" }, { "created": "Thu, 26 Oct 2023 10:07:02 GMT", "version": "v4" } ]
2023-10-27
[ [ "Gao", "Chengmin", "" ], [ "Li", "Bin", "" ] ]
When perceiving the world from multiple viewpoints, humans have the ability to reason about the complete objects in a compositional manner even when an object is completely occluded from certain viewpoints. Meanwhile, humans are able to imagine novel views after observing multiple viewpoints. Recent remarkable advances in multi-view object-centric learning still leave some unresolved problems: 1) The shapes of partially or completely occluded objects cannot be well reconstructed. 2) The novel viewpoint prediction depends on expensive viewpoint annotations rather than implicit rules in view representations. In this paper, we introduce a time-conditioned generative model for videos. To reconstruct the complete shape of an object accurately, we enhance the disentanglement between the latent representations of objects and views, where the latent representations of time-conditioned views are jointly inferred with a Transformer and then are input to a sequential extension of Slot Attention to learn object-centric representations. In addition, Gaussian processes are employed as priors of view latent variables for video generation and novel-view prediction without viewpoint annotations. Experiments on multiple datasets demonstrate that the proposed model can perform object-centric video decomposition, reconstruct the complete shapes of occluded objects, and make novel-view predictions.
2408.06665
Shuqi He
Shuqi He, Jun Zhuang, Ding Wang, Jun Song
RW-NSGCN: A Robust Approach to Structural Attacks via Negative Sampling
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Node classification using Graph Neural Networks (GNNs) has been widely applied in various practical scenarios, such as predicting user interests and detecting communities in social networks. However, recent studies have shown that graph-structured networks often contain potential noise and attacks, in the form of topological perturbations and weight disturbances, which can lead to decreased classification performance in GNNs. To improve the robustness of the model, we propose a novel method: Random Walk Negative Sampling Graph Convolutional Network (RW-NSGCN). Specifically, RW-NSGCN integrates the Random Walk with Restart (RWR) and PageRank (PGR) algorithms for negative sampling and employs a Determinantal Point Process (DPP)-based GCN for convolution operations. RWR leverages both global and local information to manage noise and local variations, while PGR assesses node importance to stabilize the topological structure. The DPP-based GCN ensures diversity among negative samples and aggregates their features to produce robust node embeddings, thereby improving classification performance. Experimental results demonstrate that the RW-NSGCN model effectively addresses network topology attacks and weight instability, increasing the accuracy of anomaly detection and overall stability. In terms of classification accuracy, RW-NSGCN significantly outperforms existing methods, showing greater resilience across various scenarios and effectively mitigating the impact of such vulnerabilities.
[ { "created": "Tue, 13 Aug 2024 06:34:56 GMT", "version": "v1" } ]
2024-08-14
[ [ "He", "Shuqi", "" ], [ "Zhuang", "Jun", "" ], [ "Wang", "Ding", "" ], [ "Song", "Jun", "" ] ]
Node classification using Graph Neural Networks (GNNs) has been widely applied in various practical scenarios, such as predicting user interests and detecting communities in social networks. However, recent studies have shown that graph-structured networks often contain potential noise and attacks, in the form of topological perturbations and weight disturbances, which can lead to decreased classification performance in GNNs. To improve the robustness of the model, we propose a novel method: Random Walk Negative Sampling Graph Convolutional Network (RW-NSGCN). Specifically, RW-NSGCN integrates the Random Walk with Restart (RWR) and PageRank (PGR) algorithms for negative sampling and employs a Determinantal Point Process (DPP)-based GCN for convolution operations. RWR leverages both global and local information to manage noise and local variations, while PGR assesses node importance to stabilize the topological structure. The DPP-based GCN ensures diversity among negative samples and aggregates their features to produce robust node embeddings, thereby improving classification performance. Experimental results demonstrate that the RW-NSGCN model effectively addresses network topology attacks and weight instability, increasing the accuracy of anomaly detection and overall stability. In terms of classification accuracy, RW-NSGCN significantly outperforms existing methods, showing greater resilience across various scenarios and effectively mitigating the impact of such vulnerabilities.
2310.18263
Anoop V. S.
Adhish S. Sujan, Ajitha. V, Aleena Benny, Amiya M. P., V. S. Anoop
MalFake: A Multimodal Fake News Identification for Malayalam using Recurrent Neural Networks and VGG-16
null
null
null
null
cs.CL cs.CY
http://creativecommons.org/licenses/by/4.0/
The amount of news being consumed online has substantially expanded in recent years. Fake news has become increasingly common, especially in regional languages like Malayalam, due to the rapid publication and lack of editorial standards on some online sites. Fake news may have a terrible effect on society, causing people to make bad judgments, lose faith in authorities, and even engage in violent behavior. In the context of India, there are many regional languages, and fake news is spreading in every one of them. Therefore, providing efficient techniques for identifying false information in regional languages is crucial. Until now, little to no work has been done in Malayalam on extracting features from multiple modalities to classify fake news. Multimodal approaches are more accurate in detecting fake news, as features from multiple modalities are extracted to build the deep learning classification model. As far as we know, this is the first piece of work in Malayalam that uses multimodal deep learning to tackle false information. Models trained with more than one modality typically outperform models trained with only one modality. Our study in the Malayalam language utilizing multimodal deep learning is a significant step toward more effective misinformation detection and mitigation.
[ { "created": "Fri, 27 Oct 2023 16:51:29 GMT", "version": "v1" } ]
2023-10-30
[ [ "Sujan", "Adhish S.", "" ], [ "V", "Ajitha.", "" ], [ "Benny", "Aleena", "" ], [ "P.", "Amiya M.", "" ], [ "Anoop", "V. S.", "" ] ]
The amount of news being consumed online has substantially expanded in recent years. Fake news has become increasingly common, especially in regional languages like Malayalam, due to the rapid publication and lack of editorial standards on some online sites. Fake news may have a terrible effect on society, causing people to make bad judgments, lose faith in authorities, and even engage in violent behavior. In the context of India, there are many regional languages, and fake news is spreading in every one of them. Therefore, providing efficient techniques for identifying false information in regional languages is crucial. Until now, little to no work has been done in Malayalam on extracting features from multiple modalities to classify fake news. Multimodal approaches are more accurate in detecting fake news, as features from multiple modalities are extracted to build the deep learning classification model. As far as we know, this is the first piece of work in Malayalam that uses multimodal deep learning to tackle false information. Models trained with more than one modality typically outperform models trained with only one modality. Our study in the Malayalam language utilizing multimodal deep learning is a significant step toward more effective misinformation detection and mitigation.
2306.12977
Wonjae Shin
Jaehyup Seong, Mesut Toka, Wonjae Shin
Sum-Rate Maximization of RSMA-based Aerial Communications with Energy Harvesting: A Reinforcement Learning Approach
13 pages, 4 figures, submitted to IEEE Wireless Communications Letters
null
null
null
cs.IT cs.LG math.IT
http://creativecommons.org/licenses/by/4.0/
In this letter, we investigate a joint power and beamforming design problem for rate-splitting multiple access (RSMA)-based aerial communications with energy harvesting, where a self-sustainable aerial base station serves multiple users by utilizing the harvested energy. Considering maximizing the sum-rate from the long-term perspective, we utilize a deep reinforcement learning (DRL) approach, namely the soft actor-critic algorithm, to restrict the maximum transmission power at each time based on the stochastic property of the channel environment, harvested energy, and battery power information. Moreover, for designing precoders and power allocation among all the private/common streams of the RSMA, we employ sequential least squares programming (SLSQP) using the Han-Powell quasi-Newton method to maximize the sum-rate for the given transmission power via DRL. Numerical results show the superiority of the proposed scheme over several baseline methods in terms of the average sum-rate performance.
[ { "created": "Thu, 22 Jun 2023 15:38:22 GMT", "version": "v1" } ]
2023-06-23
[ [ "Seong", "Jaehyup", "" ], [ "Toka", "Mesut", "" ], [ "Shin", "Wonjae", "" ] ]
In this letter, we investigate a joint power and beamforming design problem for rate-splitting multiple access (RSMA)-based aerial communications with energy harvesting, where a self-sustainable aerial base station serves multiple users by utilizing the harvested energy. Considering maximizing the sum-rate from the long-term perspective, we utilize a deep reinforcement learning (DRL) approach, namely the soft actor-critic algorithm, to restrict the maximum transmission power at each time based on the stochastic property of the channel environment, harvested energy, and battery power information. Moreover, for designing precoders and power allocation among all the private/common streams of the RSMA, we employ sequential least squares programming (SLSQP) using the Han-Powell quasi-Newton method to maximize the sum-rate for the given transmission power via DRL. Numerical results show the superiority of the proposed scheme over several baseline methods in terms of the average sum-rate performance.
1512.09287
Vladimir N. Potapov
Vladimir N. Potapov
Partial covering arrays for data hiding and quantization
7 pages
SEMR, 2018. V. 15, P. 561-569
10.17377/semi.2018.15.045
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of finding a set (partial covering array) $S$ of vertices of the Boolean $n$-cube having cardinality $2^{n-k}$ and intersecting the maximum number of $k$-dimensional faces. We prove that the ratio of the number of $k$-faces containing elements of $S$ to the total number of $k$-faces is less than $1-\frac{1+o(1)}{\sqrt{2\pi k}}$ as $n\rightarrow\infty$ for sufficiently large $k$. The solution of the problem in the class of linear codes is found. Connections between this problem, cryptography, and the efficiency of quantization are discussed.
[ { "created": "Thu, 31 Dec 2015 13:32:24 GMT", "version": "v1" }, { "created": "Sat, 5 Aug 2017 04:48:31 GMT", "version": "v2" } ]
2018-11-01
[ [ "Potapov", "Vladimir N.", "" ] ]
We consider the problem of finding a set (partial covering array) $S$ of vertices of the Boolean $n$-cube having cardinality $2^{n-k}$ and intersecting the maximum number of $k$-dimensional faces. We prove that the ratio of the number of $k$-faces containing elements of $S$ to the total number of $k$-faces is less than $1-\frac{1+o(1)}{\sqrt{2\pi k}}$ as $n\rightarrow\infty$ for sufficiently large $k$. The solution of the problem in the class of linear codes is found. Connections between this problem, cryptography, and the efficiency of quantization are discussed.
2307.02276
Ben Norman
Ben Norman, Jeff Clune
First-Explore, then Exploit: Meta-Learning Intelligent Exploration
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard reinforcement learning (RL) agents never intelligently explore like a human (i.e. by taking into account complex domain priors and previous explorations). Even the most basic intelligent exploration strategies such as exhaustive search are only inefficiently or poorly approximated by approaches such as novelty search or intrinsic motivation, let alone more complicated strategies like learning new skills, climbing stairs, opening doors, or conducting experiments. This lack of intelligent exploration limits sample efficiency and prevents solving hard exploration domains. We argue a core barrier prohibiting many RL approaches from learning intelligent exploration is that the methods attempt to explore and exploit simultaneously, which harms both exploration and exploitation as the goals often conflict. We propose a novel meta-RL framework (First-Explore) with two policies: one policy learns to only explore and one policy learns to only exploit. Once trained, we can then explore with the explore policy, for as long as desired, and then exploit based on all the information gained during exploration. This approach avoids the conflict of trying to do both exploration and exploitation at once. We demonstrate that First-Explore can learn intelligent exploration strategies such as exhaustive search and more, and that it outperforms dominant standard RL and meta-RL approaches on domains where exploration requires sacrificing reward. First-Explore is a significant step towards creating meta-RL algorithms capable of learning human-level exploration which is essential to solve challenging unseen hard-exploration domains.
[ { "created": "Wed, 5 Jul 2023 13:20:21 GMT", "version": "v1" } ]
2023-07-06
[ [ "Norman", "Ben", "" ], [ "Clune", "Jeff", "" ] ]
Standard reinforcement learning (RL) agents never intelligently explore like a human (i.e. by taking into account complex domain priors and previous explorations). Even the most basic intelligent exploration strategies such as exhaustive search are only inefficiently or poorly approximated by approaches such as novelty search or intrinsic motivation, let alone more complicated strategies like learning new skills, climbing stairs, opening doors, or conducting experiments. This lack of intelligent exploration limits sample efficiency and prevents solving hard exploration domains. We argue a core barrier prohibiting many RL approaches from learning intelligent exploration is that the methods attempt to explore and exploit simultaneously, which harms both exploration and exploitation as the goals often conflict. We propose a novel meta-RL framework (First-Explore) with two policies: one policy learns to only explore and one policy learns to only exploit. Once trained, we can then explore with the explore policy, for as long as desired, and then exploit based on all the information gained during exploration. This approach avoids the conflict of trying to do both exploration and exploitation at once. We demonstrate that First-Explore can learn intelligent exploration strategies such as exhaustive search and more, and that it outperforms dominant standard RL and meta-RL approaches on domains where exploration requires sacrificing reward. First-Explore is a significant step towards creating meta-RL algorithms capable of learning human-level exploration which is essential to solve challenging unseen hard-exploration domains.
2401.08415
Arda Senocak
Jiu Feng, Mehmet Hamza Erol, Joon Son Chung, Arda Senocak
From Coarse to Fine: Efficient Training for Audio Spectrogram Transformers
ICASSP 2024
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
Transformers have become central to recent advances in audio classification. However, training an audio spectrogram transformer, e.g. AST, from scratch can be resource and time-intensive. Furthermore, the complexity of transformers heavily depends on the input audio spectrogram size. In this work, we aim to optimize AST training by linking to the resolution in the time-axis. We introduce multi-phase training of audio spectrogram transformers by connecting the seminal idea of coarse-to-fine with transformer models. To achieve this, we propose a set of methods for temporal compression. By employing one of these methods, the transformer model learns from lower-resolution (coarse) data in the initial phases, and then is fine-tuned with high-resolution data later in a curriculum learning strategy. Experimental results demonstrate that the proposed training mechanism for AST leads to improved (or on-par) performance with faster convergence, i.e. requiring fewer computational resources and less time. This approach is also generalizable to other AST-based methods regardless of their learning paradigms.
[ { "created": "Tue, 16 Jan 2024 14:59:37 GMT", "version": "v1" } ]
2024-01-17
[ [ "Feng", "Jiu", "" ], [ "Erol", "Mehmet Hamza", "" ], [ "Chung", "Joon Son", "" ], [ "Senocak", "Arda", "" ] ]
Transformers have become central to recent advances in audio classification. However, training an audio spectrogram transformer, e.g. AST, from scratch can be resource and time-intensive. Furthermore, the complexity of transformers heavily depends on the input audio spectrogram size. In this work, we aim to optimize AST training by linking to the resolution in the time-axis. We introduce multi-phase training of audio spectrogram transformers by connecting the seminal idea of coarse-to-fine with transformer models. To achieve this, we propose a set of methods for temporal compression. By employing one of these methods, the transformer model learns from lower-resolution (coarse) data in the initial phases, and then is fine-tuned with high-resolution data later in a curriculum learning strategy. Experimental results demonstrate that the proposed training mechanism for AST leads to improved (or on-par) performance with faster convergence, i.e. requiring fewer computational resources and less time. This approach is also generalizable to other AST-based methods regardless of their learning paradigms.
2401.09366
Thomas Lamiaux
Thomas Lamiaux, Benedikt Ahrens
An Introduction to Different Approaches to Initial Semantics
null
null
null
null
cs.LO cs.PL
http://creativecommons.org/licenses/by/4.0/
Characterizing programming languages with variable binding as initial objects was first achieved by Fiore, Plotkin, and Turi in their seminal paper published at LICS'99. To do so, in particular to prove initiality theorems, they developed a framework based on monoidal categories, functors with strengths, and $\Sigma$-monoids. An alternative approach using modules over monads was later introduced by Hirschowitz and Maggesi, for endofunctor categories, that is, for particular monoidal categories. This approach has the advantage of providing a more general and abstract definition of signatures and models; however, no general initiality result is known for this notion of signature. Furthermore, Matthes and Uustalu provided a categorical formalism for constructing (initial) monads via Mendler-style recursion, that can also be used for initial semantics. The different approaches have been developed further in several articles. However, in practice, the literature is difficult to access, and links between the different strands of work remain underexplored. In the present work, we give an introduction to initial semantics that encompasses the three different strands. We develop a suitable "pushout" of Hirschowitz and Maggesi's framework with Fiore's, and rely on Matthes and Uustalu's formalism to provide modular proofs. For this purpose, we generalize both Hirschowitz and Maggesi's framework, and Matthes and Uustalu's formalism to the general setting of monoidal categories studied by Fiore and collaborators. Moreover, we provide a fully worked out presentation of some basic instances of the literature, and an extensive discussion of related work explaining the links between the different approaches.
[ { "created": "Wed, 17 Jan 2024 17:36:26 GMT", "version": "v1" } ]
2024-01-18
[ [ "Lamiaux", "Thomas", "" ], [ "Ahrens", "Benedikt", "" ] ]
Characterizing programming languages with variable binding as initial objects was first achieved by Fiore, Plotkin, and Turi in their seminal paper published at LICS'99. To do so, in particular to prove initiality theorems, they developed a framework based on monoidal categories, functors with strengths, and $\Sigma$-monoids. An alternative approach using modules over monads was later introduced by Hirschowitz and Maggesi, for endofunctor categories, that is, for particular monoidal categories. This approach has the advantage of providing a more general and abstract definition of signatures and models; however, no general initiality result is known for this notion of signature. Furthermore, Matthes and Uustalu provided a categorical formalism for constructing (initial) monads via Mendler-style recursion, that can also be used for initial semantics. The different approaches have been developed further in several articles. However, in practice, the literature is difficult to access, and links between the different strands of work remain underexplored. In the present work, we give an introduction to initial semantics that encompasses the three different strands. We develop a suitable "pushout" of Hirschowitz and Maggesi's framework with Fiore's, and rely on Matthes and Uustalu's formalism to provide modular proofs. For this purpose, we generalize both Hirschowitz and Maggesi's framework, and Matthes and Uustalu's formalism to the general setting of monoidal categories studied by Fiore and collaborators. Moreover, we provide a fully worked out presentation of some basic instances of the literature, and an extensive discussion of related work explaining the links between the different approaches.
1903.08097
Alessandra Cervone
Alessandra Cervone, Chandra Khatri, Rahul Goel, Behnam Hedayatnia, Anu Venkatesh, Dilek Hakkani-Tur, Raefer Gabriel
Natural Language Generation at Scale: A Case Study for Open Domain Question Answering
Accepted to INLG 2019
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current approaches to Natural Language Generation (NLG) for dialog mainly focus on domain-specific, task-oriented applications (e.g. restaurant booking) using limited ontologies (up to 20 slot types), usually without considering the previous conversation context. Furthermore, these approaches require large amounts of data for each domain, and do not benefit from examples that may be available for other domains. This work explores the feasibility of applying statistical NLG to scenarios requiring larger ontologies, such as multi-domain dialog applications or open-domain question answering (QA) based on knowledge graphs. We model NLG through an Encoder-Decoder framework using a large dataset of interactions between real-world users and a conversational agent for open-domain QA. First, we investigate the impact of increasing the number of slot types on the generation quality and experiment with different partitions of the QA data with progressively larger ontologies (up to 369 slot types). Second, we perform multi-task learning experiments between open-domain QA and task-oriented dialog, and benchmark our model on a popular NLG dataset. Moreover, we experiment with using the conversational context as an additional input to improve response generation quality. Our experiments show the feasibility of learning statistical NLG models for open-domain QA with larger ontologies.
[ { "created": "Tue, 19 Mar 2019 16:35:29 GMT", "version": "v1" }, { "created": "Mon, 23 Sep 2019 21:25:39 GMT", "version": "v2" } ]
2019-09-25
[ [ "Cervone", "Alessandra", "" ], [ "Khatri", "Chandra", "" ], [ "Goel", "Rahul", "" ], [ "Hedayatnia", "Behnam", "" ], [ "Venkatesh", "Anu", "" ], [ "Hakkani-Tur", "Dilek", "" ], [ "Gabriel", "Raefer", "" ] ]
Current approaches to Natural Language Generation (NLG) for dialog mainly focus on domain-specific, task-oriented applications (e.g. restaurant booking) using limited ontologies (up to 20 slot types), usually without considering the previous conversation context. Furthermore, these approaches require large amounts of data for each domain, and do not benefit from examples that may be available for other domains. This work explores the feasibility of applying statistical NLG to scenarios requiring larger ontologies, such as multi-domain dialog applications or open-domain question answering (QA) based on knowledge graphs. We model NLG through an Encoder-Decoder framework using a large dataset of interactions between real-world users and a conversational agent for open-domain QA. First, we investigate the impact of increasing the number of slot types on the generation quality and experiment with different partitions of the QA data with progressively larger ontologies (up to 369 slot types). Second, we perform multi-task learning experiments between open-domain QA and task-oriented dialog, and benchmark our model on a popular NLG dataset. Moreover, we experiment with using the conversational context as an additional input to improve response generation quality. Our experiments show the feasibility of learning statistical NLG models for open-domain QA with larger ontologies.
1707.04783
Honggang Hu
Honggang Hu, Xiaolong Yang, and Shaohua Tang
New Classes of Ternary Bent Functions from the Coulter-Matthews Bent Functions
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been an active research issue for many years to construct new bent functions. For $k$ odd with $\gcd(n, k)=1$, and $a\in\mathbb{F}_{3^n}^{*}$, the function $f(x)=Tr(ax^{\frac{3^k+1}{2}})$ is weakly regular bent over $\mathbb{F}_{3^n}$, where $Tr(\cdot):\mathbb{F}_{3^n}\rightarrow\mathbb{F}_3$ is the trace function. This is the well-known Coulter-Matthews bent function. In this paper, we determine the dual function of $f(x)$ completely. As a consequence, we find many classes of ternary bent functions not reported in the literature previously. Such bent functions are not quadratic if $k>1$, and have $\left(\left(\frac{1+\sqrt{5}}{2}\right)^{w+1}-\right.$ $\left.\left(\frac{1-\sqrt{5}}{2}\right)^{w+1}\right)/\sqrt{5}$ or $\left(\left(\frac{1+\sqrt{5}}{2}\right)^{n-w+1}-\right.$ $\left.\left(\frac{1-\sqrt{5}}{2}\right)^{n-w+1}\right)/\sqrt{5}$ trace terms, where $0<w<n$ and $wk\equiv 1\ (\bmod\;n)$. Among them, five special cases are especially interesting: for the case of $k=(n+1)/2$, the number of trace terms is $\left(\left(\frac{1+\sqrt{5}}{2}\right)^{n-1}-\right.$ $\left.\left(\frac{1-\sqrt{5}}{2}\right)^{n-1}\right)/\sqrt{5}$; for the case of $k=n-1$, the number of trace terms is $\left(\left(\frac{1+\sqrt{5}}{2}\right)^n-\right.$ $\left.\left(\frac{1-\sqrt{5}}{2}\right)^n\right)/\sqrt{5}$; for the case of $k=(n-1)/2$, the number of trace terms is $\left(\left(\frac{1+\sqrt{5}}{2}\right)^{n-1}-\right.$ $\left.\left(\frac{1-\sqrt{5}}{2}\right)^{n-1}\right)/\sqrt{5}$; for the case of $(n, k)=(5t+4, 4t+3)$ or $(5t+1, 4t+1)$ with $t\geq 1$, the number of trace terms is 8; and for the case of $(n, k)=(7t+6, 6t+5)$ or $(7t+1, 6t+1)$ with $t\geq 1$, the number of trace terms is 21. As a byproduct, we find new classes of ternary bent functions with only 8 or 21 trace terms.
[ { "created": "Sat, 15 Jul 2017 20:05:08 GMT", "version": "v1" } ]
2017-07-18
[ [ "Hu", "Honggang", "" ], [ "Yang", "Xiaolong", "" ], [ "Tang", "Shaohua", "" ] ]
It has been an active research issue for many years to construct new bent functions. For $k$ odd with $\gcd(n, k)=1$, and $a\in\mathbb{F}_{3^n}^{*}$, the function $f(x)=Tr(ax^{\frac{3^k+1}{2}})$ is weakly regular bent over $\mathbb{F}_{3^n}$, where $Tr(\cdot):\mathbb{F}_{3^n}\rightarrow\mathbb{F}_3$ is the trace function. This is the well-known Coulter-Matthews bent function. In this paper, we determine the dual function of $f(x)$ completely. As a consequence, we find many classes of ternary bent functions not reported in the literature previously. Such bent functions are not quadratic if $k>1$, and have $\left(\left(\frac{1+\sqrt{5}}{2}\right)^{w+1}-\right.$ $\left.\left(\frac{1-\sqrt{5}}{2}\right)^{w+1}\right)/\sqrt{5}$ or $\left(\left(\frac{1+\sqrt{5}}{2}\right)^{n-w+1}-\right.$ $\left.\left(\frac{1-\sqrt{5}}{2}\right)^{n-w+1}\right)/\sqrt{5}$ trace terms, where $0<w<n$ and $wk\equiv 1\ (\bmod\;n)$. Among them, five special cases are especially interesting: for the case of $k=(n+1)/2$, the number of trace terms is $\left(\left(\frac{1+\sqrt{5}}{2}\right)^{n-1}-\right.$ $\left.\left(\frac{1-\sqrt{5}}{2}\right)^{n-1}\right)/\sqrt{5}$; for the case of $k=n-1$, the number of trace terms is $\left(\left(\frac{1+\sqrt{5}}{2}\right)^n-\right.$ $\left.\left(\frac{1-\sqrt{5}}{2}\right)^n\right)/\sqrt{5}$; for the case of $k=(n-1)/2$, the number of trace terms is $\left(\left(\frac{1+\sqrt{5}}{2}\right)^{n-1}-\right.$ $\left.\left(\frac{1-\sqrt{5}}{2}\right)^{n-1}\right)/\sqrt{5}$; for the case of $(n, k)=(5t+4, 4t+3)$ or $(5t+1, 4t+1)$ with $t\geq 1$, the number of trace terms is 8; and for the case of $(n, k)=(7t+6, 6t+5)$ or $(7t+1, 6t+1)$ with $t\geq 1$, the number of trace terms is 21. As a byproduct, we find new classes of ternary bent functions with only 8 or 21 trace terms.
2406.19538
Anthony Rios
Dan Schumacher, Fatemeh Haji, Tara Grey, Niharika Bandlamudi, Nupoor Karnik, Gagana Uday Kumar, Jason Cho-Yu Chiang, Paul Rad, Nishant Vishwamitra, Anthony Rios
Context Matters: An Empirical Study of the Impact of Contextual Information in Temporal Question Answering Systems
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) often struggle with temporal reasoning, crucial for tasks like historical event analysis and time-sensitive information retrieval. Despite advancements, state-of-the-art models falter in handling temporal information, especially when faced with irrelevant or noisy contexts. This paper addresses this gap by empirically examining the robustness of temporal question-answering (TQA) systems trained on various context types, including relevant, irrelevant, slightly altered, and no context. Our findings indicate that training with a mix of these contexts enhances model robustness and accuracy. Additionally, we show that the position of context relative to the question significantly impacts performance, with question-first positioning yielding better results. We introduce two new context-rich TQA datasets, ContextAQA and ContextTQE, and provide comprehensive evaluations and guidelines for training robust TQA models. Our work lays the foundation for developing reliable and context-aware temporal QA systems, with broader implications for enhancing LLM robustness against diverse and potentially adversarial information.
[ { "created": "Thu, 27 Jun 2024 21:31:30 GMT", "version": "v1" } ]
2024-07-01
[ [ "Schumacher", "Dan", "" ], [ "Haji", "Fatemeh", "" ], [ "Grey", "Tara", "" ], [ "Bandlamudi", "Niharika", "" ], [ "Karnik", "Nupoor", "" ], [ "Kumar", "Gagana Uday", "" ], [ "Chiang", "Jason Cho-Yu", "" ], [ "Rad", "Paul", "" ], [ "Vishwamitra", "Nishant", "" ], [ "Rios", "Anthony", "" ] ]
Large language models (LLMs) often struggle with temporal reasoning, crucial for tasks like historical event analysis and time-sensitive information retrieval. Despite advancements, state-of-the-art models falter in handling temporal information, especially when faced with irrelevant or noisy contexts. This paper addresses this gap by empirically examining the robustness of temporal question-answering (TQA) systems trained on various context types, including relevant, irrelevant, slightly altered, and no context. Our findings indicate that training with a mix of these contexts enhances model robustness and accuracy. Additionally, we show that the position of context relative to the question significantly impacts performance, with question-first positioning yielding better results. We introduce two new context-rich TQA datasets, ContextAQA and ContextTQE, and provide comprehensive evaluations and guidelines for training robust TQA models. Our work lays the foundation for developing reliable and context-aware temporal QA systems, with broader implications for enhancing LLM robustness against diverse and potentially adversarial information.
2104.02937
Miguel A. Mosteiro
Dariusz R. Kowalski and Miguel A. Mosteiro
Polynomial Anonymous Dynamic Distributed Computing without a Unique Leader
arXiv admin note: text overlap with arXiv:1707.04282
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Counting the number of nodes in Anonymous Dynamic Networks is enticing from an algorithmic perspective: an important computation in a restricted platform with promising applications. Starting with Michail, Chatzigiannakis, and Spirakis [19], a flurry of papers sped up the running time guarantees from doubly-exponential to polynomial [16]. There is a common theme across all those works: a distinguished node is assumed to be present, because Counting cannot be solved deterministically without at least one. In the present work we study challenging questions that naturally follow: how to efficiently count with more than one distinguished node, or how to count without any distinguished node. More importantly, what is the minimal information needed about these distinguished nodes and what is the best we can aim for (count precision, stochastic guarantees, etc.) without any. We present negative and positive results to answer these questions. To the best of our knowledge, this is the first work that addresses them.
[ { "created": "Wed, 7 Apr 2021 06:12:52 GMT", "version": "v1" } ]
2021-04-08
[ [ "Kowalski", "Dariusz R.", "" ], [ "Mosteiro", "Miguel A.", "" ] ]
Counting the number of nodes in Anonymous Dynamic Networks is enticing from an algorithmic perspective: an important computation in a restricted platform with promising applications. Starting with Michail, Chatzigiannakis, and Spirakis [19], a flurry of papers sped up the running time guarantees from doubly-exponential to polynomial [16]. There is a common theme across all those works: a distinguished node is assumed to be present, because Counting cannot be solved deterministically without at least one. In the present work we study challenging questions that naturally follow: how to efficiently count with more than one distinguished node, or how to count without any distinguished node. More importantly, what is the minimal information needed about these distinguished nodes and what is the best we can aim for (count precision, stochastic guarantees, etc.) without any. We present negative and positive results to answer these questions. To the best of our knowledge, this is the first work that addresses them.
2004.02438
Xuming Hu
Xuming Hu, Chenwei Zhang, Yusong Xu, Lijie Wen, Philip S. Yu
SelfORE: Self-supervised Relational Feature Learning for Open Relation Extraction
In EMNLP 2020 as a long paper. Code and data are available at https://github.com/THU-BPM/SelfORE
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open relation extraction is the task of extracting open-domain relation facts from natural language sentences. Existing works either utilize heuristics or distant-supervised annotations to train a supervised classifier over pre-defined relations, or adopt unsupervised methods with additional assumptions that have less discriminative power. In this work, we propose a self-supervised framework named SelfORE, which exploits weak, self-supervised signals by leveraging a large pretrained language model for adaptive clustering on contextualized relational features, and bootstraps the self-supervised signals by improving contextualized features in relation classification. Experimental results on three datasets show the effectiveness and robustness of SelfORE on open-domain Relation Extraction when compared with competitive baselines.
[ { "created": "Mon, 6 Apr 2020 07:23:17 GMT", "version": "v1" }, { "created": "Tue, 6 Oct 2020 12:32:20 GMT", "version": "v2" } ]
2020-10-07
[ [ "Hu", "Xuming", "" ], [ "Zhang", "Chenwei", "" ], [ "Xu", "Yusong", "" ], [ "Wen", "Lijie", "" ], [ "Yu", "Philip S.", "" ] ]
Open relation extraction is the task of extracting open-domain relation facts from natural language sentences. Existing works either utilize heuristics or distant-supervised annotations to train a supervised classifier over pre-defined relations, or adopt unsupervised methods with additional assumptions that have less discriminative power. In this work, we propose a self-supervised framework named SelfORE, which exploits weak, self-supervised signals by leveraging a large pretrained language model for adaptive clustering on contextualized relational features, and bootstraps the self-supervised signals by improving contextualized features in relation classification. Experimental results on three datasets show the effectiveness and robustness of SelfORE on open-domain Relation Extraction when compared with competitive baselines.
2209.14265
Peng Yin
Shreyas Kulkarni, Peng Yin, and Sebastian Scherer
360FusionNeRF: Panoramic Neural Radiance Fields with Joint Guidance
8 pages, Fig 3, Submitted to IEEE RAL. arXiv admin note: text overlap with arXiv:2106.10859, arXiv:2104.00677, arXiv:2203.09957, arXiv:2204.00928 by other authors
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method to synthesize novel views from a single $360^\circ$ panorama image based on the neural radiance field (NeRF). Prior studies in a similar setting rely on the neighborhood interpolation capability of multi-layer perceptrons to complete missing regions caused by occlusion, which leads to artifacts in their predictions. We propose 360FusionNeRF, a semi-supervised learning framework where we introduce geometric supervision and semantic consistency to guide the progressive training process. Firstly, the input image is re-projected to $360^\circ$ images, and auxiliary depth maps are extracted at other camera positions. The depth supervision, in addition to the NeRF color guidance, improves the geometry of the synthesized views. Additionally, we introduce a semantic consistency loss that encourages realistic renderings of novel views. We extract these semantic features using a pre-trained visual encoder such as CLIP, a Vision Transformer trained on hundreds of millions of diverse 2D photographs mined from the web with natural language supervision. Experiments indicate that our proposed method can produce plausible completions of unobserved regions while preserving the features of the scene. When trained across various scenes, 360FusionNeRF consistently achieves state-of-the-art performance when transferring to the synthetic Structured3D dataset (PSNR~5%, SSIM~3% LPIPS~13%), real-world Matterport3D dataset (PSNR~3%, SSIM~3% LPIPS~9%) and Replica360 dataset (PSNR~8%, SSIM~2% LPIPS~18%).
[ { "created": "Wed, 28 Sep 2022 17:30:53 GMT", "version": "v1" }, { "created": "Mon, 3 Oct 2022 07:31:30 GMT", "version": "v2" } ]
2022-10-04
[ [ "Kulkarni", "Shreyas", "" ], [ "Yin", "Peng", "" ], [ "Scherer", "Sebastian", "" ] ]
We present a method to synthesize novel views from a single $360^\circ$ panorama image based on the neural radiance field (NeRF). Prior studies in a similar setting rely on the neighborhood interpolation capability of multi-layer perceptrons to complete missing regions caused by occlusion, which leads to artifacts in their predictions. We propose 360FusionNeRF, a semi-supervised learning framework where we introduce geometric supervision and semantic consistency to guide the progressive training process. Firstly, the input image is re-projected to $360^\circ$ images, and auxiliary depth maps are extracted at other camera positions. The depth supervision, in addition to the NeRF color guidance, improves the geometry of the synthesized views. Additionally, we introduce a semantic consistency loss that encourages realistic renderings of novel views. We extract these semantic features using a pre-trained visual encoder such as CLIP, a Vision Transformer trained on hundreds of millions of diverse 2D photographs mined from the web with natural language supervision. Experiments indicate that our proposed method can produce plausible completions of unobserved regions while preserving the features of the scene. When trained across various scenes, 360FusionNeRF consistently achieves state-of-the-art performance when transferring to the synthetic Structured3D dataset (PSNR~5%, SSIM~3% LPIPS~13%), real-world Matterport3D dataset (PSNR~3%, SSIM~3% LPIPS~9%) and Replica360 dataset (PSNR~8%, SSIM~2% LPIPS~18%).
1801.08985
Steven Hickson
Steven Hickson, Anelia Angelova, Irfan Essa, Rahul Sukthankar
Object category learning and retrieval with weak supervision
Camera-ready version for NIPS 2017 workshop Learning with Limited Labeled Data
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of retrieving objects from image data and learning to classify them into meaningful semantic categories with minimal supervision. To that end, we propose a fully differentiable unsupervised deep clustering approach to learn semantic classes in an end-to-end fashion without individual class labeling using only unlabeled object proposals. The key contributions of our work are 1) a k-means clustering objective where the clusters are learned as parameters of the network and are represented as memory units, and 2) simultaneously building a feature representation, or embedding, while learning to cluster it. This approach shows promising results on two popular computer vision datasets: on CIFAR10 for clustering objects, and on the more complex and challenging Cityscapes dataset for semantically discovering classes which visually correspond to cars, people, and bicycles. Currently, the only supervision provided is segmentation objectness masks, but this method can be extended to use an unsupervised objectness-based object generation mechanism which will make the approach completely unsupervised.
[ { "created": "Fri, 26 Jan 2018 21:47:59 GMT", "version": "v1" }, { "created": "Mon, 23 Jul 2018 20:22:15 GMT", "version": "v2" } ]
2018-07-25
[ [ "Hickson", "Steven", "" ], [ "Angelova", "Anelia", "" ], [ "Essa", "Irfan", "" ], [ "Sukthankar", "Rahul", "" ] ]
We consider the problem of retrieving objects from image data and learning to classify them into meaningful semantic categories with minimal supervision. To that end, we propose a fully differentiable unsupervised deep clustering approach to learn semantic classes in an end-to-end fashion without individual class labeling using only unlabeled object proposals. The key contributions of our work are 1) a k-means clustering objective where the clusters are learned as parameters of the network and are represented as memory units, and 2) simultaneously building a feature representation, or embedding, while learning to cluster it. This approach shows promising results on two popular computer vision datasets: on CIFAR10 for clustering objects, and on the more complex and challenging Cityscapes dataset for semantically discovering classes which visually correspond to cars, people, and bicycles. Currently, the only supervision provided is segmentation objectness masks, but this method can be extended to use an unsupervised objectness-based object generation mechanism which will make the approach completely unsupervised.
1712.00269
Nikolay Jetchev
Nikolay Jetchev, Urs Bergmann, Calvin Seward
GANosaic: Mosaic Creation with Generative Texture Manifolds
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Workshop on Machine Learning for Creativity and Design
null
null
null
cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel framework for generating texture mosaics with convolutional neural networks. Our method is called GANosaic and performs optimization in the latent noise space of a generative texture model, which allows the transformation of a content image into a mosaic exhibiting the visual properties of the underlying texture manifold. To represent that manifold, we use a state-of-the-art generative adversarial method for texture synthesis, which can learn expressive texture representations from data and produce mosaic images with very high resolution. This fully convolutional model generates smooth (without any visible borders) mosaic images which morph and blend different textures locally. In addition, we develop a new type of differentiable statistical regularization appropriate for optimization over the prior noise space of the PSGAN model.
[ { "created": "Fri, 1 Dec 2017 10:35:13 GMT", "version": "v1" } ]
2017-12-04
[ [ "Jetchev", "Nikolay", "" ], [ "Bergmann", "Urs", "" ], [ "Seward", "Calvin", "" ] ]
This paper presents a novel framework for generating texture mosaics with convolutional neural networks. Our method is called GANosaic and performs optimization in the latent noise space of a generative texture model, which allows the transformation of a content image into a mosaic exhibiting the visual properties of the underlying texture manifold. To represent that manifold, we use a state-of-the-art generative adversarial method for texture synthesis, which can learn expressive texture representations from data and produce mosaic images with very high resolution. This fully convolutional model generates smooth (without any visible borders) mosaic images which morph and blend different textures locally. In addition, we develop a new type of differentiable statistical regularization appropriate for optimization over the prior noise space of the PSGAN model.
1308.6797
Nir Ailon
Nir Ailon
Online Ranking: Discrete Choice, Spearman Correlation and Other Feedback
null
null
null
null
cs.LG cs.GT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a set $V$ of $n$ objects, an online ranking system outputs at each time step a full ranking of the set, observes a feedback of some form and suffers a loss. We study the setting in which the (adversarial) feedback is an element in $V$, and the loss is the position (0th, 1st, 2nd...) of the item in the outputted ranking. More generally, we study a setting in which the feedback is a subset $U$ of at most $k$ elements in $V$, and the loss is the sum of the positions of those elements. We present an algorithm of expected regret $O(n^{3/2}\sqrt{Tk})$ over a time horizon of $T$ steps with respect to the best single ranking in hindsight. This improves previous algorithms and analyses either by a factor of $\Omega(\sqrt{k})$, by a factor of $\Omega(\sqrt{\log n})$, or by improving running time from quadratic to $O(n\log n)$ per round. We also prove a matching lower bound. Our techniques also imply an improved regret bound for online rank aggregation over the Spearman correlation measure, and for other more complex ranking loss functions.
[ { "created": "Fri, 30 Aug 2013 17:03:16 GMT", "version": "v1" }, { "created": "Mon, 2 Sep 2013 02:18:18 GMT", "version": "v2" }, { "created": "Sun, 8 Sep 2013 02:52:39 GMT", "version": "v3" }, { "created": "Wed, 25 Sep 2013 22:05:33 GMT", "version": "v4" }, { "created": "Mon, 14 Oct 2013 14:44:41 GMT", "version": "v5" } ]
2013-10-15
[ [ "Ailon", "Nir", "" ] ]
Given a set $V$ of $n$ objects, an online ranking system outputs at each time step a full ranking of the set, observes a feedback of some form and suffers a loss. We study the setting in which the (adversarial) feedback is an element in $V$, and the loss is the position (0th, 1st, 2nd...) of the item in the outputted ranking. More generally, we study a setting in which the feedback is a subset $U$ of at most $k$ elements in $V$, and the loss is the sum of the positions of those elements. We present an algorithm of expected regret $O(n^{3/2}\sqrt{Tk})$ over a time horizon of $T$ steps with respect to the best single ranking in hindsight. This improves previous algorithms and analyses either by a factor of $\Omega(\sqrt{k})$, by a factor of $\Omega(\sqrt{\log n})$, or by improving running time from quadratic to $O(n\log n)$ per round. We also prove a matching lower bound. Our techniques also imply an improved regret bound for online rank aggregation over the Spearman correlation measure, and for other more complex ranking loss functions.
1809.01984
Pierre-Emmanuel Mazar\'e
Pierre-Emmanuel Mazar\'e, Samuel Humeau, Martin Raison, Antoine Bordes
Training Millions of Personalized Dialogue Agents
EMNLP 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current dialogue systems are not very engaging for users, especially when trained end-to-end without relying on proactive reengaging scripted strategies. Zhang et al. (2018) showed that the engagement level of end-to-end dialogue models increases when conditioning them on text personas providing some personalized back-story to the model. However, the dataset used in Zhang et al. (2018) is synthetic and of limited size as it contains around 1k different personas. In this paper we introduce a new dataset providing 5 million personas and 700 million persona-based dialogues. Our experiments show that, at this scale, training using personas still improves the performance of end-to-end systems. In addition, we show that other tasks benefit from the wide coverage of our dataset by fine-tuning our model on the data from Zhang et al. (2018) and achieving state-of-the-art results.
[ { "created": "Thu, 6 Sep 2018 13:36:40 GMT", "version": "v1" } ]
2018-09-07
[ [ "Mazaré", "Pierre-Emmanuel", "" ], [ "Humeau", "Samuel", "" ], [ "Raison", "Martin", "" ], [ "Bordes", "Antoine", "" ] ]
Current dialogue systems are not very engaging for users, especially when trained end-to-end without relying on proactive reengaging scripted strategies. Zhang et al. (2018) showed that the engagement level of end-to-end dialogue models increases when conditioning them on text personas providing some personalized back-story to the model. However, the dataset used in Zhang et al. (2018) is synthetic and of limited size as it contains around 1k different personas. In this paper we introduce a new dataset providing 5 million personas and 700 million persona-based dialogues. Our experiments show that, at this scale, training using personas still improves the performance of end-to-end systems. In addition, we show that other tasks benefit from the wide coverage of our dataset by fine-tuning our model on the data from Zhang et al. (2018) and achieving state-of-the-art results.
2408.07326
Seongmin Hong
Seungjae Moon, Jung-Hoon Kim, Junsoo Kim, Seongmin Hong, Junseo Cha, Minsu Kim, Sukbin Lim, Gyubin Choi, Dongjin Seo, Jongho Kim, Hunjong Lee, Hyunjun Park, Ryeowook Ko, Soongyu Choi, Jongse Park, Jinwon Lee, Joo-Young Kim
LPU: A Latency-Optimized and Highly Scalable Processor for Large Language Model Inference
null
null
null
null
cs.AR
http://creativecommons.org/licenses/by-nc-nd/4.0/
The explosive arrival of OpenAI's ChatGPT has fueled the globalization of large language models (LLMs), which consist of billions of pretrained parameters that embody the aspects of syntax and semantics. HyperAccel introduces the latency processing unit (LPU), a latency-optimized and highly scalable processor architecture for the acceleration of LLM inference. LPU perfectly balances the memory bandwidth and compute logic with streamlined dataflow to maximize performance and efficiency. LPU is equipped with an expandable synchronization link (ESL) that hides data synchronization latency between multiple LPUs. HyperDex complements LPU as an intuitive software framework to run LLM applications. LPU achieves 1.25 ms/token and 20.9 ms/token for the 1.3B and 66B models, respectively, which is 2.09x and 1.37x faster than the GPU. LPU, synthesized using the Samsung 4nm process, has a total area of 0.824 mm2 and power consumption of 284.31 mW. LPU-based servers achieve 1.33x and 1.32x energy efficiency over NVIDIA H100 and L4 servers, respectively.
[ { "created": "Wed, 14 Aug 2024 06:56:20 GMT", "version": "v1" } ]
2024-08-15
[ [ "Moon", "Seungjae", "" ], [ "Kim", "Jung-Hoon", "" ], [ "Kim", "Junsoo", "" ], [ "Hong", "Seongmin", "" ], [ "Cha", "Junseo", "" ], [ "Kim", "Minsu", "" ], [ "Lim", "Sukbin", "" ], [ "Choi", "Gyubin", "" ], [ "Seo", "Dongjin", "" ], [ "Kim", "Jongho", "" ], [ "Lee", "Hunjong", "" ], [ "Park", "Hyunjun", "" ], [ "Ko", "Ryeowook", "" ], [ "Choi", "Soongyu", "" ], [ "Park", "Jongse", "" ], [ "Lee", "Jinwon", "" ], [ "Kim", "Joo-Young", "" ] ]
The explosive arrival of OpenAI's ChatGPT has fueled the globalization of large language models (LLMs), which consist of billions of pretrained parameters that embody the aspects of syntax and semantics. HyperAccel introduces the latency processing unit (LPU), a latency-optimized and highly scalable processor architecture for the acceleration of LLM inference. LPU perfectly balances the memory bandwidth and compute logic with streamlined dataflow to maximize performance and efficiency. LPU is equipped with an expandable synchronization link (ESL) that hides data synchronization latency between multiple LPUs. HyperDex complements LPU as an intuitive software framework to run LLM applications. LPU achieves 1.25 ms/token and 20.9 ms/token for the 1.3B and 66B models, respectively, which is 2.09x and 1.37x faster than the GPU. LPU, synthesized using the Samsung 4nm process, has a total area of 0.824 mm2 and power consumption of 284.31 mW. LPU-based servers achieve 1.33x and 1.32x energy efficiency over NVIDIA H100 and L4 servers, respectively.
2406.16707
Vivienne Huiling Wang
Vivienne Huiling Wang, Tinghuai Wang, Wenyan Yang, Joni-Kristian K\"am\"ar\"ainen, Joni Pajarinen
Probabilistic Subgoal Representations for Hierarchical Reinforcement learning
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
In goal-conditioned hierarchical reinforcement learning (HRL), a high-level policy specifies a subgoal for the low-level policy to reach. Effective HRL hinges on a suitable subgoal representation function, abstracting the state space into a latent subgoal space and inducing varied low-level behaviors. Existing methods adopt a subgoal representation that provides a deterministic mapping from the state space to the latent subgoal space. Instead, this paper utilizes Gaussian Processes (GPs) for the first probabilistic subgoal representation. Our method employs a GP prior on the latent subgoal space to learn a posterior distribution over the subgoal representation functions while exploiting the long-range correlation in the state space through learnable kernels. This enables an adaptive memory that integrates long-range subgoal information from prior planning steps, allowing it to cope with stochastic uncertainties. Furthermore, we propose a novel learning objective to facilitate the simultaneous learning of probabilistic subgoal representations and policies within a unified framework. In experiments, our approach outperforms state-of-the-art baselines not only in standard benchmarks but also in environments with stochastic elements and under diverse reward conditions. Additionally, our model shows promising capabilities in transferring low-level policies across different tasks.
[ { "created": "Mon, 24 Jun 2024 15:09:22 GMT", "version": "v1" } ]
2024-06-25
[ [ "Wang", "Vivienne Huiling", "" ], [ "Wang", "Tinghuai", "" ], [ "Yang", "Wenyan", "" ], [ "Kämäräinen", "Joni-Kristian", "" ], [ "Pajarinen", "Joni", "" ] ]
In goal-conditioned hierarchical reinforcement learning (HRL), a high-level policy specifies a subgoal for the low-level policy to reach. Effective HRL hinges on a suitable subgoal representation function, abstracting the state space into a latent subgoal space and inducing varied low-level behaviors. Existing methods adopt a subgoal representation that provides a deterministic mapping from the state space to the latent subgoal space. Instead, this paper utilizes Gaussian Processes (GPs) for the first probabilistic subgoal representation. Our method employs a GP prior on the latent subgoal space to learn a posterior distribution over the subgoal representation functions while exploiting the long-range correlation in the state space through learnable kernels. This enables an adaptive memory that integrates long-range subgoal information from prior planning steps, allowing it to cope with stochastic uncertainties. Furthermore, we propose a novel learning objective to facilitate the simultaneous learning of probabilistic subgoal representations and policies within a unified framework. In experiments, our approach outperforms state-of-the-art baselines not only in standard benchmarks but also in environments with stochastic elements and under diverse reward conditions. Additionally, our model shows promising capabilities in transferring low-level policies across different tasks.
2210.04026
Yun Liu
Yun Liu, Xiaomeng Xu, Weihang Chen, Haocheng Yuan, He Wang, Jing Xu, Rui Chen, Li Yi
Enhancing Generalizable 6D Pose Tracking of an In-Hand Object with Tactile Sensing
null
null
10.1109/LRA.2023.3337690
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When manipulating an object to accomplish complex tasks, humans rely on both vision and touch to keep track of the object's 6D pose. However, most existing object pose tracking systems in robotics rely exclusively on visual signals, which hinders a robot's ability to manipulate objects effectively. To address this limitation, we introduce TEG-Track, a tactile-enhanced 6D pose tracking system that can track previously unseen objects held in hand. From consecutive tactile signals, TEG-Track optimizes object velocities from marker flows when slippage does not occur, or regresses velocities using a slippage estimation network when slippage is detected. The estimated object velocities are integrated into a geometric-kinematic optimization scheme to enhance existing visual pose trackers. To evaluate our method and to facilitate future research, we construct a real-world dataset for visual-tactile in-hand object pose tracking. Experimental results demonstrate that TEG-Track consistently enhances state-of-the-art generalizable 6D pose trackers in synthetic and real-world scenarios. Our code and dataset are available at https://github.com/leolyliu/TEG-Track.
[ { "created": "Sat, 8 Oct 2022 13:47:03 GMT", "version": "v1" }, { "created": "Sat, 23 Dec 2023 14:54:29 GMT", "version": "v2" } ]
2023-12-27
[ [ "Liu", "Yun", "" ], [ "Xu", "Xiaomeng", "" ], [ "Chen", "Weihang", "" ], [ "Yuan", "Haocheng", "" ], [ "Wang", "He", "" ], [ "Xu", "Jing", "" ], [ "Chen", "Rui", "" ], [ "Yi", "Li", "" ] ]
When manipulating an object to accomplish complex tasks, humans rely on both vision and touch to keep track of the object's 6D pose. However, most existing object pose tracking systems in robotics rely exclusively on visual signals, which hinders a robot's ability to manipulate objects effectively. To address this limitation, we introduce TEG-Track, a tactile-enhanced 6D pose tracking system that can track previously unseen objects held in hand. From consecutive tactile signals, TEG-Track optimizes object velocities from marker flows when slippage does not occur, or regresses velocities using a slippage estimation network when slippage is detected. The estimated object velocities are integrated into a geometric-kinematic optimization scheme to enhance existing visual pose trackers. To evaluate our method and to facilitate future research, we construct a real-world dataset for visual-tactile in-hand object pose tracking. Experimental results demonstrate that TEG-Track consistently enhances state-of-the-art generalizable 6D pose trackers in synthetic and real-world scenarios. Our code and dataset are available at https://github.com/leolyliu/TEG-Track.
2106.08670
Vimukthini Pinto
Vimukthini Pinto, Cheng Xue, Chathura Nagoda Gamage, Matthew Stephenson and Jochen Renz
The Difficulty of Novelty Detection in Open-World Physical Domains: An Application to Angry Birds
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting and responding to novel situations in open-world environments is a key capability of human cognition and is a persistent problem for AI systems. In an open world, novelties can appear in many different forms and may be easy or hard to detect. Therefore, to accurately evaluate the novelty detection capability of AI systems, it is necessary to investigate how difficult it may be to detect different types of novelty. In this paper, we propose a qualitative physics-based method to quantify the difficulty of novelty detection focusing on open-world physical domains. We apply our method in the popular physics simulation game Angry Birds, and conduct a user study across different novelties to validate our method. Results indicate that our calculated detection difficulties are in line with those of human users.
[ { "created": "Wed, 16 Jun 2021 10:14:09 GMT", "version": "v1" }, { "created": "Sun, 25 Jun 2023 07:41:19 GMT", "version": "v2" } ]
2023-06-27
[ [ "Pinto", "Vimukthini", "" ], [ "Xue", "Cheng", "" ], [ "Gamage", "Chathura Nagoda", "" ], [ "Stephenson", "Matthew", "" ], [ "Renz", "Jochen", "" ] ]
Detecting and responding to novel situations in open-world environments is a key capability of human cognition and is a persistent problem for AI systems. In an open world, novelties can appear in many different forms and may be easy or hard to detect. Therefore, to accurately evaluate the novelty detection capability of AI systems, it is necessary to investigate how difficult it may be to detect different types of novelty. In this paper, we propose a qualitative physics-based method to quantify the difficulty of novelty detection focusing on open-world physical domains. We apply our method in the popular physics simulation game Angry Birds, and conduct a user study across different novelties to validate our method. Results indicate that our calculated detection difficulties are in line with those of human users.
1807.07203
Nakamasa Inoue
Nakamasa Inoue, Koichi Shinoda
Few-Shot Adaptation for Multimedia Semantic Indexing
null
null
null
null
cs.MM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a few-shot adaptation framework, which bridges zero-shot learning and supervised many-shot learning, for semantic indexing of image and video data. Few-shot adaptation provides robust parameter estimation with few training examples, by optimizing the parameters of zero-shot learning and supervised many-shot learning simultaneously. In this method, first we build a zero-shot detector, and then update it by using the few examples. Our experiments show the effectiveness of the proposed framework on three datasets: TRECVID Semantic Indexing 2010, 2014, and ImageNET. On the ImageNET dataset, we show that our method outperforms recent few-shot learning methods. On the TRECVID 2014 dataset, we achieve 15.19% and 35.98% in Mean Average Precision under the zero-shot condition and the supervised condition, respectively. To the best of our knowledge, these are the best results on this dataset.
[ { "created": "Thu, 19 Jul 2018 00:58:33 GMT", "version": "v1" } ]
2018-07-20
[ [ "Inoue", "Nakamasa", "" ], [ "Shinoda", "Koichi", "" ] ]
We propose a few-shot adaptation framework, which bridges zero-shot learning and supervised many-shot learning, for semantic indexing of image and video data. Few-shot adaptation provides robust parameter estimation with few training examples, by optimizing the parameters of zero-shot learning and supervised many-shot learning simultaneously. In this method, first we build a zero-shot detector, and then update it by using the few examples. Our experiments show the effectiveness of the proposed framework on three datasets: TRECVID Semantic Indexing 2010, 2014, and ImageNET. On the ImageNET dataset, we show that our method outperforms recent few-shot learning methods. On the TRECVID 2014 dataset, we achieve 15.19% and 35.98% in Mean Average Precision under the zero-shot condition and the supervised condition, respectively. To the best of our knowledge, these are the best results on this dataset.
2203.14208
En Yu
En Yu, Zhuoling Li, Shoudong Han
Towards Discriminative Representation: Multi-view Trajectory Contrastive Learning for Online Multi-object Tracking
Accepted by CVPR2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Discriminative representation is crucial for the association step in multi-object tracking. Recent work mainly utilizes features in single or neighboring frames for constructing metric loss and empowering networks to extract representation of targets. Although this strategy is effective, it fails to fully exploit the information contained in a whole trajectory. To this end, we propose a strategy, namely multi-view trajectory contrastive learning, in which each trajectory is represented as a center vector. By maintaining all the vectors in a dynamically updated memory bank, a trajectory-level contrastive loss is devised to explore the inter-frame information in the whole trajectories. Besides, in this strategy, each target is represented as multiple adaptively selected keypoints rather than a pre-defined anchor or center. This design allows the network to generate richer representation from multiple views of the same target, which can better characterize occluded objects. Additionally, in the inference stage, a similarity-guided feature fusion strategy is developed for further boosting the quality of the trajectory representation. Extensive experiments have been conducted on MOTChallenge to verify the effectiveness of the proposed techniques. The experimental results indicate that our method has surpassed preceding trackers and established new state-of-the-art performance.
[ { "created": "Sun, 27 Mar 2022 04:53:31 GMT", "version": "v1" }, { "created": "Tue, 5 Apr 2022 11:09:27 GMT", "version": "v2" } ]
2022-04-06
[ [ "Yu", "En", "" ], [ "Li", "Zhuoling", "" ], [ "Han", "Shoudong", "" ] ]
Discriminative representation is crucial for the association step in multi-object tracking. Recent work mainly utilizes features in single or neighboring frames for constructing metric loss and empowering networks to extract representation of targets. Although this strategy is effective, it fails to fully exploit the information contained in a whole trajectory. To this end, we propose a strategy, namely multi-view trajectory contrastive learning, in which each trajectory is represented as a center vector. By maintaining all the vectors in a dynamically updated memory bank, a trajectory-level contrastive loss is devised to explore the inter-frame information in the whole trajectories. Besides, in this strategy, each target is represented as multiple adaptively selected keypoints rather than a pre-defined anchor or center. This design allows the network to generate richer representation from multiple views of the same target, which can better characterize occluded objects. Additionally, in the inference stage, a similarity-guided feature fusion strategy is developed for further boosting the quality of the trajectory representation. Extensive experiments have been conducted on MOTChallenge to verify the effectiveness of the proposed techniques. The experimental results indicate that our method has surpassed preceding trackers and established new state-of-the-art performance.
1608.04017
Maziar Mirzazad Barijough
J.J. Garcia-Luna-Aceves, Maziar Mirzazad Barijough
Efficient Multicasting in Content-Centric Networks Using Datagrams
arXiv admin note: text overlap with arXiv:1603.08491
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Named Data Networking (NDN) and Content-Centric Networking (CCNx) architectures are the leading approaches for content-centric networking, and both require using Interests (requests that elicit content) and maintaining per-Interest forwarding state in Pending Interest Tables (PITs). To date, PITs have been assumed to be necessary to enable native support for multicasting in the data plane, such that multicast forwarding trees (MFT) are established by the forwarding and aggregation of Interests using PITs. We present a new approach to content-centric networks based on anonymous datagrams that provides native support for multicasting, but does so without the need to maintain per-Interest forwarding state. Simulation experiments are used to show that the proposed new approach attains the same end-to-end delays for multicasting while requiring orders of magnitude fewer forwarding entries.
[ { "created": "Sat, 13 Aug 2016 18:42:32 GMT", "version": "v1" } ]
2016-08-16
[ [ "Garcia-Luna-Aceves", "J. J.", "" ], [ "Barijough", "Maziar Mirzazad", "" ] ]
The Named Data Networking (NDN) and Content-Centric Networking (CCNx) architectures are the leading approaches for content-centric networking, and both require using Interests (requests that elicit content) and maintaining per-Interest forwarding state in Pending Interest Tables (PITs). To date, PITs have been assumed to be necessary to enable native support for multicasting in the data plane, such that multicast forwarding trees (MFT) are established by the forwarding and aggregation of Interests using PITs. We present a new approach to content-centric networks based on anonymous datagrams that provides native support for multicasting, but does so without the need to maintain per-Interest forwarding state. Simulation experiments are used to show that the proposed new approach attains the same end-to-end delays for multicasting while requiring orders of magnitude fewer forwarding entries.
2102.06783
George Mertzios
George B. Mertzios, Hendrik Molter, Malte Renken, Paul G. Spirakis, Philipp Zschoche
The Complexity of Transitively Orienting Temporal Graphs
null
null
null
null
cs.DS cs.CC cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a temporal network with discrete time-labels on its edges, entities and information can only "flow" along sequences of edges whose time-labels are non-decreasing (resp. increasing), i.e. along temporal (resp. strict temporal) paths. Nevertheless, in the model for temporal networks of [Kempe et al., JCSS, 2002], the individual time-labeled edges remain undirected: an edge $e=\{u,v\}$ with time-label $t$ specifies that "$u$ communicates with $v$ at time $t$". This is a symmetric relation between $u$ and $v$, and it can be interpreted that the information can flow in either direction. In this paper we make a first attempt to understand how the direction of information flow on one edge can impact the direction of information flow on other edges. More specifically, we introduce the notion of a temporal transitive orientation and we systematically investigate its algorithmic behavior in various situations. An orientation of a temporal graph is called temporally transitive if, whenever $u$ has a directed edge towards $v$ with time-label $t_1$ and $v$ has a directed edge towards $w$ with time-label $t_2\geq t_1$, then $u$ also has a directed edge towards $w$ with some time-label $t_3\geq t_2$. If we just demand that this implication holds whenever $t_2 > t_1$, the orientation is called strictly temporally transitive. Our main result is a conceptually simple, yet technically quite involved, polynomial-time algorithm for recognizing whether a given temporal graph $\mathcal{G}$ is transitively orientable. In stark contrast, we prove that, surprisingly, it is NP-hard to recognize whether $\mathcal{G}$ is strictly transitively orientable. Additionally we introduce and investigate further related problems to temporal transitivity, notably among them the temporal transitive completion problem, for which we prove both algorithmic and hardness results.
[ { "created": "Fri, 12 Feb 2021 21:39:26 GMT", "version": "v1" }, { "created": "Sat, 8 Jul 2023 02:18:57 GMT", "version": "v2" } ]
2023-07-11
[ [ "Mertzios", "George B.", "" ], [ "Molter", "Hendrik", "" ], [ "Renken", "Malte", "" ], [ "Spirakis", "Paul G.", "" ], [ "Zschoche", "Philipp", "" ] ]
In a temporal network with discrete time-labels on its edges, entities and information can only "flow" along sequences of edges whose time-labels are non-decreasing (resp. increasing), i.e. along temporal (resp. strict temporal) paths. Nevertheless, in the model for temporal networks of [Kempe et al., JCSS, 2002], the individual time-labeled edges remain undirected: an edge $e=\{u,v\}$ with time-label $t$ specifies that "$u$ communicates with $v$ at time $t$". This is a symmetric relation between $u$ and $v$, and it can be interpreted that the information can flow in either direction. In this paper we make a first attempt to understand how the direction of information flow on one edge can impact the direction of information flow on other edges. More specifically, we introduce the notion of a temporal transitive orientation and we systematically investigate its algorithmic behavior in various situations. An orientation of a temporal graph is called temporally transitive if, whenever $u$ has a directed edge towards $v$ with time-label $t_1$ and $v$ has a directed edge towards $w$ with time-label $t_2\geq t_1$, then $u$ also has a directed edge towards $w$ with some time-label $t_3\geq t_2$. If we just demand that this implication holds whenever $t_2 > t_1$, the orientation is called strictly temporally transitive. Our main result is a conceptually simple, yet technically quite involved, polynomial-time algorithm for recognizing whether a given temporal graph $\mathcal{G}$ is transitively orientable. In stark contrast, we prove that, surprisingly, it is NP-hard to recognize whether $\mathcal{G}$ is strictly transitively orientable. Additionally we introduce and investigate further related problems to temporal transitivity, notably among them the temporal transitive completion problem, for which we prove both algorithmic and hardness results.
1111.6223
Mingyi Hong
Mingyi Hong, Alfredo Garcia, J. Joaquin Escudero Garzas, Ana Garcia-Armada
Lower Bounds Optimization for Coordinated Linear Transmission Beamformer Design in Multicell Network Downlink
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the coordinated downlink beamforming problem in a cellular network with the base stations (BSs) equipped with multiple antennas, and with each user equipped with a single antenna. The BSs cooperate in sharing their local interference information, and they aim at maximizing the sum rate of the users in the network. A set of new lower bounds (one bound for each BS) of the non-convex sum rate is identified. These bounds facilitate the development of a set of algorithms that allow the BSs to update their beams by optimizing their respective lower bounds. We show that when there is a single user per-BS, the lower bound maximization problem can be solved exactly with rank-1 solutions. In this case, the overall sum rate maximization problem can be solved to a KKT point. Numerical results show that the proposed algorithms achieve high system throughput with reduced backhaul information exchange among the BSs.
[ { "created": "Sun, 27 Nov 2011 03:48:04 GMT", "version": "v1" } ]
2011-11-29
[ [ "Hong", "Mingyi", "" ], [ "Garcia", "Alfredo", "" ], [ "Garzas", "J. Joaquin Escudero", "" ], [ "Garcia-Armada", "Ana", "" ] ]
We consider the coordinated downlink beamforming problem in a cellular network with the base stations (BSs) equipped with multiple antennas, and with each user equipped with a single antenna. The BSs cooperate in sharing their local interference information, and they aim at maximizing the sum rate of the users in the network. A set of new lower bounds (one bound for each BS) of the non-convex sum rate is identified. These bounds facilitate the development of a set of algorithms that allow the BSs to update their beams by optimizing their respective lower bounds. We show that when there is a single user per-BS, the lower bound maximization problem can be solved exactly with rank-1 solutions. In this case, the overall sum rate maximization problem can be solved to a KKT point. Numerical results show that the proposed algorithms achieve high system throughput with reduced backhaul information exchange among the BSs.
1804.09118
Robert Shorten
J. O'Connell, B. Cardiff, R. Shorten
dockChain: A Solution for Electric Vehicles Charge Point Anxiety
A version of this paper is accepted for presentation at the 21st IEEE International Conference on Intelligent Transportation Systems
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses Charge Point Anxiety surrounding electric vehicles (EVs), an issue preventing the mass adoption of this greener mode of transport. We discuss the design and implementation of a charge point adapter called \textit{dockChain} that will help mitigate Charge Point Anxiety. The key feature of the dockChain is that it allows multiple EVs to connect simultaneously to a single charge point by connecting the adapters together in a chain, resulting in additional charging opportunities without the need for infrastructural changes. We describe the operation of the network of adapters, the hardware components and charging policies for the adapter. A distributed algorithm that can detect the length of the chain in a dockChain network is also presented.
[ { "created": "Tue, 24 Apr 2018 16:12:31 GMT", "version": "v1" }, { "created": "Sat, 20 Oct 2018 21:31:49 GMT", "version": "v2" } ]
2018-10-23
[ [ "O'Connell", "J.", "" ], [ "Cardiff", "B.", "" ], [ "Shorten", "R.", "" ] ]
This paper addresses Charge Point Anxiety surrounding electric vehicles (EVs), an issue preventing the mass adoption of this greener mode of transport. We discuss the design and implementation of a charge point adapter called \textit{dockChain} that will help mitigate Charge Point Anxiety. The key feature of the dockChain is that it allows multiple EVs to connect simultaneously to a single charge point by connecting the adapters together in a chain, resulting in additional charging opportunities without the need for infrastructural changes. We describe the operation of the network of adapters, the hardware components and charging policies for the adapter. A distributed algorithm that can detect the length of the chain in a dockChain network is also presented.
2211.01753
Md Tanvirul Alam
Md Tanvirul Alam, Dipkamal Bhusal, Youngja Park and Nidhi Rastogi
Looking Beyond IoCs: Automatically Extracting Attack Patterns from External CTI
null
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Public and commercial organizations extensively share cyberthreat intelligence (CTI) to prepare systems to defend against existing and emerging cyberattacks. However, traditional CTI has primarily focused on tracking known threat indicators such as IP addresses and domain names, which may not provide long-term value in defending against evolving attacks. To address this challenge, we propose to use more robust threat intelligence signals called attack patterns. LADDER is a knowledge extraction framework that can extract text-based attack patterns from CTI reports at scale. The framework characterizes attack patterns by capturing the phases of an attack in Android and enterprise networks and systematically maps them to the MITRE ATT\&CK pattern framework. LADDER can be used by security analysts to determine the presence of attack vectors related to existing and emerging threats, enabling them to prepare defenses proactively. We also present several use cases to demonstrate the application of LADDER in real-world scenarios. Finally, we provide a new, open-access benchmark malware dataset to train future cyberthreat intelligence models.
[ { "created": "Tue, 1 Nov 2022 12:16:30 GMT", "version": "v1" }, { "created": "Tue, 11 Jul 2023 22:46:12 GMT", "version": "v2" } ]
2023-07-13
[ [ "Alam", "Md Tanvirul", "" ], [ "Bhusal", "Dipkamal", "" ], [ "Park", "Youngja", "" ], [ "Rastogi", "Nidhi", "" ] ]
Public and commercial organizations extensively share cyberthreat intelligence (CTI) to prepare systems to defend against existing and emerging cyberattacks. However, traditional CTI has primarily focused on tracking known threat indicators such as IP addresses and domain names, which may not provide long-term value in defending against evolving attacks. To address this challenge, we propose to use more robust threat intelligence signals called attack patterns. LADDER is a knowledge extraction framework that can extract text-based attack patterns from CTI reports at scale. The framework characterizes attack patterns by capturing the phases of an attack in Android and enterprise networks and systematically maps them to the MITRE ATT\&CK pattern framework. LADDER can be used by security analysts to determine the presence of attack vectors related to existing and emerging threats, enabling them to prepare defenses proactively. We also present several use cases to demonstrate the application of LADDER in real-world scenarios. Finally, we provide a new, open-access benchmark malware dataset to train future cyberthreat intelligence models.
2101.03285
Yu Tian
Yu Tian, Leonardo Zorron Cheng Tao Pu, Yuyuan Liu, Gabriel Maicas, Johan W. Verjans, Alastair D. Burt, Seon Ho Shin, Rajvinder Singh, Gustavo Carneiro
Detecting, Localising and Classifying Polyps from Colonoscopy Videos using Deep Learning
Preprint to submit to IEEE journals
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose and analyse a system that can automatically detect, localise and classify polyps from colonoscopy videos. The detection of frames with polyps is formulated as a few-shot anomaly classification problem, where the training set is highly imbalanced with the large majority of frames consisting of normal images and a small minority comprising frames with polyps. Colonoscopy videos may contain blurry images and frames displaying feces and water jet sprays to clean the colon -- such frames can mistakenly be detected as anomalies, so we have implemented a classifier to reject these two types of frames before polyp detection takes place. Next, given a frame containing a polyp, our method localises (with a bounding box around the polyp) and classifies it into five different classes. Furthermore, we study a method to improve the reliability and interpretability of the classification result using uncertainty estimation and classification calibration. Classification uncertainty and calibration not only help improve classification accuracy by rejecting low-confidence and highly uncertain results, but can also be used by doctors to decide on the classification of a polyp. All the proposed detection, localisation and classification methods are tested using large data sets and compared with relevant baseline approaches.
[ { "created": "Sat, 9 Jan 2021 04:25:34 GMT", "version": "v1" } ]
2021-01-12
[ [ "Tian", "Yu", "" ], [ "Pu", "Leonardo Zorron Cheng Tao", "" ], [ "Liu", "Yuyuan", "" ], [ "Maicas", "Gabriel", "" ], [ "Verjans", "Johan W.", "" ], [ "Burt", "Alastair D.", "" ], [ "Shin", "Seon Ho", "" ], [ "Singh", "Rajvinder", "" ], [ "Carneiro", "Gustavo", "" ] ]
In this paper, we propose and analyse a system that can automatically detect, localise and classify polyps from colonoscopy videos. The detection of frames with polyps is formulated as a few-shot anomaly classification problem, where the training set is highly imbalanced with the large majority of frames consisting of normal images and a small minority comprising frames with polyps. Colonoscopy videos may contain blurry images and frames displaying feces and water jet sprays to clean the colon -- such frames can mistakenly be detected as anomalies, so we have implemented a classifier to reject these two types of frames before polyp detection takes place. Next, given a frame containing a polyp, our method localises (with a bounding box around the polyp) and classifies it into five different classes. Furthermore, we study a method to improve the reliability and interpretability of the classification result using uncertainty estimation and classification calibration. Classification uncertainty and calibration not only help improve classification accuracy by rejecting low-confidence and highly uncertain results, but can also be used by doctors to decide on the classification of a polyp. All the proposed detection, localisation and classification methods are tested using large data sets and compared with relevant baseline approaches.
2109.03551
Wen-Chin Huang
Yi-Syuan Liou, Wen-Chin Huang, Ming-Chi Yen, Shu-Wei Tsai, Yu-Huai Peng, Tomoki Toda, Yu Tsao, Hsin-Min Wang
Time Alignment using Lip Images for Frame-based Electrolaryngeal Voice Conversion
Accepted to APSIPA ASC 2021
null
null
null
cs.SD cs.CL cs.CV eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
Voice conversion (VC) is an effective approach to electrolaryngeal (EL) speech enhancement, a task that aims to improve the quality of the artificial voice from an electrolarynx device. In frame-based VC methods, time alignment needs to be performed prior to model training, and the dynamic time warping (DTW) algorithm is widely adopted to compute the best time alignment between each utterance pair. The validity is based on the assumption that the same phonemes of the speakers have similar features and can be mapped by measuring a pre-defined distance between speech frames of the source and the target. However, the special characteristics of the EL speech can break the assumption, resulting in a sub-optimal DTW alignment. In this work, we propose to use lip images for time alignment, as we assume that the lip movements of laryngectomees remain normal compared to those of healthy people. We investigate two naive lip representations and distance metrics, and experimental results demonstrate that the proposed method can significantly outperform the audio-only alignment in terms of objective and subjective evaluations.
[ { "created": "Wed, 8 Sep 2021 11:24:09 GMT", "version": "v1" } ]
2021-09-09
[ [ "Liou", "Yi-Syuan", "" ], [ "Huang", "Wen-Chin", "" ], [ "Yen", "Ming-Chi", "" ], [ "Tsai", "Shu-Wei", "" ], [ "Peng", "Yu-Huai", "" ], [ "Toda", "Tomoki", "" ], [ "Tsao", "Yu", "" ], [ "Wang", "Hsin-Min", "" ] ]
Voice conversion (VC) is an effective approach to electrolaryngeal (EL) speech enhancement, a task that aims to improve the quality of the artificial voice from an electrolarynx device. In frame-based VC methods, time alignment needs to be performed prior to model training, and the dynamic time warping (DTW) algorithm is widely adopted to compute the best time alignment between each utterance pair. The validity is based on the assumption that the same phonemes of the speakers have similar features and can be mapped by measuring a pre-defined distance between speech frames of the source and the target. However, the special characteristics of the EL speech can break the assumption, resulting in a sub-optimal DTW alignment. In this work, we propose to use lip images for time alignment, as we assume that the lip movements of laryngectomees remain normal compared to those of healthy people. We investigate two naive lip representations and distance metrics, and experimental results demonstrate that the proposed method can significantly outperform the audio-only alignment in terms of objective and subjective evaluations.
1301.2046
A. Emre Cetin
A. Emre Cetin
In-situ associative permuting
12 pages
null
null
null
cs.DS
http://creativecommons.org/licenses/by-nc-sa/3.0/
The technique of in-situ associative permuting is introduced, which is an association of in-situ permuting and in-situ inverting. It is suitable for associatively permutable permutations of {1,2,...,n} where the elements that will be inverted are negative and stored in order relative to each other according to their absolute values. Let K[1...n] be an array of n integer keys each in the range [1,n], and it is allowed to modify the keys in the range [-n,n]. If the integer keys are rearranged such that one of each distinct key having the value i is moved to the i'th position of K, then the resulting arrangement (denoted by K^P) can be transformed in-situ into an associatively permutable permutation pi^P using only logn additional bits. The associatively permutable permutation pi^P not only stores the ranks of the keys of K^P but also uniquely represents K^P. Restoring the keys from pi^P is not considered. However, in-situ associative permuting of pi^P in O(n) time using logn additional bits rearranges the elements of pi^P in order, and also allows the keys of K^P to be restored in O(n) further time using the inverses of the negative ranks. This means that an array of n integer keys each in the range [1,n] can be sorted using only logn bits of additional space.
[ { "created": "Thu, 10 Jan 2013 08:08:48 GMT", "version": "v1" } ]
2013-01-11
[ [ "Cetin", "A. Emre", "" ] ]
The technique of in-situ associative permuting is introduced, which is an association of in-situ permuting and in-situ inverting. It is suitable for associatively permutable permutations of {1,2,...,n} where the elements that will be inverted are negative and stored in order relative to each other according to their absolute values. Let K[1...n] be an array of n integer keys each in the range [1,n], and it is allowed to modify the keys in the range [-n,n]. If the integer keys are rearranged such that one of each distinct key having the value i is moved to the i'th position of K, then the resulting arrangement (denoted by K^P) can be transformed in-situ into an associatively permutable permutation pi^P using only logn additional bits. The associatively permutable permutation pi^P not only stores the ranks of the keys of K^P but also uniquely represents K^P. Restoring the keys from pi^P is not considered. However, in-situ associative permuting of pi^P in O(n) time using logn additional bits rearranges the elements of pi^P in order, and also allows the keys of K^P to be restored in O(n) further time using the inverses of the negative ranks. This means that an array of n integer keys each in the range [1,n] can be sorted using only logn bits of additional space.
2207.07736
James Gleeson
James Gleeson, Daniel Snider, Yvonne Yang, Moshe Gabel, Eyal de Lara, Gennady Pekhimenko
Optimizing Data Collection in Deep Reinforcement Learning
MLBench 2022 ( https://memani1.github.io/mlbench22/ ) camera ready submission
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement learning (RL) workloads take a notoriously long time to train due to the large number of samples collected at run-time from simulators. Unfortunately, cluster scale-up approaches remain expensive, and commonly used CPU implementations of simulators induce high overhead when switching back and forth between GPU computations. We explore two optimizations that increase RL data collection efficiency by increasing GPU utilization: (1) GPU vectorization: parallelizing simulation on the GPU for increased hardware parallelism, and (2) simulator kernel fusion: fusing multiple simulation steps to run in a single GPU kernel launch to reduce global memory bandwidth requirements. We find that GPU vectorization can achieve up to $1024\times$ speedup over commonly used CPU simulators. We profile the performance of different implementations and show that for a simple simulator, ML compiler implementations (XLA) of GPU vectorization outperform a DNN framework (PyTorch) by $13.4\times$ by reducing CPU overhead from repeated Python to DL backend API calls. We show that simulator kernel fusion speedups with a simple simulator are $11.3\times$ and increase by up to $1024\times$ as simulator complexity increases in terms of memory bandwidth requirements. We show that the speedups from simulator kernel fusion are orthogonal and combinable with GPU vectorization, leading to a multiplicative speedup.
[ { "created": "Fri, 15 Jul 2022 20:22:31 GMT", "version": "v1" } ]
2022-07-19
[ [ "Gleeson", "James", "" ], [ "Snider", "Daniel", "" ], [ "Yang", "Yvonne", "" ], [ "Gabel", "Moshe", "" ], [ "de Lara", "Eyal", "" ], [ "Pekhimenko", "Gennady", "" ] ]
Reinforcement learning (RL) workloads take a notoriously long time to train due to the large number of samples collected at run-time from simulators. Unfortunately, cluster scale-up approaches remain expensive, and commonly used CPU implementations of simulators induce high overhead when switching back and forth between GPU computations. We explore two optimizations that increase RL data collection efficiency by increasing GPU utilization: (1) GPU vectorization: parallelizing simulation on the GPU for increased hardware parallelism, and (2) simulator kernel fusion: fusing multiple simulation steps to run in a single GPU kernel launch to reduce global memory bandwidth requirements. We find that GPU vectorization can achieve up to $1024\times$ speedup over commonly used CPU simulators. We profile the performance of different implementations and show that for a simple simulator, ML compiler implementations (XLA) of GPU vectorization outperform a DNN framework (PyTorch) by $13.4\times$ by reducing CPU overhead from repeated Python to DL backend API calls. We show that simulator kernel fusion speedups with a simple simulator are $11.3\times$ and increase by up to $1024\times$ as simulator complexity increases in terms of memory bandwidth requirements. We show that the speedups from simulator kernel fusion are orthogonal and combinable with GPU vectorization, leading to a multiplicative speedup.
1808.01558
Zhiwen Shao
Zhiwen Shao, Hengliang Zhu, Xin Tan, Yangyang Hao, Lizhuang Ma
Deep Multi-Center Learning for Face Alignment
This paper has been accepted by Neurocomputing
null
10.1016/j.neucom.2018.11.108
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Facial landmarks are highly correlated with each other since a certain landmark can be estimated by its neighboring landmarks. Most of the existing deep learning methods only use one fully-connected layer called shape prediction layer to estimate the locations of facial landmarks. In this paper, we propose a novel deep learning framework named Multi-Center Learning with multiple shape prediction layers for face alignment. In particular, each shape prediction layer emphasizes on the detection of a certain cluster of semantically relevant landmarks respectively. Challenging landmarks are focused firstly, and each cluster of landmarks is further optimized respectively. Moreover, to reduce the model complexity, we propose a model assembling method to integrate multiple shape prediction layers into one shape prediction layer. Extensive experiments demonstrate that our method is effective for handling complex occlusions and appearance variations with real-time performance. The code for our method is available at https://github.com/ZhiwenShao/MCNet-Extension.
[ { "created": "Sun, 5 Aug 2018 04:01:53 GMT", "version": "v1" }, { "created": "Sun, 18 Nov 2018 06:30:36 GMT", "version": "v2" } ]
2019-04-26
[ [ "Shao", "Zhiwen", "" ], [ "Zhu", "Hengliang", "" ], [ "Tan", "Xin", "" ], [ "Hao", "Yangyang", "" ], [ "Ma", "Lizhuang", "" ] ]
Facial landmarks are highly correlated with each other since a certain landmark can be estimated by its neighboring landmarks. Most of the existing deep learning methods only use one fully-connected layer called shape prediction layer to estimate the locations of facial landmarks. In this paper, we propose a novel deep learning framework named Multi-Center Learning with multiple shape prediction layers for face alignment. In particular, each shape prediction layer emphasizes on the detection of a certain cluster of semantically relevant landmarks respectively. Challenging landmarks are focused firstly, and each cluster of landmarks is further optimized respectively. Moreover, to reduce the model complexity, we propose a model assembling method to integrate multiple shape prediction layers into one shape prediction layer. Extensive experiments demonstrate that our method is effective for handling complex occlusions and appearance variations with real-time performance. The code for our method is available at https://github.com/ZhiwenShao/MCNet-Extension.
1907.04640
Dmytro Gavinsky
Dmytro Gavinsky, Pavel Pudl\'ak
Santha-Vazirani sources, deterministic condensers and very strong extractors
null
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The notion of semi-random sources, also known as Santha-Vazirani (SV) sources, stands for a sequence of n bits, where the dependence of the i'th bit on the previous i-1 bits is limited for every $i\in[n]$. If the dependence of the i'th bit on the remaining n-1 bits is limited, then this is a strong SV-source. Even the strong SV-sources are known not to admit (universal) deterministic extractors, but they have seeded extractors, as their min-entropy is $\Omega(n)$. It is intuitively obvious that strong SV-sources are more than just high-min-entropy sources, and this work explores the intuition. Deterministic condensers are known not to exist for general high-min-entropy sources, and we construct for any constants $\epsilon, \delta \in (0,1)$ a deterministic condenser that maps n bits coming from a strong SV-source with bias at most $\delta$ to $\Omega(n)$ bits of min-entropy rate at least $1-\epsilon$. In conclusion we observe that deterministic condensers are closely related to very strong extractors - a proposed strengthening of the notion of strong (seeded) extractors: in particular, our constructions can be viewed as very strong extractors for the family of strong Santha-Vazirani distributions. The notion of very strong extractors requires that the output remains unpredictable even to someone who knows not only the seed value (as in the case of strong extractors), but also the extractor's outputs corresponding to the same input value with each of the preceding seed values (say, under the lexicographic ordering). Very strong extractors closely resemble the original notion of SV-sources, except that the bits must satisfy the unpredictability requirement only on average.
[ { "created": "Mon, 8 Jul 2019 22:58:04 GMT", "version": "v1" }, { "created": "Sat, 22 Feb 2020 18:41:11 GMT", "version": "v2" } ]
2022-04-05
[ [ "Gavinsky", "Dmytro", "" ], [ "Pudlák", "Pavel", "" ] ]
The notion of semi-random sources, also known as Santha-Vazirani (SV) sources, stands for a sequence of n bits, where the dependence of the i'th bit on the previous i-1 bits is limited for every $i\in[n]$. If the dependence of the i'th bit on the remaining n-1 bits is limited, then this is a strong SV-source. Even the strong SV-sources are known not to admit (universal) deterministic extractors, but they have seeded extractors, as their min-entropy is $\Omega(n)$. It is intuitively obvious that strong SV-sources are more than just high-min-entropy sources, and this work explores the intuition. Deterministic condensers are known not to exist for general high-min-entropy sources, and we construct for any constants $\epsilon, \delta \in (0,1)$ a deterministic condenser that maps n bits coming from a strong SV-source with bias at most $\delta$ to $\Omega(n)$ bits of min-entropy rate at least $1-\epsilon$. In conclusion we observe that deterministic condensers are closely related to very strong extractors - a proposed strengthening of the notion of strong (seeded) extractors: in particular, our constructions can be viewed as very strong extractors for the family of strong Santha-Vazirani distributions. The notion of very strong extractors requires that the output remains unpredictable even to someone who knows not only the seed value (as in the case of strong extractors), but also the extractor's outputs corresponding to the same input value with each of the preceding seed values (say, under the lexicographic ordering). Very strong extractors closely resemble the original notion of SV-sources, except that the bits must satisfy the unpredictability requirement only on average.
2308.13588
Fan Lei
Fan Lei, Yuxin Ma, Stewart Fotheringham, Elizabeth Mack, Ziqi Li, Mehak Sachdeva, Sarah Bardin and Ross Maciejewski
GeoExplainer: A Visual Analytics Framework for Spatial Modeling Contextualization and Report Generation
12 pages, 7 figures, accepted by IEEE VIS 2023
null
null
null
cs.HC cs.LG
http://creativecommons.org/licenses/by/4.0/
Geographic regression models of various descriptions are often applied to identify patterns and anomalies in the determinants of spatially distributed observations. These types of analyses focus on answering why questions about underlying spatial phenomena, e.g., why is crime higher in this locale, why do children in one school district outperform those in another, etc.? Answers to these questions require explanations of the model structure, the choice of parameters, and contextualization of the findings with respect to their geographic context. This is particularly true for local forms of regression models which are focused on the role of locational context in determining human behavior. In this paper, we present GeoExplainer, a visual analytics framework designed to support analysts in creating explanative documentation that summarizes and contextualizes their spatial analyses. As analysts create their spatial models, our framework flags potential issues with model parameter selections, utilizes template-based text generation to summarize model outputs, and links with external knowledge repositories to provide annotations that help to explain the model results. As analysts explore the model results, all visualizations and annotations can be captured in an interactive report generation widget. We demonstrate our framework using a case study modeling the determinants of voting in the 2016 US Presidential Election.
[ { "created": "Fri, 25 Aug 2023 16:55:33 GMT", "version": "v1" } ]
2023-08-29
[ [ "Lei", "Fan", "" ], [ "Ma", "Yuxin", "" ], [ "Fotheringham", "Stewart", "" ], [ "Mack", "Elizabeth", "" ], [ "Li", "Ziqi", "" ], [ "Sachdeva", "Mehak", "" ], [ "Bardin", "Sarah", "" ], [ "Maciejewski", "Ross", "" ] ]
Geographic regression models of various descriptions are often applied to identify patterns and anomalies in the determinants of spatially distributed observations. These types of analyses focus on answering why questions about underlying spatial phenomena, e.g., why is crime higher in this locale, why do children in one school district outperform those in another, etc.? Answers to these questions require explanations of the model structure, the choice of parameters, and contextualization of the findings with respect to their geographic context. This is particularly true for local forms of regression models which are focused on the role of locational context in determining human behavior. In this paper, we present GeoExplainer, a visual analytics framework designed to support analysts in creating explanative documentation that summarizes and contextualizes their spatial analyses. As analysts create their spatial models, our framework flags potential issues with model parameter selections, utilizes template-based text generation to summarize model outputs, and links with external knowledge repositories to provide annotations that help to explain the model results. As analysts explore the model results, all visualizations and annotations can be captured in an interactive report generation widget. We demonstrate our framework using a case study modeling the determinants of voting in the 2016 US Presidential Election.
1905.10118
Yuqing Ni
Yuqing Ni, Kemi Ding, Yong Yang, Ling Shi
On the Performance Analysis of Binary Hypothesis Testing with Byzantine Sensors
Accepted by the 38th Chinese Control Conference (CCC)
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the impact of Byzantine attacks in distributed detection under binary hypothesis testing. It is assumed that a fraction of the transmitted sensor measurements are compromised by the injected data from a Byzantine attacker, whose purpose is to confuse the decision maker at the fusion center. From the perspective of a Byzantine attacker, under the injection energy constraint, an optimization problem is formulated to maximize the asymptotic missed detection error probability, which is based on the Kullback-Leibler divergence. The properties of the optimal attack strategy are analyzed by convex optimization and parametric optimization methods. Based on the derived theoretic results, a coordinate descent algorithm is proposed to search the optimal attack solution. Simulation examples are provided to illustrate the effectiveness of the obtained attack strategy.
[ { "created": "Fri, 24 May 2019 10:03:39 GMT", "version": "v1" } ]
2019-05-27
[ [ "Ni", "Yuqing", "" ], [ "Ding", "Kemi", "" ], [ "Yang", "Yong", "" ], [ "Shi", "Ling", "" ] ]
We investigate the impact of Byzantine attacks in distributed detection under binary hypothesis testing. It is assumed that a fraction of the transmitted sensor measurements are compromised by the injected data from a Byzantine attacker, whose purpose is to confuse the decision maker at the fusion center. From the perspective of a Byzantine attacker, under the injection energy constraint, an optimization problem is formulated to maximize the asymptotic missed detection error probability, which is based on the Kullback-Leibler divergence. The properties of the optimal attack strategy are analyzed by convex optimization and parametric optimization methods. Based on the derived theoretic results, a coordinate descent algorithm is proposed to search the optimal attack solution. Simulation examples are provided to illustrate the effectiveness of the obtained attack strategy.
1709.03264
Miguel Cardenas Montes
Miguel C\'ardenas-Montes, Iv\'an M\'endez-Jim\'enez, Juan Jos\'e Rodr\'iguez-V\'azquez, and Jos\'e Mar\'ia Hern\'andez Calama
Report: Performance comparison between C2075 and P100 GPU cards using cosmological correlation functions
null
null
null
null
cs.PF astro-ph.IM cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this report, some cosmological correlation functions are used to evaluate the differential performance between C2075 and P100 GPU cards. In the past, the correlation functions used in this work have been widely studied and exploited on some previous GPU architectures. The analysis of the performance indicates that a speedup in the range from 13 to 15 is achieved without any additional optimization process for the P100 card.
[ { "created": "Mon, 11 Sep 2017 07:03:23 GMT", "version": "v1" } ]
2017-09-12
[ [ "Cárdenas-Montes", "Miguel", "" ], [ "Méndez-Jiménez", "Iván", "" ], [ "Rodríguez-Vázquez", "Juan José", "" ], [ "Calama", "José María Hernández", "" ] ]
In this report, some cosmological correlation functions are used to evaluate the differential performance between C2075 and P100 GPU cards. In the past, the correlation functions used in this work have been widely studied and exploited on some previous GPU architectures. The analysis of the performance indicates that a speedup in the range from 13 to 15 is achieved without any additional optimization process for the P100 card.
2402.19142
Pavlos Rath-Manakidis
Pavlos Rath-Manakidis, Frederik Strothmann, Tobias Glasmachers, Laurenz Wiskott
ProtoP-OD: Explainable Object Detection with Prototypical Parts
9 pages, 11 figures
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Interpretation and visualization of the behavior of detection transformers tends to highlight the locations in the image that the model attends to, but it provides limited insight into the \emph{semantics} that the model is focusing on. This paper introduces an extension to detection transformers that constructs prototypical local features and uses them in object detection. These custom features, which we call prototypical parts, are designed to be mutually exclusive and align with the classifications of the model. The proposed extension consists of a bottleneck module, the prototype neck, that computes a discretized representation of prototype activations and a new loss term that matches prototypes to object classes. This setup leads to interpretable representations in the prototype neck, allowing visual inspection of the image content perceived by the model and a better understanding of the model's reliability. We show experimentally that our method incurs only a limited performance penalty, and we provide examples that demonstrate the quality of the explanations provided by our method, which we argue outweighs the performance penalty.
[ { "created": "Thu, 29 Feb 2024 13:25:15 GMT", "version": "v1" } ]
2024-03-01
[ [ "Rath-Manakidis", "Pavlos", "" ], [ "Strothmann", "Frederik", "" ], [ "Glasmachers", "Tobias", "" ], [ "Wiskott", "Laurenz", "" ] ]
Interpretation and visualization of the behavior of detection transformers tends to highlight the locations in the image that the model attends to, but it provides limited insight into the \emph{semantics} that the model is focusing on. This paper introduces an extension to detection transformers that constructs prototypical local features and uses them in object detection. These custom features, which we call prototypical parts, are designed to be mutually exclusive and align with the classifications of the model. The proposed extension consists of a bottleneck module, the prototype neck, that computes a discretized representation of prototype activations and a new loss term that matches prototypes to object classes. This setup leads to interpretable representations in the prototype neck, allowing visual inspection of the image content perceived by the model and a better understanding of the model's reliability. We show experimentally that our method incurs only a limited performance penalty, and we provide examples that demonstrate the quality of the explanations provided by our method, which we argue outweighs the performance penalty.
1804.06660
Iosif Szeidert PhD
Cristian Vasar, Iosif Szeidert, Ioan Filip, Gabriela Prostean
Short Term Electric Load Forecast with Artificial Neural Networks
7 pages, 13 figures, IFAC MCPL 2007 The 4th International Federation of Automatic Control Conference on Management and Control of Production and Logistics, September 27-30, Sibiu - Romania, pp.443-449
IFAC MCPL 2007 The 4th International Federation of Automatic Control Conference on Management and Control of Production and Logistics, September 27-30, Sibiu - Romania, ISBN: 978-973-739-481-1
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents issues regarding short-term electric load forecasting using feedforward and Elman recurrent neural networks. The case studies were developed using measured data representing electrical energy consumption from the Banat area. 35 different structure types were considered for both the feedforward and recurrent network cases. For each type of neural network structure, multiple training runs were performed and the best solution was selected. The issue of short-term load forecasting is essential for effective energy consumption management in an open market environment.
[ { "created": "Wed, 18 Apr 2018 11:36:51 GMT", "version": "v1" } ]
2018-04-19
[ [ "Vasar", "Cristian", "" ], [ "Szeidert", "Iosif", "" ], [ "Filip", "Ioan", "" ], [ "Prostean", "Gabriela", "" ] ]
This paper presents issues regarding short-term electric load forecasting using feedforward and Elman recurrent neural networks. The case studies were developed using measured data representing electrical energy consumption from the Banat area. 35 different structure types were considered for both the feedforward and recurrent network cases. For each type of neural network structure, multiple training runs were performed and the best solution was selected. The issue of short-term load forecasting is essential for effective energy consumption management in an open market environment.
1611.02453
Thorsten Wissmann
Carsten Lutz and Frank Wolter
The Data Complexity of Description Logic Ontologies
null
Logical Methods in Computer Science, Volume 13, Issue 4 (November 13, 2017) lmcs:2203
10.23638/LMCS-13(4:7)2017
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze the data complexity of ontology-mediated querying where the ontologies are formulated in a description logic (DL) of the ALC family and queries are conjunctive queries, positive existential queries, or acyclic conjunctive queries. Our approach is non-uniform in the sense that we aim to understand the complexity of each single ontology instead of for all ontologies formulated in a certain language. While doing so, we quantify over the queries and are interested, for example, in the question whether all queries can be evaluated in polynomial time w.r.t. a given ontology. Our results include a PTime/coNP-dichotomy for ontologies of depth one in the description logic ALCFI, the same dichotomy for ALC- and ALCI-ontologies of unrestricted depth, and the non-existence of such a dichotomy for ALCF-ontologies. For the latter DL, we additionally show that it is undecidable whether a given ontology admits PTime query evaluation. We also consider the connection between PTime query evaluation and rewritability into (monadic) Datalog.
[ { "created": "Tue, 8 Nov 2016 09:52:54 GMT", "version": "v1" }, { "created": "Tue, 24 Oct 2017 09:19:25 GMT", "version": "v2" }, { "created": "Fri, 10 Nov 2017 09:38:00 GMT", "version": "v3" } ]
2023-06-22
[ [ "Lutz", "Carsten", "" ], [ "Wolter", "Frank", "" ] ]
We analyze the data complexity of ontology-mediated querying where the ontologies are formulated in a description logic (DL) of the ALC family and queries are conjunctive queries, positive existential queries, or acyclic conjunctive queries. Our approach is non-uniform in the sense that we aim to understand the complexity of each single ontology instead of for all ontologies formulated in a certain language. While doing so, we quantify over the queries and are interested, for example, in the question whether all queries can be evaluated in polynomial time w.r.t. a given ontology. Our results include a PTime/coNP-dichotomy for ontologies of depth one in the description logic ALCFI, the same dichotomy for ALC- and ALCI-ontologies of unrestricted depth, and the non-existence of such a dichotomy for ALCF-ontologies. For the latter DL, we additionally show that it is undecidable whether a given ontology admits PTime query evaluation. We also consider the connection between PTime query evaluation and rewritability into (monadic) Datalog.
2102.08416
Shoaib Ehsan
Maria Waheed, Michael Milford, Klaus D. McDonald-Maier, Shoaib Ehsan
Improving Visual Place Recognition Performance by Maximising Complementarity
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Visual place recognition (VPR) is the problem of recognising a previously visited location using visual information. Many attempts to improve the performance of VPR methods have been made in the literature. One approach that has received attention recently is the multi-process fusion where different VPR methods run in parallel and their outputs are combined in an effort to achieve better performance. The multi-process fusion, however, does not have a well-defined criterion for selecting and combining different VPR methods from a wide range of available options. To the best of our knowledge, this paper investigates the complementarity of state-of-the-art VPR methods systematically for the first time and identifies those combinations which can result in better performance. The paper presents a well-defined framework which acts as a sanity check to find the complementarity between two techniques by utilising a McNemar's test-like approach. The framework allows estimation of upper and lower complementarity bounds for the VPR techniques to be combined, along with an estimate of maximum VPR performance that may be achieved. Based on this framework, results are presented for eight state-of-the-art VPR methods on ten widely-used VPR datasets showing the potential of different combinations of techniques for achieving better performance.
[ { "created": "Tue, 16 Feb 2021 19:18:33 GMT", "version": "v1" } ]
2021-02-18
[ [ "Waheed", "Maria", "" ], [ "Milford", "Michael", "" ], [ "McDonald-Maier", "Klaus D.", "" ], [ "Ehsan", "Shoaib", "" ] ]
Visual place recognition (VPR) is the problem of recognising a previously visited location using visual information. Many attempts to improve the performance of VPR methods have been made in the literature. One approach that has received attention recently is the multi-process fusion where different VPR methods run in parallel and their outputs are combined in an effort to achieve better performance. The multi-process fusion, however, does not have a well-defined criterion for selecting and combining different VPR methods from a wide range of available options. To the best of our knowledge, this paper investigates the complementarity of state-of-the-art VPR methods systematically for the first time and identifies those combinations which can result in better performance. The paper presents a well-defined framework which acts as a sanity check to find the complementarity between two techniques by utilising a McNemar's test-like approach. The framework allows estimation of upper and lower complementarity bounds for the VPR techniques to be combined, along with an estimate of maximum VPR performance that may be achieved. Based on this framework, results are presented for eight state-of-the-art VPR methods on ten widely-used VPR datasets showing the potential of different combinations of techniques for achieving better performance.
2108.09119
Qingyang Zhou
Qingyang Zhou, Rongpeng Li, Zhifeng Zhao, Chenghui Peng, and Honggang Zhang
Semantic Communication with Adaptive Universal Transformer
null
null
null
null
cs.CL eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the development of deep learning (DL), natural language processing (NLP) makes it possible to analyze and understand large amounts of language text. Accordingly, we can achieve semantic communication through joint semantic source and channel coding over a noisy channel with the help of NLP. However, the existing method for realizing this goal uses a fixed transformer while ignoring the differences in the semantic information contained in each sentence. To solve this problem, we propose a new semantic communication system based on the Universal Transformer. Compared with the traditional transformer, an adaptive circulation mechanism is introduced in the Universal Transformer. Through this circulation mechanism, the new semantic communication system can transmit sentences with different semantic information more flexibly and achieve better end-to-end performance under various channel conditions.
[ { "created": "Fri, 20 Aug 2021 11:36:24 GMT", "version": "v1" }, { "created": "Fri, 27 Aug 2021 04:25:44 GMT", "version": "v2" }, { "created": "Mon, 29 Nov 2021 14:54:56 GMT", "version": "v3" } ]
2021-11-30
[ [ "Zhou", "Qingyang", "" ], [ "Li", "Rongpeng", "" ], [ "Zhao", "Zhifeng", "" ], [ "Peng", "Chenghui", "" ], [ "Zhang", "Honggang", "" ] ]
With the development of deep learning (DL), natural language processing (NLP) makes it possible to analyze and understand large amounts of language text. Accordingly, we can achieve semantic communication through joint semantic source and channel coding over a noisy channel with the help of NLP. However, the existing method for realizing this goal uses a fixed transformer while ignoring the differences in the semantic information contained in each sentence. To solve this problem, we propose a new semantic communication system based on the Universal Transformer. Compared with the traditional transformer, an adaptive circulation mechanism is introduced in the Universal Transformer. Through this circulation mechanism, the new semantic communication system can transmit sentences with different semantic information more flexibly and achieve better end-to-end performance under various channel conditions.
1506.07609
Vikas Garg
Vikas K. Garg, Cynthia Rudin, and Tommi Jaakkola
CRAFT: ClusteR-specific Assorted Feature selecTion
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a framework for clustering with cluster-specific feature selection. The framework, CRAFT, is derived from asymptotic log posterior formulations of nonparametric MAP-based clustering models. CRAFT handles assorted data, i.e., both numeric and categorical data, and the underlying objective functions are intuitively appealing. The resulting algorithm is simple to implement and scales nicely, requires minimal parameter tuning, obviates the need to specify the number of clusters a priori, and compares favorably with other methods on real datasets.
[ { "created": "Thu, 25 Jun 2015 04:14:49 GMT", "version": "v1" } ]
2015-06-26
[ [ "Garg", "Vikas K.", "" ], [ "Rudin", "Cynthia", "" ], [ "Jaakkola", "Tommi", "" ] ]
We present a framework for clustering with cluster-specific feature selection. The framework, CRAFT, is derived from asymptotic log posterior formulations of nonparametric MAP-based clustering models. CRAFT handles assorted data, i.e., both numeric and categorical data, and the underlying objective functions are intuitively appealing. The resulting algorithm is simple to implement and scales nicely, requires minimal parameter tuning, obviates the need to specify the number of clusters a priori, and compares favorably with other methods on real datasets.
1705.09860
Edgar Sucar
Edgar Sucar, Jean-Bernard Hayet
Probabilistic Global Scale Estimation for MonoSLAM Based on Generic Object Detection
Int. Workshop on Visual Odometry, CVPR, (July 2017)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a novel method to estimate the global scale of a 3D reconstructed model within a Kalman filtering-based monocular SLAM algorithm. Our Bayesian framework integrates height priors over detected objects belonging to a set of broad predefined classes, based on recent advances in fast generic object detection. Each observation is produced on a single frame, so we do not need a data association process across video frames. This is because we associate the height priors with the image region sizes at the image locations where map feature projections fall within the object detection regions. We present very promising results obtained in several experiments with different object classes.
[ { "created": "Sat, 27 May 2017 20:14:31 GMT", "version": "v1" } ]
2017-05-30
[ [ "Sucar", "Edgar", "" ], [ "Hayet", "Jean-Bernard", "" ] ]
This paper proposes a novel method to estimate the global scale of a 3D reconstructed model within a Kalman filtering-based monocular SLAM algorithm. Our Bayesian framework integrates height priors over detected objects belonging to a set of broad predefined classes, based on recent advances in fast generic object detection. Each observation is produced on a single frame, so we do not need a data association process across video frames. This is because we associate the height priors with the image region sizes at the image locations where map feature projections fall within the object detection regions. We present very promising results obtained in several experiments with different object classes.
2202.00273
Axel Sauer
Axel Sauer, Katja Schwarz, Andreas Geiger
StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets
To appear in SIGGRAPH 2022. Project Page: https://sites.google.com/view/stylegan-xl/
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computer graphics has experienced a recent surge of data-centric approaches for photorealistic and controllable content creation. StyleGAN in particular sets new standards for generative modeling regarding image quality and controllability. However, StyleGAN's performance severely degrades on large unstructured datasets such as ImageNet. StyleGAN was designed for controllability; hence, prior works suspect its restrictive design to be unsuitable for diverse datasets. In contrast, we find the main limiting factor to be the current training strategy. Following the recently introduced Projected GAN paradigm, we leverage powerful neural network priors and a progressive growing strategy to successfully train the latest StyleGAN3 generator on ImageNet. Our final model, StyleGAN-XL, sets a new state-of-the-art on large-scale image synthesis and is the first to generate images at a resolution of $1024^2$ at such a dataset scale. We demonstrate that this model can invert and edit images beyond the narrow domain of portraits or specific object classes.
[ { "created": "Tue, 1 Feb 2022 08:22:34 GMT", "version": "v1" }, { "created": "Thu, 5 May 2022 09:18:29 GMT", "version": "v2" } ]
2022-05-06
[ [ "Sauer", "Axel", "" ], [ "Schwarz", "Katja", "" ], [ "Geiger", "Andreas", "" ] ]
Computer graphics has experienced a recent surge of data-centric approaches for photorealistic and controllable content creation. StyleGAN in particular sets new standards for generative modeling regarding image quality and controllability. However, StyleGAN's performance severely degrades on large unstructured datasets such as ImageNet. StyleGAN was designed for controllability; hence, prior works suspect its restrictive design to be unsuitable for diverse datasets. In contrast, we find the main limiting factor to be the current training strategy. Following the recently introduced Projected GAN paradigm, we leverage powerful neural network priors and a progressive growing strategy to successfully train the latest StyleGAN3 generator on ImageNet. Our final model, StyleGAN-XL, sets a new state-of-the-art on large-scale image synthesis and is the first to generate images at a resolution of $1024^2$ at such a dataset scale. We demonstrate that this model can invert and edit images beyond the narrow domain of portraits or specific object classes.
2308.14500
Di Yang
Di Yang, Yaohui Wang, Antitza Dantcheva, Quan Kong, Lorenzo Garattoni, Gianpiero Francesca, Francois Bremond
LAC: Latent Action Composition for Skeleton-based Action Segmentation
ICCV 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Skeleton-based action segmentation requires recognizing composable actions in untrimmed videos. Current approaches decouple this problem by first extracting local visual features from skeleton sequences and then processing them by a temporal model to classify frame-wise actions. However, their performances remain limited as the visual features cannot sufficiently express composable actions. In this context, we propose Latent Action Composition (LAC), a novel self-supervised framework aiming at learning from synthesized composable motions for skeleton-based action segmentation. LAC is composed of a novel generation module towards synthesizing new sequences. Specifically, we design a linear latent space in the generator to represent primitive motion. New composed motions can be synthesized by simply performing arithmetic operations on latent representations of multiple input skeleton sequences. LAC leverages such synthesized sequences, which have large diversity and complexity, for learning visual representations of skeletons in both sequence and frame spaces via contrastive learning. The resulting visual encoder has a high expressive power and can be effectively transferred onto action segmentation tasks by end-to-end fine-tuning without the need for additional temporal models. We conduct a study focusing on transfer-learning and we show that representations learned from pre-trained LAC outperform the state-of-the-art by a large margin on TSU, Charades, PKU-MMD datasets.
[ { "created": "Mon, 28 Aug 2023 11:20:48 GMT", "version": "v1" }, { "created": "Wed, 30 Aug 2023 14:18:58 GMT", "version": "v2" }, { "created": "Thu, 31 Aug 2023 12:02:47 GMT", "version": "v3" }, { "created": "Wed, 21 Feb 2024 18:50:50 GMT", "version": "v4" } ]
2024-02-22
[ [ "Yang", "Di", "" ], [ "Wang", "Yaohui", "" ], [ "Dantcheva", "Antitza", "" ], [ "Kong", "Quan", "" ], [ "Garattoni", "Lorenzo", "" ], [ "Francesca", "Gianpiero", "" ], [ "Bremond", "Francois", "" ] ]
Skeleton-based action segmentation requires recognizing composable actions in untrimmed videos. Current approaches decouple this problem by first extracting local visual features from skeleton sequences and then processing them by a temporal model to classify frame-wise actions. However, their performances remain limited as the visual features cannot sufficiently express composable actions. In this context, we propose Latent Action Composition (LAC), a novel self-supervised framework aiming at learning from synthesized composable motions for skeleton-based action segmentation. LAC is composed of a novel generation module towards synthesizing new sequences. Specifically, we design a linear latent space in the generator to represent primitive motion. New composed motions can be synthesized by simply performing arithmetic operations on latent representations of multiple input skeleton sequences. LAC leverages such synthesized sequences, which have large diversity and complexity, for learning visual representations of skeletons in both sequence and frame spaces via contrastive learning. The resulting visual encoder has a high expressive power and can be effectively transferred onto action segmentation tasks by end-to-end fine-tuning without the need for additional temporal models. We conduct a study focusing on transfer-learning and we show that representations learned from pre-trained LAC outperform the state-of-the-art by a large margin on TSU, Charades, PKU-MMD datasets.
2401.13807
Daniel Bochen Tan
Daniel Bochen Tan and Shuohao Ping and Jason Cong
Depth-Optimal Addressing of 2D Qubit Array with 1D Controls Based on Exact Binary Matrix Factorization
null
null
null
null
cs.ET quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reducing control complexity is essential for achieving large-scale quantum computing. However, reducing control knobs may compromise the ability to independently address each qubit. Recent progress in neutral atom-based platforms suggests that rectangular (row-column) addressing may strike a balance between control granularity and flexibility for 2D qubit arrays. This scheme allows addressing qubits on the intersections of a set of rows and columns each time. While quadratically reducing controls, it may necessitate more depth. We formulate the depth-optimal rectangular addressing problem as exact binary matrix factorization, an NP-hard problem also appearing in communication complexity and combinatorial optimization. We introduce a satisfiability modulo theories-based solver for this problem, and a heuristic, row packing, performing close to the optimal solver on various benchmarks. Furthermore, we discuss rectangular addressing in the context of fault-tolerant quantum computing, leveraging a natural two-level structure.
[ { "created": "Wed, 24 Jan 2024 20:58:14 GMT", "version": "v1" }, { "created": "Fri, 22 Mar 2024 23:36:10 GMT", "version": "v2" } ]
2024-03-26
[ [ "Tan", "Daniel Bochen", "" ], [ "Ping", "Shuohao", "" ], [ "Cong", "Jason", "" ] ]
Reducing control complexity is essential for achieving large-scale quantum computing. However, reducing control knobs may compromise the ability to independently address each qubit. Recent progress in neutral atom-based platforms suggests that rectangular (row-column) addressing may strike a balance between control granularity and flexibility for 2D qubit arrays. This scheme allows addressing qubits on the intersections of a set of rows and columns each time. While quadratically reducing controls, it may necessitate more depth. We formulate the depth-optimal rectangular addressing problem as exact binary matrix factorization, an NP-hard problem also appearing in communication complexity and combinatorial optimization. We introduce a satisfiability modulo theories-based solver for this problem, and a heuristic, row packing, performing close to the optimal solver on various benchmarks. Furthermore, we discuss rectangular addressing in the context of fault-tolerant quantum computing, leveraging a natural two-level structure.
2203.11740
Junbo Tao
Jun-Bo Tao, Bai-Qing Sun, Wei-Dong Zhu, Shi-You Qu, Jia-Qiang Li, Guo-Qi Li, Yan-Yan Wang, Ling-Kun Chen, Chong Wu, Yu Xiong, Jiaxuan Zhou
The Deep Learning model of Higher-Lower-Order Cognition, Memory, and Affection- More General Than KAN
null
null
null
null
cs.NE cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We first simulated disease dynamics with KAN (Kolmogorov-Arnold Networks) nearly four years ago: the kernel functions on the edges involve the exponential numbers of infected and discharged people, in line with the Kolmogorov-Arnold representation theorem; the shared edge weights are the infection rate and cure rate; and a tanh activation is used at the nodes of the edges. This arXiv preprint (version 1, March 2022) is an upgraded version of KAN that considers an invariant coarse-graining computed from the residual or the gradient of the MSE loss. The improved KAN is PNN (Plasticity Neural Networks), or ELKAN (Edge Learning KAN); in addition to edge learning, it also considers pruning of the edges. We were inspired not by the Kolmogorov-Arnold representation theorem but by brain science. When ELKAN is used to explain the brain, the variables correspond to different types of neurons; the learned edges can be explained by the rebalancing of synaptic strength and the phagocytosis of synapses by glial cells; the kernel functions represent the discharge of neurons and synapses; and different neurons and edges correspond to brain regions. In tests using cosine functions, ELKAN, or ORPNN (Optimized Range PNN), outperforms KAN, or CRPNN (Constant Range PNN). ELKAN is more general for exploring the brain, for example: the mechanism of consciousness, including the interactions of natural frequencies in brain regions, synaptic and neuronal discharge frequencies, and data signal frequencies; the mechanism of Alzheimer's disease, where patients show more high frequencies in the upstream brain regions; relatively good long-term and inferior short-term memory, which corresponds to the gradient of the architecture and the architecture itself; turbulent energy flow in different brain regions, where turbulence critical conditions need to be met; and heart-brain coupling, where quantum entanglement may occur between the emotions of the heartbeat and the synaptic strength of brain potentials.
[ { "created": "Sat, 19 Mar 2022 14:38:54 GMT", "version": "v1" }, { "created": "Wed, 1 Mar 2023 17:42:34 GMT", "version": "v10" }, { "created": "Wed, 8 Mar 2023 17:07:54 GMT", "version": "v11" }, { "created": "Sun, 26 Mar 2023 11:12:27 GMT", "version": "v12" }, { "created": "Sun, 16 Apr 2023 14:01:52 GMT", "version": "v13" }, { "created": "Wed, 26 Apr 2023 17:35:40 GMT", "version": "v14" }, { "created": "Mon, 12 Jun 2023 07:00:38 GMT", "version": "v15" }, { "created": "Tue, 1 Aug 2023 20:20:59 GMT", "version": "v16" }, { "created": "Sun, 15 Oct 2023 17:22:13 GMT", "version": "v17" }, { "created": "Sat, 1 Jun 2024 11:58:22 GMT", "version": "v18" }, { "created": "Sun, 15 May 2022 06:52:23 GMT", "version": "v2" }, { "created": "Mon, 15 Aug 2022 17:39:58 GMT", "version": "v3" }, { "created": "Wed, 24 Aug 2022 14:12:29 GMT", "version": "v4" }, { "created": "Tue, 4 Oct 2022 17:19:56 GMT", "version": "v5" }, { "created": "Tue, 18 Oct 2022 14:11:14 GMT", "version": "v6" }, { "created": "Mon, 7 Nov 2022 11:11:14 GMT", "version": "v7" }, { "created": "Thu, 5 Jan 2023 16:38:11 GMT", "version": "v8" }, { "created": "Sun, 12 Feb 2023 08:13:37 GMT", "version": "v9" } ]
2024-06-04
[ [ "Tao", "Jun-Bo", "" ], [ "Sun", "Bai-Qing", "" ], [ "Zhu", "Wei-Dong", "" ], [ "Qu", "Shi-You", "" ], [ "Li", "Jia-Qiang", "" ], [ "Li", "Guo-Qi", "" ], [ "Wang", "Yan-Yan", "" ], [ "Chen", "Ling-Kun", "" ], [ "Wu", "Chong", "" ], [ "Xiong", "Yu", "" ], [ "Zhou", "Jiaxuan", "" ] ]
We first simulated disease dynamics with KAN (Kolmogorov-Arnold Networks) nearly four years ago: the kernel functions on the edges involve the exponential numbers of infected and discharged people, in line with the Kolmogorov-Arnold representation theorem; the shared edge weights are the infection rate and cure rate; and a tanh activation is used at the nodes of the edges. This arXiv preprint (version 1, March 2022) is an upgraded version of KAN that considers an invariant coarse-graining computed from the residual or the gradient of the MSE loss. The improved KAN is PNN (Plasticity Neural Networks), or ELKAN (Edge Learning KAN); in addition to edge learning, it also considers pruning of the edges. We were inspired not by the Kolmogorov-Arnold representation theorem but by brain science. When ELKAN is used to explain the brain, the variables correspond to different types of neurons; the learned edges can be explained by the rebalancing of synaptic strength and the phagocytosis of synapses by glial cells; the kernel functions represent the discharge of neurons and synapses; and different neurons and edges correspond to brain regions. In tests using cosine functions, ELKAN, or ORPNN (Optimized Range PNN), outperforms KAN, or CRPNN (Constant Range PNN). ELKAN is more general for exploring the brain, for example: the mechanism of consciousness, including the interactions of natural frequencies in brain regions, synaptic and neuronal discharge frequencies, and data signal frequencies; the mechanism of Alzheimer's disease, where patients show more high frequencies in the upstream brain regions; relatively good long-term and inferior short-term memory, which corresponds to the gradient of the architecture and the architecture itself; turbulent energy flow in different brain regions, where turbulence critical conditions need to be met; and heart-brain coupling, where quantum entanglement may occur between the emotions of the heartbeat and the synaptic strength of brain potentials.
1804.09705
Thomas Sturm
Hoon Hong, Thomas Sturm
Positive Solutions of Systems of Signed Parametric Polynomial Inequalities
null
Proc. CASC 2018, LNCS 11077, pp.238-253, Springer 2018
10.1007/978-3-319-99639-4_17
null
cs.SC cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider systems of strict multivariate polynomial inequalities over the reals. All polynomial coefficients are parameters ranging over the reals, where for each coefficient we prescribe its sign. We are interested in the existence of positive real solutions of our system for all choices of coefficients subject to our sign conditions. We give a decision procedure for the existence of such solutions. In the positive case our procedure yields a parametric positive solution as a rational function in the coefficients. Our framework allows to reformulate heuristic subtropical approaches for non-parametric systems of polynomial inequalities that have been recently used in qualitative biological network analysis and, independently, in satisfiability modulo theory solving. We apply our results to characterize the incompleteness of those methods.
[ { "created": "Wed, 25 Apr 2018 17:53:25 GMT", "version": "v1" } ]
2018-09-06
[ [ "Hong", "Hoon", "" ], [ "Sturm", "Thomas", "" ] ]
We consider systems of strict multivariate polynomial inequalities over the reals. All polynomial coefficients are parameters ranging over the reals, where for each coefficient we prescribe its sign. We are interested in the existence of positive real solutions of our system for all choices of coefficients subject to our sign conditions. We give a decision procedure for the existence of such solutions. In the positive case our procedure yields a parametric positive solution as a rational function in the coefficients. Our framework allows to reformulate heuristic subtropical approaches for non-parametric systems of polynomial inequalities that have been recently used in qualitative biological network analysis and, independently, in satisfiability modulo theory solving. We apply our results to characterize the incompleteness of those methods.
1810.04456
Zhaohui Che
Zhaohui Che, Ali Borji, Guangtao Zhai, Xiongkuo Min
Invariance Analysis of Saliency Models versus Human Gaze During Scene Free Viewing
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most current studies on human gaze and saliency modeling have used high-quality stimuli. In the real world, however, captured images undergo various types of distortion throughout the acquisition, transmission, and display chain. Some distortion types include motion blur, lighting variations, and rotation. Despite a few efforts, the influences of ubiquitous distortions on visual attention and saliency models have not been systematically investigated. In this paper, we first create a large-scale database including eye movements of 10 observers over 1900 images degraded by 19 types of distortion. Second, by analyzing eye movements and saliency models, we find that: a) observers look at different locations in distorted versus original images, and b) the performance of saliency models is drastically hindered on distorted images, with the maximum performance drop belonging to the Rotation and Shearing distortions. Finally, we investigate the effectiveness of different distortions when serving as data augmentation transformations. Experimental results verify that some useful data augmentation transformations, which preserve the human gaze of the reference images, can improve deep saliency models against distortions, while some invalid transformations, which severely change human gaze, degrade performance.
[ { "created": "Wed, 10 Oct 2018 11:10:28 GMT", "version": "v1" } ]
2018-10-11
[ [ "Che", "Zhaohui", "" ], [ "Borji", "Ali", "" ], [ "Zhai", "Guangtao", "" ], [ "Min", "Xiongkuo", "" ] ]
Most current studies on human gaze and saliency modeling have used high-quality stimuli. In the real world, however, captured images undergo various types of distortion throughout the acquisition, transmission, and display chain. Some distortion types include motion blur, lighting variations, and rotation. Despite a few efforts, the influences of ubiquitous distortions on visual attention and saliency models have not been systematically investigated. In this paper, we first create a large-scale database including eye movements of 10 observers over 1900 images degraded by 19 types of distortion. Second, by analyzing eye movements and saliency models, we find that: a) observers look at different locations in distorted versus original images, and b) the performance of saliency models is drastically hindered on distorted images, with the maximum performance drop belonging to the Rotation and Shearing distortions. Finally, we investigate the effectiveness of different distortions when serving as data augmentation transformations. Experimental results verify that some useful data augmentation transformations, which preserve the human gaze of the reference images, can improve deep saliency models against distortions, while some invalid transformations, which severely change human gaze, degrade performance.
1605.01335
Jakub Sygnowski
Jakub Sygnowski and Henryk Michalewski
Learning from the memory of Atari 2600
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We train a number of neural networks to play the games Bowling, Breakout, and Seaquest using information stored in the memory of the Atari 2600 video game console. We consider four neural network models which differ in size and architecture: two networks which use only the information contained in the RAM, and two mixed networks which use both the information in the RAM and information from the screen. As the benchmark we used the convolutional model proposed at NIPS and obtained comparable results in all considered games. Quite surprisingly, in the case of Seaquest we were able to train RAM-only agents which behave better than the benchmark screen-only agent. Mixing screen and RAM did not lead to improved performance compared to the screen-only and RAM-only agents.
[ { "created": "Wed, 4 May 2016 16:23:34 GMT", "version": "v1" } ]
2016-05-05
[ [ "Sygnowski", "Jakub", "" ], [ "Michalewski", "Henryk", "" ] ]
We train a number of neural networks to play the games Bowling, Breakout, and Seaquest using information stored in the memory of an Atari 2600 video game console. We consider four neural network models which differ in size and architecture: two networks which use only information contained in the RAM, and two mixed networks which use both the information in the RAM and information from the screen. As a benchmark we used the convolutional model proposed at NIPS and obtained comparable results in all considered games. Quite surprisingly, in the case of Seaquest we were able to train RAM-only agents which behave better than the benchmark screen-only agent. Mixing screen and RAM did not lead to improved performance compared to the screen-only and RAM-only agents.
1811.02741
Khondokar Fida Hasan
Khondokar Fida Hasan, Yanming Feng and Yu-Chu Tian
GNSS Time Synchronization in Vehicular Ad-Hoc Networks: Benefits and Feasibility
10 pages
Hasan, Khondokar Fida, Yanming Feng, and Yu-Chu Tian. "GNSS Time Synchronization in Vehicular Ad-Hoc Networks: Benefits and Feasibility." IEEE Transactions on Intelligent Transportation Systems (2018)
10.1109/TITS.2017.2789291
null
cs.NI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Time synchronization is critical for the operation of distributed systems in networked environments. It is also required in vehicular ad-hoc networks (VANETs), which, as a special type of wireless network, are becoming increasingly important for emerging cooperative intelligent transport systems. The global navigation satellite system (GNSS) is a proven technology for providing precise timing information in many distributed systems. It is well recognized as the primary means for vehicle positioning and velocity determination in VANETs. However, GNSS-based time synchronization is not well understood for its role in the coordination of various tasks in VANETs. To address this issue, this paper examines the requirements, potential benefits, and feasibility of GNSS time synchronization in VANETs. The availability of GNSS time synchronization is characterized by almost 100% in our experiments in high-rise urban streets, where the availability of GNSS positioning solutions is only 80%. Experiments are also conducted to test the accuracy of time synchronization with 1-PPS signals output from consumer-grade GNSS receivers. They show 30-ns synchronization accuracy between two receivers of different models. All these experimental results demonstrate the feasibility of GNSS time synchronization for stringent VANET applications.
[ { "created": "Wed, 7 Nov 2018 02:58:52 GMT", "version": "v1" } ]
2018-11-08
[ [ "Hasan", "Khondokar Fida", "" ], [ "Feng", "Yanming", "" ], [ "Tian", "Yu-Chu", "" ] ]
Time synchronization is critical for the operation of distributed systems in networked environments. It is also required in vehicular ad-hoc networks (VANETs), which, as a special type of wireless network, are becoming increasingly important for emerging cooperative intelligent transport systems. The global navigation satellite system (GNSS) is a proven technology for providing precise timing information in many distributed systems. It is well recognized as the primary means for vehicle positioning and velocity determination in VANETs. However, GNSS-based time synchronization is not well understood for its role in the coordination of various tasks in VANETs. To address this issue, this paper examines the requirements, potential benefits, and feasibility of GNSS time synchronization in VANETs. The availability of GNSS time synchronization is characterized by almost 100% in our experiments in high-rise urban streets, where the availability of GNSS positioning solutions is only 80%. Experiments are also conducted to test the accuracy of time synchronization with 1-PPS signals output from consumer-grade GNSS receivers. They show 30-ns synchronization accuracy between two receivers of different models. All these experimental results demonstrate the feasibility of GNSS time synchronization for stringent VANET applications.
2107.02389
Qingyong Hu
Qingyong Hu, Bo Yang, Linhai Xie, Stefano Rosa, Yulan Guo, Zhihua Wang, Niki Trigoni and Andrew Markham
Learning Semantic Segmentation of Large-Scale Point Clouds with Random Sampling
IEEE TPAMI 2021. arXiv admin note: substantial text overlap with arXiv:1911.11236
null
10.1109/TPAMI.2021.3083288
null
cs.CV cs.AI cs.RO eess.SP
http://creativecommons.org/licenses/by-nc-sa/4.0/
We study the problem of efficient semantic segmentation of large-scale 3D point clouds. Because they rely on expensive sampling techniques or computationally heavy pre/post-processing steps, most existing approaches can only be trained on and operate over small-scale point clouds. In this paper, we introduce RandLA-Net, an efficient and lightweight neural architecture to directly infer per-point semantics for large-scale point clouds. The key to our approach is to use random point sampling instead of more complex point selection approaches. Although remarkably computation- and memory-efficient, random sampling can discard key features by chance. To overcome this, we introduce a novel local feature aggregation module to progressively increase the receptive field for each 3D point, thereby effectively preserving geometric details. Comparative experiments show that our RandLA-Net can process 1 million points in a single pass up to 200x faster than existing approaches. Moreover, extensive experiments on five large-scale point cloud datasets, including Semantic3D, SemanticKITTI, Toronto3D, NPM3D and S3DIS, demonstrate the state-of-the-art semantic segmentation performance of our RandLA-Net.
[ { "created": "Tue, 6 Jul 2021 05:08:34 GMT", "version": "v1" } ]
2021-07-07
[ [ "Hu", "Qingyong", "" ], [ "Yang", "Bo", "" ], [ "Xie", "Linhai", "" ], [ "Rosa", "Stefano", "" ], [ "Guo", "Yulan", "" ], [ "Wang", "Zhihua", "" ], [ "Trigoni", "Niki", "" ], [ "Markham", "Andrew", "" ] ]
We study the problem of efficient semantic segmentation of large-scale 3D point clouds. Because they rely on expensive sampling techniques or computationally heavy pre/post-processing steps, most existing approaches can only be trained on and operate over small-scale point clouds. In this paper, we introduce RandLA-Net, an efficient and lightweight neural architecture to directly infer per-point semantics for large-scale point clouds. The key to our approach is to use random point sampling instead of more complex point selection approaches. Although remarkably computation- and memory-efficient, random sampling can discard key features by chance. To overcome this, we introduce a novel local feature aggregation module to progressively increase the receptive field for each 3D point, thereby effectively preserving geometric details. Comparative experiments show that our RandLA-Net can process 1 million points in a single pass up to 200x faster than existing approaches. Moreover, extensive experiments on five large-scale point cloud datasets, including Semantic3D, SemanticKITTI, Toronto3D, NPM3D and S3DIS, demonstrate the state-of-the-art semantic segmentation performance of our RandLA-Net.
2002.00329
Jeongyeol Kwon
Jeongyeol Kwon, Constantine Caramanis
The EM Algorithm gives Sample-Optimality for Learning Mixtures of Well-Separated Gaussians
Accepted to COLT 2020; Title changed
null
null
null
cs.LG math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of spherical Gaussian Mixture models with $k \geq 3$ components when the components are well separated. A fundamental previous result established that separation of $\Omega(\sqrt{\log k})$ is necessary and sufficient for identifiability of the parameters with polynomial sample complexity (Regev and Vijayaraghavan, 2017). In the same context, we show that $\tilde{O} (kd/\epsilon^2)$ samples suffice for any $\epsilon \lesssim 1/k$, closing the gap from polynomial to linear, and thus giving the first optimal sample upper bound for the parameter estimation of well-separated Gaussian mixtures. We accomplish this by proving a new result for the Expectation-Maximization (EM) algorithm: we show that EM converges locally, under separation $\Omega(\sqrt{\log k})$. The previous best-known guarantee required $\Omega(\sqrt{k})$ separation (Yan, et al., 2017). Unlike prior work, our results do not assume or use prior knowledge of the (potentially different) mixing weights or variances of the Gaussian components. Furthermore, our results show that the finite-sample error of EM does not depend on non-universal quantities such as pairwise distances between means of Gaussian components.
[ { "created": "Sun, 2 Feb 2020 05:09:26 GMT", "version": "v1" }, { "created": "Fri, 19 Jun 2020 17:36:40 GMT", "version": "v2" } ]
2020-06-22
[ [ "Kwon", "Jeongyeol", "" ], [ "Caramanis", "Constantine", "" ] ]
We consider the problem of spherical Gaussian Mixture models with $k \geq 3$ components when the components are well separated. A fundamental previous result established that separation of $\Omega(\sqrt{\log k})$ is necessary and sufficient for identifiability of the parameters with polynomial sample complexity (Regev and Vijayaraghavan, 2017). In the same context, we show that $\tilde{O} (kd/\epsilon^2)$ samples suffice for any $\epsilon \lesssim 1/k$, closing the gap from polynomial to linear, and thus giving the first optimal sample upper bound for the parameter estimation of well-separated Gaussian mixtures. We accomplish this by proving a new result for the Expectation-Maximization (EM) algorithm: we show that EM converges locally, under separation $\Omega(\sqrt{\log k})$. The previous best-known guarantee required $\Omega(\sqrt{k})$ separation (Yan, et al., 2017). Unlike prior work, our results do not assume or use prior knowledge of the (potentially different) mixing weights or variances of the Gaussian components. Furthermore, our results show that the finite-sample error of EM does not depend on non-universal quantities such as pairwise distances between means of Gaussian components.
2407.19889
Carlos Pe\~narrubia
Carlos Penarrubia, Jose J. Valero-Mas, Jorge Calvo-Zaragoza
Self-Supervised Learning for Text Recognition: A Critical Survey
This article is under revision
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Text Recognition (TR) refers to the research area that focuses on retrieving textual information from images, a topic that has seen significant advancements in the last decade due to the use of Deep Neural Networks (DNNs). However, these solutions often necessitate vast amounts of manually labeled or synthetic data. Addressing this challenge, Self-Supervised Learning (SSL) has gained attention by utilizing large datasets of unlabeled data to train DNNs, thereby generating meaningful and robust representations. Although SSL was initially overlooked in TR because of its unique characteristics, recent years have witnessed a surge in the development of SSL methods specifically for this field. This rapid development, however, has led to many methods being explored independently, without taking previous efforts in methodology or comparison into account, thereby hindering progress in the field. This paper, therefore, seeks to consolidate the use of SSL in TR, offering a critical and comprehensive overview of the current state of the art. We review and analyze the existing methods, compare their results, and highlight inconsistencies in the current literature. This thorough analysis aims to provide general insights into the field, propose standardizations, identify new research directions, and foster its proper development.
[ { "created": "Mon, 29 Jul 2024 11:11:17 GMT", "version": "v1" } ]
2024-07-30
[ [ "Penarrubia", "Carlos", "" ], [ "Valero-Mas", "Jose J.", "" ], [ "Calvo-Zaragoza", "Jorge", "" ] ]
Text Recognition (TR) refers to the research area that focuses on retrieving textual information from images, a topic that has seen significant advancements in the last decade due to the use of Deep Neural Networks (DNNs). However, these solutions often necessitate vast amounts of manually labeled or synthetic data. Addressing this challenge, Self-Supervised Learning (SSL) has gained attention by utilizing large datasets of unlabeled data to train DNNs, thereby generating meaningful and robust representations. Although SSL was initially overlooked in TR because of its unique characteristics, recent years have witnessed a surge in the development of SSL methods specifically for this field. This rapid development, however, has led to many methods being explored independently, without taking previous efforts in methodology or comparison into account, thereby hindering progress in the field. This paper, therefore, seeks to consolidate the use of SSL in TR, offering a critical and comprehensive overview of the current state of the art. We review and analyze the existing methods, compare their results, and highlight inconsistencies in the current literature. This thorough analysis aims to provide general insights into the field, propose standardizations, identify new research directions, and foster its proper development.
2111.04588
Honey Nikam
Honey Nikam, Siddharth Satyam, Shubham Sahay
Long Short-Term Memory Implementation Exploiting Passive RRAM Crossbar Array
null
null
10.1109/TED.2021.3133197
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ever-increasing demand to extract temporal correlations across sequential data and perform context-based learning in this era of big data has led to the development of long short-term memory (LSTM) networks. Furthermore, there is an urgent need to perform these time-series data-dependent applications including speech/video processing and recognition, language modelling and translation, etc. on compact internet-of-things (IoT) edge devices with limited energy. To this end, in this work, for the first time, we propose an extremely area- and energy-efficient LSTM network implementation exploiting the passive resistive random access memory (RRAM) crossbar array. We developed a hardware-aware LSTM network simulation framework and performed an extensive analysis of the proposed LSTM implementation considering the non-ideal hardware artifacts such as spatial (device-to-device) and temporal variations, non-linearity, noise, etc. utilizing an experimentally calibrated comprehensive phenomenological model for passive RRAM crossbar array. Our results indicate that the proposed passive RRAM crossbar-based LSTM network implementation not only outperforms the prior digital and active 1T-1R crossbar-based LSTM implementations by more than three orders of magnitude in terms of area and two orders of magnitude in terms of training energy for identical network accuracy, but also exhibits robustness against spatial and temporal variations and noise, and a faster convergence rate. Our work may provide the incentive for experimental realization of LSTM networks on passive RRAM crossbar arrays.
[ { "created": "Mon, 8 Nov 2021 15:50:09 GMT", "version": "v1" } ]
2022-04-06
[ [ "Nikam", "Honey", "" ], [ "Satyam", "Siddharth", "" ], [ "Sahay", "Shubham", "" ] ]
The ever-increasing demand to extract temporal correlations across sequential data and perform context-based learning in this era of big data has led to the development of long short-term memory (LSTM) networks. Furthermore, there is an urgent need to perform these time-series data-dependent applications including speech/video processing and recognition, language modelling and translation, etc. on compact internet-of-things (IoT) edge devices with limited energy. To this end, in this work, for the first time, we propose an extremely area- and energy-efficient LSTM network implementation exploiting the passive resistive random access memory (RRAM) crossbar array. We developed a hardware-aware LSTM network simulation framework and performed an extensive analysis of the proposed LSTM implementation considering the non-ideal hardware artifacts such as spatial (device-to-device) and temporal variations, non-linearity, noise, etc. utilizing an experimentally calibrated comprehensive phenomenological model for passive RRAM crossbar array. Our results indicate that the proposed passive RRAM crossbar-based LSTM network implementation not only outperforms the prior digital and active 1T-1R crossbar-based LSTM implementations by more than three orders of magnitude in terms of area and two orders of magnitude in terms of training energy for identical network accuracy, but also exhibits robustness against spatial and temporal variations and noise, and a faster convergence rate. Our work may provide the incentive for experimental realization of LSTM networks on passive RRAM crossbar arrays.
2310.18349
Minghao Tang
Minghao Tang, Yongquan He, Yongxiu Xu, Hongbo Xu, Wenyuan Zhang, Yang Lin
A Boundary Offset Prediction Network for Named Entity Recognition
Accepted by Findings of EMNLP 2023, 13 pages
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Named entity recognition (NER) is a fundamental task in natural language processing that aims to identify and classify named entities in text. However, span-based methods for NER typically assign entity types to text spans, resulting in an imbalanced sample space and neglecting the connections between non-entity and entity spans. To address these issues, we propose a novel approach for NER, named the Boundary Offset Prediction Network (BOPN), which predicts the boundary offsets between candidate spans and their nearest entity spans. By leveraging the guiding semantics of boundary offsets, BOPN establishes connections between non-entity and entity spans, enabling non-entity spans to function as additional positive samples for entity detection. Furthermore, our method integrates entity type and span representations to generate type-aware boundary offsets instead of using entity types as detection targets. We conduct experiments on eight widely-used NER datasets, and the results demonstrate that our proposed BOPN outperforms previous state-of-the-art methods.
[ { "created": "Mon, 23 Oct 2023 05:04:07 GMT", "version": "v1" } ]
2023-10-31
[ [ "Tang", "Minghao", "" ], [ "He", "Yongquan", "" ], [ "Xu", "Yongxiu", "" ], [ "Xu", "Hongbo", "" ], [ "Zhang", "Wenyuan", "" ], [ "Lin", "Yang", "" ] ]
Named entity recognition (NER) is a fundamental task in natural language processing that aims to identify and classify named entities in text. However, span-based methods for NER typically assign entity types to text spans, resulting in an imbalanced sample space and neglecting the connections between non-entity and entity spans. To address these issues, we propose a novel approach for NER, named the Boundary Offset Prediction Network (BOPN), which predicts the boundary offsets between candidate spans and their nearest entity spans. By leveraging the guiding semantics of boundary offsets, BOPN establishes connections between non-entity and entity spans, enabling non-entity spans to function as additional positive samples for entity detection. Furthermore, our method integrates entity type and span representations to generate type-aware boundary offsets instead of using entity types as detection targets. We conduct experiments on eight widely-used NER datasets, and the results demonstrate that our proposed BOPN outperforms previous state-of-the-art methods.
2303.02206
Navid Madani
Navid Madani, Rohini K. Srihari, Kenneth Joseph
Domain Specific Question Answering Over Knowledge Graphs Using Logical Programming and Large Language Models
null
null
null
null
cs.LG cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Answering questions over domain-specific knowledge graphs requires a tailored approach due to the limited number of relations and the specific nature of the domain. Our approach integrates classic logic programming languages into large language models (LLMs), enabling the use of logical reasoning capabilities to tackle the KGQA task. By representing the questions as Prolog queries, which are readable and close to natural language, we facilitate the generation of programmatically derived answers. To validate the effectiveness of our approach, we evaluate it on a well-known benchmark dataset, MetaQA. Our experimental results demonstrate that our method accurately identifies the correct answer entities for all test questions, even when trained on a small fraction of the annotated data. Overall, our work presents a promising approach to question answering over domain-specific graphs, offering an explainable and robust solution by incorporating logic programming languages.
[ { "created": "Fri, 3 Mar 2023 20:35:38 GMT", "version": "v1" }, { "created": "Wed, 23 Aug 2023 14:23:48 GMT", "version": "v2" } ]
2023-08-24
[ [ "Madani", "Navid", "" ], [ "Srihari", "Rohini K.", "" ], [ "Joseph", "Kenneth", "" ] ]
Answering questions over domain-specific knowledge graphs requires a tailored approach due to the limited number of relations and the specific nature of the domain. Our approach integrates classic logic programming languages into large language models (LLMs), enabling the use of logical reasoning capabilities to tackle the KGQA task. By representing the questions as Prolog queries, which are readable and close to natural language, we facilitate the generation of programmatically derived answers. To validate the effectiveness of our approach, we evaluate it on a well-known benchmark dataset, MetaQA. Our experimental results demonstrate that our method accurately identifies the correct answer entities for all test questions, even when trained on a small fraction of the annotated data. Overall, our work presents a promising approach to question answering over domain-specific graphs, offering an explainable and robust solution by incorporating logic programming languages.
1401.5261
Pietro Codara
Pietro Codara, Ottavio M. D'Antona, Vincenzo Marra
An Analysis of Ruspini Partitions in G\"odel Logic
22 pages
International Journal of Approximate Reasoning 50/6 (2009) 825-836
10.1016/j.ijar.2009.02.007
null
cs.LO math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By a Ruspini partition we mean a finite family of fuzzy sets $\{f_1, \ldots, f_n\}$, $f_i : [0,1] \to [0,1]$, such that $\sum_{i=1}^n f_i(x)=1$ for all $x \in [0,1]$, where $[0,1]$ denotes the real unit interval. We analyze such partitions in the language of G\"odel logic. Our first main result identifies the precise degree to which the Ruspini condition is expressible in this language, and yields inter alia a constructive procedure to axiomatize a given Ruspini partition by a theory in G\"odel logic. Our second main result extends this analysis to Ruspini partitions fulfilling the natural additional condition that each $f_i$ has at most one left and one right neighbour, meaning that $\min_{x \in [0,1]}{\{f_{i_1}(x),f_{i_2}(x),f_{i_3}(x)\}}=0$ holds for $i_1\neq i_2\neq i_3$.
[ { "created": "Tue, 21 Jan 2014 11:00:05 GMT", "version": "v1" } ]
2014-01-22
[ [ "Codara", "Pietro", "" ], [ "D'Antona", "Ottavio M.", "" ], [ "Marra", "Vincenzo", "" ] ]
By a Ruspini partition we mean a finite family of fuzzy sets $\{f_1, \ldots, f_n\}$, $f_i : [0,1] \to [0,1]$, such that $\sum_{i=1}^n f_i(x)=1$ for all $x \in [0,1]$, where $[0,1]$ denotes the real unit interval. We analyze such partitions in the language of G\"odel logic. Our first main result identifies the precise degree to which the Ruspini condition is expressible in this language, and yields inter alia a constructive procedure to axiomatize a given Ruspini partition by a theory in G\"odel logic. Our second main result extends this analysis to Ruspini partitions fulfilling the natural additional condition that each $f_i$ has at most one left and one right neighbour, meaning that $\min_{x \in [0,1]}{\{f_{i_1}(x),f_{i_2}(x),f_{i_3}(x)\}}=0$ holds for $i_1\neq i_2\neq i_3$.