Dataset schema (column, type, min-max size):
- id: string, 9-10 chars
- submitter: string, 1-64 chars
- authors: string, 4-20.7k chars
- title: string, 4-246 chars
- comments: string, 1-523 chars
- journal-ref: string, 4-404 chars
- doi: string, 11-153 chars
- report-no: string, 2-254 chars
- categories: string, 5-98 chars
- license: categorical, 9 classes
- orig_abstract: string, 14-3.35k chars
- versions: list, 1-60 items
- update_date: string, 10 chars
- authors_parsed: list, 1-1.35k items
- abstract: string, 11-3.34k chars
2112.11389
Samujjwal Ghosh
Samujjwal Ghosh, Subhadeep Maji, Maunendra Sankar Desarkar
Supervised Graph Contrastive Pretraining for Text Classification
A condensed version of this paper has been accepted to ACM SAC'22. DOI: https://doi.org/10.1145/3477314.3507194
null
10.1145/3477314.3507194
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contrastive pretraining techniques for text classification have largely been studied in an unsupervised setting. However, labeled data from related tasks that share label semantics with the current task is often available. We hypothesize that using this labeled data effectively can lead to better generalization on the current task. In this paper, we propose a novel way to effectively utilize labeled data from related tasks with a graph-based supervised contrastive learning approach. We formulate a token graph by extrapolating the supervised information from examples to tokens. Our formulation results in an embedding space where tokens with a high/low probability of belonging to the same class are near/far from one another. We also develop detailed theoretical insights which serve as a motivation for our method. In our experiments with $13$ datasets, we show our method outperforms pretraining schemes by $2.5\%$ and an example-level contrastive learning formulation by $1.8\%$ on average. In addition, we show the cross-domain effectiveness of our method in a zero-shot setting, with a gain of $3.91\%$ on average. Lastly, we also demonstrate that our method can be used as a noisy teacher in a knowledge distillation setting to significantly improve the performance of transformer-based models in the low-labeled-data regime, by $4.57\%$ on average.
[ { "created": "Tue, 21 Dec 2021 17:47:14 GMT", "version": "v1" } ]
2021-12-22
[ [ "Ghosh", "Samujjwal", "" ], [ "Maji", "Subhadeep", "" ], [ "Desarkar", "Maunendra Sankar", "" ] ]
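The paper's token-graph construction is not reproduced here, but the example-level supervised contrastive loss it compares against is a standard baseline. Below is a minimal NumPy sketch of that baseline; the function name, temperature value, and toy shapes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sup_con_loss(z, labels, tau=0.1):
    """Supervised contrastive loss on L2-normalized embeddings: rows sharing
    a label are treated as positives and pulled together; all others apart."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau                                  # scaled cosine similarity
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    logits = np.where(self_mask, -np.inf, sim)           # never contrast with self
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # mean log-probability of the positives per anchor, averaged over anchors
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return float(-per_anchor.mean())
```

Embeddings clustered by class give a near-zero loss, while class-mixed embeddings are penalized, which is the behavior the abstract's token-level variant transfers from examples to tokens.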
2310.18205
Shubham Mittal
Shubham Mittal, Megha Sundriyal, Preslav Nakov
Lost in Translation, Found in Spans: Identifying Claims in Multilingual Social Media
EMNLP 2023 (main)
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Claim span identification (CSI) is an important step in fact-checking pipelines, aiming to identify text segments that contain a checkworthy claim or assertion in a social media post. Despite its importance to journalists and human fact-checkers, it remains a severely understudied problem, and the scarce research on this topic so far has only focused on English. Here we aim to bridge this gap by creating a novel dataset, X-CLAIM, consisting of 7K real-world claims collected from numerous social media platforms in five Indian languages and English. We report strong baselines with state-of-the-art encoder-only language models (e.g., XLM-R) and we demonstrate the benefits of training on multiple languages over alternative cross-lingual transfer methods such as zero-shot transfer, or training on translated data, from a high-resource language such as English. We evaluate generative large language models from the GPT series using prompting methods on the X-CLAIM dataset and we find that they underperform the smaller encoder-only language models for low-resource languages.
[ { "created": "Fri, 27 Oct 2023 15:28:12 GMT", "version": "v1" } ]
2023-10-30
[ [ "Mittal", "Shubham", "" ], [ "Sundriyal", "Megha", "" ], [ "Nakov", "Preslav", "" ] ]
1512.01843
Kamran Keykhosravi
Kamran Keykhosravi, Erik Agrell, Giuseppe Durisi
Rates Achievable on a Fiber-Optical Split-Step Fourier Channel
null
null
null
null
cs.IT math.IT physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A lower bound on the capacity of the split-step Fourier channel is derived. The channel under study is a concatenation of smaller segments, within which three operations are performed on the signal, namely, nonlinearity, linearity, and noise addition. Simulation results indicate that for a fixed number of segments, our lower bound saturates in the high-power regime and that the larger the number of segments is, the higher is the saturation point. We also obtain an alternative lower bound, which is less tight but has a simple closed-form expression. This bound allows us to conclude that the saturation point grows unbounded with the number of segments. Specifically, it grows as $c+(1/2)\log(K)$, where $K$ is the number of segments and $c$ is a constant. The connection between our channel model and the nonlinear Schr\"odinger equation is discussed.
[ { "created": "Sun, 6 Dec 2015 22:05:26 GMT", "version": "v1" }, { "created": "Tue, 29 Dec 2015 20:11:19 GMT", "version": "v2" }, { "created": "Thu, 13 Oct 2016 21:01:27 GMT", "version": "v3" }, { "created": "Fri, 21 Oct 2016 15:28:38 GMT", "version": "v4" } ]
2016-10-24
[ [ "Keykhosravi", "Kamran", "" ], [ "Agrell", "Erik", "" ], [ "Durisi", "Giuseppe", "" ] ]
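The channel described above, a concatenation of segments each performing a nonlinearity, a linear step, and noise addition, can be sketched as a toy simulation. This is an illustrative assumption, not the authors' exact model: the Kerr-type phase rotation, the all-pass linear step (dispersion omitted), and all parameter names are placeholders.

```python
import numpy as np

def ssf_channel(x, K, gamma=1.0, noise_std=0.0, rng=None):
    """Toy split-step Fourier channel: K identical segments, each applying a
    Kerr-type nonlinear phase rotation, a linear step, and additive noise."""
    rng = rng or np.random.default_rng(0)
    y = np.asarray(x, dtype=complex)
    for _ in range(K):
        y = y * np.exp(1j * gamma * np.abs(y) ** 2)    # nonlinearity
        y = np.fft.ifft(np.fft.fft(y))                 # linear step (all-pass here)
        noise = rng.normal(scale=noise_std, size=(2, y.size))
        y = y + noise[0] + 1j * noise[1]               # circular Gaussian noise
    return y
```

Note that the pure phase rotation preserves per-sample power, which is why, in the noiseless case, capacity lower bounds for such models hinge on how phase information survives the cascade of segments.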
2312.05086
Asma Bensalah
Asma Bensalah, Antonio Parziale, Giuseppe De Gregorio, Angelo Marcelli, Alicia Forn\'es, and Llad\'os
I Can't Believe It's Not Better: In-air Movement For Alzheimer Handwriting Synthetic Generation
null
null
10.1007/978-3-031-19745-1_20
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In recent years, there has been a boom in the use of deep learning for handwriting analysis and recognition. One main application of handwriting analysis is early detection and diagnosis in the health field. Unfortunately, most real-world problems still suffer from a scarcity of data, which makes it difficult to use deep learning-based models. To alleviate this problem, some works resort to synthetic data generation. Lately, more works are directed towards guided synthetic data generation, which uses domain and data knowledge to generate realistic data that can be useful for training deep learning models. In this work, we combine domain knowledge about handwriting in Alzheimer's disease and use it for more guided data generation. Concretely, we explore the use of in-air movements for synthetic data generation.
[ { "created": "Fri, 8 Dec 2023 15:14:41 GMT", "version": "v1" } ]
2023-12-11
[ [ "Bensalah", "Asma", "" ], [ "Parziale", "Antonio", "" ], [ "De Gregorio", "Giuseppe", "" ], [ "Marcelli", "Angelo", "" ], [ "Fornés", "Alicia", "" ], [ "Lladós", "", "" ] ]
2007.11427
Karl Norrman
Karl Norrman, Vaishnavi Sundararajan and Alessandro Bruni
Formal Analysis of EDHOC Key Establishment for Constrained IoT Devices
12 pages; version 3 is the version accepted to SECRYPT 2021
In Proceedings of the 18th International Conference on Security and Cryptography (2021), ISBN 978-989-758-524-1, ISSN 2184-7711, pages 210-221
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Constrained IoT devices are becoming ubiquitous in society and there is a need for secure communication protocols that respect the constraints under which these devices operate. EDHOC is an authenticated key establishment protocol for constrained IoT devices, currently being standardized by the Internet Engineering Task Force (IETF). A rudimentary version of EDHOC with only two key establishment methods was formally analyzed in 2018. Since then, the protocol has evolved significantly and several new key establishment methods have been added. In this paper, we present a formal analysis of all EDHOC methods in an enhanced symbolic Dolev-Yao model using the Tamarin tool. We show that not all methods satisfy the authentication notion of injective agreement, but that they all do satisfy a notion of implicit authentication, as well as Perfect Forward Secrecy (PFS) of the session key material. We identify other weaknesses to which we propose improvements. For example, a party may intend to establish a session key with a certain peer, but end up establishing it with another, trusted but compromised, peer. We communicated our findings and proposals to the IETF, which has incorporated some of these in newer versions of the standard.
[ { "created": "Wed, 22 Jul 2020 13:35:49 GMT", "version": "v1" }, { "created": "Fri, 11 Sep 2020 17:46:56 GMT", "version": "v2" }, { "created": "Thu, 15 Jul 2021 13:41:41 GMT", "version": "v3" } ]
2021-07-16
[ [ "Norrman", "Karl", "" ], [ "Sundararajan", "Vaishnavi", "" ], [ "Bruni", "Alessandro", "" ] ]
2304.14295
Amer Mouawad
Michael R. Fellows, Mario Grobler, Nicole Megow, Amer E. Mouawad, Vijayaragunathan Ramamoorthi, Frances A. Rosamond, Daniel Schmand, Sebastian Siebertz
On Solution Discovery via Reconfiguration
null
null
null
null
cs.CC cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamics of real-world applications and systems require efficient methods for improving infeasible solutions or restoring corrupted ones by making modifications to the current state of a system in a restricted way. We propose a new framework of solution discovery via reconfiguration for constructing a feasible solution for a given problem by executing a sequence of small modifications starting from a given state. Our framework integrates and formalizes different aspects of classical local search, reoptimization, and combinatorial reconfiguration. We exemplify our framework on a multitude of fundamental combinatorial problems, namely Vertex Cover, Independent Set, Dominating Set, and Coloring. We study the classical as well as the parameterized complexity of the solution discovery variants of those problems and explore the boundary between tractable and intractable instances.
[ { "created": "Thu, 27 Apr 2023 15:58:41 GMT", "version": "v1" } ]
2023-04-28
[ [ "Fellows", "Michael R.", "" ], [ "Grobler", "Mario", "" ], [ "Megow", "Nicole", "" ], [ "Mouawad", "Amer E.", "" ], [ "Ramamoorthi", "Vijayaragunathan", "" ], [ "Rosamond", "Frances A.", "" ], [ "Schmand", "Daniel", "" ], [ "Siebertz", "Sebastian", "" ] ]
2011.14298
Alphin J Thottupattu
Alphin J. Thottupattu, Jayanthi Sivaswamy, Venkateswaran P. Krishnan
A method for large diffeomorphic registration via broken geodesics
18 pages and 9 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anatomical variability seen in longitudinal or inter-subject data is usually described by the underlying deformation, captured by non-rigid registration of the images. Stationary Velocity Field (SVF) based non-rigid registration algorithms are widely used for registration. SVF-based methods form a metric-free framework which captures a finite-dimensional submanifold of deformations embedded in the infinite-dimensional smooth manifold of diffeomorphisms. However, these methods cover only a limited degree of deformation. In this paper, we address this limitation and define an approximate metric space for the manifold of diffeomorphisms $\mathcal{G}$. We propose a method to break down a large deformation into finite compositions of small deformations. This results in a broken geodesic path on $\mathcal{G}$, whose length now forms an approximate registration metric. We illustrate the method using a simple, intensity-based, log-demons implementation. Validation results show that the proposed method can capture large and complex deformations while producing qualitatively better results than state-of-the-art methods. The results also demonstrate that the proposed registration metric is a good indicator of the degree of deformation.
[ { "created": "Sun, 29 Nov 2020 06:14:53 GMT", "version": "v1" }, { "created": "Sun, 3 Jan 2021 05:49:37 GMT", "version": "v2" } ]
2021-01-05
[ [ "Thottupattu", "Alphin J.", "" ], [ "Sivaswamy", "Jayanthi", "" ], [ "Krishnan", "Venkateswaran P.", "" ] ]
0807.3483
Arnaud Martin
Arnaud Martin (E3I2)
Implementing general belief function framework with a practical codification for low complexity
Advances and Applications of DSmT for Information Fusion, Florentin Smarandache & Jean Dezert (Ed.) (2008) Pnd
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this chapter, we propose a new practical codification of the elements of the Venn diagram in order to easily manipulate the focal elements. To reduce complexity, any constraints must be integrated into the codification from the beginning. Hence, we only consider a reduced hyper power set $D_r^\Theta$ that can be $2^\Theta$ or $D^\Theta$. We describe all the steps of a general belief function framework. The decision step is studied in particular detail: when we can decide on intersections of the singletons of the discernment space, no existing decision functions are easy to use. Hence, two approaches are proposed: an extension of a previous one, and an approach based on the specificity of the elements on which to decide. The principal goal of this chapter is to provide practical code for a general belief function framework for researchers and users who need belief function theory.
[ { "created": "Tue, 22 Jul 2008 13:50:22 GMT", "version": "v1" } ]
2008-07-23
[ [ "Martin", "Arnaud", "", "E3I2" ] ]
2203.12646
Christopher Harth-Kitzerow
Christopher Harth-Kitzerow, Georg Carle, Fan Fei, Andre Luckow, Johannes Klepsch
CRGC -- A Practical Framework for Constructing Reusable Garbled Circuits
13 pages, 7 figures
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we introduce two schemes to construct reusable garbled circuits (RGCs) in the semi-honest setting. Our completely reusable garbled circuit (CRGC) scheme allows the generator (party A) to construct and send an obfuscated boolean circuit along with an encoded input to the evaluator (party B). In contrast to Yao's Garbled Circuit protocol, B can securely evaluate the same CRGC with an arbitrary number of inputs. As a tradeoff, CRGCs predictably leak some input bits of A to B. We also propose a partially reusable garbled circuit (PRGC) scheme that divides a circuit into reusable and non-reusable sections. PRGCs do not leak input bits of A. We benchmark our CRGC implementation against the state-of-the-art garbled circuit libraries EMP SH2PC and TinyGarble2. Using our framework, evaluating a CRGC is up to twenty times faster, albeit with weaker privacy guarantees, than evaluating an equivalent garbled circuit constructed by the two existing libraries. Our open-source library can convert any C++ function to a CRGC at approx. 80 million gates per second and repeatedly evaluate a CRGC at approx. 350 million gates per second. Additionally, a compressed CRGC is approx. 75% smaller in file size than the unobfuscated boolean circuit.
[ { "created": "Wed, 23 Mar 2022 18:11:16 GMT", "version": "v1" }, { "created": "Sun, 27 Mar 2022 14:07:05 GMT", "version": "v2" }, { "created": "Fri, 29 Apr 2022 09:19:11 GMT", "version": "v3" }, { "created": "Fri, 6 May 2022 13:44:12 GMT", "version": "v4" } ]
2022-05-09
[ [ "Harth-Kitzerow", "Christopher", "" ], [ "Carle", "Georg", "" ], [ "Fei", "Fan", "" ], [ "Luckow", "Andre", "" ], [ "Klepsch", "Johannes", "" ] ]
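The CRGC construction itself is not given in the abstract; as background, a single-gate toy in the style of Yao's (non-reusable) garbled circuits can illustrate what "garbling" means. Everything below is a simplification for illustration, not the paper's scheme: the SHA-256-based encryption, the zero-byte validity tag, and the try-all-rows evaluation are assumed conveniences rather than a secure construction.

```python
import hashlib
import secrets

TAG = b"\x00" * 8  # validity tag so the evaluator can recognize the right row

def garble_and():
    """Garble one AND gate: each wire gets two random 16-byte labels; each
    truth-table row encrypts the output label under the two input labels."""
    wires = {w: (secrets.token_bytes(16), secrets.token_bytes(16)) for w in "abc"}
    def enc(la, lb, payload):
        pad = hashlib.sha256(la + lb).digest()
        return bytes(p ^ q for p, q in zip(payload + TAG, pad))
    table = [enc(wires["a"][x], wires["b"][y], wires["c"][x & y])
             for x in (0, 1) for y in (0, 1)]
    secrets.SystemRandom().shuffle(table)   # hide which row is which
    return wires, table

def evaluate(table, la, lb):
    """Holding one label per input wire, only one row decrypts to a valid tag."""
    pad = hashlib.sha256(la + lb).digest()
    for row in table:
        plain = bytes(p ^ q for p, q in zip(row, pad))
        if plain.endswith(TAG):
            return plain[:16]
    raise ValueError("no row decrypted")
```

The evaluator learns only the output *label*, never the wire semantics; reusing such a table with fresh inputs is exactly what Yao's scheme forbids and what the paper's CRGC/PRGC constructions trade leaked input bits to allow.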
0710.5194
Masoud Ebrahimi
Masoud Ebrahimi and Amir K. Khandani
Rate-Constrained Wireless Networks with Fading Channels: Interference-Limited and Noise-Limited Regimes
Submitted to IEEE Trans. Information Theory
null
null
null
cs.IT math.IT
null
A network of $n$ wireless communication links is considered in a Rayleigh fading environment. It is assumed that each link can be active and transmit with a constant power $P$ or remain silent. The objective is to maximize the number of active links such that each active link can transmit with a constant rate $\lambda$. An upper bound is derived that shows the number of active links scales at most like $\frac{1}{\lambda} \log n$. To obtain a lower bound, a decentralized link activation strategy is described and analyzed. It is shown that for small values of $\lambda$, the number of supported links by this strategy meets the upper bound; however, as $\lambda$ grows, this number becomes far below the upper bound. To shrink the gap between the upper bound and the achievability result, a modified link activation strategy is proposed and analyzed based on some results from random graph theory. It is shown that this modified strategy performs very close to the optimum. Specifically, this strategy is \emph{asymptotically almost surely} optimum when $\lambda$ approaches $\infty$ or 0. It turns out the optimality results are obtained in an interference-limited regime. It is demonstrated that, by proper selection of the algorithm parameters, the proposed scheme also allows the network to operate in a noise-limited regime in which the transmission rates can be adjusted by the transmission powers. The price for this flexibility is a decrease in the throughput scaling law by a multiplicative factor of $\log \log n$.
[ { "created": "Fri, 26 Oct 2007 23:36:59 GMT", "version": "v1" } ]
2007-10-30
[ [ "Ebrahimi", "Masoud", "" ], [ "Khandani", "Amir K.", "" ] ]
1712.07814
Yingxiang Sun
Yingxiang Sun, Jiajia Chen, Chau Yuen, and Susanto Rahardja
Indoor Sound Source Localization with Probabilistic Neural Network
10 pages, accepted by IEEE Transactions on Industrial Electronics
IEEE Transactions on Industrial Electronics, vol. 65, no. 8, pp. 6403-6413, Aug. 2018
10.1109/TIE.2017.2786219
null
cs.SD cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is known that adverse environments with high reverberation and low signal-to-noise ratio (SNR) pose a great challenge to indoor sound source localization. To address this challenge, in this paper we propose a sound source localization algorithm based on a probabilistic neural network, namely the Generalized cross-correlation Classification Algorithm (GCA). Experimental results for adverse environments with reverberation times T60 of up to 600 ms and SNRs as low as -10 dB show that the average azimuth and elevation angle errors of GCA are only 4.6 and 3.1 degrees, respectively. Compared with three recently published algorithms, GCA significantly increases the success rate of direction-of-arrival estimation, with good robustness to environmental changes. These results show that the proposed GCA can localize accurately and robustly in diverse indoor applications where the site's acoustic features can be studied prior to the localization stage.
[ { "created": "Thu, 21 Dec 2017 07:26:53 GMT", "version": "v1" } ]
2018-12-05
[ [ "Sun", "Yingxiang", "" ], [ "Chen", "Jiajia", "" ], [ "Yuen", "Chau", "" ], [ "Rahardja", "Susanto", "" ] ]
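GCA builds on generalized cross-correlation features; the textbook GCC-PHAT estimator underlying such features can be sketched as follows. This is the classic version, not the paper's classifier; equal-length signals and an integer circular delay are assumed for simplicity.

```python
import numpy as np

def gcc_phat(x, y):
    """Estimate the delay of y relative to x (in samples) with GCC-PHAT:
    whiten the cross-spectrum so only phase (i.e. delay) information
    remains, then pick the lag with the strongest correlation peak."""
    n = len(x)
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    R = np.conj(X) * Y                       # cross-spectrum
    R /= np.maximum(np.abs(R), 1e-12)        # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift]))  # center lag 0
    return int(np.argmax(np.abs(cc)) - max_shift)
```

The PHAT weighting is precisely what gives robustness to reverberation: spectral magnitude (colored by room reflections) is discarded and only the inter-microphone phase, which encodes the time difference of arrival, drives the peak.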
2402.03050
Rupak Raj Ghimire
Rupak Raj Ghimire and Bal Krishna Bal and Prakash Poudyal
A Comprehensive Study of the Current State-of-the-Art in Nepali Automatic Speech Recognition Systems
Accepted in International Conference on Technologies for Computer, Electrical, Electronics & Communication (ICT-CEEL 2023)
null
null
null
cs.SD cs.CL eess.AS
http://creativecommons.org/licenses/by/4.0/
In this paper, we examine the research conducted in the field of Nepali Automatic Speech Recognition (ASR). The primary objective of this survey is to conduct a comprehensive review of the work on Nepali ASR systems completed to date, explore the different datasets used, examine the technology utilized, and take account of the obstacles encountered in implementing a Nepali ASR system. In tandem with the global trend of ever-increasing research on speech recognition, the number of Nepali ASR-related projects is also growing. Nevertheless, the investigation of language and acoustic models for the Nepali language has not received adequate attention compared to languages with ample resources. In this context, we provide a framework as well as directions for future investigations.
[ { "created": "Mon, 5 Feb 2024 14:34:14 GMT", "version": "v1" } ]
2024-02-06
[ [ "Ghimire", "Rupak Raj", "" ], [ "Bal", "Bal Krishna", "" ], [ "Poudyal", "Prakash", "" ] ]
In this paper, we examine the research conducted in the field of Nepali Automatic Speech Recognition (ASR). The primary objective of this survey is to conduct a comprehensive review of the work on Nepali Automatic Speech Recognition systems completed to date, explore the different datasets used, examine the technologies utilized, and account for the obstacles encountered in implementing Nepali ASR systems. In tandem with the global trend of ever-increasing research on speech recognition, the number of Nepalese ASR-related projects is also growing. Nevertheless, the investigation of language and acoustic models for the Nepali language has not received adequate attention compared to languages that possess ample resources. In this context, we provide a framework as well as directions for future investigations.
2402.06411
V\'ictor Osma-Ruiz
Guillermo Garcia-Barrios, Eduardo Latorre Iglesias, Juana M. Gutierrez-Arriola, Ruben Fraile, Nicolas Saenz-Lechon, Victor Jose Osma-Ruiz
Exploiting spatial diversity for increasing the robustness of sound source localization systems against reverberation
null
null
10.1016/j.apacoust.2022.109138
null
cs.SD eess.AS eess.SP
http://creativecommons.org/licenses/by-nc-nd/4.0/
Acoustic reverberation is one of the most relevant factors that hamper the localization of a sound source inside a room. To date, several approaches have been proposed to deal with it, but they have not always been evaluated under realistic conditions. This paper proposes exploiting spatial diversity as an alternative approach to achieve robustness against reverberation. The theoretical arguments supporting this approach are first presented and later confirmed by means of simulation results and real measurements. Simulations are run for reverberation times up to 2 s, thus providing results with a wider range of validity than in previous research works. It is concluded that the use of systems consisting of several, sufficiently separated, small arrays leads to the best results in reverberant environments. Some recommendations are given regarding the choice of the array sizes, the separation among them, and the way to combine SRP-PHAT maps obtained from diverse arrays.
[ { "created": "Fri, 9 Feb 2024 13:57:02 GMT", "version": "v1" } ]
2024-02-12
[ [ "Garcia-Barrios", "Guillermo", "" ], [ "Iglesias", "Eduardo Latorre", "" ], [ "Gutierrez-Arriola", "Juana M.", "" ], [ "Fraile", "Ruben", "" ], [ "Saenz-Lechon", "Nicolas", "" ], [ "Osma-Ruiz", "Victor Jose", "" ] ]
Acoustic reverberation is one of the most relevant factors that hamper the localization of a sound source inside a room. To date, several approaches have been proposed to deal with it, but they have not always been evaluated under realistic conditions. This paper proposes exploiting spatial diversity as an alternative approach to achieve robustness against reverberation. The theoretical arguments supporting this approach are first presented and later confirmed by means of simulation results and real measurements. Simulations are run for reverberation times up to 2 s, thus providing results with a wider range of validity than in previous research works. It is concluded that the use of systems consisting of several, sufficiently separated, small arrays leads to the best results in reverberant environments. Some recommendations are given regarding the choice of the array sizes, the separation among them, and the way to combine SRP-PHAT maps obtained from diverse arrays.
2209.07031
Shuai Hua
Shuai Hua, Xinxin Li, Yunpeng Jing, Qunfeng Liu
A semantic hierarchical graph neural network for text classification
10 pages, 3 figures
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The key to the text classification task is language representation and important information extraction, and there are many related studies. In recent years, research on graph neural networks (GNNs) for text classification has gradually emerged and shown its advantages, but existing models mainly focus on directly inputting words as graph nodes into the GNN, ignoring the different levels of semantic structure information in the samples. To address this issue, we propose a new hierarchical graph neural network (HieGNN) which extracts the corresponding information at the word, sentence, and document levels respectively. Experiments on several benchmark datasets achieve better or similar results compared to several baseline methods, demonstrating that our model is able to obtain more useful information for classification from the samples.
[ { "created": "Thu, 15 Sep 2022 03:59:31 GMT", "version": "v1" } ]
2022-09-16
[ [ "Hua", "Shuai", "" ], [ "Li", "Xinxin", "" ], [ "Jing", "Yunpeng", "" ], [ "Liu", "Qunfeng", "" ] ]
The key to the text classification task is language representation and important information extraction, and there are many related studies. In recent years, research on graph neural networks (GNNs) for text classification has gradually emerged and shown its advantages, but existing models mainly focus on directly inputting words as graph nodes into the GNN, ignoring the different levels of semantic structure information in the samples. To address this issue, we propose a new hierarchical graph neural network (HieGNN) which extracts the corresponding information at the word, sentence, and document levels respectively. Experiments on several benchmark datasets achieve better or similar results compared to several baseline methods, demonstrating that our model is able to obtain more useful information for classification from the samples.
1907.13376
Hossein A. Rahmani
Hossein A. Rahmani, Mohammad Aliannejadi, Rasoul Mirzaei Zadeh, Mitra Baratchi, Mohsen Afsharchi, Fabio Crestani
Category-Aware Location Embedding for Point-of-Interest Recommendation
4 pages, 1 figures
null
10.1145/3341981.3344240
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, Point of Interest (POI) recommendation has gained ever-increasing importance in various Location-Based Social Networks (LBSNs). With the recent advances of neural models, much work has sought to leverage neural networks to learn neural embeddings in a pre-training phase that achieve improved representations of POIs and consequently better recommendations. However, previous studies fail to capture crucial information about POIs such as categorical information. In this paper, we propose a novel neural model that generates a POI embedding incorporating sequential and categorical information from POIs. Our model consists of a check-in module and a category module. The check-in module captures the geographical influence of POIs derived from the sequence of users' check-ins, while the category module captures the characteristics of POIs derived from the category information. To validate the efficacy of the model, we experimented with two large-scale LBSN datasets. Our experimental results demonstrate that our approach significantly outperforms state-of-the-art POI recommendation methods.
[ { "created": "Wed, 31 Jul 2019 09:14:16 GMT", "version": "v1" } ]
2019-08-01
[ [ "Rahmani", "Hossein A.", "" ], [ "Aliannejadi", "Mohammad", "" ], [ "Zadeh", "Rasoul Mirzaei", "" ], [ "Baratchi", "Mitra", "" ], [ "Afsharchi", "Mohsen", "" ], [ "Crestani", "Fabio", "" ] ]
Recently, Point of Interest (POI) recommendation has gained ever-increasing importance in various Location-Based Social Networks (LBSNs). With the recent advances of neural models, much work has sought to leverage neural networks to learn neural embeddings in a pre-training phase that achieve improved representations of POIs and consequently better recommendations. However, previous studies fail to capture crucial information about POIs such as categorical information. In this paper, we propose a novel neural model that generates a POI embedding incorporating sequential and categorical information from POIs. Our model consists of a check-in module and a category module. The check-in module captures the geographical influence of POIs derived from the sequence of users' check-ins, while the category module captures the characteristics of POIs derived from the category information. To validate the efficacy of the model, we experimented with two large-scale LBSN datasets. Our experimental results demonstrate that our approach significantly outperforms state-of-the-art POI recommendation methods.
2003.13786
Asish Mukhopadhyay
Sudiksha Khanduja, Aayushi Srivastava, Md. Zamilur Rahman and Asish Mukhopadhyay
Generating Weakly Chordal Graphs from Arbitrary Graphs
15 pages, 29 figures
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a scheme for generating a weakly chordal graph from a randomly generated input graph, G = (V, E). We reduce G to a chordal graph H by adding fill-edges, using the minimum vertex degree heuristic. Since H is necessarily a weakly chordal graph, we use an algorithm for deleting edges from a weakly chordal graph that preserves the weak chordality property of H. The edges that are candidates for deletion are the fill-edges that were inserted into G. In order to delete a maximal number of fill-edges, we maintain these in a queue. A fill-edge is removed from the front of the queue, which we then try to delete from H. If this violates the weak chordality property of H, we reinsert this edge at the back of the queue. This loop continues till no more fill-edges can be removed from H. Operationally, we implement this by defining a deletion round as one in which the edge at the back of the queue is at the front. We stop when the size of the queue does not change over two successive deletion rounds and output H.
[ { "created": "Fri, 20 Mar 2020 02:45:21 GMT", "version": "v1" } ]
2020-04-01
[ [ "Khanduja", "Sudiksha", "" ], [ "Srivastava", "Aayushi", "" ], [ "Rahman", "Md. Zamilur", "" ], [ "Mukhopadhyay", "Asish", "" ] ]
We propose a scheme for generating a weakly chordal graph from a randomly generated input graph, G = (V, E). We reduce G to a chordal graph H by adding fill-edges, using the minimum vertex degree heuristic. Since H is necessarily a weakly chordal graph, we use an algorithm for deleting edges from a weakly chordal graph that preserves the weak chordality property of H. The edges that are candidates for deletion are the fill-edges that were inserted into G. In order to delete a maximal number of fill-edges, we maintain these in a queue. A fill-edge is removed from the front of the queue, which we then try to delete from H. If this violates the weak chordality property of H, we reinsert this edge at the back of the queue. This loop continues till no more fill-edges can be removed from H. Operationally, we implement this by defining a deletion round as one in which the edge at the back of the queue is at the front. We stop when the size of the queue does not change over two successive deletion rounds and output H.
0906.3920
EPTCS
Claudio Guidi, Fabrizio Montesi
Reasoning About a Service-oriented Programming Paradigm
null
EPTCS 2, 2009, pp. 67-81
10.4204/EPTCS.2.6
null
cs.PL cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is about a new way of programming distributed applications: the service-oriented one. It is a concept paper based upon our experience in developing a theory and a language for programming services. Both the theoretical formalization and the language interpreter gave us evidence that a new programming paradigm exists. In this paper we illustrate the basic features that characterize it.
[ { "created": "Mon, 22 Jun 2009 05:49:12 GMT", "version": "v1" } ]
2009-06-23
[ [ "Guidi", "Claudio", "" ], [ "Montesi", "Fabrizio", "" ] ]
This paper is about a new way of programming distributed applications: the service-oriented one. It is a concept paper based upon our experience in developing a theory and a language for programming services. Both the theoretical formalization and the language interpreter gave us evidence that a new programming paradigm exists. In this paper we illustrate the basic features that characterize it.
2011.11052
Ahror Belaid
Hicham Messaoudi, Ahror Belaid, Mohamed Lamine Allaoui, Ahcene Zetout, Mohand Said Allili, Souhil Tliba, Douraied Ben Salem, Pierre-Henri Conze
Efficient embedding network for 3D brain tumor segmentation
Multimodal Brain Tumor Segmentation Challenge 2020
Multimodal Brain Tumor Segmentation Challenge 2020 (BRATS) BrainLes 2020
null
30
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
3D medical image processing with deep learning greatly suffers from a lack of data. Thus, studies carried out in this field are limited compared to works related to 2D natural image analysis, where very large datasets exist. As a result, powerful and efficient 2D convolutional neural networks have been developed and trained. In this paper, we investigate a way to transfer the performance of a two-dimensional classification network for the purpose of three-dimensional semantic segmentation of brain tumors. We propose an asymmetric U-Net network by incorporating the EfficientNet model as part of the encoding branch. As the input data is in 3D, the first layers of the encoder are devoted to the reduction of the third dimension in order to fit the input of the EfficientNet network. Experimental results on validation and test data from the BraTS 2020 challenge demonstrate that the proposed method achieves promising performance.
[ { "created": "Sun, 22 Nov 2020 16:17:29 GMT", "version": "v1" } ]
2020-11-24
[ [ "Messaoudi", "Hicham", "" ], [ "Belaid", "Ahror", "" ], [ "Allaoui", "Mohamed Lamine", "" ], [ "Zetout", "Ahcene", "" ], [ "Allili", "Mohand Said", "" ], [ "Tliba", "Souhil", "" ], [ "Salem", "Douraied Ben", "" ], [ "Conze", "Pierre-Henri", "" ] ]
3D medical image processing with deep learning greatly suffers from a lack of data. Thus, studies carried out in this field are limited compared to works related to 2D natural image analysis, where very large datasets exist. As a result, powerful and efficient 2D convolutional neural networks have been developed and trained. In this paper, we investigate a way to transfer the performance of a two-dimensional classification network for the purpose of three-dimensional semantic segmentation of brain tumors. We propose an asymmetric U-Net network by incorporating the EfficientNet model as part of the encoding branch. As the input data is in 3D, the first layers of the encoder are devoted to the reduction of the third dimension in order to fit the input of the EfficientNet network. Experimental results on validation and test data from the BraTS 2020 challenge demonstrate that the proposed method achieves promising performance.
2103.14600
Alper Kamil Bozkurt
Alper Kamil Bozkurt, Yu Wang, Miroslav Pajic
Model-Free Learning of Safe yet Effective Controllers
null
null
null
null
cs.RO cs.FL cs.LG cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of learning safe control policies that are also effective; i.e., maximizing the probability of satisfying a linear temporal logic (LTL) specification of a task, and the discounted reward capturing the (classic) control performance. We consider unknown environments modeled as Markov decision processes. We propose a model-free reinforcement learning algorithm that learns a policy that first maximizes the probability of ensuring safety, then the probability of satisfying the given LTL specification and lastly, the sum of discounted Quality of Control rewards. Finally, we illustrate the applicability of our RL-based approach.
[ { "created": "Fri, 26 Mar 2021 17:05:12 GMT", "version": "v1" }, { "created": "Sun, 26 Sep 2021 22:57:53 GMT", "version": "v2" } ]
2021-09-28
[ [ "Bozkurt", "Alper Kamil", "" ], [ "Wang", "Yu", "" ], [ "Pajic", "Miroslav", "" ] ]
We study the problem of learning safe control policies that are also effective; i.e., maximizing the probability of satisfying a linear temporal logic (LTL) specification of a task, and the discounted reward capturing the (classic) control performance. We consider unknown environments modeled as Markov decision processes. We propose a model-free reinforcement learning algorithm that learns a policy that first maximizes the probability of ensuring safety, then the probability of satisfying the given LTL specification and lastly, the sum of discounted Quality of Control rewards. Finally, we illustrate the applicability of our RL-based approach.
2310.05934
Se Jin Park
Se Jin Park, Joanna Hong, Minsu Kim, Yong Man Ro
DF-3DFace: One-to-Many Speech Synchronized 3D Face Animation with Diffusion
null
null
null
null
cs.CV cs.AI cs.MM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Speech-driven 3D facial animation has gained significant attention for its ability to create realistic and expressive facial animations in 3D space based on speech. Learning-based methods have shown promising progress in achieving accurate facial motion synchronized with speech. However, the one-to-many nature of speech-to-3D facial synthesis has not been fully explored: while the lips accurately synchronize with the speech content, other facial attributes beyond speech-related motions are variable with respect to the speech. To account for the potential variance in the facial attributes within a single speech, we propose DF-3DFace, a diffusion-driven speech-to-3D face mesh synthesis. DF-3DFace captures the complex one-to-many relationships between speech and 3D face based on diffusion. It concurrently achieves aligned lip motion by exploiting audio-mesh synchronization and masked conditioning. Furthermore, the proposed method jointly models identity and pose in addition to facial motions so that it can generate 3D face animation without requiring a reference identity mesh and produce natural head poses. We contribute a new large-scale 3D facial mesh dataset, 3D-HDTF, to enable the synthesis of variations in identities, poses, and facial motions of 3D face meshes. Extensive experiments demonstrate that our method successfully generates highly variable facial shapes and motions from speech and simultaneously achieves more realistic facial animation than the state-of-the-art methods.
[ { "created": "Wed, 23 Aug 2023 04:14:55 GMT", "version": "v1" } ]
2023-10-11
[ [ "Park", "Se Jin", "" ], [ "Hong", "Joanna", "" ], [ "Kim", "Minsu", "" ], [ "Ro", "Yong Man", "" ] ]
Speech-driven 3D facial animation has gained significant attention for its ability to create realistic and expressive facial animations in 3D space based on speech. Learning-based methods have shown promising progress in achieving accurate facial motion synchronized with speech. However, the one-to-many nature of speech-to-3D facial synthesis has not been fully explored: while the lips accurately synchronize with the speech content, other facial attributes beyond speech-related motions are variable with respect to the speech. To account for the potential variance in the facial attributes within a single speech, we propose DF-3DFace, a diffusion-driven speech-to-3D face mesh synthesis. DF-3DFace captures the complex one-to-many relationships between speech and 3D face based on diffusion. It concurrently achieves aligned lip motion by exploiting audio-mesh synchronization and masked conditioning. Furthermore, the proposed method jointly models identity and pose in addition to facial motions so that it can generate 3D face animation without requiring a reference identity mesh and produce natural head poses. We contribute a new large-scale 3D facial mesh dataset, 3D-HDTF, to enable the synthesis of variations in identities, poses, and facial motions of 3D face meshes. Extensive experiments demonstrate that our method successfully generates highly variable facial shapes and motions from speech and simultaneously achieves more realistic facial animation than the state-of-the-art methods.
2002.00252
Kevin Vermeulen
Kevin Vermeulen, Burim Ljuma, Vamsi Addanki, Matthieu Gouel, Olivier Fourmaux, Timur Friedman and Reza Rejaie
Alias Resolution Based on ICMP Rate Limiting
Preprint to appear in Proceedings of Passive and Active Measurement (PAM 2020) Conference, Eugene, OR, March 2020
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Alias resolution techniques (e.g., Midar) associate, mostly through active measurement, a set of IP addresses as belonging to a common router. These techniques rely on distinct router features that can serve as a signature. Their applicability is affected by router support of the features and the robustness of the signature. This paper presents a new alias resolution tool called Limited Ltd. that exploits ICMP rate limiting, a feature that is increasingly supported by modern routers and that has not previously been used for alias resolution. It sends ICMP probes toward target interfaces in order to trigger rate limiting, extracting features from the probe reply loss traces. It uses a machine learning classifier to designate pairs of interfaces as aliases. We describe the details of the algorithm used by Limited Ltd. and illustrate its feasibility and accuracy. Limited Ltd. is not only the first tool that can perform alias resolution on IPv6 routers that do not generate monotonically increasing fragmentation IDs (e.g., Juniper routers), but it also complements the state-of-the-art techniques for IPv4 alias resolution. All of our code and the collected dataset are publicly available.
[ { "created": "Sat, 1 Feb 2020 18:11:19 GMT", "version": "v1" } ]
2020-02-04
[ [ "Vermeulen", "Kevin", "" ], [ "Ljuma", "Burim", "" ], [ "Addanki", "Vamsi", "" ], [ "Gouel", "Matthieu", "" ], [ "Fourmaux", "Olivier", "" ], [ "Friedman", "Timur", "" ], [ "Rejaie", "Reza", "" ] ]
Alias resolution techniques (e.g., Midar) associate, mostly through active measurement, a set of IP addresses as belonging to a common router. These techniques rely on distinct router features that can serve as a signature. Their applicability is affected by router support of the features and the robustness of the signature. This paper presents a new alias resolution tool called Limited Ltd. that exploits ICMP rate limiting, a feature that is increasingly supported by modern routers and that has not previously been used for alias resolution. It sends ICMP probes toward target interfaces in order to trigger rate limiting, extracting features from the probe reply loss traces. It uses a machine learning classifier to designate pairs of interfaces as aliases. We describe the details of the algorithm used by Limited Ltd. and illustrate its feasibility and accuracy. Limited Ltd. is not only the first tool that can perform alias resolution on IPv6 routers that do not generate monotonically increasing fragmentation IDs (e.g., Juniper routers), but it also complements the state-of-the-art techniques for IPv4 alias resolution. All of our code and the collected dataset are publicly available.
1408.0540
Awais Khawar
Awais Khawar, Ahmed Abdelhadi, and T. Charles Clancy
Target Detection Performance of Spectrum Sharing MIMO Radars
submitted to IEEE transactions. Distribution Statement A: Approved for public release; distribution is unlimited
null
10.1109/JSEN.2015.2424393
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Future wireless communication systems are envisioned to share radio frequency (RF) spectrum with other services, such as radars, in order to meet growing spectrum demands. In this paper, we consider co-channel spectrum sharing between cellular systems and radars. We address the problem of target detection by radars that are required to shape their waveforms so as not to cause interference to cellular systems. We consider a multiple-input multiple-output (MIMO) radar and a MIMO cellular communication system with $\mathcal{K}$ base stations (BS). We propose a spectrum sharing algorithm which steers radar nulls, by projecting the radar waveform onto the null space of the interference channel, towards a `selected' BS, thus protecting it from radar interference. This BS is selected, among the $\mathcal{K}$ BSs, on the basis of guaranteeing minimum waveform degradation. We study the target detection capabilities of this null-space projected (NSP) waveform and compare it with the orthogonal waveform. We derive the generalized likelihood ratio test (GLRT) for target detection and the detector statistic for the NSP and orthogonal waveforms. The target detection performance for the NSP and orthogonal waveforms is studied theoretically and via Monte Carlo simulations.
[ { "created": "Sun, 3 Aug 2014 20:33:10 GMT", "version": "v1" }, { "created": "Thu, 14 Aug 2014 19:31:49 GMT", "version": "v2" } ]
2016-11-18
[ [ "Khawar", "Awais", "" ], [ "Abdelhadi", "Ahmed", "" ], [ "Clancy", "T. Charles", "" ] ]
Future wireless communication systems are envisioned to share radio frequency (RF) spectrum with other services, such as radars, in order to meet growing spectrum demands. In this paper, we consider co-channel spectrum sharing between cellular systems and radars. We address the problem of target detection by radars that are required to shape their waveforms so as not to cause interference to cellular systems. We consider a multiple-input multiple-output (MIMO) radar and a MIMO cellular communication system with $\mathcal{K}$ base stations (BS). We propose a spectrum sharing algorithm which steers radar nulls, by projecting the radar waveform onto the null space of the interference channel, towards a `selected' BS, thus protecting it from radar interference. This BS is selected, among the $\mathcal{K}$ BSs, on the basis of guaranteeing minimum waveform degradation. We study the target detection capabilities of this null-space projected (NSP) waveform and compare it with the orthogonal waveform. We derive the generalized likelihood ratio test (GLRT) for target detection and the detector statistic for the NSP and orthogonal waveforms. The target detection performance for the NSP and orthogonal waveforms is studied theoretically and via Monte Carlo simulations.
2010.07374
Jean-Samuel Leboeuf
Jean-Samuel Leboeuf, Fr\'ed\'eric LeBlanc and Mario Marchand
Decision trees as partitioning machines to characterize their generalization properties
9 pages, 5 appendices
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decision trees are popular machine learning models that are simple to build and easy to interpret. Even though algorithms to learn decision trees date back almost 50 years, key properties affecting their generalization error are still weakly bounded. Hence, we revisit binary decision trees on real-valued features from the perspective of partitions of the data. We introduce the notion of a partitioning function, and we relate it to the growth function and to the VC dimension. Using this new concept, we are able to find the exact VC dimension of decision stumps, which is given by the largest integer $d$ such that $2\ell \ge \binom{d}{\left\lfloor\frac{d}{2}\right\rfloor}$, where $\ell$ is the number of real-valued features. We provide a recursive expression to bound the partitioning functions, resulting in an upper bound on the growth function of any decision tree structure. This allows us to show that the VC dimension of a binary tree structure with $N$ internal nodes is of order $N \log(N\ell)$. Finally, we elaborate a pruning algorithm based on these results that performs better than the CART algorithm on a number of datasets, with the advantage that no cross-validation is required.
[ { "created": "Wed, 14 Oct 2020 19:25:58 GMT", "version": "v1" } ]
2020-10-16
[ [ "Leboeuf", "Jean-Samuel", "" ], [ "LeBlanc", "Frédéric", "" ], [ "Marchand", "Mario", "" ] ]
Decision trees are popular machine learning models that are simple to build and easy to interpret. Even though algorithms to learn decision trees date back almost 50 years, key properties affecting their generalization error are still weakly bounded. Hence, we revisit binary decision trees on real-valued features from the perspective of partitions of the data. We introduce the notion of a partitioning function, and we relate it to the growth function and to the VC dimension. Using this new concept, we are able to find the exact VC dimension of decision stumps, which is given by the largest integer $d$ such that $2\ell \ge \binom{d}{\left\lfloor\frac{d}{2}\right\rfloor}$, where $\ell$ is the number of real-valued features. We provide a recursive expression to bound the partitioning functions, resulting in an upper bound on the growth function of any decision tree structure. This allows us to show that the VC dimension of a binary tree structure with $N$ internal nodes is of order $N \log(N\ell)$. Finally, we elaborate a pruning algorithm based on these results that performs better than the CART algorithm on a number of datasets, with the advantage that no cross-validation is required.
2402.12660
Liumeng Xue
Liumeng Xue, Chaoren Wang, Mingxuan Wang, Xueyao Zhang, Jun Han, Zhizheng Wu
SingVisio: Visual Analytics of Diffusion Model for Singing Voice Conversion
null
null
null
null
cs.SD cs.HC eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, we present SingVisio, an interactive visual analysis system that aims to explain the diffusion model used in singing voice conversion. SingVisio provides a visual display of the generation process in diffusion models, showcasing the step-by-step denoising of the noisy spectrum and its transformation into a clean spectrum that captures the desired singer's timbre. The system also facilitates side-by-side comparisons of different conditions, such as source content, melody, and target timbre, highlighting the impact of these conditions on the diffusion generation process and resulting conversions. Through comprehensive evaluations, SingVisio demonstrates its effectiveness in terms of system design, functionality, explainability, and user-friendliness. It offers users of various backgrounds valuable learning experiences and insights into the diffusion model for singing voice conversion.
[ { "created": "Tue, 20 Feb 2024 02:16:24 GMT", "version": "v1" } ]
2024-02-21
[ [ "Xue", "Liumeng", "" ], [ "Wang", "Chaoren", "" ], [ "Wang", "Mingxuan", "" ], [ "Zhang", "Xueyao", "" ], [ "Han", "Jun", "" ], [ "Wu", "Zhizheng", "" ] ]
In this study, we present SingVisio, an interactive visual analysis system that aims to explain the diffusion model used in singing voice conversion. SingVisio provides a visual display of the generation process in diffusion models, showcasing the step-by-step denoising of the noisy spectrum and its transformation into a clean spectrum that captures the desired singer's timbre. The system also facilitates side-by-side comparisons of different conditions, such as source content, melody, and target timbre, highlighting the impact of these conditions on the diffusion generation process and resulting conversions. Through comprehensive evaluations, SingVisio demonstrates its effectiveness in terms of system design, functionality, explainability, and user-friendliness. It offers users of various backgrounds valuable learning experiences and insights into the diffusion model for singing voice conversion.
1707.04504
Helena Peic Tukuljac
Helena Peic Tukuljac, Herve Lissek and Pierre Vandergheynst
Localization of Sound Sources in a Room with One Microphone
null
null
null
null
cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where we know the difference between the received signals at different microphones in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. The design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels in a room that are occupied by a source. What is especially interesting about our solution is that we provide localization of the sound sources not only in the horizontal plane, but in terms of full 3D coordinates inside the room.
[ { "created": "Fri, 14 Jul 2017 13:25:44 GMT", "version": "v1" } ]
2017-07-17
[ [ "Tukuljac", "Helena Peic", "" ], [ "Lissek", "Herve", "" ], [ "Vandergheynst", "Pierre", "" ] ]
Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where we know the differences between the signals received at different microphones, in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. Design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels in the room occupied by a source. What is especially interesting about our solution is that it localizes sound sources not only in the horizontal plane, but in terms of full 3D coordinates inside the room.
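The sparse-voxel idea in the abstract above can be illustrated with a toy compressed-sensing sketch. This is not the paper's actual recovery algorithm: the sensing matrix here is random (a hypothetical stand-in for the room-transfer responses of each voxel), and recovery uses plain orthogonal matching pursuit.

```python
import numpy as np

def localize_omp(A, y, n_sources=1):
    """Toy sparse source localization: the room is discretized into
    voxels, column A[:, j] plays the role of the (hypothetical) room
    transfer response at the single microphone for a source in voxel j,
    and the occupied voxels are recovered greedily via orthogonal
    matching pursuit."""
    residual = y.copy()
    support = []
    for _ in range(n_sources):
        # pick the voxel whose response best explains the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # re-fit source amplitudes on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)
```

With a well-conditioned sensing matrix (low column coherence), a single active voxel is recovered exactly in one greedy step.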
2008.09753
Tai-Xiang Jiang
Yi-Si Luo, Xi-Le Zhao, Tai-Xiang Jiang, Yu-Bang Zheng, Yi Chang
Unsupervised Hyperspectral Mixed Noise Removal Via Spatial-Spectral Constrained Deep Image Prior
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recently, convolutional neural network (CNN)-based methods have been proposed for hyperspectral image (HSI) denoising. Among them, unsupervised methods such as the deep image prior (DIP) have received much attention because these methods do not require any training data. However, DIP suffers from semi-convergence behavior, i.e., the DIP iteration needs to be terminated by referring to the ground-truth image at the optimal iteration point. In this paper, we propose the spatial-spectral constrained deep image prior (S2DIP) for HSI mixed noise removal. Specifically, we incorporate DIP with a spatial-spectral total variation (SSTV) term to fully preserve the spatial-spectral local smoothness of the HSI and an $\ell_1$-norm term to capture the complex sparse noise. The proposed S2DIP jointly leverages the expressive power of the deep CNN without any training data and exploits the HSI and noise structures via hand-crafted priors. Thus, our method avoids the semi-convergence behavior, showing higher stability than DIP. Meanwhile, our method largely enhances the HSI denoising ability of DIP. To tackle the proposed denoising model, we develop an alternating direction method of multipliers (ADMM) algorithm. Extensive experiments demonstrate that the proposed S2DIP outperforms optimization-based and supervised CNN-based state-of-the-art HSI denoising methods.
[ { "created": "Sat, 22 Aug 2020 04:25:08 GMT", "version": "v1" }, { "created": "Thu, 10 Jun 2021 14:22:11 GMT", "version": "v2" } ]
2021-06-11
[ [ "Luo", "Yi-Si", "" ], [ "Zhao", "Xi-Le", "" ], [ "Jiang", "Tai-Xiang", "" ], [ "Zheng", "Yu-Bang", "" ], [ "Chang", "Yi", "" ] ]
Recently, convolutional neural network (CNN)-based methods have been proposed for hyperspectral image (HSI) denoising. Among them, unsupervised methods such as the deep image prior (DIP) have received much attention because these methods do not require any training data. However, DIP suffers from semi-convergence behavior, i.e., the DIP iteration needs to be terminated by referring to the ground-truth image at the optimal iteration point. In this paper, we propose the spatial-spectral constrained deep image prior (S2DIP) for HSI mixed noise removal. Specifically, we incorporate DIP with a spatial-spectral total variation (SSTV) term to fully preserve the spatial-spectral local smoothness of the HSI and an $\ell_1$-norm term to capture the complex sparse noise. The proposed S2DIP jointly leverages the expressive power of the deep CNN without any training data and exploits the HSI and noise structures via hand-crafted priors. Thus, our method avoids the semi-convergence behavior, showing higher stability than DIP. Meanwhile, our method largely enhances the HSI denoising ability of DIP. To tackle the proposed denoising model, we develop an alternating direction method of multipliers (ADMM) algorithm. Extensive experiments demonstrate that the proposed S2DIP outperforms optimization-based and supervised CNN-based state-of-the-art HSI denoising methods.
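The S2DIP objective described in the abstract combines three ingredients: a data-fit term for the DIP output, an SSTV regularizer, and an $\ell_1$ term on the sparse noise. The sketch below shows one common way to write such an objective; the exact SSTV operator and the weights `lam`/`beta` are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def s2dip_objective(X, S, Y, lam=0.1, beta=0.05):
    """Sketch of an S2DIP-style objective: squared data fit of the DIP
    output X plus sparse noise S against the observation Y, a
    spatial-spectral TV (SSTV) term on X, and an l1 term on S.
    X, S, Y are (height, width, bands) arrays."""
    fit = 0.5 * np.sum((X + S - Y) ** 2)
    # SSTV: l1 norm of the spatial differences of the spectral
    # differences (one common formulation; the paper's exact operator
    # may differ)
    dz = np.diff(X, axis=2)                      # spectral difference
    sstv = np.abs(np.diff(dz, axis=0)).sum() + np.abs(np.diff(dz, axis=1)).sum()
    return fit + lam * sstv + beta * np.abs(S).sum()
```

A spectrally smooth, noise-free candidate that matches the observation drives all three terms to zero.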
2105.01901
Nicolas Kuhn Dr.
Kuhn Nicolas and Fernandes David and Dubois Emmanuel and Pradas David
Impact of channel access and transport mechanisms on QoE in GEO-satellite based LTE backhauling systems
5 pages, 5 Figures, 6 Tables
null
null
null
cs.NI
http://creativecommons.org/publicdomain/zero/1.0/
Backhauling services through satellite systems have doubled between 2012 and 2018. There is an increasing demand for this service, for which satellite systems typically allocate a fixed resource. This solution may not help in optimizing the usage of the scarce satellite resource. This study measures the relevance of using dynamic resource allocation mechanisms for backhaul services through satellite systems. The satellite system is emulated with OpenSAND, the LTE system with Amarisoft, and the experiments are orchestrated by OpenBACH. We compare the relevance of applying TCP PEP mechanisms and dynamic resource allocations for different traffic services by measuring the QoE for web browsing, data transfer and VoIP applications. The main conclusions are the following. When the system is congested, PEP and layer-2 access mechanisms do not provide significant improvements. When the system is not congested, data transfer can be greatly improved through protocol and channel-access mechanism optimization. Tuning the Constant Rate Assignment can help in reducing the cost of the resource and provide QoE improvements when the network is not loaded.
[ { "created": "Wed, 5 May 2021 07:27:29 GMT", "version": "v1" } ]
2021-05-06
[ [ "Nicolas", "Kuhn", "" ], [ "David", "Fernandes", "" ], [ "Emmanuel", "Dubois", "" ], [ "David", "Pradas", "" ] ]
Backhauling services through satellite systems have doubled between 2012 and 2018. There is an increasing demand for this service, for which satellite systems typically allocate a fixed resource. This solution may not help in optimizing the usage of the scarce satellite resource. This study measures the relevance of using dynamic resource allocation mechanisms for backhaul services through satellite systems. The satellite system is emulated with OpenSAND, the LTE system with Amarisoft, and the experiments are orchestrated by OpenBACH. We compare the relevance of applying TCP PEP mechanisms and dynamic resource allocations for different traffic services by measuring the QoE for web browsing, data transfer and VoIP applications. The main conclusions are the following. When the system is congested, PEP and layer-2 access mechanisms do not provide significant improvements. When the system is not congested, data transfer can be greatly improved through protocol and channel-access mechanism optimization. Tuning the Constant Rate Assignment can help in reducing the cost of the resource and provide QoE improvements when the network is not loaded.
2108.07190
Jiska Classen
Jiska Classen and Matthias Hollick
Happy MitM: Fun and Toys in Every Bluetooth Device
null
WiSec 2021: Proceedings of the 14th ACM Conference on Security and Privacy in Wireless and Mobile Networks
10.1145/3448300.3467822
null
cs.CR cs.NI
http://creativecommons.org/licenses/by/4.0/
Bluetooth pairing establishes trust on first use between two devices by creating a shared key. Similar to certificate warnings in TLS, the Bluetooth specification requires warning users upon issues with this key, because this can indicate ongoing Machine-in-the-Middle (MitM) attacks. This paper uncovers that none of the major Bluetooth stacks warns users, which violates the specification. Clear warnings would protect users from recently published and potential future security issues in Bluetooth authentication and encryption.
[ { "created": "Mon, 16 Aug 2021 15:56:08 GMT", "version": "v1" } ]
2021-08-17
[ [ "Classen", "Jiska", "" ], [ "Hollick", "Matthias", "" ] ]
Bluetooth pairing establishes trust on first use between two devices by creating a shared key. Similar to certificate warnings in TLS, the Bluetooth specification requires warning users upon issues with this key, because this can indicate ongoing Machine-in-the-Middle (MitM) attacks. This paper uncovers that none of the major Bluetooth stacks warns users, which violates the specification. Clear warnings would protect users from recently published and potential future security issues in Bluetooth authentication and encryption.
1805.05409
Jason Anastasopoulos
L. Jason Anastasopoulos and Andrew B. Whitford
Machine Learning for Public Administration Research, with Application to Organizational Reputation
null
null
null
null
cs.CY cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning methods have gained a great deal of popularity in recent years among public administration scholars and practitioners. These techniques open the door to the analysis of text, image and other types of data that allow us to test foundational theories of public administration and to develop new theories. Despite the excitement surrounding machine learning methods, clarity regarding their proper use and potential pitfalls is lacking. This paper attempts to fill this gap in the literature through providing a machine learning "guide to practice" for public administration scholars and practitioners. Here, we take a foundational view of machine learning and describe how these methods can enrich public administration research and practice through their ability to develop new measures, tap into new sources of data, and conduct statistical inference and causal inference in a principled manner. We then turn our attention to the pitfalls of using these methods, such as unvalidated measures and lack of interpretability. Finally, we demonstrate how machine learning techniques can help us learn about organizational reputation in federal agencies through an illustrated example using tweets from 13 executive federal agencies.
[ { "created": "Fri, 11 May 2018 14:30:30 GMT", "version": "v1" }, { "created": "Tue, 11 Sep 2018 15:32:10 GMT", "version": "v2" } ]
2018-09-12
[ [ "Anastasopoulos", "L. Jason", "" ], [ "Whitford", "Andrew B.", "" ] ]
Machine learning methods have gained a great deal of popularity in recent years among public administration scholars and practitioners. These techniques open the door to the analysis of text, image and other types of data that allow us to test foundational theories of public administration and to develop new theories. Despite the excitement surrounding machine learning methods, clarity regarding their proper use and potential pitfalls is lacking. This paper attempts to fill this gap in the literature through providing a machine learning "guide to practice" for public administration scholars and practitioners. Here, we take a foundational view of machine learning and describe how these methods can enrich public administration research and practice through their ability to develop new measures, tap into new sources of data, and conduct statistical inference and causal inference in a principled manner. We then turn our attention to the pitfalls of using these methods, such as unvalidated measures and lack of interpretability. Finally, we demonstrate how machine learning techniques can help us learn about organizational reputation in federal agencies through an illustrated example using tweets from 13 executive federal agencies.
1911.11361
Yifan Wu
Yifan Wu, George Tucker, Ofir Nachum
Behavior Regularized Offline Reinforcement Learning
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In reinforcement learning (RL) research, it is common to assume access to direct online interactions with the environment. However, in many real-world applications, access to the environment is limited to a fixed offline dataset of logged experience. In such settings, standard RL algorithms have been shown to diverge or otherwise yield poor performance. Accordingly, recent work has suggested a number of remedies to these issues. In this work, we introduce a general framework, behavior regularized actor critic (BRAC), to empirically evaluate recently proposed methods as well as a number of simple baselines across a variety of offline continuous control tasks. Surprisingly, we find that many of the technical complexities introduced in recent methods are unnecessary to achieve strong performance. Additional ablations provide insights into which design choices matter most in the offline RL setting.
[ { "created": "Tue, 26 Nov 2019 06:11:34 GMT", "version": "v1" } ]
2019-11-27
[ [ "Wu", "Yifan", "" ], [ "Tucker", "George", "" ], [ "Nachum", "Ofir", "" ] ]
In reinforcement learning (RL) research, it is common to assume access to direct online interactions with the environment. However, in many real-world applications, access to the environment is limited to a fixed offline dataset of logged experience. In such settings, standard RL algorithms have been shown to diverge or otherwise yield poor performance. Accordingly, recent work has suggested a number of remedies to these issues. In this work, we introduce a general framework, behavior regularized actor critic (BRAC), to empirically evaluate recently proposed methods as well as a number of simple baselines across a variety of offline continuous control tasks. Surprisingly, we find that many of the technical complexities introduced in recent methods are unnecessary to achieve strong performance. Additional ablations provide insights into which design choices matter most in the offline RL setting.
1202.6095
Henry Pfister
Yung-Yih Jian, Henry D. Pfister, Krishna R. Narayanan
Approaching Capacity at High-Rates with Iterative Hard-Decision Decoding
22 pages, this version accepted to the IEEE Transactions on Information Theory
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A variety of low-density parity-check (LDPC) ensembles have now been observed to approach capacity with message-passing decoding. However, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. In this paper, we show that one can approach capacity at high rates using iterative hard-decision decoding (HDD) of generalized product codes. Specifically, a class of spatially-coupled GLDPC codes with BCH component codes is considered, and it is observed that, in the high-rate regime, they can approach capacity under the proposed iterative HDD. These codes can be seen as generalized product codes and are closely related to braided block codes. An iterative HDD algorithm is proposed that enables one to analyze the performance of these codes via density evolution (DE).
[ { "created": "Tue, 28 Feb 2012 00:10:52 GMT", "version": "v1" }, { "created": "Mon, 20 Aug 2012 19:09:02 GMT", "version": "v2" }, { "created": "Sun, 31 May 2015 00:51:00 GMT", "version": "v3" }, { "created": "Wed, 17 May 2017 15:41:52 GMT", "version": "v4" } ]
2017-05-18
[ [ "Jian", "Yung-Yih", "" ], [ "Pfister", "Henry D.", "" ], [ "Narayanan", "Krishna R.", "" ] ]
A variety of low-density parity-check (LDPC) ensembles have now been observed to approach capacity with message-passing decoding. However, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. In this paper, we show that one can approach capacity at high rates using iterative hard-decision decoding (HDD) of generalized product codes. Specifically, a class of spatially-coupled GLDPC codes with BCH component codes is considered, and it is observed that, in the high-rate regime, they can approach capacity under the proposed iterative HDD. These codes can be seen as generalized product codes and are closely related to braided block codes. An iterative HDD algorithm is proposed that enables one to analyze the performance of these codes via density evolution (DE).
2310.18617
Branislav Kveton
Shima Alizadeh, Aniruddha Bhargava, Karthick Gopalswamy, Lalit Jain, Branislav Kveton, and Ge Liu
Pessimistic Off-Policy Multi-Objective Optimization
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-objective optimization is a class of decision-making problems in which multiple conflicting objectives are optimized. We study offline optimization of multi-objective policies from data collected by an existing policy. We propose a pessimistic estimator for the multi-objective policy values that can be easily plugged into existing formulas for hypervolume computation and optimized. The estimator is based on inverse propensity scores (IPS), and improves upon a naive IPS estimator in both theory and experiments. Our analysis is general, and applies beyond our IPS estimators and methods for optimizing them. The pessimistic estimator can be optimized by policy gradients and performs well in all of our experiments.
[ { "created": "Sat, 28 Oct 2023 06:50:15 GMT", "version": "v1" } ]
2023-10-31
[ [ "Alizadeh", "Shima", "" ], [ "Bhargava", "Aniruddha", "" ], [ "Gopalswamy", "Karthick", "" ], [ "Jain", "Lalit", "" ], [ "Kveton", "Branislav", "" ], [ "Liu", "Ge", "" ] ]
Multi-objective optimization is a class of decision-making problems in which multiple conflicting objectives are optimized. We study offline optimization of multi-objective policies from data collected by an existing policy. We propose a pessimistic estimator for the multi-objective policy values that can be easily plugged into existing formulas for hypervolume computation and optimized. The estimator is based on inverse propensity scores (IPS), and improves upon a naive IPS estimator in both theory and experiments. Our analysis is general, and applies beyond our IPS estimators and methods for optimizing them. The pessimistic estimator can be optimized by policy gradients and performs well in all of our experiments.
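A pessimistic IPS estimate of the kind the abstract describes can be sketched as the plain IPS value minus a confidence-interval width, so that the estimate lower-bounds the true policy value with high probability. The Hoeffding-style width below is an illustrative assumption; the paper's bound may differ.

```python
import numpy as np

def pessimistic_ips(rewards, target_prob, logging_prob, delta=0.05):
    """Pessimistic IPS value estimate for one objective (sketch).
    rewards: logged rewards in [0, 1]; target_prob / logging_prob:
    probabilities of the logged actions under the target and logging
    policies. Returns the IPS mean minus a Hoeffding-style width."""
    n = len(rewards)
    w = target_prob / logging_prob           # importance weights
    ips = np.mean(w * rewards)               # naive IPS estimate
    # high-probability lower confidence bound (illustrative)
    width = np.max(w) * np.sqrt(np.log(1.0 / delta) / (2 * n))
    return ips - width
```

When the target policy equals the logging policy, the weights are 1 and the estimate sits slightly below the empirical mean reward, shrinking toward it as `n` grows.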
2204.04026
Anne Lauscher
Carolin Holtermann, Anne Lauscher, Simone Paolo Ponzetto
Fair and Argumentative Language Modeling for Computational Argumentation
ACL 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Although much work in NLP has focused on measuring and mitigating stereotypical bias in semantic spaces, research addressing bias in computational argumentation is still in its infancy. In this paper, we address this research gap and conduct a thorough investigation of bias in argumentative language models. To this end, we introduce ABBA, a novel resource for bias measurement specifically tailored to argumentation. We employ our resource to assess the effect of argumentative fine-tuning and debiasing on the intrinsic bias found in transformer-based language models using a lightweight adapter-based approach that is more sustainable and parameter-efficient than full fine-tuning. Finally, we analyze the potential impact of language model debiasing on the performance in argument quality prediction, a downstream task of computational argumentation. Our results show that we are able to successfully and sustainably remove bias in general and argumentative language models while preserving (and sometimes improving) model performance in downstream tasks. We make all experimental code and data available at https://github.com/umanlp/FairArgumentativeLM.
[ { "created": "Fri, 8 Apr 2022 12:23:46 GMT", "version": "v1" } ]
2022-04-11
[ [ "Holtermann", "Carolin", "" ], [ "Lauscher", "Anne", "" ], [ "Ponzetto", "Simone Paolo", "" ] ]
Although much work in NLP has focused on measuring and mitigating stereotypical bias in semantic spaces, research addressing bias in computational argumentation is still in its infancy. In this paper, we address this research gap and conduct a thorough investigation of bias in argumentative language models. To this end, we introduce ABBA, a novel resource for bias measurement specifically tailored to argumentation. We employ our resource to assess the effect of argumentative fine-tuning and debiasing on the intrinsic bias found in transformer-based language models using a lightweight adapter-based approach that is more sustainable and parameter-efficient than full fine-tuning. Finally, we analyze the potential impact of language model debiasing on the performance in argument quality prediction, a downstream task of computational argumentation. Our results show that we are able to successfully and sustainably remove bias in general and argumentative language models while preserving (and sometimes improving) model performance in downstream tasks. We make all experimental code and data available at https://github.com/umanlp/FairArgumentativeLM.
1906.07194
Cristian Danescu-Niculescu-Mizil
Justine Zhang, Robert Filbin, Christine Morrison, Jaclyn Weiser, Cristian Danescu-Niculescu-Mizil
Finding Your Voice: The Linguistic Development of Mental Health Counselors
To appear at ACL 2019, 12 pages, 2 figures; code available through the Cornell Conversational Analysis Toolkit (https://convokit.cornell.edu)
null
null
null
cs.CL cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mental health counseling is an enterprise with profound societal importance where conversations play a primary role. In order to acquire the conversational skills needed to face a challenging range of situations, mental health counselors must rely on training and on continued experience with actual clients. However, in the absence of large scale longitudinal studies, the nature and significance of this developmental process remain unclear. For example, prior literature suggests that experience might not translate into consequential changes in counselor behavior. This has led some to even argue that counseling is a profession without expertise. In this work, we develop a computational framework to quantify the extent to which individuals change their linguistic behavior with experience and to study the nature of this evolution. We use our framework to conduct a large longitudinal study of mental health counseling conversations, tracking over 3,400 counselors across their tenure. We reveal that overall, counselors do indeed change their conversational behavior to become more diverse across interactions, developing an individual voice that distinguishes them from other counselors. Furthermore, a finer-grained investigation shows that the rate and nature of this diversification vary across functionally different conversational components.
[ { "created": "Mon, 17 Jun 2019 18:00:04 GMT", "version": "v1" } ]
2019-06-19
[ [ "Zhang", "Justine", "" ], [ "Filbin", "Robert", "" ], [ "Morrison", "Christine", "" ], [ "Weiser", "Jaclyn", "" ], [ "Danescu-Niculescu-Mizil", "Cristian", "" ] ]
Mental health counseling is an enterprise with profound societal importance where conversations play a primary role. In order to acquire the conversational skills needed to face a challenging range of situations, mental health counselors must rely on training and on continued experience with actual clients. However, in the absence of large scale longitudinal studies, the nature and significance of this developmental process remain unclear. For example, prior literature suggests that experience might not translate into consequential changes in counselor behavior. This has led some to even argue that counseling is a profession without expertise. In this work, we develop a computational framework to quantify the extent to which individuals change their linguistic behavior with experience and to study the nature of this evolution. We use our framework to conduct a large longitudinal study of mental health counseling conversations, tracking over 3,400 counselors across their tenure. We reveal that overall, counselors do indeed change their conversational behavior to become more diverse across interactions, developing an individual voice that distinguishes them from other counselors. Furthermore, a finer-grained investigation shows that the rate and nature of this diversification vary across functionally different conversational components.
1501.03975
Vijay Manikandan Janakiraman
Vijay Manikandan Janakiraman and XuanLong Nguyen and Dennis Assanis
Stochastic Gradient Based Extreme Learning Machines For Online Learning of Advanced Combustion Engines
This paper was written as an extract from my PhD thesis (July 2013) and so references may not be to date as of this submission (Jan 2015). The article is in review and contains 10 figures, 35 references
null
null
null
cs.NE cs.LG cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, a stochastic gradient based online learning algorithm for Extreme Learning Machines (ELM), called SG-ELM, is developed. A stability criterion based on a Lyapunov approach is used to prove both asymptotic stability of the estimation error and stability of the estimated parameters, making the algorithm suitable for identification of nonlinear dynamic systems. The developed algorithm not only guarantees stability, but also reduces the computational demand compared to the OS-ELM approach based on recursive least squares. In order to demonstrate the effectiveness of the algorithm in a real-world scenario, an advanced combustion engine identification problem is considered. The algorithm is applied to two case studies: online regression learning for system identification of a Homogeneous Charge Compression Ignition (HCCI) engine, and online classification learning (with class imbalance) for identifying the dynamic operating envelope of the HCCI engine. The results indicate that the accuracy of the proposed SG-ELM is comparable to that of the state of the art, while adding stability and reducing computational effort.
[ { "created": "Fri, 16 Jan 2015 13:18:34 GMT", "version": "v1" } ]
2015-01-19
[ [ "Janakiraman", "Vijay Manikandan", "" ], [ "Nguyen", "XuanLong", "" ], [ "Assanis", "Dennis", "" ] ]
In this article, a stochastic gradient based online learning algorithm for Extreme Learning Machines (ELM), called SG-ELM, is developed. A stability criterion based on a Lyapunov approach is used to prove both asymptotic stability of the estimation error and stability of the estimated parameters, making the algorithm suitable for identification of nonlinear dynamic systems. The developed algorithm not only guarantees stability, but also reduces the computational demand compared to the OS-ELM approach based on recursive least squares. In order to demonstrate the effectiveness of the algorithm in a real-world scenario, an advanced combustion engine identification problem is considered. The algorithm is applied to two case studies: online regression learning for system identification of a Homogeneous Charge Compression Ignition (HCCI) engine, and online classification learning (with class imbalance) for identifying the dynamic operating envelope of the HCCI engine. The results indicate that the accuracy of the proposed SG-ELM is comparable to that of the state of the art, while adding stability and reducing computational effort.
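The core of a stochastic-gradient ELM update is small enough to sketch: the random hidden layer stays fixed and only the output weights are adapted per sample. This is a generic SGD sketch, not the paper's Lyapunov-derived update; the learning rate and tanh activation are illustrative choices.

```python
import numpy as np

def sgelm_step(beta, x, y, W, b, lr=0.01):
    """One online SG-ELM-style update (sketch): the random hidden
    layer (W, b) is fixed, and the output weights beta are moved by a
    stochastic-gradient step on the squared error for sample (x, y).
    beta: (hidden, out); x: (in,); y: (out,)."""
    h = np.tanh(W @ x + b)          # random-feature hidden activations
    err = h @ beta - y              # prediction error for this sample
    return beta - lr * np.outer(h, err)
```

On a realizable target, repeated steps shrink the distance between `beta` and the generating weights, in line with the stability the abstract claims for the full algorithm.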
2204.08242
Jarek Duda Dr
Jarek Duda
Fast optimization of common basis for matrix set through Common Singular Value Decomposition
4 pages, 3 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
SVD (singular value decomposition) is one of the basic tools of machine learning, allowing one to optimize a basis for a given matrix. However, sometimes we have a set of matrices $\{A_k\}_k$ instead, and would like to optimize a single common basis for them: find orthogonal matrices $U$, $V$ such that the set of matrices $\{U^T A_k V\}$ is somehow simpler. For example, DCT-II is an orthonormal basis of functions commonly used in image/video compression; as discussed here, this kind of basis can be quickly and automatically optimized for a given dataset. While the gradient descent optimization also discussed here can be computationally costly, we propose CSVD (common SVD): a fast general approach based on SVD. Specifically, we choose $U$ as built of eigenvectors of $\sum_k (w_k)^q (A_k A_k^T)^p$ and $V$ of eigenvectors of $\sum_k (w_k)^q (A_k^T A_k)^p$, where $w_k$ are their weights and $p,q>0$ are some chosen powers, e.g. $1/2$, optionally with normalization, e.g. $A \to A - rc^T$ where $r_i=\sum_j A_{ij}$, $c_j =\sum_i A_{ij}$.
[ { "created": "Mon, 18 Apr 2022 10:18:51 GMT", "version": "v1" } ]
2022-04-19
[ [ "Duda", "Jarek", "" ] ]
SVD (singular value decomposition) is one of the basic tools of machine learning, allowing one to optimize a basis for a given matrix. However, sometimes we have a set of matrices $\{A_k\}_k$ instead, and would like to optimize a single common basis for them: find orthogonal matrices $U$, $V$ such that the set of matrices $\{U^T A_k V\}$ is somehow simpler. For example, DCT-II is an orthonormal basis of functions commonly used in image/video compression; as discussed here, this kind of basis can be quickly and automatically optimized for a given dataset. While the gradient descent optimization also discussed here can be computationally costly, we propose CSVD (common SVD): a fast general approach based on SVD. Specifically, we choose $U$ as built of eigenvectors of $\sum_k (w_k)^q (A_k A_k^T)^p$ and $V$ of eigenvectors of $\sum_k (w_k)^q (A_k^T A_k)^p$, where $w_k$ are their weights and $p,q>0$ are some chosen powers, e.g. $1/2$, optionally with normalization, e.g. $A \to A - rc^T$ where $r_i=\sum_j A_{ij}$, $c_j =\sum_i A_{ij}$.
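The CSVD construction in the abstract is explicit enough to sketch directly: build the weighted, powered Gram-matrix sums and take their eigenvectors as $U$ and $V$. This is a literal numpy reading of the formulas (default powers and uniform weights are illustrative), without the optional $A \to A - rc^T$ normalization.

```python
import numpy as np

def sym_power(S, p):
    """Matrix power of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    vals = np.clip(vals, 0.0, None)   # guard tiny negative eigenvalues
    return (vecs * vals**p) @ vecs.T

def csvd(mats, weights=None, p=0.5, q=1.0):
    """Common SVD sketch: one orthogonal basis pair (U, V) for a set
    of same-shaped matrices, via eigenvectors of the weighted sums
    sum_k w_k^q (A_k A_k^T)^p and sum_k w_k^q (A_k^T A_k)^p."""
    if weights is None:
        weights = np.ones(len(mats))
    m, n = mats[0].shape
    left = np.zeros((m, m))
    right = np.zeros((n, n))
    for A, w in zip(mats, weights):
        left += w**q * sym_power(A @ A.T, p)
        right += w**q * sym_power(A.T @ A, p)
    U = np.linalg.eigh(left)[1]       # orthonormal eigenvector bases
    V = np.linalg.eigh(right)[1]
    return U, V
```

For a single matrix with `p = 0.5`, `left` and `right` reduce to the usual Gram matrices' square roots, so `U` and `V` coincide (up to sign/order) with the ordinary SVD bases.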
1304.7819
Michael Adrir Scott
Michael 'Adrir' Scott
Vocalnayno: Designing a Game-Based Intervention to Support Reading Development in Primary Schools
Presented at the 6th European Conference on Games-Based Learning, Oct 4-5, 2012, Cork, Ireland
Proceedings of the 6th European Conference on Games-Based Learning. ACPI: Reading, UK. 654--657
null
null
cs.CY cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Encouraging children to read frequently and helping them to develop their reading skills as effectively as possible can be a challenge for some primary schools. This research questions whether the use of a game-based intervention can integrate into the existing teaching culture to aid volunteer teaching assistants in achieving a more significant impact on pupil reading development. A prototype based on the initial process of requirements gathering is presented using Multimedia Fusion Developer 2. The design incorporates a game-like exercise where a foam volcano character releases bubbles containing letters and words. Pupils must read these aloud in order to burst them open, which is recorded as a metric of reading ability.
[ { "created": "Mon, 29 Apr 2013 23:58:35 GMT", "version": "v1" } ]
2013-05-01
[ [ "Scott", "Michael 'Adrir'", "" ] ]
Encouraging children to read frequently and helping them to develop their reading skills as effectively as possible can be a challenge for some primary schools. This research questions whether the use of a game-based intervention can integrate into the existing teaching culture to aid volunteer teaching assistants in achieving a more significant impact on pupil reading development. A prototype based on the initial process of requirements gathering is presented using Multimedia Fusion Developer 2. The design incorporates a game-like exercise where a foam volcano character releases bubbles containing letters and words. Pupils must read these aloud in order to burst them open, which is recorded as a metric of reading ability.
2106.00157
Bei Wang
Lin Yan, Talha Bin Masood, Raghavendra Sridharamurthy, Farhan Rasheed, Vijay Natarajan, Ingrid Hotz, Bei Wang
Scalar Field Comparison with Topological Descriptors: Properties and Applications for Scientific Visualization
null
null
10.1111/cgf.14331
null
cs.HC cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In topological data analysis and visualization, topological descriptors such as persistence diagrams, merge trees, contour trees, Reeb graphs, and Morse-Smale complexes play an essential role in capturing the shape of scalar field data. We present a state-of-the-art report on scalar field comparison using topological descriptors. We provide a taxonomy of existing approaches based on visualization tasks associated with three categories of data: single fields, time-varying fields, and ensembles. These tasks include symmetry detection, periodicity detection, key event/feature detection, feature tracking, clustering, and structure statistics. Our main contributions include the formulation of a set of desirable mathematical and computational properties of comparative measures, and the classification of visualization tasks and applications that are enabled by these measures.
[ { "created": "Tue, 1 Jun 2021 00:34:18 GMT", "version": "v1" } ]
2024-06-06
[ [ "Yan", "Lin", "" ], [ "Masood", "Talha Bin", "" ], [ "Sridharamurthy", "Raghavendra", "" ], [ "Rasheed", "Farhan", "" ], [ "Natarajan", "Vijay", "" ], [ "Hotz", "Ingrid", "" ], [ "Wang", "Bei", "" ] ]
In topological data analysis and visualization, topological descriptors such as persistence diagrams, merge trees, contour trees, Reeb graphs, and Morse-Smale complexes play an essential role in capturing the shape of scalar field data. We present a state-of-the-art report on scalar field comparison using topological descriptors. We provide a taxonomy of existing approaches based on visualization tasks associated with three categories of data: single fields, time-varying fields, and ensembles. These tasks include symmetry detection, periodicity detection, key event/feature detection, feature tracking, clustering, and structure statistics. Our main contributions include the formulation of a set of desirable mathematical and computational properties of comparative measures, and the classification of visualization tasks and applications that are enabled by these measures.
2309.02027
Katerina Schindlerova Hlavackova-Schindler
Katerina Hlavackova-Schindler, Anna Melnykova, Irene Tubikanec
Granger Causal Inference in Multivariate Hawkes Processes by Minimum Message Length
26 pages, 5 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Multivariate Hawkes processes (MHPs) are versatile probabilistic tools used to model various real-life phenomena: earthquakes, operations on stock markets, neuronal activity, virus propagation and many others. In this paper, we focus on MHPs with exponential decay kernels and estimate connectivity graphs, which represent the Granger causal relations between their components. We approach this inference problem by proposing an optimization criterion and model selection algorithm based on the minimum message length (MML) principle. MML compares Granger causal models using the Occam's razor principle in the following way: even when models have a comparable goodness-of-fit to the observed data, the one generating the most concise explanation of the data is preferred. While most of the state-of-the-art methods using lasso-type penalization tend to overfit in scenarios with short time horizons, the proposed MML-based method achieves high F1 scores in these settings. We conduct a numerical study comparing the proposed algorithm to other related classical and state-of-the-art methods, where we achieve the highest F1 scores in specific sparse graph settings. We illustrate the proposed method also on G7 sovereign bond data and obtain causal connections, which are in agreement with the expert knowledge available in the literature.
[ { "created": "Tue, 5 Sep 2023 08:13:34 GMT", "version": "v1" }, { "created": "Wed, 10 Apr 2024 19:03:58 GMT", "version": "v2" } ]
2024-04-12
[ [ "Hlavackova-Schindler", "Katerina", "" ], [ "Melnykova", "Anna", "" ], [ "Tubikanec", "Irene", "" ] ]
Multivariate Hawkes processes (MHPs) are versatile probabilistic tools used to model various real-life phenomena: earthquakes, operations on stock markets, neuronal activity, virus propagation and many others. In this paper, we focus on MHPs with exponential decay kernels and estimate connectivity graphs, which represent the Granger causal relations between their components. We approach this inference problem by proposing an optimization criterion and model selection algorithm based on the minimum message length (MML) principle. MML compares Granger causal models using the Occam's razor principle in the following way: even when models have a comparable goodness-of-fit to the observed data, the one generating the most concise explanation of the data is preferred. While most of the state-of-the-art methods using lasso-type penalization tend to overfit in scenarios with short time horizons, the proposed MML-based method achieves high F1 scores in these settings. We conduct a numerical study comparing the proposed algorithm to other related classical and state-of-the-art methods, where we achieve the highest F1 scores in specific sparse graph settings. We illustrate the proposed method also on G7 sovereign bond data and obtain causal connections, which are in agreement with the expert knowledge available in the literature.
1602.06657
Kaushik Sarkar
Kaushik Sarkar, Hari Sundaram
Influencing Busy People in a Social Network
null
null
10.1371/journal.pone.0162014
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We identify influential early adopters in a social network, where individuals are resource constrained, to maximize the spread of multiple, costly behaviors. A solution to this problem is especially important for viral marketing. The problem of maximizing influence in a social network is challenging since it is computationally intractable. We make three contributions. First, we propose a new model of collective behavior that incorporates individual intent, knowledge of neighbors' actions and resource constraints. Second, we show that the multiple behavior influence maximization is NP-hard. Furthermore, we show that the problem is submodular, implying the existence of a greedy solution that approximates the optimal solution to within a constant. However, since the greedy algorithm is expensive for large networks, we propose efficient heuristics to identify the influential individuals, including heuristics to assign behaviors to the different early adopters. We test our approach on synthetic and real-world topologies with excellent results. We evaluate the effectiveness under three metrics: unique number of participants, total number of active behaviors and network resource utilization. Our heuristics produce a 15-51% increase in expected resource utilization over the naive approach.
[ { "created": "Mon, 22 Feb 2016 06:17:38 GMT", "version": "v1" }, { "created": "Tue, 15 Mar 2016 03:29:10 GMT", "version": "v2" } ]
2017-02-08
[ [ "Sarkar", "Kaushik", "" ], [ "Sundaram", "Hari", "" ] ]
We identify influential early adopters in a social network, where individuals are resource constrained, to maximize the spread of multiple, costly behaviors. A solution to this problem is especially important for viral marketing. The problem of maximizing influence in a social network is challenging since it is computationally intractable. We make three contributions. First, we propose a new model of collective behavior that incorporates individual intent, knowledge of neighbors' actions and resource constraints. Second, we show that the multiple behavior influence maximization is NP-hard. Furthermore, we show that the problem is submodular, implying the existence of a greedy solution that approximates the optimal solution to within a constant. However, since the greedy algorithm is expensive for large networks, we propose efficient heuristics to identify the influential individuals, including heuristics to assign behaviors to the different early adopters. We test our approach on synthetic and real-world topologies with excellent results. We evaluate the effectiveness under three metrics: unique number of participants, total number of active behaviors and network resource utilization. Our heuristics produce a 15-51% increase in expected resource utilization over the naive approach.
2206.12100
Zahra Ghodsi
Zahra Ghodsi, Mojan Javaheripi, Nojan Sheybani, Xinqiao Zhang, Ke Huang, Farinaz Koushanfar
zPROBE: Zero Peek Robustness Checks for Federated Learning
ICCV 2023
null
null
null
cs.LG cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Privacy-preserving federated learning allows multiple users to jointly train a model with coordination of a central server. The server only learns the final aggregation result, thus the users' (private) training data is not leaked from the individual model updates. However, keeping the individual updates private allows malicious users to perform Byzantine attacks and degrade the accuracy without being detected. Best existing defenses against Byzantine workers rely on robust rank-based statistics, e.g., median, to find malicious updates. However, implementing privacy-preserving rank-based statistics is nontrivial and not scalable in the secure domain, as it requires sorting all individual updates. We establish the first private robustness check that uses high break point rank-based statistics on aggregated model updates. By exploiting randomized clustering, we significantly improve the scalability of our defense without compromising privacy. We leverage our statistical bounds in zero-knowledge proofs to detect and remove malicious updates without revealing the private user updates. Our novel framework, zPROBE, enables Byzantine resilient and secure federated learning. Empirical evaluations demonstrate that zPROBE provides a low overhead solution to defend against state-of-the-art Byzantine attacks while preserving privacy.
[ { "created": "Fri, 24 Jun 2022 06:20:37 GMT", "version": "v1" }, { "created": "Tue, 25 Oct 2022 19:42:48 GMT", "version": "v2" }, { "created": "Tue, 5 Sep 2023 17:14:01 GMT", "version": "v3" } ]
2023-09-06
[ [ "Ghodsi", "Zahra", "" ], [ "Javaheripi", "Mojan", "" ], [ "Sheybani", "Nojan", "" ], [ "Zhang", "Xinqiao", "" ], [ "Huang", "Ke", "" ], [ "Koushanfar", "Farinaz", "" ] ]
Privacy-preserving federated learning allows multiple users to jointly train a model with coordination of a central server. The server only learns the final aggregation result, thus the users' (private) training data is not leaked from the individual model updates. However, keeping the individual updates private allows malicious users to perform Byzantine attacks and degrade the accuracy without being detected. Best existing defenses against Byzantine workers rely on robust rank-based statistics, e.g., median, to find malicious updates. However, implementing privacy-preserving rank-based statistics is nontrivial and not scalable in the secure domain, as it requires sorting all individual updates. We establish the first private robustness check that uses high break point rank-based statistics on aggregated model updates. By exploiting randomized clustering, we significantly improve the scalability of our defense without compromising privacy. We leverage our statistical bounds in zero-knowledge proofs to detect and remove malicious updates without revealing the private user updates. Our novel framework, zPROBE, enables Byzantine resilient and secure federated learning. Empirical evaluations demonstrate that zPROBE provides a low overhead solution to defend against state-of-the-art Byzantine attacks while preserving privacy.
2406.09722
Sidike Paheding
Abhilash Durgam, Sidike Paheding, Vikas Dhiman, Vijay Devabhaktuni
Cross-view geo-localization: a survey
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Cross-view geo-localization has garnered notable attention in the realm of computer vision, spurred by the widespread availability of copious geotagged datasets and the advancements in machine learning techniques. This paper provides a thorough survey of cutting-edge methodologies, techniques, and associated challenges that are integral to this domain, with a focus on feature-based and deep learning strategies. Feature-based methods capitalize on unique features to establish correspondences across disparate viewpoints, whereas deep learning-based methodologies deploy convolutional neural networks to embed view-invariant attributes. This work also delineates the multifaceted challenges encountered in cross-view geo-localization, such as variations in viewpoints and illumination, the occurrence of occlusions, and it elucidates innovative solutions that have been formulated to tackle these issues. Furthermore, we delineate benchmark datasets and relevant evaluation metrics, and also perform a comparative analysis of state-of-the-art techniques. Finally, we conclude the paper with a discussion on prospective avenues for future research and the burgeoning applications of cross-view geo-localization in an intricately interconnected global landscape.
[ { "created": "Fri, 14 Jun 2024 05:14:54 GMT", "version": "v1" } ]
2024-06-17
[ [ "Durgam", "Abhilash", "" ], [ "Paheding", "Sidike", "" ], [ "Dhiman", "Vikas", "" ], [ "Devabhaktuni", "Vijay", "" ] ]
Cross-view geo-localization has garnered notable attention in the realm of computer vision, spurred by the widespread availability of copious geotagged datasets and the advancements in machine learning techniques. This paper provides a thorough survey of cutting-edge methodologies, techniques, and associated challenges that are integral to this domain, with a focus on feature-based and deep learning strategies. Feature-based methods capitalize on unique features to establish correspondences across disparate viewpoints, whereas deep learning-based methodologies deploy convolutional neural networks to embed view-invariant attributes. This work also delineates the multifaceted challenges encountered in cross-view geo-localization, such as variations in viewpoints and illumination, the occurrence of occlusions, and it elucidates innovative solutions that have been formulated to tackle these issues. Furthermore, we delineate benchmark datasets and relevant evaluation metrics, and also perform a comparative analysis of state-of-the-art techniques. Finally, we conclude the paper with a discussion on prospective avenues for future research and the burgeoning applications of cross-view geo-localization in an intricately interconnected global landscape.
2206.13734
Dingwen Tao
Chengming Zhang, Tong Geng, Anqi Guo, Jiannan Tian, Martin Herbordt, Ang Li, Dingwen Tao
H-GCN: A Graph Convolutional Network Accelerator on Versal ACAP Architecture
8 pages, 8 figures, 4 tables, accepted by FPL'22
null
null
null
cs.AR cs.LG
http://creativecommons.org/licenses/by/4.0/
Graph Neural Networks (GNNs) have drawn tremendous attention due to their unique capability to extend Machine Learning (ML) approaches to applications broadly-defined as having unstructured data, especially graphs. Compared with other Machine Learning (ML) modalities, the acceleration of Graph Neural Networks (GNNs) is more challenging due to the irregularity and heterogeneity derived from graph topologies. Existing efforts, however, have focused mainly on handling graphs' irregularity and have not studied their heterogeneity. To this end, we propose H-GCN, a PL (Programmable Logic) and AIE (AI Engine) based hybrid accelerator that leverages the emerging heterogeneity of Xilinx Versal Adaptive Compute Acceleration Platforms (ACAPs) to achieve high-performance GNN inference. In particular, H-GCN partitions each graph into three subgraphs based on its inherent heterogeneity, and processes them using PL and AIE, respectively. To further improve performance, we explore the sparsity support of AIE and develop an efficient density-aware method to automatically map tiles of sparse matrix-matrix multiplication (SpMM) onto the systolic tensor array. Compared with state-of-the-art GCN accelerators, H-GCN achieves, on average, speedups of 1.1~2.3X.
[ { "created": "Tue, 28 Jun 2022 03:37:31 GMT", "version": "v1" } ]
2022-06-29
[ [ "Zhang", "Chengming", "" ], [ "Geng", "Tong", "" ], [ "Guo", "Anqi", "" ], [ "Tian", "Jiannan", "" ], [ "Herbordt", "Martin", "" ], [ "Li", "Ang", "" ], [ "Tao", "Dingwen", "" ] ]
Graph Neural Networks (GNNs) have drawn tremendous attention due to their unique capability to extend Machine Learning (ML) approaches to applications broadly-defined as having unstructured data, especially graphs. Compared with other Machine Learning (ML) modalities, the acceleration of Graph Neural Networks (GNNs) is more challenging due to the irregularity and heterogeneity derived from graph topologies. Existing efforts, however, have focused mainly on handling graphs' irregularity and have not studied their heterogeneity. To this end, we propose H-GCN, a PL (Programmable Logic) and AIE (AI Engine) based hybrid accelerator that leverages the emerging heterogeneity of Xilinx Versal Adaptive Compute Acceleration Platforms (ACAPs) to achieve high-performance GNN inference. In particular, H-GCN partitions each graph into three subgraphs based on its inherent heterogeneity, and processes them using PL and AIE, respectively. To further improve performance, we explore the sparsity support of AIE and develop an efficient density-aware method to automatically map tiles of sparse matrix-matrix multiplication (SpMM) onto the systolic tensor array. Compared with state-of-the-art GCN accelerators, H-GCN achieves, on average, speedups of 1.1~2.3X.
2205.00638
Chenchen Ding
Chenchen Ding
A Two Parameters Equation for Word Rank-Frequency Relation
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Let $f (\cdot)$ be the absolute frequency of words and $r$ be the rank of words in decreasing order of frequency, then the following function can fit the rank-frequency relation \[ f (r;s,t) = \left(\frac{r_{\tt max}}{r}\right)^{1-s} \left(\frac{r_{\tt max}+t \cdot r_{\tt exp}}{r+t \cdot r_{\tt exp}}\right)^{1+(1+t)s} \] where $r_{\tt max}$ and $r_{\tt exp}$ are the maximum and the expectation of the rank, respectively; $s>0$ and $t>0$ are parameters estimated from data. On well-behaved data, there should be $s<1$ and $s \cdot t < 1$.
[ { "created": "Mon, 2 May 2022 04:07:59 GMT", "version": "v1" } ]
2022-05-03
[ [ "Ding", "Chenchen", "" ] ]
Let $f (\cdot)$ be the absolute frequency of words and $r$ be the rank of words in decreasing order of frequency, then the following function can fit the rank-frequency relation \[ f (r;s,t) = \left(\frac{r_{\tt max}}{r}\right)^{1-s} \left(\frac{r_{\tt max}+t \cdot r_{\tt exp}}{r+t \cdot r_{\tt exp}}\right)^{1+(1+t)s} \] where $r_{\tt max}$ and $r_{\tt exp}$ are the maximum and the expectation of the rank, respectively; $s>0$ and $t>0$ are parameters estimated from data. On well-behaved data, there should be $s<1$ and $s \cdot t < 1$.
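The two-parameter equation in the abstract above can be sketched directly in code. This is an illustrative implementation only; the parameter values below ($s=0.3$, $t=0.5$, and the choices of $r_{\tt max}$ and $r_{\tt exp}$) are hypothetical, since in the paper $s$ and $t$ are estimated from data.

```python
import numpy as np

def rank_frequency(r, r_max, r_exp, s, t):
    """Two-parameter rank-frequency curve from the abstract:
    f(r; s, t) = (r_max/r)^(1-s) * ((r_max + t*r_exp)/(r + t*r_exp))^(1+(1+t)*s)
    """
    return (r_max / r) ** (1 - s) * (
        (r_max + t * r_exp) / (r + t * r_exp)
    ) ** (1 + (1 + t) * s)

# Illustrative parameters (hypothetical, not fitted): well-behaved data
# should satisfy s < 1 and s*t < 1 per the abstract.
r_max, r_exp = 1000, 250.0
ranks = np.arange(1, r_max + 1)
f = rank_frequency(ranks, r_max, r_exp, s=0.3, t=0.5)
```

Note that both factors equal 1 at $r = r_{\tt max}$, so $f(r_{\tt max}) = 1$, and with $s < 1$ both factors decrease in $r$, so the curve is monotonically decreasing, as a rank-frequency relation should be.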
1301.0803
Zhen Liu
Zhen Liu, Jia-Lin He, Jaideep Srivastava
Cliques in complex networks reveal link formation and community evolution
null
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Missing link prediction in undirected and unweighted networks is an open and challenging problem which has been studied intensively in recent years. In this paper, we studied the relationships between community structure and link formation and proposed a Fast Block probabilistic Model (FBM). In experiments on four real-world networks, we achieved very good accuracy in missing link prediction and a huge improvement in computing efficiency compared to conventional methods. By analyzing the mechanism of link formation, we also discovered that clique structure plays a significant role in helping us understand how links grow in communities. Therefore, we summarized three principles which are proved to be able to explain well the mechanism of link formation and network evolution from the theory of graph topology.
[ { "created": "Fri, 4 Jan 2013 18:56:45 GMT", "version": "v1" }, { "created": "Wed, 6 Mar 2013 00:21:17 GMT", "version": "v2" } ]
2013-03-07
[ [ "Liu", "Zhen", "" ], [ "He", "Jia-Lin", "" ], [ "Srivastava", "Jaideep", "" ] ]
Missing link prediction in undirected and unweighted networks is an open and challenging problem which has been studied intensively in recent years. In this paper, we studied the relationships between community structure and link formation and proposed a Fast Block probabilistic Model (FBM). In experiments on four real-world networks, we achieved very good accuracy in missing link prediction and a huge improvement in computing efficiency compared to conventional methods. By analyzing the mechanism of link formation, we also discovered that clique structure plays a significant role in helping us understand how links grow in communities. Therefore, we summarized three principles which are proved to be able to explain well the mechanism of link formation and network evolution from the theory of graph topology.
1809.08372
Matthew Valenti
Enass Hriba and Matthew C. Valenti
The Impact of Correlated Blocking on Millimeter-Wave Personal Networks
7 pages, 8 figures, in IEEE Military Commun. Conf. (MILCOM), 2018
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to its potential to support high data rates at low latency with reasonable interference isolation, millimeter-wave (mmWave) communications has emerged as a promising solution for wireless personal-area networks (WPAN) and an enabler for emerging applications such as high-resolution untethered virtual reality. At mmWave, signals are prone to blockage by objects in the environment, including human bodies. Most mmWave systems utilize directional antennas in order to overcome the significant path loss. In this paper, we consider the effects of blockage and antenna directivity on the performance of a mmWave WPAN. Similar to related work, we assume that the interferers are in arbitrary locations and the blockages are drawn from a random point process. However, unlike related work that assumes independent blocking, we carefully account for the possibility of correlated blocking, which arises when two interferers are close to each other and therefore an obstruction that blocks the first interferer may likely block the second interferer. Closed form expressions for the blockage correlation coefficient and the distribution of the SINR are provided for the case of two dominant interferers and a fixed number of blockages drawn from a binomial point process. Finally, the effects of antenna directivity and the spatial randomness of the interferers are taken into account, resulting in SINR curves that fully account for correlated blocking, which are compared against curves that neglect correlation. The results provide insight into the validity of the commonly held assumption of independent blocking and the improved accuracy that can be obtained when the blocking correlation is taken into account.
[ { "created": "Sat, 22 Sep 2018 02:53:01 GMT", "version": "v1" } ]
2018-09-25
[ [ "Hriba", "Enass", "" ], [ "Valenti", "Matthew C.", "" ] ]
Due to its potential to support high data rates at low latency with reasonable interference isolation, millimeter-wave (mmWave) communications has emerged as a promising solution for wireless personal-area networks (WPAN) and an enabler for emerging applications such as high-resolution untethered virtual reality. At mmWave, signals are prone to blockage by objects in the environment, including human bodies. Most mmWave systems utilize directional antennas in order to overcome the significant path loss. In this paper, we consider the effects of blockage and antenna directivity on the performance of a mmWave WPAN. Similar to related work, we assume that the interferers are in arbitrary locations and the blockages are drawn from a random point process. However, unlike related work that assumes independent blocking, we carefully account for the possibility of correlated blocking, which arises when two interferers are close to each other and therefore an obstruction that blocks the first interferer may likely block the second interferer. Closed form expressions for the blockage correlation coefficient and the distribution of the SINR are provided for the case of two dominant interferers and a fixed number of blockages drawn from a binomial point process. Finally, the effects of antenna directivity and the spatial randomness of the interferers are taken into account, resulting in SINR curves that fully account for correlated blocking, which are compared against curves that neglect correlation. The results provide insight into the validity of the commonly held assumption of independent blocking and the improved accuracy that can be obtained when the blocking correlation is taken into account.
2403.01683
Qingyao Tian
Qingyao Tian, Huai Liao, Xinyan Huang, Jian Chen, Zihui Zhang, Bingyu Yang, Sebastien Ourselin and Hongbin Liu
DD-VNB: A Depth-based Dual-Loop Framework for Real-time Visually Navigated Bronchoscopy
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-time 6 DOF localization of bronchoscopes is crucial for enhancing intervention quality. However, current vision-based technologies struggle to balance between generalization to unseen data and computational speed. In this study, we propose a Depth-based Dual-Loop framework for real-time Visually Navigated Bronchoscopy (DD-VNB) that can generalize across patient cases without the need of re-training. The DD-VNB framework integrates two key modules: depth estimation and dual-loop localization. To address the domain gap among patients, we propose a knowledge-embedded depth estimation network that maps endoscope frames to depth, ensuring generalization by eliminating patient-specific textures. The network embeds view synthesis knowledge into a cycle adversarial architecture for scale-constrained monocular depth estimation. For real-time performance, our localization module embeds a fast ego-motion estimation network into the loop of depth registration. The ego-motion inference network estimates the pose change of the bronchoscope at high frequency while depth registration against the pre-operative 3D model provides absolute pose periodically. Specifically, the relative pose changes are fed into the registration process as the initial guess to boost its accuracy and speed. Experiments on phantom and in-vivo data from patients demonstrate the effectiveness of our framework: 1) monocular depth estimation outperforms SOTA, 2) localization achieves an accuracy of Absolute Tracking Error (ATE) of 4.7 $\pm$ 3.17 mm in phantom and 6.49 $\pm$ 3.88 mm in patient data, 3) with a frame-rate approaching video capture speed, 4) without the necessity of case-wise network retraining. The framework's superior speed and accuracy demonstrate its promising clinical potential for real-time bronchoscopic navigation.
[ { "created": "Mon, 4 Mar 2024 02:29:02 GMT", "version": "v1" }, { "created": "Fri, 15 Mar 2024 07:25:48 GMT", "version": "v2" } ]
2024-03-18
[ [ "Tian", "Qingyao", "" ], [ "Liao", "Huai", "" ], [ "Huang", "Xinyan", "" ], [ "Chen", "Jian", "" ], [ "Zhang", "Zihui", "" ], [ "Yang", "Bingyu", "" ], [ "Ourselin", "Sebastien", "" ], [ "Liu", "Hongbin", "" ] ]
Real-time 6 DOF localization of bronchoscopes is crucial for enhancing intervention quality. However, current vision-based technologies struggle to balance between generalization to unseen data and computational speed. In this study, we propose a Depth-based Dual-Loop framework for real-time Visually Navigated Bronchoscopy (DD-VNB) that can generalize across patient cases without the need of re-training. The DD-VNB framework integrates two key modules: depth estimation and dual-loop localization. To address the domain gap among patients, we propose a knowledge-embedded depth estimation network that maps endoscope frames to depth, ensuring generalization by eliminating patient-specific textures. The network embeds view synthesis knowledge into a cycle adversarial architecture for scale-constrained monocular depth estimation. For real-time performance, our localization module embeds a fast ego-motion estimation network into the loop of depth registration. The ego-motion inference network estimates the pose change of the bronchoscope at high frequency while depth registration against the pre-operative 3D model provides absolute pose periodically. Specifically, the relative pose changes are fed into the registration process as the initial guess to boost its accuracy and speed. Experiments on phantom and in-vivo data from patients demonstrate the effectiveness of our framework: 1) monocular depth estimation outperforms SOTA, 2) localization achieves an accuracy of Absolute Tracking Error (ATE) of 4.7 $\pm$ 3.17 mm in phantom and 6.49 $\pm$ 3.88 mm in patient data, 3) with a frame-rate approaching video capture speed, 4) without the necessity of case-wise network retraining. The framework's superior speed and accuracy demonstrate its promising clinical potential for real-time bronchoscopic navigation.
1510.04132
Rossi Kamal Mr
Rossi Kamal, Choong Seon Hong
Connected Big Data Measurement
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
In this paper, we have summarized how a resilient Big Data monetization scheme outperforms state-of-the-art schemes by maintaining a balance between CDS size and routing.
[ { "created": "Wed, 14 Oct 2015 14:55:41 GMT", "version": "v1" } ]
2015-10-15
[ [ "Kamal", "Rossi", "" ], [ "Hong", "Choong Seon", "" ] ]
In this paper, we have summarized how a resilient Big Data monetization scheme outperforms state-of-the-art schemes by maintaining a balance between CDS size and routing.
2004.09900
Harvineet Singh
Harvineet Singh, Moumita Sinha, Atanu R. Sinha, Sahil Garg, Neha Banerjee
An RNN-Survival Model to Decide Email Send Times
11 pages, 3 figures, 2 tables
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Email communications are ubiquitous. Firms control send times of emails and thereby the instants at which emails reach recipients (it is assumed email is received instantaneously from the send time). However, they do not control the duration it takes for recipients to open emails, labeled as time-to-open. Importantly, among emails that are opened, most occur within a short window from their send times. We posit that emails are likely to be opened sooner when send times are convenient for recipients, while for other send times, emails can get ignored. Thus, to compute appropriate send times it is important to predict times-to-open accurately. We propose a recurrent neural network (RNN) in a survival model framework to predict times-to-open, for each recipient. Using that we compute appropriate send times. We experiment on a data set of emails sent to a million customers over five months. The sequence of emails received by a person from a sender is a result of interactions with past emails from the sender, and hence contains useful signal that informs our model. This sequential dependence affords our proposed RNN-Survival (RNN-S) approach to outperform survival analysis approaches in predicting times-to-open. We show that best times to send emails can be computed accurately from predicted times-to-open. This approach allows a firm to tune send times of emails, which is in its control, to favorably influence open rates and engagement.
[ { "created": "Tue, 21 Apr 2020 10:53:14 GMT", "version": "v1" } ]
2020-04-22
[ [ "Singh", "Harvineet", "" ], [ "Sinha", "Moumita", "" ], [ "Sinha", "Atanu R.", "" ], [ "Garg", "Sahil", "" ], [ "Banerjee", "Neha", "" ] ]
Email communications are ubiquitous. Firms control send times of emails and thereby the instants at which emails reach recipients (it is assumed email is received instantaneously from the send time). However, they do not control the duration it takes for recipients to open emails, labeled as time-to-open. Importantly, among emails that are opened, most occur within a short window from their send times. We posit that emails are likely to be opened sooner when send times are convenient for recipients, while for other send times, emails can get ignored. Thus, to compute appropriate send times it is important to predict times-to-open accurately. We propose a recurrent neural network (RNN) in a survival model framework to predict times-to-open, for each recipient. Using that we compute appropriate send times. We experiment on a data set of emails sent to a million customers over five months. The sequence of emails received by a person from a sender is a result of interactions with past emails from the sender, and hence contains useful signal that informs our model. This sequential dependence affords our proposed RNN-Survival (RNN-S) approach to outperform survival analysis approaches in predicting times-to-open. We show that best times to send emails can be computed accurately from predicted times-to-open. This approach allows a firm to tune send times of emails, which is in its control, to favorably influence open rates and engagement.
2012.12093
Liangdong Lu
Liangdong Lu, Ruihu Li, Qiang Fu, Chen Xuan, Wenping Ma
Optimal Ternary Linear Complementary Dual Codes
arXiv admin note: substantial text overlap with arXiv:2010.10166
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Linear complementary dual (LCD) codes, introduced by Massey, are codes whose intersections with their dual codes are trivial. They can help improve the security of information processed by sensitive devices, especially against side-channel attacks (SCA) and fault invasive attacks. In this paper, by constructions of puncturing, extending, shortening and combining codes, many good ternary LCD codes are presented. We give Table 1 with the values of $d_{LCD}(n,k)$ for lengths $n \leq 20$. In addition, many of the ternary LCD codes given in this paper are optimal, saturating the lower or upper bounds of Grassl's code table in \cite{Grassl}, and some of them are nearly optimal.
[ { "created": "Thu, 3 Dec 2020 06:18:52 GMT", "version": "v1" }, { "created": "Fri, 25 Dec 2020 09:23:28 GMT", "version": "v2" } ]
2020-12-29
[ [ "Lu", "Liangdong", "" ], [ "Li", "Ruihu", "" ], [ "Fu", "Qiang", "" ], [ "Xuan", "Chen", "" ], [ "Ma", "Wenping", "" ] ]
Linear complementary dual (LCD) codes, introduced by Massey, are codes whose intersections with their dual codes are trivial. They can help improve the security of information processed by sensitive devices, especially against side-channel attacks (SCA) and fault invasive attacks. In this paper, by constructions of puncturing, extending, shortening and combining codes, many good ternary LCD codes are presented. We give Table 1 with the values of $d_{LCD}(n,k)$ for lengths $n \leq 20$. In addition, many of the ternary LCD codes given in this paper are optimal, saturating the lower or upper bounds of Grassl's code table in \cite{Grassl}, and some of them are nearly optimal.
1903.09518
Arthur Gaudron
Gaudron Arthur (CAOR)
Trial of an AI: Empowering people to explore law and science challenges
null
IFIM's International Journal on Law & Regulation of Artificial Intelligence & Robotics, 2019, 1 (1)
null
null
cs.OH cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial Intelligence represents many things: a new market to conquer or a quality label for tech companies, a threat to traditional industries, a menace to democracy, or a blessing for our busy everyday life. The press abounds in examples illustrating these aspects, but one should not draw hasty and premature conclusions. The first successes in AI came as a surprise to society at large, including researchers in the field. Today, after the initial stupefaction, we have examples of the system's reactions: traditional companies are heavily investing in AI, social platforms are monitored during elections, data collection is more and more regulated, etc. The resilience of an organization (i.e., its capacity to resist a shock) relies deeply on its perception of its environment. Future problems have to be anticipated, while unforeseen events have to be quickly identified so that they can be mitigated as fast as possible. The author states that this clear perception starts with a common definition of AI in terms of capacities and limits. AI practitioners should make notions and concepts accessible to the general public and to the impacted fields (e.g., industries, law, education). It is a truism that only law experts have the potential to estimate AI's impacts on the judicial system. However, questions remain on how to connect different kinds of expertise and what level of detail is appropriate for these knowledge exchanges. The same consideration holds for dissemination towards society. Ultimately, society will live with decisions made by the "experts". It sounds wise to involve society in the decision process rather than risk paying the consequences later. Therefore, society also needs the key concepts to understand AI's impact on their lives. This was the purpose of the trial of an AI that took place in October 2018 at the Court of Appeal of Paris: gathering experts from various fields to expose challenges in law and science to a general public.
[ { "created": "Tue, 5 Mar 2019 07:22:29 GMT", "version": "v1" } ]
2019-03-25
[ [ "Arthur", "Gaudron", "", "CAOR" ] ]
Artificial Intelligence represents many things: a new market to conquer or a quality label for tech companies, a threat to traditional industries, a menace to democracy, or a blessing for our busy everyday life. The press abounds in examples illustrating these aspects, but one should not draw hasty and premature conclusions. The first successes in AI came as a surprise to society at large, including researchers in the field. Today, after the initial stupefaction, we have examples of the system's reactions: traditional companies are heavily investing in AI, social platforms are monitored during elections, data collection is more and more regulated, etc. The resilience of an organization (i.e., its capacity to resist a shock) relies deeply on its perception of its environment. Future problems have to be anticipated, while unforeseen events have to be quickly identified so that they can be mitigated as fast as possible. The author states that this clear perception starts with a common definition of AI in terms of capacities and limits. AI practitioners should make notions and concepts accessible to the general public and to the impacted fields (e.g., industries, law, education). It is a truism that only law experts have the potential to estimate AI's impacts on the judicial system. However, questions remain on how to connect different kinds of expertise and what level of detail is appropriate for these knowledge exchanges. The same consideration holds for dissemination towards society. Ultimately, society will live with decisions made by the "experts". It sounds wise to involve society in the decision process rather than risk paying the consequences later. Therefore, society also needs the key concepts to understand AI's impact on their lives. This was the purpose of the trial of an AI that took place in October 2018 at the Court of Appeal of Paris: gathering experts from various fields to expose challenges in law and science to a general public.
2209.02270
F. Serhan Dani\c{s}
F. Serhan Dani\c{s}, A. Teoman Naskali, A. Taylan Cemgil, Cem Ersoy
An Indoor Localization Dataset and Data Collection Framework with High Precision Position Annotation
30 pages
F. Serhan Dani\c{s}, A. Teoman Naskali, A. Taylan Cemgil, Cem Ersoy, "An indoor localization dataset and data collection framework with high precision position annotation", Pervasive and Mobile Computing, Volume 81, 101554, 2022
10.1016/j.pmcj.2022.101554
null
cs.LG cs.CV cs.NI cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
We introduce a novel technique and an associated high-resolution dataset that aim to precisely evaluate wireless signal based indoor positioning algorithms. The technique implements an augmented reality (AR) based positioning system that is used to annotate the wireless signal parameter data samples with high-precision position data. We track the position of a practical and low-cost navigable setup of cameras and a Bluetooth Low Energy (BLE) beacon in an area decorated with AR markers. We maximize the performance of the AR-based localization by using a redundant number of markers. Video streams captured by the cameras are subjected to a series of marker recognition, subset selection and filtering operations to yield highly precise pose estimations. Our results show that we can reduce the positional error of the AR localization system to under 0.05 meters. The position data are then used to annotate the BLE data captured simultaneously by the sensors stationed in the environment, hence constructing a wireless signal data set with ground truth, which allows a wireless signal based localization system to be evaluated accurately.
[ { "created": "Tue, 6 Sep 2022 07:41:11 GMT", "version": "v1" } ]
2022-09-09
[ [ "Daniş", "F. Serhan", "" ], [ "Naskali", "A. Teoman", "" ], [ "Cemgil", "A. Taylan", "" ], [ "Ersoy", "Cem", "" ] ]
We introduce a novel technique and an associated high-resolution dataset that aim to precisely evaluate wireless signal based indoor positioning algorithms. The technique implements an augmented reality (AR) based positioning system that is used to annotate the wireless signal parameter data samples with high-precision position data. We track the position of a practical and low-cost navigable setup of cameras and a Bluetooth Low Energy (BLE) beacon in an area decorated with AR markers. We maximize the performance of the AR-based localization by using a redundant number of markers. Video streams captured by the cameras are subjected to a series of marker recognition, subset selection and filtering operations to yield highly precise pose estimations. Our results show that we can reduce the positional error of the AR localization system to under 0.05 meters. The position data are then used to annotate the BLE data captured simultaneously by the sensors stationed in the environment, hence constructing a wireless signal data set with ground truth, which allows a wireless signal based localization system to be evaluated accurately.
cs/0012022
Neil J. Gunther
Neil J. Gunther
Performance and Scalability Models for a Hypergrowth e-Commerce Web Site
15 pages; To appear in the book entitled "Performance Engineering - State of the Art and Current Trends," Lecture Notes in Computer Science, Springer-Verlag Heidelberg, 2001
null
null
null
cs.PF cs.DC cs.SE
null
The performance of successful Web-based e-commerce services has all the allure of a roller-coaster ride: accelerated fiscal growth combined with the ever-present danger of running out of server capacity. This chapter presents a case study based on the author's own capacity planning engagement with one of the hottest e-commerce Web sites in the world. Several spreadsheet techniques are presented for forecasting both short-term and long-term trends in the consumption of server capacity. Two new performance metrics are introduced for site planning and procurement: the effective demand, and the doubling period.
[ { "created": "Tue, 26 Dec 2000 22:42:39 GMT", "version": "v1" } ]
2007-05-23
[ [ "Gunther", "Neil J.", "" ] ]
The performance of successful Web-based e-commerce services has all the allure of a roller-coaster ride: accelerated fiscal growth combined with the ever-present danger of running out of server capacity. This chapter presents a case study based on the author's own capacity planning engagement with one of the hottest e-commerce Web sites in the world. Several spreadsheet techniques are presented for forecasting both short-term and long-term trends in the consumption of server capacity. Two new performance metrics are introduced for site planning and procurement: the effective demand, and the doubling period.
2108.11204
Lukasz Kucinski
Konrad Czechowski, Tomasz Odrzyg\'o\'zd\'z, Marek Zbysi\'nski, Micha{\l} Zawalski, Krzysztof Olejnik, Yuhuai Wu, {\L}ukasz Kuci\'nski, Piotr Mi{\l}o\'s
Subgoal Search For Complex Reasoning Tasks
NeurIPS 2021
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans excel at solving complex reasoning tasks through a mental process of moving from one idea to a related one. Inspired by this, we propose the Subgoal Search (kSubS) method. Its key component is a learned subgoal generator that produces a diversity of subgoals that are both achievable and closer to the solution. Using subgoals reduces the search space and induces a high-level search graph suitable for efficient planning. In this paper, we implement kSubS using a transformer-based subgoal module coupled with the classical best-first search framework. We show that a simple approach of generating subgoals $k$ steps ahead is surprisingly efficient on three challenging domains: two popular puzzle games, Sokoban and the Rubik's Cube, and an inequality-proving benchmark, INT. kSubS achieves strong results, including state-of-the-art performance on INT, within a modest computational budget.
[ { "created": "Wed, 25 Aug 2021 12:40:04 GMT", "version": "v1" }, { "created": "Thu, 28 Oct 2021 15:35:09 GMT", "version": "v2" }, { "created": "Wed, 3 Apr 2024 15:35:04 GMT", "version": "v3" } ]
2024-04-04
[ [ "Czechowski", "Konrad", "" ], [ "Odrzygóźdź", "Tomasz", "" ], [ "Zbysiński", "Marek", "" ], [ "Zawalski", "Michał", "" ], [ "Olejnik", "Krzysztof", "" ], [ "Wu", "Yuhuai", "" ], [ "Kuciński", "Łukasz", "" ], [ "Miłoś", "Piotr", "" ] ]
Humans excel at solving complex reasoning tasks through a mental process of moving from one idea to a related one. Inspired by this, we propose the Subgoal Search (kSubS) method. Its key component is a learned subgoal generator that produces a diversity of subgoals that are both achievable and closer to the solution. Using subgoals reduces the search space and induces a high-level search graph suitable for efficient planning. In this paper, we implement kSubS using a transformer-based subgoal module coupled with the classical best-first search framework. We show that a simple approach of generating subgoals $k$ steps ahead is surprisingly efficient on three challenging domains: two popular puzzle games, Sokoban and the Rubik's Cube, and an inequality-proving benchmark, INT. kSubS achieves strong results, including state-of-the-art performance on INT, within a modest computational budget.
2104.08450
Xiyun Li
Xiyun Li and Yong Xu and Meng Yu and Shi-Xiong Zhang and Jiaming Xu and Bo Xu and Dong Yu
MIMO Self-attentive RNN Beamformer for Multi-speaker Speech Separation
null
null
null
null
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by/4.0/
Recently, our proposed recurrent neural network (RNN) based all deep learning minimum variance distortionless response (ADL-MVDR) beamformer method yielded superior performance over the conventional MVDR by replacing the matrix inversion and eigenvalue decomposition with two recurrent neural networks. In this work, we present a self-attentive RNN beamformer to further improve our previous RNN-based beamformer by leveraging the powerful modeling capability of self-attention. A temporal-spatial self-attention module is proposed to better learn the beamforming weights from the speech and noise spatial covariance matrices. The temporal self-attention module helps the RNN learn global statistics of the covariance matrices. The spatial self-attention module is designed to attend to the cross-channel correlation in the covariance matrices. Furthermore, a model with multi-channel input, multi-speaker directional features and multi-speaker speech separation outputs (MIMO) is developed to improve the inference efficiency. The evaluations demonstrate that our proposed MIMO self-attentive RNN beamformer improves both the automatic speech recognition (ASR) accuracy and the perceptual evaluation of speech quality (PESQ) over prior art.
[ { "created": "Sat, 17 Apr 2021 05:02:04 GMT", "version": "v1" }, { "created": "Mon, 26 Apr 2021 08:18:36 GMT", "version": "v2" } ]
2021-04-27
[ [ "Li", "Xiyun", "" ], [ "Xu", "Yong", "" ], [ "Yu", "Meng", "" ], [ "Zhang", "Shi-Xiong", "" ], [ "Xu", "Jiaming", "" ], [ "Xu", "Bo", "" ], [ "Yu", "Dong", "" ] ]
Recently, our proposed recurrent neural network (RNN) based all deep learning minimum variance distortionless response (ADL-MVDR) beamformer method yielded superior performance over the conventional MVDR by replacing the matrix inversion and eigenvalue decomposition with two recurrent neural networks. In this work, we present a self-attentive RNN beamformer to further improve our previous RNN-based beamformer by leveraging the powerful modeling capability of self-attention. A temporal-spatial self-attention module is proposed to better learn the beamforming weights from the speech and noise spatial covariance matrices. The temporal self-attention module helps the RNN learn global statistics of the covariance matrices. The spatial self-attention module is designed to attend to the cross-channel correlation in the covariance matrices. Furthermore, a model with multi-channel input, multi-speaker directional features and multi-speaker speech separation outputs (MIMO) is developed to improve the inference efficiency. The evaluations demonstrate that our proposed MIMO self-attentive RNN beamformer improves both the automatic speech recognition (ASR) accuracy and the perceptual evaluation of speech quality (PESQ) over prior art.
1601.01298
Hamideh Vosoughpour Yazdchi
Anna Lubiw, Jack Snoeyink, Hamideh Vosoughpour
Visibility Graphs, Dismantlability, and the Cops and Robbers Game
23 pages
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study versions of cop and robber pursuit-evasion games on the visibility graphs of polygons, and inside polygons with straight and curved sides. Each player has full information about the other player's location, players take turns, and the robber is captured when the cop arrives at the same point as the robber. In visibility graphs we show the cop can always win because visibility graphs are dismantlable, which is interesting as one of the few results relating visibility graphs to other known graph classes. We extend this to show that the cop wins games in which players move along straight line segments inside any polygon and, more generally, inside any simply connected planar region with a reasonable boundary. Essentially, our problem is a type of pursuit-evasion using the link metric rather than the Euclidean metric, and our result provides an interesting class of infinite cop-win graphs.
[ { "created": "Wed, 6 Jan 2016 20:26:31 GMT", "version": "v1" } ]
2016-01-07
[ [ "Lubiw", "Anna", "" ], [ "Snoeyink", "Jack", "" ], [ "Vosoughpour", "Hamideh", "" ] ]
We study versions of cop and robber pursuit-evasion games on the visibility graphs of polygons, and inside polygons with straight and curved sides. Each player has full information about the other player's location, players take turns, and the robber is captured when the cop arrives at the same point as the robber. In visibility graphs we show the cop can always win because visibility graphs are dismantlable, which is interesting as one of the few results relating visibility graphs to other known graph classes. We extend this to show that the cop wins games in which players move along straight line segments inside any polygon and, more generally, inside any simply connected planar region with a reasonable boundary. Essentially, our problem is a type of pursuit-evasion using the link metric rather than the Euclidean metric, and our result provides an interesting class of infinite cop-win graphs.
2202.09559
Zhengqing Miao
Zhengqing Miao, Xin Zhang, Carlo Menon, Yelong Zheng, Meirong Zhao, Dong Ming
Priming Cross-Session Motor Imagery Classification with A Universal Deep Domain Adaptation Framework
17 pages, 5 figures
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motor imagery (MI) is a common brain-computer interface (BCI) paradigm. Because EEG is non-stationary with a low signal-to-noise ratio, classifying the motor imagery tasks of the same participant across different EEG recording sessions is generally challenging, as the EEG data distribution may vary tremendously among different acquisition sessions. Although it is intuitive to consider cross-session MI classification as a domain adaptation problem, the rationale and a feasible approach have not been elucidated. In this paper, we propose a Siamese deep domain adaptation (SDDA) framework for cross-session MI classification based on mathematical models in domain adaptation theory. The proposed framework can be easily applied to most existing artificial neural networks without altering the network structure, which endows our method with great flexibility and transferability. In the proposed framework, domain invariants were first constructed jointly with channel normalization and Euclidean alignment. Then, embedding features from the source and target domains were mapped into the Reproducing Kernel Hilbert Space (RKHS) and aligned accordingly. A cosine-based center loss was also integrated into the framework to improve the generalizability of SDDA. The proposed framework was validated with two classic and popular convolutional neural networks from the BCI research field (EEGNet and ConvNet) on two public MI-EEG datasets (BCI Competition IV IIA and IIB). Compared to the vanilla EEGNet and ConvNet, the proposed SDDA framework was able to boost the MI classification accuracy by 15.2% and 10.2%, respectively, on the IIA dataset, and by 5.5% and 4.2% on the IIB dataset. The final MI classification accuracy reached 82.01% on the IIA dataset and 87.52% on IIB, outperforming the state-of-the-art methods in the literature.
[ { "created": "Sat, 19 Feb 2022 09:30:08 GMT", "version": "v1" }, { "created": "Wed, 26 Jul 2023 01:36:38 GMT", "version": "v2" } ]
2023-07-27
[ [ "Miao", "Zhengqing", "" ], [ "Zhang", "Xin", "" ], [ "Menon", "Carlo", "" ], [ "Zheng", "Yelong", "" ], [ "Zhao", "Meirong", "" ], [ "Ming", "Dong", "" ] ]
Motor imagery (MI) is a common brain-computer interface (BCI) paradigm. Because EEG is non-stationary with a low signal-to-noise ratio, classifying the motor imagery tasks of the same participant across different EEG recording sessions is generally challenging, as the EEG data distribution may vary tremendously among different acquisition sessions. Although it is intuitive to consider cross-session MI classification as a domain adaptation problem, the rationale and a feasible approach have not been elucidated. In this paper, we propose a Siamese deep domain adaptation (SDDA) framework for cross-session MI classification based on mathematical models in domain adaptation theory. The proposed framework can be easily applied to most existing artificial neural networks without altering the network structure, which endows our method with great flexibility and transferability. In the proposed framework, domain invariants were first constructed jointly with channel normalization and Euclidean alignment. Then, embedding features from the source and target domains were mapped into the Reproducing Kernel Hilbert Space (RKHS) and aligned accordingly. A cosine-based center loss was also integrated into the framework to improve the generalizability of SDDA. The proposed framework was validated with two classic and popular convolutional neural networks from the BCI research field (EEGNet and ConvNet) on two public MI-EEG datasets (BCI Competition IV IIA and IIB). Compared to the vanilla EEGNet and ConvNet, the proposed SDDA framework was able to boost the MI classification accuracy by 15.2% and 10.2%, respectively, on the IIA dataset, and by 5.5% and 4.2% on the IIB dataset. The final MI classification accuracy reached 82.01% on the IIA dataset and 87.52% on IIB, outperforming the state-of-the-art methods in the literature.
2403.05399
Daniele Meli
Cristian Morasso, Daniele Meli, Yann Divet, Salvatore Sessa, Alessandro Farinelli
Planning and Inverse Kinematics of Hyper-Redundant Manipulators with VO-FABRIK
In publication in Springer Proceedings for the European Robotics Forum 2024
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Hyper-redundant Robotic Manipulators (HRMs) offer great dexterity and flexibility of operation, but solving Inverse Kinematics (IK) for them is challenging. In this work, we introduce VO-FABRIK, an algorithm combining Forward and Backward Reaching Inverse Kinematics (FABRIK) for repeatable deterministic IK computation with an approach inspired by velocity obstacles to perform path planning under collision and joint limit constraints. We show preliminary results on an industrial HRM with 19 actuated joints. Our algorithm achieves good performance where a state-of-the-art IK solver fails.
[ { "created": "Fri, 8 Mar 2024 15:53:03 GMT", "version": "v1" } ]
2024-03-11
[ [ "Morasso", "Cristian", "" ], [ "Meli", "Daniele", "" ], [ "Divet", "Yann", "" ], [ "Sessa", "Salvatore", "" ], [ "Farinelli", "Alessandro", "" ] ]
Hyper-redundant Robotic Manipulators (HRMs) offer great dexterity and flexibility of operation, but solving Inverse Kinematics (IK) for them is challenging. In this work, we introduce VO-FABRIK, an algorithm combining Forward and Backward Reaching Inverse Kinematics (FABRIK) for repeatable deterministic IK computation with an approach inspired by velocity obstacles to perform path planning under collision and joint limit constraints. We show preliminary results on an industrial HRM with 19 actuated joints. Our algorithm achieves good performance where a state-of-the-art IK solver fails.
1911.00399
Joshua Chen
Joshua Chen
An Implementation of Homotopy Type Theory in Isabelle/Pure
Master's thesis
null
null
null
cs.LO math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this Master's thesis we present an implementation of a fragment of "book HoTT" as an object logic for the interactive proof assistant Isabelle. We also give a mathematical description of the underlying theory of the Isabelle/Pure logical framework, and discuss various issues and design decisions that arise when attempting to encode intensional dependent type theory with universes inside a simple type-theoretic logical foundation.
[ { "created": "Thu, 31 Oct 2019 14:46:31 GMT", "version": "v1" } ]
2019-11-04
[ [ "Chen", "Joshua", "" ] ]
In this Master's thesis we present an implementation of a fragment of "book HoTT" as an object logic for the interactive proof assistant Isabelle. We also give a mathematical description of the underlying theory of the Isabelle/Pure logical framework, and discuss various issues and design decisions that arise when attempting to encode intensional dependent type theory with universes inside a simple type-theoretic logical foundation.
2112.03340
Yiren Jian
Yiren Jian, Lorenzo Torresani
Label Hallucination for Few-Shot Classification
Accepted by AAAI 2022. Code is available: https://github.com/yiren-jian/LabelHalluc
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Few-shot classification requires adapting knowledge learned from a large annotated base dataset to recognize novel unseen classes, each represented by few labeled examples. In such a scenario, pretraining a network with high capacity on the large dataset and then finetuning it on the few examples causes severe overfitting. At the same time, training a simple linear classifier on top of "frozen" features learned from the large labeled dataset fails to adapt the model to the properties of the novel classes, effectively inducing underfitting. In this paper we propose an alternative approach to both of these two popular strategies. First, our method pseudo-labels the entire large dataset using the linear classifier trained on the novel classes. This effectively "hallucinates" the novel classes in the large dataset, despite the novel categories not being present in the base database (novel and base classes are disjoint). Then, it finetunes the entire model with a distillation loss on the pseudo-labeled base examples, in addition to the standard cross-entropy loss on the novel dataset. This step effectively trains the network to recognize contextual and appearance cues that are useful for the novel-category recognition but using the entire large-scale base dataset and thus overcoming the inherent data-scarcity problem of few-shot learning. Despite the simplicity of the approach, we show that our method outperforms the state-of-the-art on four well-established few-shot classification benchmarks.
[ { "created": "Mon, 6 Dec 2021 20:18:41 GMT", "version": "v1" } ]
2021-12-08
[ [ "Jian", "Yiren", "" ], [ "Torresani", "Lorenzo", "" ] ]
Few-shot classification requires adapting knowledge learned from a large annotated base dataset to recognize novel unseen classes, each represented by few labeled examples. In such a scenario, pretraining a network with high capacity on the large dataset and then finetuning it on the few examples causes severe overfitting. At the same time, training a simple linear classifier on top of "frozen" features learned from the large labeled dataset fails to adapt the model to the properties of the novel classes, effectively inducing underfitting. In this paper we propose an alternative approach to both of these two popular strategies. First, our method pseudo-labels the entire large dataset using the linear classifier trained on the novel classes. This effectively "hallucinates" the novel classes in the large dataset, despite the novel categories not being present in the base database (novel and base classes are disjoint). Then, it finetunes the entire model with a distillation loss on the pseudo-labeled base examples, in addition to the standard cross-entropy loss on the novel dataset. This step effectively trains the network to recognize contextual and appearance cues that are useful for the novel-category recognition but using the entire large-scale base dataset and thus overcoming the inherent data-scarcity problem of few-shot learning. Despite the simplicity of the approach, we show that our method outperforms the state-of-the-art on four well-established few-shot classification benchmarks.
2408.01269
Lutao Jiang
Lutao Jiang, Hangyu Li and Lin Wang
A General Framework to Boost 3D GS Initialization for Text-to-3D Generation by Lexical Richness
null
ACM MM 2024
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Text-to-3D content creation has recently received much attention, especially with the prevalence of 3D Gaussian Splatting. In general, GS-based methods comprise two key stages: initialization and rendering optimization. To achieve initialization, existing works directly apply random sphere initialization or 3D diffusion models, e.g., Point-E, to derive the initial shapes. However, such strategies suffer from two critical yet challenging problems: 1) the final shapes are still similar to the initial ones even after training; 2) shapes can be produced only from simple texts, e.g., "a dog", not from lexically richer texts, e.g., "a dog is sitting on the top of the airplane". To address these problems, this paper proposes a novel general framework to boost 3D GS initialization for text-to-3D generation with respect to lexical richness. Our key idea is to aggregate 3D Gaussians into spatially uniform voxels to represent complex shapes while enabling spatial interaction among the 3D Gaussians and semantic interaction between the Gaussians and texts. Specifically, we first construct a voxelized representation, where each voxel holds a 3D Gaussian with its position, scale, and rotation fixed while setting opacity as the sole factor to determine a position's occupancy. We then design an initialization network mainly consisting of two novel components: 1) a Global Information Perception (GIP) block and 2) a Gaussians-Text Fusion (GTF) block. Such a design enables each 3D Gaussian to assimilate spatial information from other areas and semantic information from texts. Extensive experiments show the superiority of our framework for high-quality 3D GS initialization over existing methods, e.g., Shap-E, on lexically simple, medium, and hard texts. Also, our framework can be seamlessly plugged into SoTA training frameworks, e.g., LucidDreamer, for semantically consistent text-to-3D generation.
[ { "created": "Fri, 2 Aug 2024 13:46:15 GMT", "version": "v1" } ]
2024-08-05
[ [ "Jiang", "Lutao", "" ], [ "Li", "Hangyu", "" ], [ "Wang", "Lin", "" ] ]
Text-to-3D content creation has recently received much attention, especially with the prevalence of 3D Gaussian Splatting. In general, GS-based methods comprise two key stages: initialization and rendering optimization. To achieve initialization, existing works directly apply random sphere initialization or 3D diffusion models, e.g., Point-E, to derive the initial shapes. However, such strategies suffer from two critical yet challenging problems: 1) the final shapes are still similar to the initial ones even after training; 2) shapes can be produced only from simple texts, e.g., "a dog", not from lexically richer texts, e.g., "a dog is sitting on the top of the airplane". To address these problems, this paper proposes a novel general framework to boost 3D GS initialization for text-to-3D generation with respect to lexical richness. Our key idea is to aggregate 3D Gaussians into spatially uniform voxels to represent complex shapes while enabling spatial interaction among the 3D Gaussians and semantic interaction between the Gaussians and texts. Specifically, we first construct a voxelized representation, where each voxel holds a 3D Gaussian with its position, scale, and rotation fixed while setting opacity as the sole factor to determine a position's occupancy. We then design an initialization network mainly consisting of two novel components: 1) a Global Information Perception (GIP) block and 2) a Gaussians-Text Fusion (GTF) block. Such a design enables each 3D Gaussian to assimilate spatial information from other areas and semantic information from texts. Extensive experiments show the superiority of our framework for high-quality 3D GS initialization over existing methods, e.g., Shap-E, on lexically simple, medium, and hard texts. Also, our framework can be seamlessly plugged into SoTA training frameworks, e.g., LucidDreamer, for semantically consistent text-to-3D generation.
2003.05785
Davoud Mougouei
Davoud Mougouei and David M W Powers
Dependency-Aware Software Requirements Selection using Fuzzy Graphs and Integer Programming
arXiv admin note: text overlap with arXiv:2003.04806
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software requirements selection aims to find an optimal subset of the requirements with the highest value while respecting the project constraints. But the value of a requirement may depend on the presence or absence of other requirements in the optimal subset. Such Value Dependencies, however, are imprecise and hard to capture. In this paper, we propose a method based on integer programming and fuzzy graphs to account for value dependencies and their imprecision in software requirements selection. The proposed method, referred to as Dependency-Aware Software Requirements Selection (DARS), comprises three components: (i) an automated technique for the identification of value dependencies from user preferences, (ii) a modeling technique based on fuzzy graphs that allows for capturing the imprecision of value dependencies, and (iii) an Integer Linear Programming (ILP) model that takes into account user preferences and the value dependencies identified from those preferences to reduce the risk of value loss in software projects. Our work is verified by studying a real-world software project. The results show that our proposed method reduces the value loss in software projects and is scalable to large requirement sets.
[ { "created": "Wed, 11 Mar 2020 02:09:34 GMT", "version": "v1" } ]
2020-03-13
[ [ "Mougouei", "Davoud", "" ], [ "Powers", "David M W", "" ] ]
Software requirements selection aims to find an optimal subset of the requirements with the highest value while respecting the project constraints. But the value of a requirement may depend on the presence or absence of other requirements in the optimal subset. Such Value Dependencies, however, are imprecise and hard to capture. In this paper, we propose a method based on integer programming and fuzzy graphs to account for value dependencies and their imprecision in software requirements selection. The proposed method, referred to as Dependency-Aware Software Requirements Selection (DARS), comprises three components: (i) an automated technique for the identification of value dependencies from user preferences, (ii) a modeling technique based on fuzzy graphs that allows for capturing the imprecision of value dependencies, and (iii) an Integer Linear Programming (ILP) model that takes into account user preferences and the value dependencies identified from those preferences to reduce the risk of value loss in software projects. Our work is verified by studying a real-world software project. The results show that our proposed method reduces the value loss in software projects and is scalable to large requirement sets.
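As an illustrative aside, the core optimization problem the DARS abstract describes (pick requirements under a cost budget while pairwise value dependencies shift the total value) can be sketched with a tiny exhaustive search. This is a toy sketch, not the paper's ILP or fuzzy-graph formulation; all requirement names, values, costs, and dependency weights below are made up for illustration.

```python
from itertools import combinations

# Toy data (illustrative, not from the paper): base values, costs, and
# pairwise value dependencies between requirements.
values = {"r1": 10, "r2": 6, "r3": 8}
costs = {"r1": 5, "r2": 3, "r3": 4}
# dependency[(a, b)]: extra value gained (or lost, if negative) when
# both a and b end up in the selected subset.
dependency = {("r1", "r2"): 4, ("r2", "r3"): -3}

def subset_value(subset):
    """Total value of a subset, including pairwise value dependencies."""
    total = sum(values[r] for r in subset)
    for (a, b), delta in dependency.items():
        if a in subset and b in subset:
            total += delta
    return total

def best_selection(reqs, budget):
    """Exhaustively search all subsets whose cost fits the budget."""
    best, best_val = frozenset(), 0
    for k in range(len(reqs) + 1):
        for combo in combinations(reqs, k):
            s = frozenset(combo)
            if sum(costs[r] for r in s) <= budget:
                v = subset_value(s)
                if v > best_val:
                    best, best_val = s, v
    return best, best_val
```

With a budget of 8, the search prefers {r1, r2} (value 10 + 6 + 4 = 20) over the individually higher-value pairing with r3, showing how dependencies change the optimum; a real solver replaces this exponential enumeration with an ILP.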
1106.0669
M. L. Ginsberg
M. L. Ginsberg
GIB: Imperfect Information in a Computationally Challenging Game
null
Journal Of Artificial Intelligence Research, Volume 14, pages 303-358, 2001
10.1613/jair.820
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the problems arising in the construction of a program to play the game of contract bridge. These problems include both the difficulty of solving the game's perfect information variant, and techniques needed to address the fact that bridge is not, in fact, a perfect information game. GIB, the program being described, involves five separate technical advances: partition search, the practical application of Monte Carlo techniques to realistic problems, a focus on achievable sets to solve problems inherent in the Monte Carlo approach, an extension of alpha-beta pruning from total orders to arbitrary distributive lattices, and the use of squeaky wheel optimization to find approximately optimal solutions to cardplay problems. GIB is currently believed to be of approximately expert caliber, and is currently the strongest computer bridge program in the world.
[ { "created": "Fri, 3 Jun 2011 14:53:55 GMT", "version": "v1" } ]
2011-06-06
[ [ "Ginsberg", "M. L.", "" ] ]
This paper investigates the problems arising in the construction of a program to play the game of contract bridge. These problems include both the difficulty of solving the game's perfect information variant, and techniques needed to address the fact that bridge is not, in fact, a perfect information game. GIB, the program being described, involves five separate technical advances: partition search, the practical application of Monte Carlo techniques to realistic problems, a focus on achievable sets to solve problems inherent in the Monte Carlo approach, an extension of alpha-beta pruning from total orders to arbitrary distributive lattices, and the use of squeaky wheel optimization to find approximately optimal solutions to cardplay problems. GIB is currently believed to be of approximately expert caliber, and is currently the strongest computer bridge program in the world.
2009.10515
Kostas Kolomvatsos
Panagiotis Oikonomou, Kostas Kolomvatsos, Nikos Tziritas, Georgios Theodoropoulos, Thanasis Loukopoulos, Georgios Stamoulis
A Fuzzy Logic Controller for Tasks Scheduling Using Unreliable Cloud Resources
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Cloud infrastructure offers end users a broad set of heterogeneous computational resources under the pay-as-you-go model. These virtualized resources can be provisioned using different pricing models, such as the unreliable model, where resources are provided at a fraction of the cost but with no guarantee of uninterrupted processing. However, this enormous gamut of opportunities comes with a great caveat, as resource management and scheduling decisions become increasingly complicated. Moreover, the uncertainty in optimally selecting resources also has a negative impact on the quality of solutions delivered by scheduling algorithms. In this paper, we present a dynamic scheduling algorithm (i.e., the Uncertainty-Driven Scheduling - UDS algorithm) for the management of scientific workflows in the Cloud. Our model minimizes both the makespan and the monetary cost by dynamically selecting reliable or unreliable virtualized resources. To cover the uncertainty in decision making, we adopt a Fuzzy Logic Controller (FLC) to derive the pricing model of the resources that will host each task. We evaluate the performance of the proposed algorithm using real workflow applications tested under different probabilities regarding the revocation of unreliable resources. Numerical results depict the performance of the proposed approach, and a comparative assessment reveals the position of the paper in the relevant literature.
[ { "created": "Tue, 22 Sep 2020 13:15:19 GMT", "version": "v1" } ]
2020-09-23
[ [ "Oikonomou", "Panagiotis", "" ], [ "Kolomvatsos", "Kostas", "" ], [ "Tziritas", "Nikos", "" ], [ "Theodoropoulos", "Georgios", "" ], [ "Loukopoulos", "Thanasis", "" ], [ "Stamoulis", "Georgios", "" ] ]
The Cloud infrastructure offers end users a broad set of heterogeneous computational resources under the pay-as-you-go model. These virtualized resources can be provisioned using different pricing models, such as the unreliable model, where resources are provided at a fraction of the cost but with no guarantee of uninterrupted processing. However, this enormous gamut of opportunities comes with a great caveat, as resource management and scheduling decisions become increasingly complicated. Moreover, the uncertainty in optimally selecting resources also has a negative impact on the quality of solutions delivered by scheduling algorithms. In this paper, we present a dynamic scheduling algorithm (i.e., the Uncertainty-Driven Scheduling - UDS algorithm) for the management of scientific workflows in the Cloud. Our model minimizes both the makespan and the monetary cost by dynamically selecting reliable or unreliable virtualized resources. To cover the uncertainty in decision making, we adopt a Fuzzy Logic Controller (FLC) to derive the pricing model of the resources that will host each task. We evaluate the performance of the proposed algorithm using real workflow applications tested under different probabilities regarding the revocation of unreliable resources. Numerical results depict the performance of the proposed approach, and a comparative assessment reveals the position of the paper in the relevant literature.
1005.0052
Byung-Hak Kim
Byung-Hak Kim and Henry D. Pfister
On the Joint Decoding of LDPC Codes and Finite-State Channels via Linear Programming
To appear in Proc. 2010 IEEE Int. Symp. Information Theory, Austin, TX, June 12-18, 2010 (a small error in the reference corrected)
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the linear programming (LP) decoder for binary linear codes, introduced by Feldman et al., is extended to joint decoding of binary-input finite-state channels. In particular, we provide a rigorous definition of LP joint-decoding pseudo-codewords (JD-PCWs) that enables evaluation of the pairwise error probability between codewords and JD-PCWs. This leads naturally to a provable upper bound on the decoder failure probability. If the channel is a finite-state intersymbol interference channel, then the LP joint decoder also has the maximum-likelihood (ML) certificate property and all integer-valued solutions are codewords. In this case, the performance loss relative to ML decoding can be explained completely by fractional-valued JD-PCWs.
[ { "created": "Sat, 1 May 2010 07:30:55 GMT", "version": "v1" }, { "created": "Fri, 7 May 2010 19:32:31 GMT", "version": "v2" }, { "created": "Mon, 7 Jun 2010 20:42:05 GMT", "version": "v3" } ]
2010-06-09
[ [ "Kim", "Byung-Hak", "" ], [ "Pfister", "Henry D.", "" ] ]
In this paper, the linear programming (LP) decoder for binary linear codes, introduced by Feldman et al., is extended to joint decoding of binary-input finite-state channels. In particular, we provide a rigorous definition of LP joint-decoding pseudo-codewords (JD-PCWs) that enables evaluation of the pairwise error probability between codewords and JD-PCWs. This leads naturally to a provable upper bound on the decoder failure probability. If the channel is a finite-state intersymbol interference channel, then the LP joint decoder also has the maximum-likelihood (ML) certificate property and all integer-valued solutions are codewords. In this case, the performance loss relative to ML decoding can be explained completely by fractional-valued JD-PCWs.
2310.13098
Piotr Gramacki
Piotr Gramacki, Kacper Le\'sniara, Kamil Raczycki, Szymon Wo\'zniak, Marcin Przymus, Piotr Szyma\'nski
SRAI: Towards Standardization of Geospatial AI
Accepted for the 6th ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery (GeoAI 2023)
null
10.1145/3615886.3627740
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Spatial Representations for Artificial Intelligence (srai) is a Python library for working with geospatial data. The library can download geospatial data, split a given area into micro-regions using multiple algorithms and train an embedding model using various architectures. It includes baseline models as well as more complex methods from published works. Those capabilities make it possible to use srai in a complete pipeline for geospatial task solving. The proposed library is the first step to standardize the geospatial AI domain toolset. It is fully open-source and published under Apache 2.0 licence.
[ { "created": "Thu, 19 Oct 2023 18:56:04 GMT", "version": "v1" }, { "created": "Mon, 23 Oct 2023 15:03:50 GMT", "version": "v2" } ]
2023-11-22
[ [ "Gramacki", "Piotr", "" ], [ "Leśniara", "Kacper", "" ], [ "Raczycki", "Kamil", "" ], [ "Woźniak", "Szymon", "" ], [ "Przymus", "Marcin", "" ], [ "Szymański", "Piotr", "" ] ]
Spatial Representations for Artificial Intelligence (srai) is a Python library for working with geospatial data. The library can download geospatial data, split a given area into micro-regions using multiple algorithms and train an embedding model using various architectures. It includes baseline models as well as more complex methods from published works. Those capabilities make it possible to use srai in a complete pipeline for geospatial task solving. The proposed library is the first step to standardize the geospatial AI domain toolset. It is fully open-source and published under Apache 2.0 licence.
2203.04142
Latif Salum
Latif Salum
A Reply to "On Salum's Algorithm for X3SAT"
null
null
null
null
cs.CC
http://creativecommons.org/licenses/by/4.0/
This paper is a reply to "On Salum's Algorithm for X3SAT" (arXiv:2104.02886).
[ { "created": "Mon, 6 Dec 2021 11:46:20 GMT", "version": "v1" }, { "created": "Wed, 9 Mar 2022 13:21:28 GMT", "version": "v2" } ]
2022-03-10
[ [ "Salum", "Latif", "" ] ]
This paper is a reply to "On Salum's Algorithm for X3SAT" (arXiv:2104.02886).
2308.03463
Zhongjie Duan
Zhongjie Duan, Lizhou You, Chengyu Wang, Cen Chen, Ziheng Wu, Weining Qian, Jun Huang
DiffSynth: Latent In-Iteration Deflickering for Realistic Video Synthesis
9 pages, 6 figures
null
null
null
cs.CV cs.MM
http://creativecommons.org/licenses/by/4.0/
In recent years, diffusion models have emerged as the most powerful approach in image synthesis. However, applying these models directly to video synthesis presents challenges, as it often leads to noticeable flickering content. Although recently proposed zero-shot methods can alleviate flicker to some extent, we still struggle to generate coherent videos. In this paper, we propose DiffSynth, a novel approach that aims to convert image synthesis pipelines into video synthesis pipelines. DiffSynth consists of two key components: a latent in-iteration deflickering framework and a video deflickering algorithm. The latent in-iteration deflickering framework applies video deflickering to the latent space of diffusion models, effectively preventing flicker accumulation in intermediate steps. Additionally, we propose a video deflickering algorithm, named the patch blending algorithm, that remaps objects across different frames and blends them together to enhance video consistency. One of the notable advantages of DiffSynth is its general applicability to various video synthesis tasks, including text-guided video stylization, fashion video synthesis, image-guided video stylization, video restoration, and 3D rendering. In the task of text-guided video stylization, we make it possible to synthesize high-quality videos without cherry-picking. The experimental results demonstrate the effectiveness of DiffSynth. All videos can be viewed on our project page. Source codes will also be released.
[ { "created": "Mon, 7 Aug 2023 10:41:52 GMT", "version": "v1" }, { "created": "Tue, 8 Aug 2023 07:54:55 GMT", "version": "v2" }, { "created": "Thu, 10 Aug 2023 02:26:16 GMT", "version": "v3" } ]
2023-08-11
[ [ "Duan", "Zhongjie", "" ], [ "You", "Lizhou", "" ], [ "Wang", "Chengyu", "" ], [ "Chen", "Cen", "" ], [ "Wu", "Ziheng", "" ], [ "Qian", "Weining", "" ], [ "Huang", "Jun", "" ] ]
In recent years, diffusion models have emerged as the most powerful approach in image synthesis. However, applying these models directly to video synthesis presents challenges, as it often leads to noticeable flickering content. Although recently proposed zero-shot methods can alleviate flicker to some extent, we still struggle to generate coherent videos. In this paper, we propose DiffSynth, a novel approach that aims to convert image synthesis pipelines into video synthesis pipelines. DiffSynth consists of two key components: a latent in-iteration deflickering framework and a video deflickering algorithm. The latent in-iteration deflickering framework applies video deflickering to the latent space of diffusion models, effectively preventing flicker accumulation in intermediate steps. Additionally, we propose a video deflickering algorithm, named the patch blending algorithm, that remaps objects across different frames and blends them together to enhance video consistency. One of the notable advantages of DiffSynth is its general applicability to various video synthesis tasks, including text-guided video stylization, fashion video synthesis, image-guided video stylization, video restoration, and 3D rendering. In the task of text-guided video stylization, we make it possible to synthesize high-quality videos without cherry-picking. The experimental results demonstrate the effectiveness of DiffSynth. All videos can be viewed on our project page. Source codes will also be released.
2012.08977
Hyemin Ahn
Hyemin Ahn, Obin Kwon, Kyoungdo Kim, Jaeyeon Jeong, Howoong Jun, Hongjung Lee, Dongheui Lee, Songhwai Oh
Visually Grounding Language Instruction for History-Dependent Manipulation
8 pages, 5 figures
null
null
null
cs.RO cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper emphasizes the importance of a robot's ability to refer to its task history, especially when it executes a series of pick-and-place manipulations by following language instructions given one by one. The advantage of referring to the manipulation history is twofold: (1) language instructions that omit details but use expressions referring to the past can be interpreted, and (2) the visual information of objects occluded by previous manipulations can be inferred. To this end, we introduce a history-dependent manipulation task whose objective is to visually ground a series of language instructions for proper pick-and-place manipulations by referring to the past. We also provide a relevant dataset and a model that can serve as a baseline, and show that our model trained on the proposed dataset can also be applied to the real world based on CycleGAN. Our dataset and code are publicly available on the project website: https://sites.google.com/view/history-dependent-manipulation.
[ { "created": "Wed, 16 Dec 2020 14:11:15 GMT", "version": "v1" }, { "created": "Mon, 14 Mar 2022 14:48:08 GMT", "version": "v2" } ]
2022-03-15
[ [ "Ahn", "Hyemin", "" ], [ "Kwon", "Obin", "" ], [ "Kim", "Kyoungdo", "" ], [ "Jeong", "Jaeyeon", "" ], [ "Jun", "Howoong", "" ], [ "Lee", "Hongjung", "" ], [ "Lee", "Dongheui", "" ], [ "Oh", "Songhwai", "" ] ]
This paper emphasizes the importance of a robot's ability to refer to its task history, especially when it executes a series of pick-and-place manipulations by following language instructions given one by one. The advantage of referring to the manipulation history is twofold: (1) language instructions that omit details but use expressions referring to the past can be interpreted, and (2) the visual information of objects occluded by previous manipulations can be inferred. To this end, we introduce a history-dependent manipulation task whose objective is to visually ground a series of language instructions for proper pick-and-place manipulations by referring to the past. We also provide a relevant dataset and a model that can serve as a baseline, and show that our model trained on the proposed dataset can also be applied to the real world based on CycleGAN. Our dataset and code are publicly available on the project website: https://sites.google.com/view/history-dependent-manipulation.
2306.10006
Wolfgang Paier
Wolfgang Paier and Anna Hilsmann and Peter Eisert
Unsupervised Learning of Style-Aware Facial Animation from Real Acting Performances
16 pages, submitted to Graphical Models (Feb 2023)
null
null
null
cs.CV cs.GR cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents a novel approach for text/speech-driven animation of a photo-realistic head model based on blend-shape geometry, dynamic textures, and neural rendering. Training a VAE for geometry and texture yields a parametric model for accurate capturing and realistic synthesis of facial expressions from a latent feature vector. Our animation method is based on a conditional CNN that transforms text or speech into a sequence of animation parameters. In contrast to previous approaches, our animation model learns to disentangle and synthesize different acting styles in an unsupervised manner, requiring only phonetic labels that describe the content of the training sequences. For realistic real-time rendering, we train a U-Net that refines rasterization-based renderings by computing improved pixel colors and a foreground matte. We compare our framework qualitatively and quantitatively against recent methods for head modeling as well as facial animation, and evaluate the perceived rendering/animation quality in a user study, which indicates large improvements compared to state-of-the-art approaches.
[ { "created": "Fri, 16 Jun 2023 17:58:04 GMT", "version": "v1" }, { "created": "Mon, 10 Jul 2023 13:58:20 GMT", "version": "v2" }, { "created": "Fri, 1 Sep 2023 18:08:05 GMT", "version": "v3" } ]
2023-09-06
[ [ "Paier", "Wolfgang", "" ], [ "Hilsmann", "Anna", "" ], [ "Eisert", "Peter", "" ] ]
This paper presents a novel approach for text/speech-driven animation of a photo-realistic head model based on blend-shape geometry, dynamic textures, and neural rendering. Training a VAE for geometry and texture yields a parametric model for accurate capturing and realistic synthesis of facial expressions from a latent feature vector. Our animation method is based on a conditional CNN that transforms text or speech into a sequence of animation parameters. In contrast to previous approaches, our animation model learns to disentangle and synthesize different acting styles in an unsupervised manner, requiring only phonetic labels that describe the content of the training sequences. For realistic real-time rendering, we train a U-Net that refines rasterization-based renderings by computing improved pixel colors and a foreground matte. We compare our framework qualitatively and quantitatively against recent methods for head modeling as well as facial animation, and evaluate the perceived rendering/animation quality in a user study, which indicates large improvements compared to state-of-the-art approaches.
2003.09354
Varun Tolani
Varun Tolani, Somil Bansal, Aleksandra Faust, Claire Tomlin
Visual Navigation Among Humans with Optimal Control as a Supervisor
Project Website: https://smlbansal.github.io/LB-WayPtNav-DH/
null
null
null
cs.RO cs.AI cs.CV cs.LG cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real world visual navigation requires robots to operate in unfamiliar, human-occupied dynamic environments. Navigation around humans is especially difficult because it requires anticipating their future motion, which can be quite challenging. We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans based only on monocular, first-person RGB images. Our approach is enabled by our novel data-generation tool, HumANav that allows for photorealistic renderings of indoor environment scenes with humans in them, which are then used to train the perception module entirely in simulation. Through simulations and experiments on a mobile robot, we demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion, generalize to previously unseen environments and human behaviors, and transfer directly from simulation to reality. Videos describing our approach and experiments, as well as a demo of HumANav are available on the project website.
[ { "created": "Fri, 20 Mar 2020 16:13:47 GMT", "version": "v1" }, { "created": "Fri, 12 Feb 2021 21:09:24 GMT", "version": "v2" } ]
2021-02-16
[ [ "Tolani", "Varun", "" ], [ "Bansal", "Somil", "" ], [ "Faust", "Aleksandra", "" ], [ "Tomlin", "Claire", "" ] ]
Real world visual navigation requires robots to operate in unfamiliar, human-occupied dynamic environments. Navigation around humans is especially difficult because it requires anticipating their future motion, which can be quite challenging. We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans based only on monocular, first-person RGB images. Our approach is enabled by our novel data-generation tool, HumANav that allows for photorealistic renderings of indoor environment scenes with humans in them, which are then used to train the perception module entirely in simulation. Through simulations and experiments on a mobile robot, we demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion, generalize to previously unseen environments and human behaviors, and transfer directly from simulation to reality. Videos describing our approach and experiments, as well as a demo of HumANav are available on the project website.
2106.00089
Fernando Gama
Fernando Gama, Brendon G. Anderson, Somayeh Sojoudi
Node-Variant Graph Filters in Graph Neural Networks
null
null
null
null
cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph neural networks (GNNs) have been successfully employed in a myriad of applications involving graph signals. Theoretical findings establish that GNNs use nonlinear activation functions to create low-eigenvalue frequency content that can be processed in a stable manner by subsequent graph convolutional filters. However, the exact shape of the frequency content created by nonlinear functions is not known and cannot be learned. In this work, we use node-variant graph filters (NVGFs) -- which are linear filters capable of creating frequencies -- as a means of investigating the role that frequency creation plays in GNNs. We show that, by replacing nonlinear activation functions by NVGFs, frequency creation mechanisms can be designed or learned. By doing so, the role of frequency creation is separated from the nonlinear nature of traditional GNNs. Simulations on graph signal processing problems are carried out to pinpoint the role of frequency creation.
[ { "created": "Mon, 31 May 2021 20:26:53 GMT", "version": "v1" }, { "created": "Fri, 4 Mar 2022 22:04:02 GMT", "version": "v2" } ]
2022-03-08
[ [ "Gama", "Fernando", "" ], [ "Anderson", "Brendon G.", "" ], [ "Sojoudi", "Somayeh", "" ] ]
Graph neural networks (GNNs) have been successfully employed in a myriad of applications involving graph signals. Theoretical findings establish that GNNs use nonlinear activation functions to create low-eigenvalue frequency content that can be processed in a stable manner by subsequent graph convolutional filters. However, the exact shape of the frequency content created by nonlinear functions is not known and cannot be learned. In this work, we use node-variant graph filters (NVGFs) -- which are linear filters capable of creating frequencies -- as a means of investigating the role that frequency creation plays in GNNs. We show that, by replacing nonlinear activation functions by NVGFs, frequency creation mechanisms can be designed or learned. By doing so, the role of frequency creation is separated from the nonlinear nature of traditional GNNs. Simulations on graph signal processing problems are carried out to pinpoint the role of frequency creation.
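As an illustrative aside, the node-variant graph filter (NVGF) discussed in the abstract above admits a compact sketch: a filter of the form y = Σ_k diag(h_k) S^k x, where each node carries its own set of K taps. This is a minimal NumPy sketch of that general definition, not the paper's trained models; the small path-graph example below is invented for illustration.

```python
import numpy as np

def nvgf(S, x, H):
    """Node-variant graph filter: y = sum_k diag(H[k]) @ (S^k @ x).

    S: (n, n) graph shift operator (e.g., an adjacency matrix)
    x: (n,) input graph signal
    H: (K, n) filter taps; row k holds the k-th tap of every node
    """
    y = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)          # S^0 @ x
    for h_k in H:
        y += h_k * Skx             # elementwise product = diag(h_k) @ S^k x
        Skx = S @ Skx              # advance to S^{k+1} @ x
    return y
```

When every row of H is constant across nodes, this reduces to an ordinary node-invariant graph convolution; letting the taps vary per node is what gives NVGFs their frequency-creation ability.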
1903.03104
James Bagrow
Abigail Hotaling and James Bagrow
Accurate inference of crowdsourcing properties when using efficient allocation strategies
17 pages, 6 figures, 1 table
Scientific Reports 12, 6849 (2022)
10.1038/s41598-022-10794-9
null
cs.LG cs.HC stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Allocation strategies improve the efficiency of crowdsourcing by decreasing the work needed to complete individual tasks accurately. However, these algorithms introduce bias by preferentially allocating workers onto easy tasks, leading to sets of completed tasks that are no longer representative of all tasks. This bias challenges inference of problem-wide properties such as typical task difficulty or crowd properties such as worker completion times, important information that goes beyond the crowd responses themselves. Here we study inference about problem properties when using an allocation algorithm to improve crowd efficiency. We introduce Decision-Explicit Probability Sampling (DEPS), a novel method to perform inference of problem properties while accounting for the potential bias introduced by an allocation strategy. Experiments on real and synthetic crowdsourcing data show that DEPS outperforms baseline inference methods while still leveraging the efficiency gains of the allocation method. The ability to perform accurate inference of general properties when using non-representative data allows crowdsourcers to extract more knowledge out of a given crowdsourced dataset.
[ { "created": "Thu, 7 Mar 2019 18:58:34 GMT", "version": "v1" }, { "created": "Wed, 27 Apr 2022 12:42:36 GMT", "version": "v2" } ]
2022-04-28
[ [ "Hotaling", "Abigail", "" ], [ "Bagrow", "James", "" ] ]
Allocation strategies improve the efficiency of crowdsourcing by decreasing the work needed to complete individual tasks accurately. However, these algorithms introduce bias by preferentially allocating workers onto easy tasks, leading to sets of completed tasks that are no longer representative of all tasks. This bias challenges inference of problem-wide properties such as typical task difficulty or crowd properties such as worker completion times, important information that goes beyond the crowd responses themselves. Here we study inference about problem properties when using an allocation algorithm to improve crowd efficiency. We introduce Decision-Explicit Probability Sampling (DEPS), a novel method to perform inference of problem properties while accounting for the potential bias introduced by an allocation strategy. Experiments on real and synthetic crowdsourcing data show that DEPS outperforms baseline inference methods while still leveraging the efficiency gains of the allocation method. The ability to perform accurate inference of general properties when using non-representative data allows crowdsourcers to extract more knowledge out of a given crowdsourced dataset.
1803.06604
Haichuan Yang
Ke Ren, Haichuan Yang, Yu Zhao, Mingshan Xue, Hongyu Miao, Shuai Huang, Ji Liu
A Robust AUC Maximization Framework with Simultaneous Outlier Detection and Feature Selection for Positive-Unlabeled Classification
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Positive-unlabeled (PU) classification is a common scenario in real-world applications such as healthcare, text classification, and bioinformatics, in which we only observe a few samples labeled as "positive" together with a large volume of "unlabeled" samples that may contain both positive and negative samples. Building a robust classifier for the PU problem is very challenging, especially for complex data where negative samples overwhelm and mislabeled samples or corrupted features exist. To address these three issues, we propose a robust learning framework that unifies AUC maximization (a robust metric for biased labels), outlier detection (for excluding wrong labels), and feature selection (for excluding corrupted features). Generalization error bounds are provided for the proposed model; they give valuable insight into the theoretical performance of the method and lead to useful practical guidance, e.g., to train a model, we find that the included unlabeled samples are sufficient as long as their number is comparable to the number of positive samples in the training process. Empirical comparisons and two real-world applications on surgical site infection (SSI) and EEG seizure detection are also conducted to show the effectiveness of the proposed model.
[ { "created": "Sun, 18 Mar 2018 05:09:53 GMT", "version": "v1" } ]
2018-03-20
[ [ "Ren", "Ke", "" ], [ "Yang", "Haichuan", "" ], [ "Zhao", "Yu", "" ], [ "Xue", "Mingshan", "" ], [ "Miao", "Hongyu", "" ], [ "Huang", "Shuai", "" ], [ "Liu", "Ji", "" ] ]
The positive-unlabeled (PU) classification is a common scenario in real-world applications such as healthcare, text classification, and bioinformatics, in which we only observe a few samples labeled as "positive" together with a large volume of "unlabeled" samples that may contain both positive and negative samples. Building a robust classifier for the PU problem is very challenging, especially for complex data where the negative samples overwhelm the positive ones and mislabeled samples or corrupted features exist. To address these three issues, we propose a robust learning framework that unifies AUC maximization (a robust metric for biased labels), outlier detection (for excluding wrong labels), and feature selection (for excluding corrupted features). Generalization error bounds are provided for the proposed model; they give valuable insight into the theoretical performance of the method and lead to useful practical guidance, e.g., to train a model, we find that the included unlabeled samples are sufficient as long as their sample size is comparable to the number of positive samples in the training process. Empirical comparisons and two real-world applications on surgical site infection (SSI) and EEG seizure detection are also conducted to show the effectiveness of the proposed model.
2311.00444
Vahan Hovhannisyan
Peter A. Zachares, Vahan Hovhannisyan, Alan Mosca, Yarin Gal
Form follows Function: Text-to-Text Conditional Graph Generation based on Functional Requirements
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
This work focuses on the novel problem setting of generating graphs conditioned on a description of the graph's functional requirements in a downstream task. We pose the problem as a text-to-text generation problem and focus on the approach of fine-tuning a pretrained large language model (LLM) to generate graphs. We propose an inductive bias which incorporates information about the structure of the graph into the LLM's generation process by incorporating message passing layers into an LLM's architecture. To evaluate our proposed method, we design a novel set of experiments using publicly available and widely studied molecule and knowledge graph data sets. Results suggest our proposed approach generates graphs which more closely meet the requested functional requirements, outperforming baselines developed on similar tasks by a statistically significant margin.
[ { "created": "Wed, 1 Nov 2023 11:12:02 GMT", "version": "v1" } ]
2023-11-02
[ [ "Zachares", "Peter A.", "" ], [ "Hovhannisyan", "Vahan", "" ], [ "Mosca", "Alan", "" ], [ "Gal", "Yarin", "" ] ]
This work focuses on the novel problem setting of generating graphs conditioned on a description of the graph's functional requirements in a downstream task. We pose the problem as a text-to-text generation problem and focus on the approach of fine-tuning a pretrained large language model (LLM) to generate graphs. We propose an inductive bias which incorporates information about the structure of the graph into the LLM's generation process by incorporating message passing layers into an LLM's architecture. To evaluate our proposed method, we design a novel set of experiments using publicly available and widely studied molecule and knowledge graph data sets. Results suggest our proposed approach generates graphs which more closely meet the requested functional requirements, outperforming baselines developed on similar tasks by a statistically significant margin.
2305.16038
Arthur Jacot
Zihan Wang, Arthur Jacot
Implicit bias of SGD in $L_{2}$-regularized linear DNNs: One-way jumps from high to low rank
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The $L_{2}$-regularized loss of Deep Linear Networks (DLNs) with more than one hidden layer has multiple local minima, corresponding to matrices with different ranks. In tasks such as matrix completion, the goal is to converge to the local minimum with the smallest rank that still fits the training data. While rank-underestimating minima can be avoided since they do not fit the data, GD might get stuck at rank-overestimating minima. We show that with SGD, there is always a non-zero probability of jumping from a higher-rank minimum to a lower-rank one, but the probability of jumping back is zero. More precisely, we define a sequence of sets $B_{1}\subset B_{2}\subset\cdots\subset B_{R}$ so that $B_{r}$ contains all minima of rank $r$ or less (and not more) that are absorbing for small enough ridge parameters $\lambda$ and learning rates $\eta$: SGD has probability 0 of leaving $B_{r}$, and from any starting point there is a non-zero probability for SGD to enter $B_{r}$.
[ { "created": "Thu, 25 May 2023 13:17:32 GMT", "version": "v1" }, { "created": "Fri, 29 Sep 2023 13:18:59 GMT", "version": "v2" } ]
2023-10-02
[ [ "Wang", "Zihan", "" ], [ "Jacot", "Arthur", "" ] ]
The $L_{2}$-regularized loss of Deep Linear Networks (DLNs) with more than one hidden layer has multiple local minima, corresponding to matrices with different ranks. In tasks such as matrix completion, the goal is to converge to the local minimum with the smallest rank that still fits the training data. While rank-underestimating minima can be avoided since they do not fit the data, GD might get stuck at rank-overestimating minima. We show that with SGD, there is always a non-zero probability of jumping from a higher-rank minimum to a lower-rank one, but the probability of jumping back is zero. More precisely, we define a sequence of sets $B_{1}\subset B_{2}\subset\cdots\subset B_{R}$ so that $B_{r}$ contains all minima of rank $r$ or less (and not more) that are absorbing for small enough ridge parameters $\lambda$ and learning rates $\eta$: SGD has probability 0 of leaving $B_{r}$, and from any starting point there is a non-zero probability for SGD to enter $B_{r}$.
1311.7283
Dmitry N. Kozlov
Dmitry N. Kozlov
Topology of the view complex
accepted for publication in Homotopy, Homology Appl
null
null
null
cs.DC math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we consider a family of simplicial complexes, which we call the view complexes. Our choice of objects of study is motivated by theoretical distributed computing, since the view complex is a key simplicial construction used for protocol complexes in the snapshot computational model. We show that the view complex $\view$ can be collapsed to the well-known complex $\chi(\Delta^n)$, called standard chromatic subdivision of a simplex, and that $\chi(\Delta^n)$ is itself collapsible. Furthermore, we show that the collapses can be performed simultaneously in entire orbits of the natural symmetric group action. Our results yield a purely combinatorial and constructive understanding of the topology of view complexes, at the same time as they enhance our knowledge about the standard chromatic subdivision of a simplex.
[ { "created": "Thu, 28 Nov 2013 11:43:44 GMT", "version": "v1" }, { "created": "Thu, 26 Jun 2014 13:12:55 GMT", "version": "v2" }, { "created": "Fri, 5 Dec 2014 13:24:44 GMT", "version": "v3" } ]
2014-12-08
[ [ "Kozlov", "Dmitry N.", "" ] ]
In this paper we consider a family of simplicial complexes, which we call the view complexes. Our choice of objects of study is motivated by theoretical distributed computing, since the view complex is a key simplicial construction used for protocol complexes in the snapshot computational model. We show that the view complex $\view$ can be collapsed to the well-known complex $\chi(\Delta^n)$, called standard chromatic subdivision of a simplex, and that $\chi(\Delta^n)$ is itself collapsible. Furthermore, we show that the collapses can be performed simultaneously in entire orbits of the natural symmetric group action. Our results yield a purely combinatorial and constructive understanding of the topology of view complexes, at the same time as they enhance our knowledge about the standard chromatic subdivision of a simplex.
2112.07599
Givi Meishvili
Givi Meishvili, Attila Szab\'o, Simon Jenni, Paolo Favaro
Learning to Deblur and Rotate Motion-Blurred Faces
British Machine Vision Conference 2021
null
null
null
cs.CV cs.AI cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a solution to the novel task of rendering sharp videos from new viewpoints from a single motion-blurred image of a face. Our method handles the complexity of face blur by implicitly learning the geometry and motion of faces through the joint training on three large datasets: FFHQ and 300VW, which are publicly available, and a new Bern Multi-View Face Dataset (BMFD) that we built. The first two datasets provide a large variety of faces and allow our model to generalize better. BMFD instead allows us to introduce multi-view constraints, which are crucial to synthesizing sharp videos from a new camera view. It consists of high frame rate synchronized videos from multiple views of several subjects displaying a wide range of facial expressions. We use the high frame rate videos to simulate realistic motion blur through averaging. Thanks to this dataset, we train a neural network to reconstruct a 3D video representation from a single image and the corresponding face gaze. We then provide a camera viewpoint relative to the estimated gaze and the blurry image as input to an encoder-decoder network to generate a video of sharp frames with a novel camera viewpoint. We demonstrate our approach on test subjects of our multi-view dataset and VIDTIMIT.
[ { "created": "Tue, 14 Dec 2021 17:51:19 GMT", "version": "v1" } ]
2021-12-15
[ [ "Meishvili", "Givi", "" ], [ "Szabó", "Attila", "" ], [ "Jenni", "Simon", "" ], [ "Favaro", "Paolo", "" ] ]
We propose a solution to the novel task of rendering sharp videos from new viewpoints from a single motion-blurred image of a face. Our method handles the complexity of face blur by implicitly learning the geometry and motion of faces through the joint training on three large datasets: FFHQ and 300VW, which are publicly available, and a new Bern Multi-View Face Dataset (BMFD) that we built. The first two datasets provide a large variety of faces and allow our model to generalize better. BMFD instead allows us to introduce multi-view constraints, which are crucial to synthesizing sharp videos from a new camera view. It consists of high frame rate synchronized videos from multiple views of several subjects displaying a wide range of facial expressions. We use the high frame rate videos to simulate realistic motion blur through averaging. Thanks to this dataset, we train a neural network to reconstruct a 3D video representation from a single image and the corresponding face gaze. We then provide a camera viewpoint relative to the estimated gaze and the blurry image as input to an encoder-decoder network to generate a video of sharp frames with a novel camera viewpoint. We demonstrate our approach on test subjects of our multi-view dataset and VIDTIMIT.
2111.10541
Hanning Gao
Hanning Gao, Lingfei Wu, Po Hu, Zhihua Wei, Fangli Xu and Bo Long
Graph-augmented Learning to Rank for Querying Large-scale Knowledge Graph
Accepted by AACL 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge graph question answering (KGQA) based on information retrieval aims to answer a question by retrieving the answer from a large-scale knowledge graph. Most existing methods first roughly retrieve the knowledge subgraphs (KSG) that may contain a candidate answer, and then search for the exact answer in the KSG. However, the KSG may contain thousands of candidate nodes, since the knowledge graph involved in querying is often of large scale, thus decreasing the performance of answer selection. To tackle this problem, we first propose to partition the retrieved KSG into several smaller sub-KSGs via a new subgraph partition algorithm and then present a graph-augmented learning-to-rank model to select the top-ranked sub-KSGs from them. Our proposed model combines a novel subgraph matching network, which captures global interactions in both the question and the subgraphs, with an Enhanced Bilateral Multi-Perspective Matching model that captures local interactions. Finally, we apply an answer selection model on the full KSG and the top-ranked sub-KSGs respectively to validate the effectiveness of our proposed graph-augmented learning-to-rank method. The experimental results on multiple benchmark datasets have demonstrated the effectiveness of our approach.
[ { "created": "Sat, 20 Nov 2021 08:27:37 GMT", "version": "v1" }, { "created": "Fri, 15 Apr 2022 01:34:30 GMT", "version": "v2" }, { "created": "Tue, 3 May 2022 12:47:41 GMT", "version": "v3" }, { "created": "Wed, 5 Oct 2022 00:52:01 GMT", "version": "v4" } ]
2022-10-06
[ [ "Gao", "Hanning", "" ], [ "Wu", "Lingfei", "" ], [ "Hu", "Po", "" ], [ "Wei", "Zhihua", "" ], [ "Xu", "Fangli", "" ], [ "Long", "Bo", "" ] ]
Knowledge graph question answering (KGQA) based on information retrieval aims to answer a question by retrieving the answer from a large-scale knowledge graph. Most existing methods first roughly retrieve the knowledge subgraphs (KSG) that may contain a candidate answer, and then search for the exact answer in the KSG. However, the KSG may contain thousands of candidate nodes, since the knowledge graph involved in querying is often of large scale, thus decreasing the performance of answer selection. To tackle this problem, we first propose to partition the retrieved KSG into several smaller sub-KSGs via a new subgraph partition algorithm and then present a graph-augmented learning-to-rank model to select the top-ranked sub-KSGs from them. Our proposed model combines a novel subgraph matching network, which captures global interactions in both the question and the subgraphs, with an Enhanced Bilateral Multi-Perspective Matching model that captures local interactions. Finally, we apply an answer selection model on the full KSG and the top-ranked sub-KSGs respectively to validate the effectiveness of our proposed graph-augmented learning-to-rank method. The experimental results on multiple benchmark datasets have demonstrated the effectiveness of our approach.
1711.08589
Benjamin Klein
Benjamin Klein and Lior Wolf
End-to-End Supervised Product Quantization for Image Search and Retrieval
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Product Quantization, a dictionary based hashing method, is one of the leading unsupervised hashing techniques. While it ignores the labels, it harnesses the features to construct look up tables that can approximate the feature space. In recent years, several works have achieved state of the art results on hashing benchmarks by learning binary representations in a supervised manner. This work presents Deep Product Quantization (DPQ), a technique that leads to more accurate retrieval and classification than the latest state of the art methods, while having similar computational complexity and memory footprint as the Product Quantization method. To our knowledge, this is the first work to introduce a dictionary-based representation that is inspired by Product Quantization and which is learned end-to-end, and thus benefits from the supervised signal. DPQ explicitly learns soft and hard representations to enable an efficient and accurate asymmetric search, by using a straight-through estimator. Our method obtains state of the art results on an extensive array of retrieval and classification experiments.
[ { "created": "Thu, 23 Nov 2017 06:40:28 GMT", "version": "v1" }, { "created": "Fri, 17 Jan 2020 22:56:50 GMT", "version": "v2" } ]
2020-01-22
[ [ "Klein", "Benjamin", "" ], [ "Wolf", "Lior", "" ] ]
Product Quantization, a dictionary based hashing method, is one of the leading unsupervised hashing techniques. While it ignores the labels, it harnesses the features to construct look up tables that can approximate the feature space. In recent years, several works have achieved state of the art results on hashing benchmarks by learning binary representations in a supervised manner. This work presents Deep Product Quantization (DPQ), a technique that leads to more accurate retrieval and classification than the latest state of the art methods, while having similar computational complexity and memory footprint as the Product Quantization method. To our knowledge, this is the first work to introduce a dictionary-based representation that is inspired by Product Quantization and which is learned end-to-end, and thus benefits from the supervised signal. DPQ explicitly learns soft and hard representations to enable an efficient and accurate asymmetric search, by using a straight-through estimator. Our method obtains state of the art results on an extensive array of retrieval and classification experiments.
2005.13829
Yitong Ji
Yitong Ji, Aixin Sun, Jie Zhang, Chenliang Li
A Re-visit of the Popularity Baseline in Recommender Systems
Accepted by SIGIR2020
null
10.1145/3397271.3401233
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Popularity is often included in experimental evaluation to provide a reference performance for a recommendation task. To understand how the popularity baseline is defined and evaluated, we sample 12 papers from top-tier conferences including KDD, WWW, SIGIR, and RecSys, and 6 open-source toolkits. We note that the widely adopted MostPop baseline simply ranks items based on the number of interactions in the training data. We argue that the current evaluation of popularity (i) does not reflect the popular items at the time when a user interacts with the system, and (ii) may recommend items released after a user's last interaction with the system. On the widely used MovieLens dataset, we show that the performance of popularity could be significantly improved, by 70% or more, if we consider the popular items at the time point when a user interacts with the system. We further show that, on the MovieLens dataset, users with a lower tendency to rate movies tend to follow the crowd and rate more popular movies, while movie lovers who rate a large number of movies do so based on their own preferences and interests. Through this study, we call for a re-visit of the popularity baseline in recommender systems to better reflect its effectiveness.
[ { "created": "Thu, 28 May 2020 08:04:40 GMT", "version": "v1" }, { "created": "Tue, 2 Jun 2020 06:37:06 GMT", "version": "v2" } ]
2020-06-03
[ [ "Ji", "Yitong", "" ], [ "Sun", "Aixin", "" ], [ "Zhang", "Jie", "" ], [ "Li", "Chenliang", "" ] ]
Popularity is often included in experimental evaluation to provide a reference performance for a recommendation task. To understand how the popularity baseline is defined and evaluated, we sample 12 papers from top-tier conferences including KDD, WWW, SIGIR, and RecSys, and 6 open-source toolkits. We note that the widely adopted MostPop baseline simply ranks items based on the number of interactions in the training data. We argue that the current evaluation of popularity (i) does not reflect the popular items at the time when a user interacts with the system, and (ii) may recommend items released after a user's last interaction with the system. On the widely used MovieLens dataset, we show that the performance of popularity could be significantly improved, by 70% or more, if we consider the popular items at the time point when a user interacts with the system. We further show that, on the MovieLens dataset, users with a lower tendency to rate movies tend to follow the crowd and rate more popular movies, while movie lovers who rate a large number of movies do so based on their own preferences and interests. Through this study, we call for a re-visit of the popularity baseline in recommender systems to better reflect its effectiveness.
2308.15870
EPTCS
Christian Hatschka (TU Vienna), Agata Ciabattoni (TU Vienna), Thomas Eiter (TU Vienna)
Deontic Paradoxes in ASP with Weak Constraints
In Proceedings ICLP 2023, arXiv:2308.14898
EPTCS 385, 2023, pp. 367-380
10.4204/EPTCS.385.39
null
cs.LO cs.AI cs.CY cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rise of powerful AI technology for a range of applications that are sensitive to legal, social, and ethical norms demands decision-making support in the presence of norms and regulations. Normative reasoning is the realm of deontic logics, which are challenged by well-known benchmark problems (deontic paradoxes) and lack efficient computational tools. In this paper, we use Answer Set Programming (ASP) to address these shortcomings and showcase how to encode and resolve several well-known deontic paradoxes utilizing weak constraints. By abstracting and generalizing this encoding, we present a methodology for translating normative systems into ASP with weak constraints. This methodology is applied to "ethical" versions of Pac-man, where we obtain comparable performance with related works, but ethically preferable results.
[ { "created": "Wed, 30 Aug 2023 08:56:54 GMT", "version": "v1" } ]
2023-08-31
[ [ "Hatschka", "Christian", "", "TU Vienna" ], [ "Ciabattoni", "Agata", "", "TU Vienna" ], [ "Eiter", "Thomas", "", "TU Vienna" ] ]
The rise of powerful AI technology for a range of applications that are sensitive to legal, social, and ethical norms demands decision-making support in the presence of norms and regulations. Normative reasoning is the realm of deontic logics, which are challenged by well-known benchmark problems (deontic paradoxes) and lack efficient computational tools. In this paper, we use Answer Set Programming (ASP) to address these shortcomings and showcase how to encode and resolve several well-known deontic paradoxes utilizing weak constraints. By abstracting and generalizing this encoding, we present a methodology for translating normative systems into ASP with weak constraints. This methodology is applied to "ethical" versions of Pac-man, where we obtain comparable performance with related works, but ethically preferable results.
2204.05959
Jeffrey Young
Sara Karamati, Clayton Hughes, K. Scott Hemmert, Ryan E. Grant, W. Whit Schonbein, Scott Levy, Thomas M. Conte, Jeffrey Young, Richard W. Vuduc
"Smarter" NICs for faster molecular dynamics: a case study
null
null
null
null
cs.DC cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work evaluates the benefits of using a "smart" network interface card (SmartNIC) as a compute accelerator for the example of the MiniMD molecular dynamics proxy application. The accelerator is NVIDIA's BlueField-2 card, which includes an 8-core Arm processor along with a small amount of DRAM and storage. We test the networking and data movement performance of these cards compared to a standard Intel server host using microbenchmarks and MiniMD. In MiniMD, we identify two distinct classes of computation, namely core computation and maintenance computation, which are executed in sequence. We restructure the algorithm and code to weaken this dependence and increase task parallelism, thereby making it possible to increase utilization of the BlueField-2 concurrently with the host. We evaluate our implementation on a cluster consisting of 16 dual-socket Intel Broadwell host nodes with one BlueField-2 per host-node. Our results show that while the overall compute performance of BlueField-2 is limited, using them with a modified MiniMD algorithm allows for up to 20% speedup over the host CPU baseline with no loss in simulation accuracy.
[ { "created": "Tue, 12 Apr 2022 17:17:05 GMT", "version": "v1" } ]
2022-04-13
[ [ "Karamati", "Sara", "" ], [ "Hughes", "Clayton", "" ], [ "Hemmert", "K. Scott", "" ], [ "Grant", "Ryan E.", "" ], [ "Schonbein", "W. Whit", "" ], [ "Levy", "Scott", "" ], [ "Conte", "Thomas M.", "" ], [ "Young", "Jeffrey", "" ], [ "Vuduc", "Richard W.", "" ] ]
This work evaluates the benefits of using a "smart" network interface card (SmartNIC) as a compute accelerator for the example of the MiniMD molecular dynamics proxy application. The accelerator is NVIDIA's BlueField-2 card, which includes an 8-core Arm processor along with a small amount of DRAM and storage. We test the networking and data movement performance of these cards compared to a standard Intel server host using microbenchmarks and MiniMD. In MiniMD, we identify two distinct classes of computation, namely core computation and maintenance computation, which are executed in sequence. We restructure the algorithm and code to weaken this dependence and increase task parallelism, thereby making it possible to increase utilization of the BlueField-2 concurrently with the host. We evaluate our implementation on a cluster consisting of 16 dual-socket Intel Broadwell host nodes with one BlueField-2 per host-node. Our results show that while the overall compute performance of BlueField-2 is limited, using them with a modified MiniMD algorithm allows for up to 20% speedup over the host CPU baseline with no loss in simulation accuracy.
cs/0102013
Hirotada Kobayashi
Hirotada Kobayashi, Keiji Matsumoto
Quantum Multi-Prover Interactive Proof Systems with Limited Prior Entanglement
LaTeX2e, 19 pages, 2 figures, title changed, some of the sections are fully revised, journal version in Journal of Computer and System Sciences
Journal of Computer and System Sciences, 66(3):429--450, 2003
null
null
cs.CC quant-ph
null
This paper gives the first formal treatment of a quantum analogue of multi-prover interactive proof systems. It is proved that the class of languages having quantum multi-prover interactive proof systems is necessarily contained in NEXP, under the assumption that provers are allowed to share at most polynomially many prior-entangled qubits. This implies that, in particular, if provers do not share any prior entanglement with each other, the class of languages having quantum multi-prover interactive proof systems is equal to NEXP. Related to these, it is shown that, in the case a prover does not have his private qubits, the class of languages having quantum single-prover interactive proof systems is also equal to NEXP.
[ { "created": "Mon, 19 Feb 2001 19:46:12 GMT", "version": "v1" }, { "created": "Thu, 12 Apr 2001 11:31:46 GMT", "version": "v2" }, { "created": "Tue, 15 May 2001 12:32:31 GMT", "version": "v3" }, { "created": "Fri, 16 Nov 2001 13:34:35 GMT", "version": "v4" }, { "created": "Tue, 10 Jun 2003 17:07:59 GMT", "version": "v5" } ]
2007-05-23
[ [ "Kobayashi", "Hirotada", "" ], [ "Matsumoto", "Keiji", "" ] ]
This paper gives the first formal treatment of a quantum analogue of multi-prover interactive proof systems. It is proved that the class of languages having quantum multi-prover interactive proof systems is necessarily contained in NEXP, under the assumption that provers are allowed to share at most polynomially many prior-entangled qubits. This implies that, in particular, if provers do not share any prior entanglement with each other, the class of languages having quantum multi-prover interactive proof systems is equal to NEXP. Related to these, it is shown that, in the case a prover does not have his private qubits, the class of languages having quantum single-prover interactive proof systems is also equal to NEXP.
1805.05980
Kendeas Theofanous Mr
Kendeas Theofanous
Dynamic Walking of Legged Machines
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Locomotion of legged machines faces the problems of model complexity and computational costs. Algorithms based on complex models and/or reinforcement learning exist to solve the walking control task. In this project, we aim to develop a bipedal walking control system based on a simple model, the Linear Inverted Pendulum model. In order to simplify the complex process of controlling legged locomotion, we split the control into three parts: height control, forward velocity control, and balance control. The forward velocity of the body has a linear relationship with the foot placement, therefore we use a linear function to realise foot placement. Our control system achieves a stable walking gait in a simulated environment, where our bipedal robot walks more than 200 steps with a cyclic pattern in a stable, dynamic and almost natural manner. The experimental data are presented and analysed.
[ { "created": "Tue, 15 May 2018 18:25:49 GMT", "version": "v1" } ]
2018-05-17
[ [ "Theofanous", "Kendeas", "" ] ]
Locomotion of legged machines faces the problems of model complexity and computational costs. Algorithms based on complex models and/or reinforcement learning exist to solve the walking control task. In this project, we aim to develop a bipedal walking control system based on a simple model, the Linear Inverted Pendulum model. In order to simplify the complex process of controlling legged locomotion, we split the control into three parts: height control, forward velocity control, and balance control. The forward velocity of the body has a linear relationship with the foot placement, therefore we use a linear function to realise foot placement. Our control system achieves a stable walking gait in a simulated environment, where our bipedal robot walks more than 200 steps with a cyclic pattern in a stable, dynamic and almost natural manner. The experimental data are presented and analysed.
2112.06106
Donsuk Lee
Donsuk Lee, Pranav Gujarathi, Justin N. Wood
Controlled-rearing studies of newborn chicks and deep neural networks
NeurIPS 2021 Workshop on Shared Visual Representations in Human & Machine Intelligence
null
null
null
cs.CV cs.AI q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Convolutional neural networks (CNNs) can now achieve human-level performance on challenging object recognition tasks. CNNs are also the leading quantitative models in terms of predicting neural and behavioral responses in visual recognition tasks. However, there is a widely accepted critique of CNN models: unlike newborn animals, which learn rapidly and efficiently, CNNs are thought to be "data hungry," requiring massive amounts of training data to develop accurate models for object recognition. This critique challenges the promise of using CNNs as models of visual development. Here, we directly examined whether CNNs are more data hungry than newborn animals by performing parallel controlled-rearing experiments on newborn chicks and CNNs. We raised newborn chicks in strictly controlled visual environments, then simulated the training data available in that environment by constructing a virtual animal chamber in a video game engine. We recorded the visual images acquired by an agent moving through the virtual chamber and used those images to train CNNs. When CNNs received similar visual training data as chicks, the CNNs successfully solved the same challenging view-invariant object recognition tasks as the chicks. Thus, the CNNs were not more data hungry than animals: both CNNs and chicks successfully developed robust object models from training data of a single object.
[ { "created": "Sun, 12 Dec 2021 00:45:07 GMT", "version": "v1" } ]
2021-12-14
[ [ "Lee", "Donsuk", "" ], [ "Gujarathi", "Pranav", "" ], [ "Wood", "Justin N.", "" ] ]
Convolutional neural networks (CNNs) can now achieve human-level performance on challenging object recognition tasks. CNNs are also the leading quantitative models in terms of predicting neural and behavioral responses in visual recognition tasks. However, there is a widely accepted critique of CNN models: unlike newborn animals, which learn rapidly and efficiently, CNNs are thought to be "data hungry," requiring massive amounts of training data to develop accurate models for object recognition. This critique challenges the promise of using CNNs as models of visual development. Here, we directly examined whether CNNs are more data hungry than newborn animals by performing parallel controlled-rearing experiments on newborn chicks and CNNs. We raised newborn chicks in strictly controlled visual environments, then simulated the training data available in that environment by constructing a virtual animal chamber in a video game engine. We recorded the visual images acquired by an agent moving through the virtual chamber and used those images to train CNNs. When CNNs received similar visual training data as chicks, the CNNs successfully solved the same challenging view-invariant object recognition tasks as the chicks. Thus, the CNNs were not more data hungry than animals: both CNNs and chicks successfully developed robust object models from training data of a single object.
1410.1864
Maurice Margenstern
Maurice Margenstern
A weakly universal cellular automaton in the heptagrid with three states
27 pages, 21 figures. arXiv admin note: substantial text overlap with arXiv:1403.2373
null
null
null
cs.DM nlin.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we construct a cellular automaton on the heptagrid which is planar, weakly universal and which has only three states. This result improves on the previous best result, which required four states.
[ { "created": "Tue, 7 Oct 2014 19:54:18 GMT", "version": "v1" } ]
2014-10-08
[ [ "Margenstern", "Maurice", "" ] ]
In this paper, we construct a cellular automaton on the heptagrid which is planar, weakly universal and which has only three states. This result improves on the previous best result, which required four states.
2301.13311
Ahmed Alkhateeb
Yu Zhang, Tawfik Osman, and Ahmed Alkhateeb
A Digital Twin Assisted Framework for Interference Nulling in Millimeter Wave MIMO Systems
arXiv admin note: substantial text overlap with arXiv:2209.04509
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by-nc-sa/4.0/
Millimeter wave (mmWave) and terahertz MIMO systems rely on pre-defined beamforming codebooks for both initial access and data transmission. However, most of the existing codebooks adopt pre-defined beams that focus mainly on improving the gain of their target users, without taking interference into account, which could incur critical performance degradation in dense networks. To address this problem, in this paper, we propose a sample-efficient digital twin-assisted beam pattern design framework that learns how to form the beam pattern to reject the signals from the interfering directions. The proposed approach does not require any explicit channel knowledge or any coordination with the interferers. The adoption of the digital twin improves the sample efficiency by better leveraging the underlying signal relationship and by incorporating a demand-based data acquisition strategy. Simulation results show that the developed signal model-based learning framework can significantly reduce the actual interaction with the radio environment (i.e., the number of measurements) compared to the model-unaware design, leading to a more practical and efficient interference-aware beam design approach.
[ { "created": "Mon, 30 Jan 2023 22:10:15 GMT", "version": "v1" } ]
2023-02-01
[ [ "Zhang", "Yu", "" ], [ "Osman", "Tawfik", "" ], [ "Alkhateeb", "Ahmed", "" ] ]
Millimeter wave (mmWave) and terahertz MIMO systems rely on pre-defined beamforming codebooks for both initial access and data transmission. However, most of the existing codebooks adopt pre-defined beams that focus mainly on improving the gain of their target users, without taking interference into account, which could incur critical performance degradation in dense networks. To address this problem, in this paper, we propose a sample-efficient digital twin-assisted beam pattern design framework that learns how to form the beam pattern to reject the signals from the interfering directions. The proposed approach does not require any explicit channel knowledge or any coordination with the interferers. The adoption of the digital twin improves the sample efficiency by better leveraging the underlying signal relationship and by incorporating a demand-based data acquisition strategy. Simulation results show that the developed signal model-based learning framework can significantly reduce the actual interaction with the radio environment (i.e., the number of measurements) compared to the model-unaware design, leading to a more practical and efficient interference-aware beam design approach.
2104.07972
Vincent Micheli
Vincent Micheli, Fran\c{c}ois Fleuret
Language Models are Few-Shot Butlers
EMNLP 2021
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pretrained language models demonstrate strong performance in most NLP tasks when fine-tuned on small task-specific datasets. Hence, these autoregressive models constitute ideal agents to operate in text-based environments where language understanding and generative capabilities are essential. Nonetheless, collecting expert demonstrations in such environments is a time-consuming endeavour. We introduce a two-stage procedure to learn from a small set of demonstrations and further improve by interacting with an environment. We show that language models fine-tuned with only 1.2% of the expert demonstrations and a simple reinforcement learning algorithm achieve a 51% absolute improvement in success rate over existing methods in the ALFWorld environment.
[ { "created": "Fri, 16 Apr 2021 08:47:07 GMT", "version": "v1" }, { "created": "Mon, 20 Sep 2021 11:49:49 GMT", "version": "v2" } ]
2021-09-21
[ [ "Micheli", "Vincent", "" ], [ "Fleuret", "François", "" ] ]
Pretrained language models demonstrate strong performance in most NLP tasks when fine-tuned on small task-specific datasets. Hence, these autoregressive models constitute ideal agents to operate in text-based environments where language understanding and generative capabilities are essential. Nonetheless, collecting expert demonstrations in such environments is a time-consuming endeavour. We introduce a two-stage procedure to learn from a small set of demonstrations and further improve by interacting with an environment. We show that language models fine-tuned with only 1.2% of the expert demonstrations and a simple reinforcement learning algorithm achieve a 51% absolute improvement in success rate over existing methods in the ALFWorld environment.
0811.1301
Amit Bhosle
Amit M. Bhosle and Teofilo F. Gonzalez
Distributed Algorithms for Computing Alternate Paths Avoiding Failed Nodes and Links
8 pages, 2 columns, 1 figure
null
null
null
cs.DC cs.DS cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A recent study characterizing failures in computer networks shows that transient single element (node/link) failures are the dominant failures in large communication networks like the Internet. Thus, having the routing paths globally recomputed on a failure does not pay off since the failed element recovers fairly quickly, and the recomputed routing paths need to be discarded. In this paper, we present the first distributed algorithm that computes the alternate paths required by some "proactive recovery schemes" for handling transient failures. Our algorithm computes paths that avoid a failed node, and provides an alternate path to a particular destination from an upstream neighbor of the failed node. With minor modifications, we can have the algorithm compute alternate paths that avoid a failed link as well. To the best of our knowledge all previous algorithms proposed for computing alternate paths are centralized, and need complete information of the network graph as input to the algorithm.
[ { "created": "Sun, 9 Nov 2008 03:34:39 GMT", "version": "v1" } ]
2008-11-11
[ [ "Bhosle", "Amit M.", "" ], [ "Gonzalez", "Teofilo F.", "" ] ]
A recent study characterizing failures in computer networks shows that transient single element (node/link) failures are the dominant failures in large communication networks like the Internet. Thus, having the routing paths globally recomputed on a failure does not pay off since the failed element recovers fairly quickly, and the recomputed routing paths need to be discarded. In this paper, we present the first distributed algorithm that computes the alternate paths required by some "proactive recovery schemes" for handling transient failures. Our algorithm computes paths that avoid a failed node, and provides an alternate path to a particular destination from an upstream neighbor of the failed node. With minor modifications, we can have the algorithm compute alternate paths that avoid a failed link as well. To the best of our knowledge all previous algorithms proposed for computing alternate paths are centralized, and need complete information of the network graph as input to the algorithm.
2406.01917
Anindya Sarkar
Anindya Sarkar, Srikumar Sastry, Aleksis Pirinen, Chongjie Zhang, Nathan Jacobs, Yevgeniy Vorobeychik
GOMAA-Geo: GOal Modality Agnostic Active Geo-localization
23 pages, 17 figures
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
We consider the task of active geo-localization (AGL) in which an agent uses a sequence of visual cues observed during aerial navigation to find a target specified through multiple possible modalities. This could emulate a UAV involved in a search-and-rescue operation navigating through an area, observing a stream of aerial images as it goes. The AGL task is associated with two important challenges. Firstly, an agent must deal with a goal specification in one of multiple modalities (e.g., through a natural language description) while the search cues are provided in other modalities (aerial imagery). The second challenge is limited localization time (e.g., limited battery life, urgency) so that the goal must be localized as efficiently as possible, i.e. the agent must effectively leverage its sequentially observed aerial views when searching for the goal. To address these challenges, we propose GOMAA-Geo - a goal modality agnostic active geo-localization agent - for zero-shot generalization between different goal modalities. Our approach combines cross-modality contrastive learning to align representations across modalities with supervised foundation model pretraining and reinforcement learning to obtain highly effective navigation and localization policies. Through extensive evaluations, we show that GOMAA-Geo outperforms alternative learnable approaches and that it generalizes across datasets - e.g., to disaster-hit areas without seeing a single disaster scenario during training - and goal modalities - e.g., to ground-level imagery or textual descriptions, despite only being trained with goals specified as aerial views. Code and models are publicly available at https://github.com/mvrl/GOMAA-Geo/tree/main.
[ { "created": "Tue, 4 Jun 2024 02:59:36 GMT", "version": "v1" } ]
2024-06-05
[ [ "Sarkar", "Anindya", "" ], [ "Sastry", "Srikumar", "" ], [ "Pirinen", "Aleksis", "" ], [ "Zhang", "Chongjie", "" ], [ "Jacobs", "Nathan", "" ], [ "Vorobeychik", "Yevgeniy", "" ] ]
We consider the task of active geo-localization (AGL) in which an agent uses a sequence of visual cues observed during aerial navigation to find a target specified through multiple possible modalities. This could emulate a UAV involved in a search-and-rescue operation navigating through an area, observing a stream of aerial images as it goes. The AGL task is associated with two important challenges. Firstly, an agent must deal with a goal specification in one of multiple modalities (e.g., through a natural language description) while the search cues are provided in other modalities (aerial imagery). The second challenge is limited localization time (e.g., limited battery life, urgency) so that the goal must be localized as efficiently as possible, i.e. the agent must effectively leverage its sequentially observed aerial views when searching for the goal. To address these challenges, we propose GOMAA-Geo - a goal modality agnostic active geo-localization agent - for zero-shot generalization between different goal modalities. Our approach combines cross-modality contrastive learning to align representations across modalities with supervised foundation model pretraining and reinforcement learning to obtain highly effective navigation and localization policies. Through extensive evaluations, we show that GOMAA-Geo outperforms alternative learnable approaches and that it generalizes across datasets - e.g., to disaster-hit areas without seeing a single disaster scenario during training - and goal modalities - e.g., to ground-level imagery or textual descriptions, despite only being trained with goals specified as aerial views. Code and models are publicly available at https://github.com/mvrl/GOMAA-Geo/tree/main.
2405.17991
Roy Miles
Roy Miles, Pradyumna Reddy, Ismail Elezi, Jiankang Deng
VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) have recently emerged as powerful tools for tackling many language-processing tasks. Despite their success, training and fine-tuning these models is still far too computationally and memory intensive. In this paper, we identify and characterise the important components needed for effective model convergence using gradient descent. In doing so we find that the intermediate activations used to implement backpropagation can be excessively compressed without incurring any degradation in performance. This result leads us to a cheap and memory-efficient algorithm for both fine-tuning and pre-training LLMs. The proposed algorithm simply divides the tokens up into smaller sub-tokens before projecting them onto a fixed 1-dimensional subspace during the forward pass. These features are then coarsely reconstructed during the backward pass to implement the update rules. We confirm the effectiveness of our algorithm as being complementary to many state-of-the-art PEFT methods on the VTAB-1k fine-tuning benchmark. Furthermore, we outperform QLoRA for fine-tuning LLaMA and show competitive performance against other memory-efficient pre-training methods on the large-scale C4 dataset.
[ { "created": "Tue, 28 May 2024 09:23:14 GMT", "version": "v1" } ]
2024-05-29
[ [ "Miles", "Roy", "" ], [ "Reddy", "Pradyumna", "" ], [ "Elezi", "Ismail", "" ], [ "Deng", "Jiankang", "" ] ]
Large language models (LLMs) have recently emerged as powerful tools for tackling many language-processing tasks. Despite their success, training and fine-tuning these models is still far too computationally and memory intensive. In this paper, we identify and characterise the important components needed for effective model convergence using gradient descent. In doing so we find that the intermediate activations used to implement backpropagation can be excessively compressed without incurring any degradation in performance. This result leads us to a cheap and memory-efficient algorithm for both fine-tuning and pre-training LLMs. The proposed algorithm simply divides the tokens up into smaller sub-tokens before projecting them onto a fixed 1-dimensional subspace during the forward pass. These features are then coarsely reconstructed during the backward pass to implement the update rules. We confirm the effectiveness of our algorithm as being complementary to many state-of-the-art PEFT methods on the VTAB-1k fine-tuning benchmark. Furthermore, we outperform QLoRA for fine-tuning LLaMA and show competitive performance against other memory-efficient pre-training methods on the large-scale C4 dataset.
2102.10801
Wei Lin
Qunxi Zhu, Yao Guo, Wei Lin
Neural Delay Differential Equations
Accepted as a poster in ICLR 2021 (submitted 28 Sep 2020, revised 22 Nov 2020, accepted 08 Jan 2021)
null
null
null
cs.LG cs.AI math.DS nlin.CD
http://creativecommons.org/licenses/by/4.0/
Neural Ordinary Differential Equations (NODEs), a framework of continuous-depth neural networks, have been widely applied, showing exceptional efficacy in coping with some representative datasets. Recently, an augmented framework has been successfully developed for conquering some limitations emergent in application of the original framework. Here we propose a new class of continuous-depth neural networks with delay, named Neural Delay Differential Equations (NDDEs), and, for computing the corresponding gradients, we use the adjoint sensitivity method to obtain the delayed dynamics of the adjoint. Since differential equations with delays are usually seen as dynamical systems of infinite dimension possessing more fruitful dynamics, the NDDEs, compared to the NODEs, own a stronger capacity of nonlinear representations. Indeed, we analytically validate that the NDDEs are universal approximators, and further articulate an extension of the NDDEs, where the initial function of the NDDEs is supposed to satisfy ODEs. More importantly, we use several illustrative examples to demonstrate the outstanding capacities of the NDDEs and the NDDEs with ODEs' initial value. Specifically, (1) we successfully model the delayed dynamics where the trajectories in the lower-dimensional phase space could be mutually intersected, while the traditional NODEs without any augmentation are not directly applicable for such modeling, and (2) we achieve lower loss and higher accuracy not only for the data produced synthetically by complex models but also for the real-world image datasets, i.e., CIFAR10, MNIST, and SVHN. Our results on the NDDEs reveal that appropriately articulating the elements of dynamical systems into the network design is truly beneficial to promoting the network performance.
[ { "created": "Mon, 22 Feb 2021 06:53:51 GMT", "version": "v1" } ]
2021-02-23
[ [ "Zhu", "Qunxi", "" ], [ "Guo", "Yao", "" ], [ "Lin", "Wei", "" ] ]
Neural Ordinary Differential Equations (NODEs), a framework of continuous-depth neural networks, have been widely applied, showing exceptional efficacy in coping with some representative datasets. Recently, an augmented framework has been successfully developed for conquering some limitations emergent in application of the original framework. Here we propose a new class of continuous-depth neural networks with delay, named Neural Delay Differential Equations (NDDEs), and, for computing the corresponding gradients, we use the adjoint sensitivity method to obtain the delayed dynamics of the adjoint. Since differential equations with delays are usually seen as dynamical systems of infinite dimension possessing more fruitful dynamics, the NDDEs, compared to the NODEs, own a stronger capacity of nonlinear representations. Indeed, we analytically validate that the NDDEs are universal approximators, and further articulate an extension of the NDDEs, where the initial function of the NDDEs is supposed to satisfy ODEs. More importantly, we use several illustrative examples to demonstrate the outstanding capacities of the NDDEs and the NDDEs with ODEs' initial value. Specifically, (1) we successfully model the delayed dynamics where the trajectories in the lower-dimensional phase space could be mutually intersected, while the traditional NODEs without any augmentation are not directly applicable for such modeling, and (2) we achieve lower loss and higher accuracy not only for the data produced synthetically by complex models but also for the real-world image datasets, i.e., CIFAR10, MNIST, and SVHN. Our results on the NDDEs reveal that appropriately articulating the elements of dynamical systems into the network design is truly beneficial to promoting the network performance.
2307.00552
R\'emy Chaput
R\'emy Chaput, Olivier Boissier, Mathieu Guillermin
Adaptive reinforcement learning of multi-agent ethically-aligned behaviours: the QSOM and QDSOM algorithms
30 pages, 7 figures, 7 tables
null
null
null
cs.LG cs.AI cs.CY cs.MA
http://creativecommons.org/licenses/by-sa/4.0/
The numerous deployed Artificial Intelligence systems need to be aligned with our ethical considerations. However, such ethical considerations might change as time passes: our society is not fixed, and our social mores evolve. This makes it difficult for these AI systems; in the Machine Ethics field especially, it has remained an under-studied challenge. In this paper, we present two algorithms, named QSOM and QDSOM, which are able to adapt to changes in the environment, and especially in the reward function, which represents the ethical considerations that we want these systems to be aligned with. They associate the well-known Q-Table with (Dynamic) Self-Organizing Maps to handle the continuous and multi-dimensional state and action spaces. We evaluate them on a use-case of multi-agent energy repartition within a small Smart Grid neighborhood, and prove their ability to adapt, and their higher performance compared to baseline Reinforcement Learning algorithms.
[ { "created": "Sun, 2 Jul 2023 12:22:02 GMT", "version": "v1" } ]
2023-07-04
[ [ "Chaput", "Rémy", "" ], [ "Boissier", "Olivier", "" ], [ "Guillermin", "Mathieu", "" ] ]
The numerous deployed Artificial Intelligence systems need to be aligned with our ethical considerations. However, such ethical considerations might change as time passes: our society is not fixed, and our social mores evolve. This makes it difficult for these AI systems; in the Machine Ethics field especially, it has remained an under-studied challenge. In this paper, we present two algorithms, named QSOM and QDSOM, which are able to adapt to changes in the environment, and especially in the reward function, which represents the ethical considerations that we want these systems to be aligned with. They associate the well-known Q-Table with (Dynamic) Self-Organizing Maps to handle the continuous and multi-dimensional state and action spaces. We evaluate them on a use-case of multi-agent energy repartition within a small Smart Grid neighborhood, and prove their ability to adapt, and their higher performance compared to baseline Reinforcement Learning algorithms.
1905.09068
Konstantinos Nikolaidis
Konstantinos Nikolaidis, Stein Kristiansen, Vera Goebel, Thomas Plagemann, Knut Liest{\o}l, Mohan Kankanhalli
Augmenting Physiological Time Series Data: A Case Study for Sleep Apnea Detection
null
ECML-PKDD 2019
null
null
cs.LG eess.SP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Supervised machine learning applications in the health domain often face the problem of insufficient training data. The quantity of labelled data is small due to privacy concerns and the cost of data acquisition and labelling by a medical expert. Furthermore, it is quite common that collected data are unbalanced, and getting enough data to personalize models for individuals is very expensive or even infeasible. This paper addresses these problems by (1) designing a recurrent Generative Adversarial Network to generate realistic synthetic data and to augment the original dataset, (2) enabling the generation of balanced datasets based on a heavily unbalanced dataset, and (3) controlling the data generation in such a way that the generated data resembles data from specific individuals. We apply these solutions to sleep apnea detection and study in the evaluation the performance of four well-known techniques, i.e., K-Nearest Neighbour, Random Forest, Multi-Layer Perceptron, and Support Vector Machine. All classifiers exhibit in the experiments a consistent increase in sensitivity and a kappa statistic increase by between 0.007 and 0.182.
[ { "created": "Wed, 22 May 2019 11:01:34 GMT", "version": "v1" } ]
2021-12-10
[ [ "Nikolaidis", "Konstantinos", "" ], [ "Kristiansen", "Stein", "" ], [ "Goebel", "Vera", "" ], [ "Plagemann", "Thomas", "" ], [ "Liestøl", "Knut", "" ], [ "Kankanhalli", "Mohan", "" ] ]
Supervised machine learning applications in the health domain often face the problem of insufficient training data. The quantity of labelled data is small due to privacy concerns and the cost of data acquisition and labelling by a medical expert. Furthermore, it is quite common that collected data are unbalanced, and getting enough data to personalize models for individuals is very expensive or even infeasible. This paper addresses these problems by (1) designing a recurrent Generative Adversarial Network to generate realistic synthetic data and to augment the original dataset, (2) enabling the generation of balanced datasets based on a heavily unbalanced dataset, and (3) controlling the data generation in such a way that the generated data resembles data from specific individuals. We apply these solutions to sleep apnea detection and study in the evaluation the performance of four well-known techniques, i.e., K-Nearest Neighbour, Random Forest, Multi-Layer Perceptron, and Support Vector Machine. All classifiers exhibit in the experiments a consistent increase in sensitivity and a kappa statistic increase by between 0.007 and 0.182.
1910.06493
Benjamin Adams
Mathew Darling, Benjamin Adams, Caroline Orchiston, Thomas Wilson, Brendon Bradley
Understanding population fluctuations through volunteered geographic information and novel indicators: The experience of Rakiura, Stewart Island, New Zealand
8 pages, GeoComputation 2019
null
10.17608/k6.auckland.9846323.v1
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
In an era of heterogeneous data, novel methods and volunteered geographic information provide opportunities to understand how people interact with a place. However, it is not enough to simply have such heterogeneous data; instead, its usability and reliability need to be assessed. Here, we draw upon the case study of Rakiura, Stewart Island, where manifested passenger numbers across the Foveaux Strait are known. We have built a population model to ground-truth such novel indicators. In our preliminary study, we find that a number of indicators offer the opportunity to understand fluctuations in populations. Some indicators (such as wastewater volumes) can suggest relative changes in populations in a raw form, while other indicators (such as TripAdvisor reviews or Instagram posts) require further data enrichment to yield insights into population fluctuations. This research forms part of a larger research project looking to test and apply such novel indicators to inform disaster risk assessments.
[ { "created": "Tue, 15 Oct 2019 02:43:03 GMT", "version": "v1" } ]
2019-10-16
[ [ "Darling", "Mathew", "" ], [ "Adams", "Benjamin", "" ], [ "Orchiston", "Caroline", "" ], [ "Wilson", "Thomas", "" ], [ "Bradley", "Brendon", "" ] ]
In an era of heterogeneous data, novel methods and volunteered geographic information provide opportunities to understand how people interact with a place. However, it is not enough to simply have such heterogeneous data; instead, its usability and reliability need to be assessed. Here, we draw upon the case study of Rakiura, Stewart Island, where manifested passenger numbers across the Foveaux Strait are known. We have built a population model to ground-truth such novel indicators. In our preliminary study, we find that a number of indicators offer the opportunity to understand fluctuations in populations. Some indicators (such as wastewater volumes) can suggest relative changes in populations in a raw form, while other indicators (such as TripAdvisor reviews or Instagram posts) require further data enrichment to yield insights into population fluctuations. This research forms part of a larger research project looking to test and apply such novel indicators to inform disaster risk assessments.
2008.05563
Naimul Mefraz Khan
Bita Houshmand, Naimul Khan
Facial Expression Recognition Under Partial Occlusion from Virtual Reality Headsets based on Transfer Learning
To be presented at the IEEE BigMM 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Facial expressions of emotion are a major channel in our daily communications, and they have been a subject of intense research in recent years. To automatically infer facial expressions, convolutional neural network based approaches have become widely adopted due to their proven applicability to the Facial Expression Recognition (FER) task. On the other hand, Virtual Reality (VR) has gained popularity as an immersive multimedia platform, where FER can provide enriched media experiences. However, recognizing facial expressions while wearing a head-mounted VR headset is a challenging task due to the upper half of the face being completely occluded. In this paper we attempt to overcome these issues and focus on facial expression recognition in the presence of severe occlusion, where the user is wearing a head-mounted display in a VR setting. We propose a geometric model to simulate occlusion resulting from a Samsung Gear VR headset that can be applied to existing FER datasets. Then, we adopt a transfer learning approach, starting from two pretrained networks, namely VGG and ResNet. We further fine-tune the networks on the FER+ and RAF-DB datasets. Experimental results show that our approach achieves comparable results to existing methods while training on three modified benchmark datasets that adhere to realistic occlusion resulting from wearing a commodity VR headset. Code for this paper is available at: https://github.com/bita-github/MRP-FER
[ { "created": "Wed, 12 Aug 2020 20:25:07 GMT", "version": "v1" } ]
2020-08-14
[ [ "Houshmand", "Bita", "" ], [ "Khan", "Naimul", "" ] ]
Facial expressions of emotion are a major channel in our daily communications, and they have been a subject of intense research in recent years. To automatically infer facial expressions, convolutional neural network based approaches have become widely adopted due to their proven applicability to the Facial Expression Recognition (FER) task. On the other hand, Virtual Reality (VR) has gained popularity as an immersive multimedia platform, where FER can provide enriched media experiences. However, recognizing facial expressions while wearing a head-mounted VR headset is a challenging task due to the upper half of the face being completely occluded. In this paper we attempt to overcome these issues and focus on facial expression recognition in the presence of severe occlusion, where the user is wearing a head-mounted display in a VR setting. We propose a geometric model to simulate occlusion resulting from a Samsung Gear VR headset that can be applied to existing FER datasets. Then, we adopt a transfer learning approach, starting from two pretrained networks, namely VGG and ResNet. We further fine-tune the networks on the FER+ and RAF-DB datasets. Experimental results show that our approach achieves comparable results to existing methods while training on three modified benchmark datasets that adhere to realistic occlusion resulting from wearing a commodity VR headset. Code for this paper is available at: https://github.com/bita-github/MRP-FER
1810.03048
Gilwoo Lee
Gilwoo Lee, Sanjiban Choudhury, Brian Hou, Siddhartha S. Srinivasa
Bayes-CPACE: PAC Optimal Exploration in Continuous Space Bayes-Adaptive Markov Decision Processes
null
null
null
null
cs.LG cs.RO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the first PAC optimal algorithm for Bayes-Adaptive Markov Decision Processes (BAMDPs) in continuous state and action spaces, to the best of our knowledge. The BAMDP framework elegantly addresses model uncertainty by incorporating Bayesian belief updates into long-term expected return. However, computing an exact optimal Bayesian policy is intractable. Our key insight is to compute a near-optimal value function by covering the continuous state-belief-action space with a finite set of representative samples and exploiting the Lipschitz continuity of the value function. We prove the near-optimality of our algorithm and analyze a number of schemes that boost the algorithm's efficiency. Finally, we empirically validate our approach on a number of discrete and continuous BAMDPs and show that the learned policy has consistently competitive performance against baseline approaches.
[ { "created": "Sat, 6 Oct 2018 20:37:38 GMT", "version": "v1" } ]
2018-10-09
[ [ "Lee", "Gilwoo", "" ], [ "Choudhury", "Sanjiban", "" ], [ "Hou", "Brian", "" ], [ "Srinivasa", "Siddhartha S.", "" ] ]
We present the first PAC optimal algorithm for Bayes-Adaptive Markov Decision Processes (BAMDPs) in continuous state and action spaces, to the best of our knowledge. The BAMDP framework elegantly addresses model uncertainty by incorporating Bayesian belief updates into long-term expected return. However, computing an exact optimal Bayesian policy is intractable. Our key insight is to compute a near-optimal value function by covering the continuous state-belief-action space with a finite set of representative samples and exploiting the Lipschitz continuity of the value function. We prove the near-optimality of our algorithm and analyze a number of schemes that boost the algorithm's efficiency. Finally, we empirically validate our approach on a number of discrete and continuous BAMDPs and show that the learned policy has consistently competitive performance against baseline approaches.
2312.05404
Debo Cheng
Debo Cheng (1), Yang Xie (2), Ziqi Xu (1), Jiuyong Li (1), Lin Liu (1), Jixue Liu (1), Yinghao Zhang (2) and Zaiwen Feng (2) ((1) UniSA STEM, University of South Australia, Adelaide, Australia and (2) College of Informatics, Huazhong Agricultural University, Wuhan, China)
Disentangled Latent Representation Learning for Tackling the Confounding M-Bias Problem in Causal Inference
10 pages, 3 figures and 5 tables. Accepted by ICDM2023
null
null
null
cs.LG cs.AI stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In causal inference, it is a fundamental task to estimate the causal effect from observational data. However, latent confounders pose major challenges in causal inference in observational data, for example, confounding bias and M-bias. Recent data-driven causal effect estimators tackle the confounding bias problem via balanced representation learning, but assume no M-bias in the system, thus they fail to handle the M-bias. In this paper, we identify a challenging and unsolved problem caused by a variable that leads to confounding bias and M-bias simultaneously. To address this problem with co-occurring M-bias and confounding bias, we propose a novel Disentangled Latent Representation learning framework for learning latent representations from proxy variables for unbiased Causal effect Estimation (DLRCE) from observational data. Specifically, DLRCE learns three sets of latent representations from the measured proxy variables to adjust for the confounding bias and M-bias. Extensive experiments on both synthetic and three real-world datasets demonstrate that DLRCE significantly outperforms the state-of-the-art estimators in the case of the presence of both confounding bias and M-bias.
[ { "created": "Fri, 8 Dec 2023 23:25:45 GMT", "version": "v1" } ]
2023-12-12
[ [ "Cheng", "Debo", "" ], [ "Xie", "Yang", "" ], [ "Xu", "Ziqi", "" ], [ "Li", "Jiuyong", "" ], [ "Liu", "Lin", "" ], [ "Liu", "Jixue", "" ], [ "Zhang", "Yinghao", "" ], [ "Feng", "Zaiwen", "" ] ]
Estimating causal effects from observational data is a fundamental task in causal inference. However, latent confounders pose major challenges in causal inference from observational data, for example, confounding bias and M-bias. Recent data-driven causal effect estimators tackle the confounding bias problem via balanced representation learning, but they assume no M-bias in the system and thus fail to handle it. In this paper, we identify a challenging and unsolved problem caused by a variable that leads to confounding bias and M-bias simultaneously. To address this problem of co-occurring M-bias and confounding bias, we propose DLRCE, a novel Disentangled Latent Representation learning framework that learns latent representations from proxy variables for unbiased Causal effect Estimation from observational data. Specifically, DLRCE learns three sets of latent representations from the measured proxy variables to adjust for both the confounding bias and the M-bias. Extensive experiments on synthetic and three real-world datasets demonstrate that DLRCE significantly outperforms state-of-the-art estimators in the presence of both confounding bias and M-bias.
2008.00710
Yuting He
Yuting He, Tiantian Li, Guanyu Yang, Youyong Kong, Yang Chen, Huazhong Shu, Jean-Louis Coatrieux, Jean-Louis Dillenseger, Shuo Li
Deep Complementary Joint Model for Complex Scene Registration and Few-shot Segmentation on Medical Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning-based medical image registration and segmentation joint models utilize the complementarity (augmentation data or weakly supervised data from registration, region constraints from segmentation) to bring mutual improvement in complex scene and few-shot situation. However, further adoption of the joint models are hindered: 1) the diversity of augmentation data is reduced limiting the further enhancement of segmentation, 2) misaligned regions in weakly supervised data disturb the training process, 3) lack of label-based region constraints in few-shot situation limits the registration performance. We propose a novel Deep Complementary Joint Model (DeepRS) for complex scene registration and few-shot segmentation. We embed a perturbation factor in the registration to increase the activity of deformation thus maintaining the augmentation data diversity. We take a pixel-wise discriminator to extract alignment confidence maps which highlight aligned regions in weakly supervised data so the misaligned regions' disturbance will be suppressed via weighting. The outputs from segmentation model are utilized to implement deep-based region constraints thus relieving the label requirements and bringing fine registration. Extensive experiments on the CT dataset of MM-WHS 2017 Challenge show great advantages of our DeepRS that outperforms the existing state-of-the-art models.
[ { "created": "Mon, 3 Aug 2020 08:25:59 GMT", "version": "v1" } ]
2020-08-04
[ [ "He", "Yuting", "" ], [ "Li", "Tiantian", "" ], [ "Yang", "Guanyu", "" ], [ "Kong", "Youyong", "" ], [ "Chen", "Yang", "" ], [ "Shu", "Huazhong", "" ], [ "Coatrieux", "Jean-Louis", "" ], [ "Dillenseger", "Jean-Louis", "" ], [ "Li", "Shuo", "" ] ]
Deep learning-based joint models for medical image registration and segmentation exploit complementarity (augmentation data or weakly supervised data from registration, region constraints from segmentation) to bring mutual improvement in complex scenes and few-shot situations. However, further adoption of joint models is hindered because: 1) the diversity of augmentation data is reduced, limiting further enhancement of segmentation; 2) misaligned regions in weakly supervised data disturb the training process; and 3) the lack of label-based region constraints in few-shot situations limits registration performance. We propose a novel Deep Complementary Joint Model (DeepRS) for complex scene registration and few-shot segmentation. We embed a perturbation factor in the registration to increase the activity of deformation, thus maintaining the diversity of augmentation data. We use a pixel-wise discriminator to extract alignment confidence maps that highlight aligned regions in weakly supervised data, so that the disturbance from misaligned regions is suppressed via weighting. The outputs of the segmentation model are used to impose deep-based region constraints, relieving label requirements and yielding fine registration. Extensive experiments on the CT dataset of the MM-WHS 2017 Challenge show the great advantages of our DeepRS, which outperforms existing state-of-the-art models.