Dataset schema (column — type, length range):

id — string, lengths 9 to 10
submitter — string, lengths 1 to 64
authors — string, lengths 4 to 20.7k
title — string, lengths 4 to 246
comments — string, lengths 1 to 523
journal-ref — string, lengths 4 to 404
doi — string, lengths 11 to 153
report-no — string, lengths 2 to 254
categories — string, lengths 5 to 98
license — string, 9 classes
orig_abstract — string, lengths 14 to 3.35k
versions — list, lengths 1 to 60
update_date — string, length 10
authors_parsed — list, lengths 1 to 1.35k
abstract — string, lengths 11 to 3.34k
2206.05897
Lin Tian
Lin Tian, Hastings Greer, François-Xavier Vialard, Roland Kwitt, Raúl San José Estépar, Richard Jarrett Rushmore, Nikolaos Makris, Sylvain Bouix, Marc Niethammer
$\texttt{GradICON}$: Approximate Diffeomorphisms via Gradient Inverse Consistency
29 pages, 16 figures, CVPR 2023
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present an approach to learning regular spatial transformations between image pairs in the context of medical image registration. Contrary to optimization-based registration techniques and many modern learning-based methods, we do not directly penalize transformation irregularities but instead promote transformation regularity via an inverse consistency penalty. We use a neural network to predict a map between a source and a target image as well as the map when swapping the source and target images. Different from existing approaches, we compose these two resulting maps and regularize deviations of the $\bf{Jacobian}$ of this composition from the identity matrix. This regularizer -- $\texttt{GradICON}$ -- results in much better convergence when training registration models compared to promoting inverse consistency of the composition of maps directly while retaining the desirable implicit regularization effects of the latter. We achieve state-of-the-art registration performance on a variety of real-world medical image datasets using a single set of hyperparameters and a single non-dataset-specific training protocol.
[ { "created": "Mon, 13 Jun 2022 04:03:49 GMT", "version": "v1" }, { "created": "Thu, 1 Dec 2022 19:56:37 GMT", "version": "v2" }, { "created": "Tue, 4 Apr 2023 05:44:39 GMT", "version": "v3" }, { "created": "Tue, 10 Oct 2023 03:42:57 GMT", "version": "v4" } ]
2023-10-11
[ [ "Tian", "Lin", "" ], [ "Greer", "Hastings", "" ], [ "Vialard", "François-Xavier", "" ], [ "Kwitt", "Roland", "" ], [ "Estépar", "Raúl San José", "" ], [ "Rushmore", "Richard Jarrett", "" ], [ "Makris", "Nikolaos", "" ], [ "Bouix", "Sylvain", "" ], [ "Niethammer", "Marc", "" ] ]
We present an approach to learning regular spatial transformations between image pairs in the context of medical image registration. Contrary to optimization-based registration techniques and many modern learning-based methods, we do not directly penalize transformation irregularities but instead promote transformation regularity via an inverse consistency penalty. We use a neural network to predict a map between a source and a target image as well as the map when swapping the source and target images. Different from existing approaches, we compose these two resulting maps and regularize deviations of the $\bf{Jacobian}$ of this composition from the identity matrix. This regularizer -- $\texttt{GradICON}$ -- results in much better convergence when training registration models compared to promoting inverse consistency of the composition of maps directly while retaining the desirable implicit regularization effects of the latter. We achieve state-of-the-art registration performance on a variety of real-world medical image datasets using a single set of hyperparameters and a single non-dataset-specific training protocol.
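The regularizer the abstract describes can be sketched numerically: compose the two predicted maps and penalize the deviation of the composition's Jacobian from the identity. The finite-difference Jacobian and the affine toy maps below are illustrative stand-ins, not the paper's neural-network implementation.

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-5):
    """Central finite-difference Jacobian of a map f: R^d -> R^d at x."""
    d = x.size
    J = np.zeros((d, d))
    for j in range(d):
        dx = np.zeros(d)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2.0 * eps)
    return J

def gradicon_penalty(phi_ab, phi_ba, points):
    """Mean squared deviation of Jacobian(phi_AB o phi_BA) from identity,
    averaged over sample points."""
    comp = lambda x: phi_ab(phi_ba(x))
    d = points.shape[1]
    I = np.eye(d)
    return float(np.mean([np.sum((jacobian_fd(comp, p) - I) ** 2)
                          for p in points]))

# Toy check: an exactly inverse pair of affine maps incurs (numerically)
# zero penalty, while a non-inverse pair does not.
A = np.array([[1.2, 0.1], [0.0, 0.9]])
phi_ab = lambda x: A @ x
phi_ba = lambda x: np.linalg.solve(A, x)
pts = np.random.default_rng(0).standard_normal((5, 2))
penalty = gradicon_penalty(phi_ab, phi_ba, pts)
```

Because the penalty is on the Jacobian of the composition rather than on the composition's deviation from identity itself, even this toy version distinguishes "inverse-consistent" map pairs from arbitrary ones.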
2311.01717
Zherong Pan
Chen Liang, Xifeng Gao, Kui Wu, Zherong Pan
Second-Order Convergent Collision-Constrained Optimization-Based Planner
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding robot poses and trajectories represents a foundational aspect of robot motion planning. Despite decades of research, efficiently and robustly addressing these challenges is still difficult. Existing approaches are often plagued by various limitations, such as intricate geometric approximations, violations of collision constraints, or slow first-order convergence. In this paper, we introduce two novel optimization formulations that offer provable robustness, achieving second-order convergence while requiring only a convex approximation of the robot's links and obstacles. Our first method, known as the Explicit Collision Barrier (ECB) method, employs a barrier function to guarantee separation between convex objects. ECB uses an efficient matrix factorization technique, enabling a second-order Newton's method with an iterative complexity linear in the number of separating planes. Our second method, referred to as the Implicit Collision Barrier (ICB) method, further transforms the separating planes into implicit functions of robot poses. We show such an implicit objective function is twice-differentiable, with derivatives evaluated at a linear complexity. To assess the effectiveness of our approaches, we conduct a comparative study with a first-order baseline algorithm across six testing scenarios. Our results unequivocally justify that our method exhibits significantly faster convergence rates compared to the baseline algorithm.
[ { "created": "Fri, 3 Nov 2023 05:21:03 GMT", "version": "v1" } ]
2023-11-06
[ [ "Liang", "Chen", "" ], [ "Gao", "Xifeng", "" ], [ "Wu", "Kui", "" ], [ "Pan", "Zherong", "" ] ]
Finding robot poses and trajectories represents a foundational aspect of robot motion planning. Despite decades of research, efficiently and robustly addressing these challenges is still difficult. Existing approaches are often plagued by various limitations, such as intricate geometric approximations, violations of collision constraints, or slow first-order convergence. In this paper, we introduce two novel optimization formulations that offer provable robustness, achieving second-order convergence while requiring only a convex approximation of the robot's links and obstacles. Our first method, known as the Explicit Collision Barrier (ECB) method, employs a barrier function to guarantee separation between convex objects. ECB uses an efficient matrix factorization technique, enabling a second-order Newton's method with an iterative complexity linear in the number of separating planes. Our second method, referred to as the Implicit Collision Barrier (ICB) method, further transforms the separating planes into implicit functions of robot poses. We show such an implicit objective function is twice-differentiable, with derivatives evaluated at a linear complexity. To assess the effectiveness of our approaches, we conduct a comparative study with a first-order baseline algorithm across six testing scenarios. Our results unequivocally justify that our method exhibits significantly faster convergence rates compared to the baseline algorithm.
2309.05133
David Heath
David Heath
Parallel RAM from Cyclic Circuits
null
null
null
null
cs.DS cs.CC
http://creativecommons.org/licenses/by/4.0/
Known simulations of random access machines (RAMs) or parallel RAMs (PRAMs) by Boolean circuits incur significant polynomial blowup, due to the need to repeatedly simulate accesses to a large main memory. Consider a single modification to Boolean circuits that removes the restriction that circuit graphs are acyclic. We call this the cyclic circuit model. Note, cyclic circuits remain combinational, as they do not allow wire values to change over time. We simulate PRAM with a cyclic circuit, and the blowup from our simulation is only polylogarithmic. Consider a PRAM program $P$ that on a length-$n$ input uses an arbitrary number of processors to manipulate words of size $\Theta(\log n)$ bits and then halts within $W(n)$ work. We construct a size-$O(W(n)\cdot \log^4 n)$ cyclic circuit that simulates $P$. Suppose that on a particular input, $P$ halts in time $T$; our circuit computes the same output within $T \cdot O(\log^3 n)$ gate delay. This implies theoretical feasibility of powerful parallel machines. Cyclic circuits can be implemented in hardware, and our circuit achieves performance within polylog factors of PRAM. Our simulated PRAM synchronizes processors via logical dependencies between wires.
[ { "created": "Sun, 10 Sep 2023 20:53:18 GMT", "version": "v1" }, { "created": "Mon, 16 Oct 2023 16:39:37 GMT", "version": "v2" }, { "created": "Fri, 27 Oct 2023 05:04:51 GMT", "version": "v3" } ]
2023-10-30
[ [ "Heath", "David", "" ] ]
Known simulations of random access machines (RAMs) or parallel RAMs (PRAMs) by Boolean circuits incur significant polynomial blowup, due to the need to repeatedly simulate accesses to a large main memory. Consider a single modification to Boolean circuits that removes the restriction that circuit graphs are acyclic. We call this the cyclic circuit model. Note, cyclic circuits remain combinational, as they do not allow wire values to change over time. We simulate PRAM with a cyclic circuit, and the blowup from our simulation is only polylogarithmic. Consider a PRAM program $P$ that on a length-$n$ input uses an arbitrary number of processors to manipulate words of size $\Theta(\log n)$ bits and then halts within $W(n)$ work. We construct a size-$O(W(n)\cdot \log^4 n)$ cyclic circuit that simulates $P$. Suppose that on a particular input, $P$ halts in time $T$; our circuit computes the same output within $T \cdot O(\log^3 n)$ gate delay. This implies theoretical feasibility of powerful parallel machines. Cyclic circuits can be implemented in hardware, and our circuit achieves performance within polylog factors of PRAM. Our simulated PRAM synchronizes processors via logical dependencies between wires.
1804.09194
Martin Rünz
Martin Rünz, Maud Buffier, Lourdes Agapito
MaskFusion: Real-Time Recognition, Tracking and Reconstruction of Multiple Moving Objects
Presented at IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2018
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present MaskFusion, a real-time, object-aware, semantic and dynamic RGB-D SLAM system that goes beyond traditional systems which output a purely geometric map of a static scene. MaskFusion recognizes, segments and assigns semantic class labels to different objects in the scene, while tracking and reconstructing them even when they move independently from the camera. As an RGB-D camera scans a cluttered scene, image-based instance-level semantic segmentation creates semantic object masks that enable real-time object recognition and the creation of an object-level representation for the world map. Unlike previous recognition-based SLAM systems, MaskFusion does not require known models of the objects it can recognize, and can deal with multiple independent motions. MaskFusion takes full advantage of using instance-level semantic segmentation to enable semantic labels to be fused into an object-aware map, unlike recent semantics enabled SLAM systems that perform voxel-level semantic segmentation. We show augmented-reality applications that demonstrate the unique features of the map output by MaskFusion: instance-aware, semantic and dynamic.
[ { "created": "Tue, 24 Apr 2018 18:15:15 GMT", "version": "v1" }, { "created": "Mon, 22 Oct 2018 17:47:27 GMT", "version": "v2" } ]
2018-10-23
[ [ "Rünz", "Martin", "" ], [ "Buffier", "Maud", "" ], [ "Agapito", "Lourdes", "" ] ]
We present MaskFusion, a real-time, object-aware, semantic and dynamic RGB-D SLAM system that goes beyond traditional systems which output a purely geometric map of a static scene. MaskFusion recognizes, segments and assigns semantic class labels to different objects in the scene, while tracking and reconstructing them even when they move independently from the camera. As an RGB-D camera scans a cluttered scene, image-based instance-level semantic segmentation creates semantic object masks that enable real-time object recognition and the creation of an object-level representation for the world map. Unlike previous recognition-based SLAM systems, MaskFusion does not require known models of the objects it can recognize, and can deal with multiple independent motions. MaskFusion takes full advantage of using instance-level semantic segmentation to enable semantic labels to be fused into an object-aware map, unlike recent semantics enabled SLAM systems that perform voxel-level semantic segmentation. We show augmented-reality applications that demonstrate the unique features of the map output by MaskFusion: instance-aware, semantic and dynamic.
2408.06872
Mike Perkins
Mike Perkins (1), Jasper Roe (2) ((1) British University Vietnam, (2) James Cook University Singapore)
Generative AI Tools in Academic Research: Applications and Implications for Qualitative and Quantitative Research Methodologies
null
null
null
null
cs.HC cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
This study examines the impact of Generative Artificial Intelligence (GenAI) on academic research, focusing on its application to qualitative and quantitative data analysis. As GenAI tools evolve rapidly, they offer new possibilities for enhancing research productivity and democratising complex analytical processes. However, their integration into academic practice raises significant questions regarding research integrity and security, authorship, and the changing nature of scholarly work. Through an examination of current capabilities and potential future applications, this study provides insights into how researchers may utilise GenAI tools responsibly and ethically. We present case studies that demonstrate the application of GenAI in various research methodologies, discuss the challenges of replicability and consistency in AI-assisted research, and consider the ethical implications of increased AI integration in academia. This study explores both qualitative and quantitative applications of GenAI, highlighting tools for transcription, coding, thematic analysis, visual analytics, and statistical analysis. By addressing these issues, we aim to contribute to the ongoing discourse on the role of AI in shaping the future of academic research and provide guidance for researchers exploring the rapidly evolving landscape of AI-assisted research tools and research.
[ { "created": "Tue, 13 Aug 2024 13:10:03 GMT", "version": "v1" } ]
2024-08-14
[ [ "Perkins", "Mike", "" ], [ "Roe", "Jasper", "" ] ]
This study examines the impact of Generative Artificial Intelligence (GenAI) on academic research, focusing on its application to qualitative and quantitative data analysis. As GenAI tools evolve rapidly, they offer new possibilities for enhancing research productivity and democratising complex analytical processes. However, their integration into academic practice raises significant questions regarding research integrity and security, authorship, and the changing nature of scholarly work. Through an examination of current capabilities and potential future applications, this study provides insights into how researchers may utilise GenAI tools responsibly and ethically. We present case studies that demonstrate the application of GenAI in various research methodologies, discuss the challenges of replicability and consistency in AI-assisted research, and consider the ethical implications of increased AI integration in academia. This study explores both qualitative and quantitative applications of GenAI, highlighting tools for transcription, coding, thematic analysis, visual analytics, and statistical analysis. By addressing these issues, we aim to contribute to the ongoing discourse on the role of AI in shaping the future of academic research and provide guidance for researchers exploring the rapidly evolving landscape of AI-assisted research tools and research.
2310.18011
Kathleen Gregory
Kathleen Gregory, Laura Koesten, Regina Schuster, Torsten Möller, Sarah Davies
Data journeys in popular science: Producing climate change and COVID-19 data visualizations at Scientific American
44 pages, 4 figures, 3 boxes
null
10.1162/99608f92.141c99cf
null
cs.DL cs.HC physics.pop-ph
http://creativecommons.org/licenses/by/4.0/
Vast amounts of (open) data are increasingly used to make arguments about crisis topics such as climate change and global pandemics. Data visualizations are central to bringing these viewpoints to broader publics. However, visualizations often conceal the many contexts involved in their production, ranging from decisions made in research labs about collecting and sharing data to choices made in editorial rooms about which data stories to tell. In this paper, we examine how data visualizations about climate change and COVID-19 are produced in popular science magazines, using Scientific American, an established English-language popular science magazine, as a case study. To do this, we apply the analytical concept of data journeys (Leonelli, 2020) in a mixed methods study that centers on interviews with Scientific American staff and is supplemented by a visualization analysis of selected charts. In particular, we discuss the affordances of working with open data, the role of collaborative data practices, and how the magazine works to counter misinformation and increase transparency. This work provides an empirical contribution by providing insight into the data (visualization) practices of science communicators and demonstrating how the concept of data journeys can be used as an analytical framework.
[ { "created": "Fri, 27 Oct 2023 09:39:06 GMT", "version": "v1" }, { "created": "Tue, 26 Mar 2024 08:01:34 GMT", "version": "v2" }, { "created": "Wed, 27 Mar 2024 08:13:14 GMT", "version": "v3" } ]
2024-03-28
[ [ "Gregory", "Kathleen", "" ], [ "Koesten", "Laura", "" ], [ "Schuster", "Regina", "" ], [ "Möller", "Torsten", "" ], [ "Davies", "Sarah", "" ] ]
Vast amounts of (open) data are increasingly used to make arguments about crisis topics such as climate change and global pandemics. Data visualizations are central to bringing these viewpoints to broader publics. However, visualizations often conceal the many contexts involved in their production, ranging from decisions made in research labs about collecting and sharing data to choices made in editorial rooms about which data stories to tell. In this paper, we examine how data visualizations about climate change and COVID-19 are produced in popular science magazines, using Scientific American, an established English-language popular science magazine, as a case study. To do this, we apply the analytical concept of data journeys (Leonelli, 2020) in a mixed methods study that centers on interviews with Scientific American staff and is supplemented by a visualization analysis of selected charts. In particular, we discuss the affordances of working with open data, the role of collaborative data practices, and how the magazine works to counter misinformation and increase transparency. This work provides an empirical contribution by providing insight into the data (visualization) practices of science communicators and demonstrating how the concept of data journeys can be used as an analytical framework.
1209.4420
Chao Wang
Lan-Ting LI
An Efficient Color Face Verification Based on 2-Directional 2-Dimensional Feature Extraction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel and uniform framework for face verification is presented in this paper. First, a 2-directional 2-dimensional feature extraction method is adopted to extract a client-specific template: a 2D discriminant projection matrix. The face skin color information is then utilized as an additional feature, yielding a decision-making strategy that makes use of not only the 2D grey feature but also the 2D skin color feature. A fusion decision of both is evaluated on the XM2VTS database according to the Lausanne protocol. Experimental results show that the framework achieves high verification accuracy and verification speed.
[ { "created": "Thu, 20 Sep 2012 04:20:40 GMT", "version": "v1" } ]
2012-09-21
[ [ "LI", "Lan-Ting", "" ] ]
A novel and uniform framework for face verification is presented in this paper. First, a 2-directional 2-dimensional feature extraction method is adopted to extract a client-specific template: a 2D discriminant projection matrix. The face skin color information is then utilized as an additional feature, yielding a decision-making strategy that makes use of not only the 2D grey feature but also the 2D skin color feature. A fusion decision of both is evaluated on the XM2VTS database according to the Lausanne protocol. Experimental results show that the framework achieves high verification accuracy and verification speed.
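The fusion decision described in the abstract can be sketched as weighted score-level fusion of the two feature channels. The weight and acceptance threshold below are illustrative assumptions, not values reported in the paper.

```python
def fused_decision(grey_score, colour_score, w=0.7, threshold=0.5):
    """Weighted score-level fusion of a 2D grey-feature match score and a
    2D skin-colour match score (both assumed normalized to [0, 1]).
    The weight w and the threshold are illustrative, not from the paper."""
    fused = w * grey_score + (1.0 - w) * colour_score
    return fused >= threshold

accept = fused_decision(0.9, 0.8)   # strong match on both features
reject = fused_decision(0.1, 0.2)   # weak match on both features
```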
2101.05623
David Pardo
M. Shahriari, A. Hazra, D. Pardo
Design of borehole resistivity measurement acquisition systems using deep learning
null
null
null
null
cs.LG cs.NA eess.SP math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Borehole resistivity measurements recorded with logging-while-drilling (LWD) instruments are widely used for characterizing the earth's subsurface properties. They facilitate the extraction of natural resources such as oil and gas. LWD instruments require real-time inversions of electromagnetic measurements to estimate the electrical properties of the earth's subsurface near the well and possibly correct the well trajectory. Deep Neural Network (DNN)-based methods are suitable for the rapid inversion of borehole resistivity measurements as they approximate the forward and inverse problem offline during the training phase and they only require a fraction of a second for the evaluation (aka prediction). However, the inverse problem generally admits multiple solutions. DNNs with traditional loss functions based on data misfit are ill-equipped for solving an inverse problem. This can be partially overcome by adding regularization terms to a loss function specifically designed for encoder-decoder architectures. But adding regularization seriously limits the number of possible solutions to a set of a priori desirable physical solutions. To avoid this, we use a two-step loss function without any regularization. In addition, to guarantee an inverse solution, we need a carefully selected measurement acquisition system with a sufficient number of measurements. In this work, we propose a DNN-based iterative algorithm for designing such a measurement acquisition system. We illustrate our DNN-based iterative algorithm via several synthetic examples. Numerical results show that the obtained measurement acquisition system is sufficient to identify and characterize both resistive and conductive layers above and below the logging instrument. Numerical results are promising, although further improvements are required to make our method amenable for industrial purposes.
[ { "created": "Tue, 12 Jan 2021 12:49:44 GMT", "version": "v1" } ]
2021-01-15
[ [ "Shahriari", "M.", "" ], [ "Hazra", "A.", "" ], [ "Pardo", "D.", "" ] ]
Borehole resistivity measurements recorded with logging-while-drilling (LWD) instruments are widely used for characterizing the earth's subsurface properties. They facilitate the extraction of natural resources such as oil and gas. LWD instruments require real-time inversions of electromagnetic measurements to estimate the electrical properties of the earth's subsurface near the well and possibly correct the well trajectory. Deep Neural Network (DNN)-based methods are suitable for the rapid inversion of borehole resistivity measurements as they approximate the forward and inverse problem offline during the training phase and they only require a fraction of a second for the evaluation (aka prediction). However, the inverse problem generally admits multiple solutions. DNNs with traditional loss functions based on data misfit are ill-equipped for solving an inverse problem. This can be partially overcome by adding regularization terms to a loss function specifically designed for encoder-decoder architectures. But adding regularization seriously limits the number of possible solutions to a set of a priori desirable physical solutions. To avoid this, we use a two-step loss function without any regularization. In addition, to guarantee an inverse solution, we need a carefully selected measurement acquisition system with a sufficient number of measurements. In this work, we propose a DNN-based iterative algorithm for designing such a measurement acquisition system. We illustrate our DNN-based iterative algorithm via several synthetic examples. Numerical results show that the obtained measurement acquisition system is sufficient to identify and characterize both resistive and conductive layers above and below the logging instrument. Numerical results are promising, although further improvements are required to make our method amenable for industrial purposes.
1604.05603
Wolfgang Dvořák
Wolfgang Dvořák and Monika Henzinger
Online Ad Assignment with an Ad Exchange
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ad exchanges are becoming an increasingly popular way to sell advertisement slots on the internet. An ad exchange is basically a spot market for ad impressions. A publisher who has already signed contracts reserving advertisement impressions on his pages can choose between assigning a new ad impression for a new page view to a contracted advertiser or to sell it at an ad exchange. This leads to an online revenue maximization problem for the publisher. Given a new impression to sell, decide whether (a) to assign it to a contracted advertiser and if so to which one or (b) to sell it at the ad exchange and if so at which reserve price. We make no assumptions about the distribution of the advertiser valuations that participate in the ad exchange and show that there exists a simple primal-dual based online algorithm, whose lower bound for the revenue converges to $R_{ADX} + R_A (1 - 1/e)$, where $R_{ADX}$ is the revenue that the optimum algorithm achieves from the ad exchange and $R_A$ is the revenue that the optimum algorithm achieves from the contracted advertisers.
[ { "created": "Tue, 19 Apr 2016 14:44:55 GMT", "version": "v1" } ]
2016-04-20
[ [ "Dvořák", "Wolfgang", "" ], [ "Henzinger", "Monika", "" ] ]
Ad exchanges are becoming an increasingly popular way to sell advertisement slots on the internet. An ad exchange is basically a spot market for ad impressions. A publisher who has already signed contracts reserving advertisement impressions on his pages can choose between assigning a new ad impression for a new page view to a contracted advertiser or to sell it at an ad exchange. This leads to an online revenue maximization problem for the publisher. Given a new impression to sell, decide whether (a) to assign it to a contracted advertiser and if so to which one or (b) to sell it at the ad exchange and if so at which reserve price. We make no assumptions about the distribution of the advertiser valuations that participate in the ad exchange and show that there exists a simple primal-dual based online algorithm, whose lower bound for the revenue converges to $R_{ADX} + R_A (1 - 1/e)$, where $R_{ADX}$ is the revenue that the optimum algorithm achieves from the ad exchange and $R_A$ is the revenue that the optimum algorithm achieves from the contracted advertisers.
1808.09408
Maximin Coavoux
Maximin Coavoux, Shashi Narayan, Shay B. Cohen
Privacy-preserving Neural Representations of Text
EMNLP 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article deals with adversarial attacks towards deep learning systems for Natural Language Processing (NLP), in the context of privacy protection. We study a specific type of attack: an attacker eavesdrops on the hidden representations of a neural text classifier and tries to recover information about the input text. Such a scenario may arise in situations when the computation of a neural network is shared across multiple devices, e.g. some hidden representation is computed by a user's device and sent to a cloud-based model. We measure the privacy of a hidden representation by the ability of an attacker to predict accurately specific private information from it and characterize the tradeoff between the privacy and the utility of neural representations. Finally, we propose several defense methods based on modified training objectives and show that they improve the privacy of neural representations.
[ { "created": "Tue, 28 Aug 2018 16:57:37 GMT", "version": "v1" } ]
2018-08-29
[ [ "Coavoux", "Maximin", "" ], [ "Narayan", "Shashi", "" ], [ "Cohen", "Shay B.", "" ] ]
This article deals with adversarial attacks towards deep learning systems for Natural Language Processing (NLP), in the context of privacy protection. We study a specific type of attack: an attacker eavesdrops on the hidden representations of a neural text classifier and tries to recover information about the input text. Such a scenario may arise in situations when the computation of a neural network is shared across multiple devices, e.g. some hidden representation is computed by a user's device and sent to a cloud-based model. We measure the privacy of a hidden representation by the ability of an attacker to predict accurately specific private information from it and characterize the tradeoff between the privacy and the utility of neural representations. Finally, we propose several defense methods based on modified training objectives and show that they improve the privacy of neural representations.
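The eavesdropping attack the abstract describes, predicting a private attribute from intercepted hidden representations, can be sketched with a simple probe. The nearest-centroid attacker and the synthetic "leaky" representations below are illustrative stand-ins; the paper trains neural attacker networks on real classifier states.

```python
import numpy as np

def attacker_accuracy(train_reps, train_priv, test_reps, test_priv):
    """Nearest-centroid probe: how accurately a private binary attribute
    can be recovered from hidden representations. An illustrative
    stand-in for a learned attacker network."""
    classes = np.unique(train_priv)
    centroids = {c: train_reps[train_priv == c].mean(axis=0) for c in classes}
    preds = np.array([min(classes,
                          key=lambda c: np.linalg.norm(r - centroids[c]))
                      for r in test_reps])
    return float((preds == test_priv).mean())

# Synthetic leaky representations: the private bit shifts one coordinate,
# so an eavesdropper can read it off almost perfectly.
rng = np.random.default_rng(1)
priv = rng.integers(0, 2, size=200)
reps = rng.standard_normal((200, 8))
reps[:, 0] += 4.0 * priv
acc = attacker_accuracy(reps[:100], priv[:100], reps[100:], priv[100:])
```

A defense in the paper's spirit would modify the training objective so that, for the same probe, `acc` drops toward chance while task accuracy is preserved.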
2307.05735
Germán Abrevaya
Germán Abrevaya, Mahta Ramezanian-Panahi, Jean-Christophe Gagnon-Audet, Pablo Polosecki, Irina Rish, Silvina Ponce Dawson, Guillermo Cecchi, Guillaume Dumas
Effective Latent Differential Equation Models via Attention and Multiple Shooting
null
null
null
null
cs.LG nlin.CD physics.data-an physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scientific Machine Learning (SciML) is a burgeoning field that synergistically combines domain-aware and interpretable models with agnostic machine learning techniques. In this work, we introduce GOKU-UI, an evolution of the SciML generative model GOKU-nets. GOKU-UI not only broadens the original model's spectrum to incorporate other classes of differential equations, such as Stochastic Differential Equations (SDEs), but also integrates attention mechanisms and a novel multiple shooting training strategy in the latent space. These modifications have led to a significant increase in its performance in both reconstruction and forecast tasks, as demonstrated by our evaluation of simulated and empirical data. Specifically, GOKU-UI outperformed all baseline models on synthetic datasets even with a training set 16-fold smaller, underscoring its remarkable data efficiency. Furthermore, when applied to empirical human brain data, while incorporating stochastic Stuart-Landau oscillators into its dynamical core, our proposed enhancements markedly increased the model's effectiveness in capturing complex brain dynamics. This augmented version not only surpassed all baseline methods in the reconstruction task, but also demonstrated lower prediction error of future brain activity up to 15 seconds ahead. By training GOKU-UI on resting state fMRI data, we encoded whole-brain dynamics into a latent representation, learning a low-dimensional dynamical system model that could offer insights into brain functionality and open avenues for practical applications such as the classification of mental states or psychiatric conditions. Ultimately, our research provides further impetus for the field of Scientific Machine Learning, showcasing the potential for advancements when established scientific insights are interwoven with modern machine learning.
[ { "created": "Tue, 11 Jul 2023 19:03:17 GMT", "version": "v1" }, { "created": "Wed, 6 Sep 2023 13:23:39 GMT", "version": "v2" }, { "created": "Thu, 14 Sep 2023 16:10:39 GMT", "version": "v3" } ]
2023-09-15
[ [ "Abrevaya", "Germán", "" ], [ "Ramezanian-Panahi", "Mahta", "" ], [ "Gagnon-Audet", "Jean-Christophe", "" ], [ "Polosecki", "Pablo", "" ], [ "Rish", "Irina", "" ], [ "Dawson", "Silvina Ponce", "" ], [ "Cecchi", "Guillermo", "" ], [ "Dumas", "Guillaume", "" ] ]
Scientific Machine Learning (SciML) is a burgeoning field that synergistically combines domain-aware and interpretable models with agnostic machine learning techniques. In this work, we introduce GOKU-UI, an evolution of the SciML generative model GOKU-nets. GOKU-UI not only broadens the original model's spectrum to incorporate other classes of differential equations, such as Stochastic Differential Equations (SDEs), but also integrates attention mechanisms and a novel multiple shooting training strategy in the latent space. These modifications have led to a significant increase in its performance in both reconstruction and forecast tasks, as demonstrated by our evaluation of simulated and empirical data. Specifically, GOKU-UI outperformed all baseline models on synthetic datasets even with a training set 16-fold smaller, underscoring its remarkable data efficiency. Furthermore, when applied to empirical human brain data, while incorporating stochastic Stuart-Landau oscillators into its dynamical core, our proposed enhancements markedly increased the model's effectiveness in capturing complex brain dynamics. This augmented version not only surpassed all baseline methods in the reconstruction task, but also demonstrated lower prediction error of future brain activity up to 15 seconds ahead. By training GOKU-UI on resting state fMRI data, we encoded whole-brain dynamics into a latent representation, learning a low-dimensional dynamical system model that could offer insights into brain functionality and open avenues for practical applications such as the classification of mental states or psychiatric conditions. Ultimately, our research provides further impetus for the field of Scientific Machine Learning, showcasing the potential for advancements when established scientific insights are interwoven with modern machine learning.
2303.15676
Niluthpol Chowdhury Mithun
Niluthpol Chowdhury Mithun, Kshitij Minhas, Han-Pang Chiu, Taragay Oskiper, Mikhail Sizintsev, Supun Samarasekera, Rakesh Kumar
Cross-View Visual Geo-Localization for Outdoor Augmented Reality
IEEE VR 2023
null
10.1109/VR55154.2023.00064
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Precise estimation of global orientation and location is critical to ensure a compelling outdoor Augmented Reality (AR) experience. We address the problem of geo-pose estimation by cross-view matching of query ground images to a geo-referenced aerial satellite image database. Recently, neural network-based methods have shown state-of-the-art performance in cross-view matching. However, most of the prior works focus only on location estimation, ignoring orientation, which cannot meet the requirements in outdoor AR applications. We propose a new transformer neural network-based model and a modified triplet ranking loss for joint location and orientation estimation. Experiments on several benchmark cross-view geo-localization datasets show that our model achieves state-of-the-art performance. Furthermore, we present an approach to extend the single image query-based geo-localization approach by utilizing temporal information from a navigation pipeline for robust continuous geo-localization. Experimentation on several large-scale real-world video sequences demonstrates that our approach enables high-precision and stable AR insertion.
[ { "created": "Tue, 28 Mar 2023 01:58:03 GMT", "version": "v1" } ]
2023-03-30
[ [ "Mithun", "Niluthpol Chowdhury", "" ], [ "Minhas", "Kshitij", "" ], [ "Chiu", "Han-Pang", "" ], [ "Oskiper", "Taragay", "" ], [ "Sizintsev", "Mikhail", "" ], [ "Samarasekera", "Supun", "" ], [ "Kumar", "Rakesh", "" ] ]
Precise estimation of global orientation and location is critical to ensure a compelling outdoor Augmented Reality (AR) experience. We address the problem of geo-pose estimation by cross-view matching of query ground images to a geo-referenced aerial satellite image database. Recently, neural network-based methods have shown state-of-the-art performance in cross-view matching. However, most of the prior works focus only on location estimation, ignoring orientation, which cannot meet the requirements in outdoor AR applications. We propose a new transformer neural network-based model and a modified triplet ranking loss for joint location and orientation estimation. Experiments on several benchmark cross-view geo-localization datasets show that our model achieves state-of-the-art performance. Furthermore, we present an approach to extend the single image query-based geo-localization approach by utilizing temporal information from a navigation pipeline for robust continuous geo-localization. Experimentation on several large-scale real-world video sequences demonstrates that our approach enables high-precision and stable AR insertion.
2306.07643
Grischa Liebel
Grischa Liebel and Steinunn Gr\'oa Sigur{\dh}ard\'ottir
Economical Accommodations for Neurodivergent Students in Software Engineering Education: Experiences from an Intervention in Four Undergraduate Courses
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Neurodiversity is an umbrella term that describes variation in brain function among individuals, including conditions such as Attention deficit hyperactivity disorder (ADHD), or dyslexia. Neurodiversity is common in the general population, with an estimated 5.0% to 7.1% and 7% of the world population being diagnosed with ADHD and dyslexia respectively. Neurodivergent (ND) individuals often experience challenges in specific tasks, such as difficulties in communication or a reduced attention span in comparison to neurotypical (NT) individuals. However, they also exhibit specific strengths, such as high creativity or attention to detail. Therefore, improving the inclusion of ND individuals is desirable for economic, ethical, and talent reasons. In higher education, struggles of ND students are well-documented. Common issues in this area are a lack of awareness among other students and staff, forms of assessment that are particularly challenging for some students, and a lack of offered accommodations. These factors commonly lead to stress, anxiety, and ultimately a risk of dropping out of the studies. Accommodations for ND students can require substantial effort. However, smaller changes in course material can already have a major impact. In this chapter, we summarise the lessons learned from an intervention in four courses in undergraduate computer science programmes at Reykjavik University, Iceland, over a period of two terms. Following accessibility guidelines produced by interest groups for different ND conditions, we created course material in the form of slides and assignments specifically tailored to ND audiences. We focused on small, economical changes that could be replicated by educators with a minimal investment of time. We evaluated the success of our intervention through two surveys, showing an overall positive response among ND students and NT students.
[ { "created": "Tue, 13 Jun 2023 09:27:09 GMT", "version": "v1" } ]
2023-06-14
[ [ "Liebel", "Grischa", "" ], [ "Sigurðardóttir", "Steinunn Gróa", "" ] ]
Neurodiversity is an umbrella term that describes variation in brain function among individuals, including conditions such as Attention deficit hyperactivity disorder (ADHD), or dyslexia. Neurodiversity is common in the general population, with an estimated 5.0% to 7.1% and 7% of the world population being diagnosed with ADHD and dyslexia respectively. Neurodivergent (ND) individuals often experience challenges in specific tasks, such as difficulties in communication or a reduced attention span in comparison to neurotypical (NT) individuals. However, they also exhibit specific strengths, such as high creativity or attention to detail. Therefore, improving the inclusion of ND individuals is desirable for economic, ethical, and talent reasons. In higher education, struggles of ND students are well-documented. Common issues in this area are a lack of awareness among other students and staff, forms of assessment that are particularly challenging for some students, and a lack of offered accommodations. These factors commonly lead to stress, anxiety, and ultimately a risk of dropping out of the studies. Accommodations for ND students can require substantial effort. However, smaller changes in course material can already have a major impact. In this chapter, we summarise the lessons learned from an intervention in four courses in undergraduate computer science programmes at Reykjavik University, Iceland, over a period of two terms. Following accessibility guidelines produced by interest groups for different ND conditions, we created course material in the form of slides and assignments specifically tailored to ND audiences. We focused on small, economical changes that could be replicated by educators with a minimal investment of time. We evaluated the success of our intervention through two surveys, showing an overall positive response among ND students and NT students.
2401.03429
Zheng Lian
Zheng Lian, Licai Sun, Yong Ren, Hao Gu, Haiyang Sun, Lan Chen, Bin Liu, Jianhua Tao
MERBench: A Unified Evaluation Benchmark for Multimodal Emotion Recognition
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal emotion recognition plays a crucial role in enhancing user experience in human-computer interaction. Over the past few decades, researchers have proposed a series of algorithms and achieved impressive progress. Although each method shows its superior performance, different methods lack a fair comparison due to inconsistencies in feature extractors, evaluation manners, and experimental settings. These inconsistencies severely hinder the development of this field. Therefore, we build MERBench, a unified evaluation benchmark for multimodal emotion recognition. We aim to reveal the contribution of some important techniques employed in previous works, such as feature selection, multimodal fusion, robustness analysis, fine-tuning, pre-training, etc. We hope this benchmark can provide clear and comprehensive guidance for follow-up researchers. Based on the evaluation results of MERBench, we further point out some promising research directions. Additionally, we introduce a new emotion dataset MER2023, focusing on the Chinese language environment. This dataset can serve as a benchmark dataset for research on multi-label learning, noise robustness, and semi-supervised learning. We encourage the follow-up researchers to evaluate their algorithms under the same experimental setup as MERBench for fair comparisons. Our code is available at: https://github.com/zeroQiaoba/MERTools.
[ { "created": "Sun, 7 Jan 2024 09:09:32 GMT", "version": "v1" }, { "created": "Wed, 10 Jan 2024 07:55:16 GMT", "version": "v2" }, { "created": "Sun, 21 Apr 2024 02:18:47 GMT", "version": "v3" } ]
2024-04-23
[ [ "Lian", "Zheng", "" ], [ "Sun", "Licai", "" ], [ "Ren", "Yong", "" ], [ "Gu", "Hao", "" ], [ "Sun", "Haiyang", "" ], [ "Chen", "Lan", "" ], [ "Liu", "Bin", "" ], [ "Tao", "Jianhua", "" ] ]
Multimodal emotion recognition plays a crucial role in enhancing user experience in human-computer interaction. Over the past few decades, researchers have proposed a series of algorithms and achieved impressive progress. Although each method shows its superior performance, different methods lack a fair comparison due to inconsistencies in feature extractors, evaluation manners, and experimental settings. These inconsistencies severely hinder the development of this field. Therefore, we build MERBench, a unified evaluation benchmark for multimodal emotion recognition. We aim to reveal the contribution of some important techniques employed in previous works, such as feature selection, multimodal fusion, robustness analysis, fine-tuning, pre-training, etc. We hope this benchmark can provide clear and comprehensive guidance for follow-up researchers. Based on the evaluation results of MERBench, we further point out some promising research directions. Additionally, we introduce a new emotion dataset MER2023, focusing on the Chinese language environment. This dataset can serve as a benchmark dataset for research on multi-label learning, noise robustness, and semi-supervised learning. We encourage the follow-up researchers to evaluate their algorithms under the same experimental setup as MERBench for fair comparisons. Our code is available at: https://github.com/zeroQiaoba/MERTools.
1905.00538
Sunghoon Im
Sunghoon Im, Hae-Gon Jeon, Stephen Lin, In So Kweon
DPSNet: End-to-end Deep Plane Sweep Stereo
ICLR2019 accepted
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiview stereo aims to reconstruct scene depth from images acquired by a camera under arbitrary motion. Recent methods address this problem through deep learning, which can utilize semantic cues to deal with challenges such as textureless and reflective regions. In this paper, we present a convolutional neural network called DPSNet (Deep Plane Sweep Network) whose design is inspired by best practices of traditional geometry-based approaches for dense depth reconstruction. Rather than directly estimating depth and/or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume via a context-aware cost aggregation, and regressing the dense depth map from the cost volume. The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network. Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, DPSNet achieves state-of-the-art reconstruction results on a variety of challenging datasets.
[ { "created": "Thu, 2 May 2019 00:59:31 GMT", "version": "v1" } ]
2019-05-03
[ [ "Im", "Sunghoon", "" ], [ "Jeon", "Hae-Gon", "" ], [ "Lin", "Stephen", "" ], [ "Kweon", "In So", "" ] ]
Multiview stereo aims to reconstruct scene depth from images acquired by a camera under arbitrary motion. Recent methods address this problem through deep learning, which can utilize semantic cues to deal with challenges such as textureless and reflective regions. In this paper, we present a convolutional neural network called DPSNet (Deep Plane Sweep Network) whose design is inspired by best practices of traditional geometry-based approaches for dense depth reconstruction. Rather than directly estimating depth and/or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume via a context-aware cost aggregation, and regressing the dense depth map from the cost volume. The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network. Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, DPSNet achieves state-of-the-art reconstruction results on a variety of challenging datasets.
2308.03415
Christian Huber
Christian Huber, Tu Anh Dinh, Carlos Mullov, Ngoc Quan Pham, Thai Binh Nguyen, Fabian Retkowski, Stefan Constantin, Enes Yavuz Ugan, Danni Liu, Zhaolin Li, Sai Koneru, Jan Niehues and Alexander Waibel
End-to-End Evaluation for Low-Latency Simultaneous Speech Translation
Demo paper at EMNLP 2023
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
The challenge of low-latency speech translation has recently drawn significant interest in the research community, as shown by several publications and shared tasks. Therefore, it is essential to evaluate these different approaches in realistic scenarios. However, currently only specific aspects of the systems are evaluated, and often it is not possible to compare different approaches. In this work, we propose the first framework to perform and evaluate the various aspects of low-latency speech translation under realistic conditions. The evaluation is carried out in an end-to-end fashion. This includes the segmentation of the audio as well as the run-time of the different components. Secondly, we compare different approaches to low-latency speech translation using this framework. We evaluate models with the option to revise the output as well as methods with fixed output. Furthermore, we directly compare state-of-the-art cascaded as well as end-to-end systems. Finally, the framework allows automatic evaluation of the translation quality as well as latency and also provides a web interface to show the low-latency model outputs to the user.
[ { "created": "Mon, 7 Aug 2023 09:06:20 GMT", "version": "v1" }, { "created": "Mon, 23 Oct 2023 11:47:04 GMT", "version": "v2" }, { "created": "Wed, 17 Jul 2024 11:29:10 GMT", "version": "v3" } ]
2024-07-18
[ [ "Huber", "Christian", "" ], [ "Dinh", "Tu Anh", "" ], [ "Mullov", "Carlos", "" ], [ "Pham", "Ngoc Quan", "" ], [ "Nguyen", "Thai Binh", "" ], [ "Retkowski", "Fabian", "" ], [ "Constantin", "Stefan", "" ], [ "Ugan", "Enes Yavuz", "" ], [ "Liu", "Danni", "" ], [ "Li", "Zhaolin", "" ], [ "Koneru", "Sai", "" ], [ "Niehues", "Jan", "" ], [ "Waibel", "Alexander", "" ] ]
The challenge of low-latency speech translation has recently drawn significant interest in the research community, as shown by several publications and shared tasks. Therefore, it is essential to evaluate these different approaches in realistic scenarios. However, currently only specific aspects of the systems are evaluated, and often it is not possible to compare different approaches. In this work, we propose the first framework to perform and evaluate the various aspects of low-latency speech translation under realistic conditions. The evaluation is carried out in an end-to-end fashion. This includes the segmentation of the audio as well as the run-time of the different components. Secondly, we compare different approaches to low-latency speech translation using this framework. We evaluate models with the option to revise the output as well as methods with fixed output. Furthermore, we directly compare state-of-the-art cascaded as well as end-to-end systems. Finally, the framework allows automatic evaluation of the translation quality as well as latency and also provides a web interface to show the low-latency model outputs to the user.
1907.10206
Jim Buchan
Jim Buchan, Stephen MacDonell, Jennifer Yang
Effective team onboarding in Agile software development: techniques and goals
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Context: It is not uncommon for a new team member to join an existing Agile software development team, even after development has started. This new team member faces a number of challenges before they are integrated into the team and can contribute productively to team progress. Ideally, each newcomer should be supported in this transition through an effective team onboarding program, although prior evidence suggests that this is challenging for many organisations. Objective: We seek to understand how Agile teams address the challenge of team onboarding in order to inform future onboarding design. Method: We conducted an interview survey of eleven participants from eight organisations to investigate what onboarding activities are common across Agile software development teams. We also identify common goals of onboarding from a synthesis of literature. A repertory grid instrument is used to map the contributions of onboarding techniques to onboarding goals. Results: Our study reveals that a broad range of team onboarding techniques, both formal and informal, are used in practice. It also shows that particular techniques have high contributions to a given goal or set of goals. Conclusions: In presenting a set of onboarding goals to consider and an evidence-based mechanism for selecting techniques to achieve the desired goals, it is expected that this study will contribute to better-informed onboarding design and planning. An increase in practitioner awareness of the options for supporting new team members is also an expected outcome.
[ { "created": "Wed, 24 Jul 2019 02:04:46 GMT", "version": "v1" } ]
2019-07-25
[ [ "Buchan", "Jim", "" ], [ "MacDonell", "Stephen", "" ], [ "Yang", "Jennifer", "" ] ]
Context: It is not uncommon for a new team member to join an existing Agile software development team, even after development has started. This new team member faces a number of challenges before they are integrated into the team and can contribute productively to team progress. Ideally, each newcomer should be supported in this transition through an effective team onboarding program, although prior evidence suggests that this is challenging for many organisations. Objective: We seek to understand how Agile teams address the challenge of team onboarding in order to inform future onboarding design. Method: We conducted an interview survey of eleven participants from eight organisations to investigate what onboarding activities are common across Agile software development teams. We also identify common goals of onboarding from a synthesis of literature. A repertory grid instrument is used to map the contributions of onboarding techniques to onboarding goals. Results: Our study reveals that a broad range of team onboarding techniques, both formal and informal, are used in practice. It also shows that particular techniques have high contributions to a given goal or set of goals. Conclusions: In presenting a set of onboarding goals to consider and an evidence-based mechanism for selecting techniques to achieve the desired goals, it is expected that this study will contribute to better-informed onboarding design and planning. An increase in practitioner awareness of the options for supporting new team members is also an expected outcome.
1405.1524
Mohammad Mohammadi
Mohammad Mohammadi, Shahram Jafari
An expert system for recommending suitable ornamental fish addition to an aquarium based on aquarium condition
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Expert systems prove to be a suitable replacement for human experts when human experts are unavailable for different reasons. Various expert systems have been developed for a wide range of applications. Although some expert systems in the field of fishery and aquaculture have been developed, a system that aids users in the process of selecting a new addition to their aquarium tank has never been designed. This paper proposes an expert system that suggests new additions to an aquarium tank based on the current environmental conditions of the aquarium and the fishes currently existing in the aquarium. The system suggests the fish that best fits the aquarium conditions and is most compatible with the other fishes in the aquarium.
[ { "created": "Wed, 7 May 2014 07:45:09 GMT", "version": "v1" } ]
2014-05-08
[ [ "Mohammadi", "Mohammad", "" ], [ "Jafari", "Shahram", "" ] ]
Expert systems prove to be a suitable replacement for human experts when human experts are unavailable for different reasons. Various expert systems have been developed for a wide range of applications. Although some expert systems in the field of fishery and aquaculture have been developed, a system that aids users in the process of selecting a new addition to their aquarium tank has never been designed. This paper proposes an expert system that suggests new additions to an aquarium tank based on the current environmental conditions of the aquarium and the fishes currently existing in the aquarium. The system suggests the fish that best fits the aquarium conditions and is most compatible with the other fishes in the aquarium.
2405.15080
Tom Goertzen
Tom Goertzen
Constructing Interlocking Assemblies with Crystallographic Symmetries
null
null
null
null
cs.CG math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents a construction method for interlocking assemblies based on planar crystallographic symmetries. Planar crystallographic groups, also known as wallpaper groups, correspond to tessellations of the plane with a tile, called a fundamental domain, such that the action of the group can be used to tessellate the plane with the given tile. The main idea of this method is to extend the action of a wallpaper group so that it acts on three-dimensional space and places two fundamental domains into parallel planes. Next, we interpolate between these domains to obtain a block that serves as a candidate for interlocking assemblies. We show that the resulting blocks can be triangulated, and we can also approximate blocks with smooth surfaces using this approach. Finally, we show that there exists a family of blocks derived from this construction that can be tiled in multiple ways, characterised by generalised Truchet tiles. The assemblies of one block in this family, which we call RhomBlock, correspond to tessellations with lozenges.
[ { "created": "Thu, 23 May 2024 22:05:01 GMT", "version": "v1" } ]
2024-05-27
[ [ "Goertzen", "Tom", "" ] ]
This work presents a construction method for interlocking assemblies based on planar crystallographic symmetries. Planar crystallographic groups, also known as wallpaper groups, correspond to tessellations of the plane with a tile, called a fundamental domain, such that the action of the group can be used to tessellate the plane with the given tile. The main idea of this method is to extend the action of a wallpaper group so that it acts on three-dimensional space and places two fundamental domains into parallel planes. Next, we interpolate between these domains to obtain a block that serves as a candidate for interlocking assemblies. We show that the resulting blocks can be triangulated, and we can also approximate blocks with smooth surfaces using this approach. Finally, we show that there exists a family of blocks derived from this construction that can be tiled in multiple ways, characterised by generalised Truchet tiles. The assemblies of one block in this family, which we call RhomBlock, correspond to tessellations with lozenges.
2405.19832
Herman Cappelen
Herman Cappelen, Josh Dever and John Hawthorne
AI Safety: A Climb To Armageddon?
20 page article
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, the expected correlation between an AI system's power at the point of failure and the severity of the resulting harm, and the tendency of safety measures to enable AI systems to become more powerful before failing - safety efforts have negative expected utility. The paper examines three response strategies: Optimism, Mitigation, and Holism. Each faces challenges stemming from intrinsic features of the AI safety landscape that we term Bottlenecking, the Perfection Barrier, and Equilibrium Fluctuation. The surprising robustness of the argument forces a re-examination of core assumptions around AI safety and points to several avenues for further research.
[ { "created": "Thu, 30 May 2024 08:41:54 GMT", "version": "v1" }, { "created": "Sun, 2 Jun 2024 22:32:46 GMT", "version": "v2" } ]
2024-06-04
[ [ "Cappelen", "Herman", "" ], [ "Dever", "Josh", "" ], [ "Hawthorne", "John", "" ] ]
This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, the expected correlation between an AI system's power at the point of failure and the severity of the resulting harm, and the tendency of safety measures to enable AI systems to become more powerful before failing - safety efforts have negative expected utility. The paper examines three response strategies: Optimism, Mitigation, and Holism. Each faces challenges stemming from intrinsic features of the AI safety landscape that we term Bottlenecking, the Perfection Barrier, and Equilibrium Fluctuation. The surprising robustness of the argument forces a re-examination of core assumptions around AI safety and points to several avenues for further research.
2401.02804
Yuta Okuyama
Yuta Okuyama, Yuki Endo, Yoshihiro Kanamori
DiffBody: Diffusion-based Pose and Shape Editing of Human Images
Accepted to WACV 2024, project page: https://www.cgg.cs.tsukuba.ac.jp/~okuyama/pub/diffbody/
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pose and body shape editing in a human image has received increasing attention. However, current methods often struggle with dataset biases and deteriorate realism and the person's identity when users make large edits. We propose a one-shot approach that enables large edits with identity preservation. To enable large edits, we fit a 3D body model, project the input image onto the 3D model, and change the body's pose and shape. Because this initial textured body model has artifacts due to occlusion and the inaccurate body shape, the rendered image undergoes a diffusion-based refinement, in which strong noise destroys body structure and identity whereas insufficient noise does not help. We thus propose an iterative refinement with weak noise, applied first for the whole body and then for the face. We further enhance the realism by fine-tuning text embeddings via self-supervised learning. Our quantitative and qualitative evaluations demonstrate that our method outperforms other existing methods across various datasets.
[ { "created": "Fri, 5 Jan 2024 13:36:19 GMT", "version": "v1" }, { "created": "Mon, 8 Jan 2024 04:41:30 GMT", "version": "v2" } ]
2024-01-09
[ [ "Okuyama", "Yuta", "" ], [ "Endo", "Yuki", "" ], [ "Kanamori", "Yoshihiro", "" ] ]
Pose and body shape editing in a human image has received increasing attention. However, current methods often struggle with dataset biases and deteriorate realism and the person's identity when users make large edits. We propose a one-shot approach that enables large edits with identity preservation. To enable large edits, we fit a 3D body model, project the input image onto the 3D model, and change the body's pose and shape. Because this initial textured body model has artifacts due to occlusion and the inaccurate body shape, the rendered image undergoes a diffusion-based refinement, in which strong noise destroys body structure and identity whereas insufficient noise does not help. We thus propose an iterative refinement with weak noise, applied first for the whole body and then for the face. We further enhance the realism by fine-tuning text embeddings via self-supervised learning. Our quantitative and qualitative evaluations demonstrate that our method outperforms other existing methods across various datasets.
1910.11911
Ariel Calderon
A. A. Calder\'on, Y. Chen, X. Yang, L. Chang, X.-T. Nguyen, E. K. Singer, and N. O. P\'erez-Arancibia
Control of Flying Robotic Insects: A Perspective and Unifying Approach
ICAR 2019
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We discuss the problem of designing and implementing controllers for insect-scale flapping-wing micro air vehicles (FWMAVs), from a unifying perspective and employing two different experimental platforms; namely, a Harvard RoboBee-like two-winged robot and the four-winged USC Bee+. Through experiments, we demonstrate that a method that employs quaternion coordinates for attitude control, developed to control quadrotors, can be applied to drive both robotic insects considered in this work. The proposed notion that a generic strategy can be used to control several types of artificial insects with some common characteristics was preliminarily tested and validated using a set of experiments, which include position- and attitude-controlled flights. We believe that the presented results are interesting and valuable from both the research and educational perspectives.
[ { "created": "Fri, 25 Oct 2019 19:36:57 GMT", "version": "v1" } ]
2019-10-29
[ [ "Calderón", "A. A.", "" ], [ "Chen", "Y.", "" ], [ "Yang", "X.", "" ], [ "Chang", "L.", "" ], [ "Nguyen", "X. -T.", "" ], [ "Singer", "E. K.", "" ], [ "Pérez-Arancibia", "N. O.", "" ] ]
We discuss the problem of designing and implementing controllers for insect-scale flapping-wing micro air vehicles (FWMAVs), from a unifying perspective and employing two different experimental platforms; namely, a Harvard RoboBee-like two-winged robot and the four-winged USC Bee+. Through experiments, we demonstrate that a method that employs quaternion coordinates for attitude control, developed to control quadrotors, can be applied to drive both robotic insects considered in this work. The proposed notion that a generic strategy can be used to control several types of artificial insects with some common characteristics was preliminarily tested and validated using a set of experiments, which include position- and attitude-controlled flights. We believe that the presented results are interesting and valuable from both the research and educational perspectives.
2210.17367
Yuya Yamamoto
Yuya Yamamoto, Juhan Nam, Hiroko Terasawa
Analysis and Detection of Singing Techniques in Repertoires of J-POP Solo Singers
Accepted at ISMIR 2022, appendix website: https://yamathcy.github.io/ISMIR2022J-POP/
null
null
null
cs.SD cs.DL cs.IR cs.MM eess.AS
http://creativecommons.org/licenses/by/4.0/
In this paper, we focus on singing techniques within the scope of music information retrieval research. We investigate how singers use singing techniques, using real-world recordings of famous solo singers of Japanese popular music (J-POP) songs. First, we built a new dataset of singing techniques. The dataset consists of 168 commercial J-POP songs, and each song is annotated with various singing techniques, with timestamps and vocal pitch contours. We also present descriptive statistics of singing techniques on the dataset to clarify which techniques appear and how often. We further explored the difficulty of automatically detecting singing techniques using previously proposed machine learning techniques. In the detection, we also investigate the effectiveness of auxiliary information (i.e., pitch and the distribution of label durations), in addition to providing a baseline. The best result achieves a 40.4% macro-average F-measure on nine-way multi-class detection. We provide the annotation of the dataset and its details on the appendix website.
[ { "created": "Mon, 31 Oct 2022 14:45:01 GMT", "version": "v1" }, { "created": "Tue, 15 Nov 2022 19:31:27 GMT", "version": "v2" } ]
2022-11-17
[ [ "Yamamoto", "Yuya", "" ], [ "Nam", "Juhan", "" ], [ "Terasawa", "Hiroko", "" ] ]
In this paper, we focus on singing techniques within the scope of music information retrieval research. We investigate how singers use singing techniques, using real-world recordings of famous solo singers of Japanese popular music (J-POP) songs. First, we built a new dataset of singing techniques. The dataset consists of 168 commercial J-POP songs, and each song is annotated with various singing techniques, with timestamps and vocal pitch contours. We also present descriptive statistics of singing techniques on the dataset to clarify which techniques appear and how often. We further explored the difficulty of automatically detecting singing techniques using previously proposed machine learning techniques. In the detection, we also investigate the effectiveness of auxiliary information (i.e., pitch and the distribution of label durations), in addition to providing a baseline. The best result achieves a 40.4% macro-average F-measure on nine-way multi-class detection. We provide the annotation of the dataset and its details on the appendix website.
1912.10764
S\'ebastien Henwood
S\'ebastien Henwood, Fran\c{c}ois Leduc-Primeau and Yvon Savaria
Layerwise Noise Maximisation to Train Low-Energy Deep Neural Networks
To be presented at AICAS 2020
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) depend on the storage of a large number of parameters, which consumes an important portion of the energy used during inference. This paper considers the case where the energy usage of memory elements can be reduced at the cost of reduced reliability. A training algorithm is proposed to optimize the reliability of the storage separately for each layer of the network, while incurring a negligible complexity overhead compared to a conventional stochastic gradient descent training. For an exponential energy-reliability model, the proposed training approach can decrease the memory energy consumption of a DNN with binary parameters by 3.3$\times$ at isoaccuracy, compared to a reliable implementation.
[ { "created": "Mon, 23 Dec 2019 12:36:51 GMT", "version": "v1" } ]
2019-12-24
[ [ "Henwood", "Sébastien", "" ], [ "Leduc-Primeau", "François", "" ], [ "Savaria", "Yvon", "" ] ]
Deep neural networks (DNNs) depend on the storage of a large number of parameters, which consumes an important portion of the energy used during inference. This paper considers the case where the energy usage of memory elements can be reduced at the cost of reduced reliability. A training algorithm is proposed to optimize the reliability of the storage separately for each layer of the network, while incurring a negligible complexity overhead compared to a conventional stochastic gradient descent training. For an exponential energy-reliability model, the proposed training approach can decrease the memory energy consumption of a DNN with binary parameters by 3.3$\times$ at isoaccuracy, compared to a reliable implementation.
2202.07883
Mohamed Nabeel
Wathsara Daluwatta, Ravindu De Silva, Sanduni Kariyawasam, Mohamed Nabeel, Charith Elvitigala, Kasun De Zoysa, Chamath Keppitiyagama
CGraph: Graph Based Extensible Predictive Domain Threat Intelligence Platform
threat intelligence graph investigation
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
The ability to effectively investigate indicators of compromise and the associated network resources involved in cyber attacks is paramount, not only to identify affected network resources but also to detect related malicious ones. Today, most cyber threat intelligence platforms are reactive in that they can identify attack resources only after the attack is carried out. Further, these systems have limited functionality for investigating associated network resources. In this work, we propose an extensible predictive cyber threat intelligence platform called cGraph that addresses the above limitations. cGraph is built as a graph-first system in which investigators can explore network resources through a graph-based API. Further, cGraph provides real-time predictive capabilities based on state-of-the-art inference algorithms to predict malicious domains from network graphs with a few known malicious and benign seeds. To the best of our knowledge, cGraph is the only threat intelligence platform to do so. cGraph is extensible in that additional network resources can be added to the system transparently.
[ { "created": "Wed, 16 Feb 2022 06:28:07 GMT", "version": "v1" } ]
2022-02-17
[ [ "Daluwatta", "Wathsara", "" ], [ "De Silva", "Ravindu", "" ], [ "Kariyawasam", "Sanduni", "" ], [ "Nabeel", "Mohamed", "" ], [ "Elvitigala", "Charith", "" ], [ "De Zoysa", "Kasun", "" ], [ "Keppitiyagama", "Chamath", "" ] ]
The ability to effectively investigate indicators of compromise and the associated network resources involved in cyber attacks is paramount, not only to identify affected network resources but also to detect related malicious ones. Today, most cyber threat intelligence platforms are reactive in that they can identify attack resources only after the attack is carried out. Further, these systems have limited functionality for investigating associated network resources. In this work, we propose an extensible predictive cyber threat intelligence platform called cGraph that addresses the above limitations. cGraph is built as a graph-first system in which investigators can explore network resources through a graph-based API. Further, cGraph provides real-time predictive capabilities based on state-of-the-art inference algorithms to predict malicious domains from network graphs with a few known malicious and benign seeds. To the best of our knowledge, cGraph is the only threat intelligence platform to do so. cGraph is extensible in that additional network resources can be added to the system transparently.
2107.05146
Alexander Lambert
Alexander Lambert, Byron Boots
Entropy Regularized Motion Planning via Stein Variational Inference
RSS 2021 Workshop on Integrating Planning and Learning
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Many Imitation and Reinforcement Learning approaches rely on the availability of expert-generated demonstrations for learning policies or value functions from data. Obtaining a reliable distribution of trajectories from motion planners is non-trivial, since it must broadly cover the space of states likely to be encountered during execution while also satisfying task-based constraints. We propose a sampling strategy based on variational inference to generate distributions of feasible, low-cost trajectories for high-dof motion planning tasks. This includes a distributed, particle-based motion planning algorithm which leverages structured graphical representations for inference over multi-modal posterior distributions. We also make explicit connections to both approximate inference for trajectory optimization and entropy-regularized reinforcement learning.
[ { "created": "Sun, 11 Jul 2021 23:39:24 GMT", "version": "v1" } ]
2021-07-13
[ [ "Lambert", "Alexander", "" ], [ "Boots", "Byron", "" ] ]
Many Imitation and Reinforcement Learning approaches rely on the availability of expert-generated demonstrations for learning policies or value functions from data. Obtaining a reliable distribution of trajectories from motion planners is non-trivial, since it must broadly cover the space of states likely to be encountered during execution while also satisfying task-based constraints. We propose a sampling strategy based on variational inference to generate distributions of feasible, low-cost trajectories for high-dof motion planning tasks. This includes a distributed, particle-based motion planning algorithm which leverages structured graphical representations for inference over multi-modal posterior distributions. We also make explicit connections to both approximate inference for trajectory optimization and entropy-regularized reinforcement learning.
2205.02211
Bashar Alhafni
Bashar Alhafni, Nizar Habash, Houda Bouamor
User-Centric Gender Rewriting
Accepted at NAACL 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we define the task of gender rewriting in contexts involving two users (I and/or You) - first and second grammatical persons with independent grammatical gender preferences. We focus on Arabic, a gender-marking morphologically rich language. We develop a multi-step system that combines the positive aspects of both rule-based and neural rewriting models. Our results successfully demonstrate the viability of this approach on a recently created corpus for Arabic gender rewriting, achieving 88.42 M2 F0.5 on a blind test set. Our proposed system improves over previous work on the first-person-only version of this task, with a 3.05 absolute increase in M2 F0.5. We demonstrate a use case of our gender rewriting system by using it to post-edit the output of a commercial MT system to provide personalized outputs based on the users' grammatical gender preferences. We make our code, data, and models publicly available.
[ { "created": "Wed, 4 May 2022 17:46:17 GMT", "version": "v1" } ]
2022-05-05
[ [ "Alhafni", "Bashar", "" ], [ "Habash", "Nizar", "" ], [ "Bouamor", "Houda", "" ] ]
In this paper, we define the task of gender rewriting in contexts involving two users (I and/or You) - first and second grammatical persons with independent grammatical gender preferences. We focus on Arabic, a gender-marking morphologically rich language. We develop a multi-step system that combines the positive aspects of both rule-based and neural rewriting models. Our results successfully demonstrate the viability of this approach on a recently created corpus for Arabic gender rewriting, achieving 88.42 M2 F0.5 on a blind test set. Our proposed system improves over previous work on the first-person-only version of this task, with a 3.05 absolute increase in M2 F0.5. We demonstrate a use case of our gender rewriting system by using it to post-edit the output of a commercial MT system to provide personalized outputs based on the users' grammatical gender preferences. We make our code, data, and models publicly available.
0710.4815
EDA Publishing Association
Raul Blazquez, Fred Lee, David Wentzloff, Brian Ginsburg, Johnna Powell, Anantha Chandrakasan
Direct Conversion Pulsed UWB Transceiver Architecture
Submitted on behalf of EDAA (http://www.edaa.com/)
Dans Design, Automation and Test in Europe | Designers'Forum - DATE'05, Munich : Allemagne (2005)
null
null
cs.NI
null
Ultra-wideband (UWB) communication is an emerging wireless technology that promises high data rates over short distances and precise locationing. The large available bandwidth and the constraint of a maximum power spectral density drive a unique set of system challenges. This paper addresses these challenges using two UWB transceivers and a discrete prototype platform.
[ { "created": "Thu, 25 Oct 2007 12:01:54 GMT", "version": "v1" } ]
2011-11-09
[ [ "Blazquez", "Raul", "" ], [ "Lee", "Fred", "" ], [ "Wentzloff", "David", "" ], [ "Ginsburg", "Brian", "" ], [ "Powell", "Johnna", "" ], [ "Chandrakasan", "Anantha", "" ] ]
Ultra-wideband (UWB) communication is an emerging wireless technology that promises high data rates over short distances and precise locationing. The large available bandwidth and the constraint of a maximum power spectral density drive a unique set of system challenges. This paper addresses these challenges using two UWB transceivers and a discrete prototype platform.
1912.08100
Ning Li
Ning Li, Zhaoxin Zhang, Jose-Fernan Martinez-Ortega, Xin Yuan
Geographical and Topology Control based Opportunistic Routing for Ad Hoc Networks
arXiv admin note: substantial text overlap with arXiv:1709.10317
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Opportunistic routing has great advantages in improving the packet delivery probability between the source node and the candidate forwarding set. To reduce energy consumption and network interference, in this paper we propose an efficient and reliable transmission-power-control-based opportunistic routing protocol (ERTO) for wireless ad hoc networks. In ERTO, the network interference and fading, which are critical to routing performance but not taken into account in previous works, are considered during the prediction of the packet delivery probability. The expected energy consumption and the relationship between transmission power and node degree are then used to jointly optimize the transmission power and forwarding node degree. To improve routing effectiveness and reduce computational complexity, we introduce multi-objective optimization and Pareto optimality into ERTO. During the routing process, nodes calculate the optimal transmission power and forwarding node degree according to the properties of the Pareto optimal solution set. Based on these innovations, the energy consumption, the transmission delay, and the throughput are all greatly improved.
[ { "created": "Mon, 16 Dec 2019 08:44:29 GMT", "version": "v1" } ]
2019-12-18
[ [ "Li", "Ning", "" ], [ "Zhang", "Zhaoxin", "" ], [ "Martinez-Ortega", "Jose-Fernan", "" ], [ "Yuan", "Xin", "" ] ]
Opportunistic routing has great advantages in improving the packet delivery probability between the source node and the candidate forwarding set. To reduce energy consumption and network interference, in this paper we propose an efficient and reliable transmission-power-control-based opportunistic routing protocol (ERTO) for wireless ad hoc networks. In ERTO, the network interference and fading, which are critical to routing performance but not taken into account in previous works, are considered during the prediction of the packet delivery probability. The expected energy consumption and the relationship between transmission power and node degree are then used to jointly optimize the transmission power and forwarding node degree. To improve routing effectiveness and reduce computational complexity, we introduce multi-objective optimization and Pareto optimality into ERTO. During the routing process, nodes calculate the optimal transmission power and forwarding node degree according to the properties of the Pareto optimal solution set. Based on these innovations, the energy consumption, the transmission delay, and the throughput are all greatly improved.
2307.16019
Lia Morra
Francesco Manigrasso and Lia Morra and Fabrizio Lamberti
Fuzzy Logic Visual Network (FLVN): A neuro-symbolic approach for visual features matching
Accepted for publication at ICIAP 2023
null
null
null
cs.CV cs.LG cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuro-symbolic integration aims at harnessing the power of symbolic knowledge representation combined with the learning capabilities of deep neural networks. In particular, Logic Tensor Networks (LTNs) make it possible to incorporate background knowledge in the form of logical axioms by grounding a first-order logic language as differentiable operations between real tensors. Yet, few studies have investigated the potential benefits of this approach for improving zero-shot learning (ZSL) classification. In this study, we present the Fuzzy Logic Visual Network (FLVN), which formulates the task of learning a visual-semantic embedding space within a neuro-symbolic LTN framework. FLVN incorporates prior knowledge in the form of class hierarchies (classes and macro-classes) along with robust high-level inductive biases. The latter make it possible, for instance, to handle exceptions in class-level attributes and to enforce similarity between images of the same class, preventing premature overfitting to seen classes and improving overall performance. FLVN reaches state-of-the-art performance on the Generalized ZSL (GZSL) benchmarks AWA2 and CUB, improving by 1.3% and 3%, respectively. Overall, it achieves performance competitive with recent ZSL methods at a lower computational overhead. FLVN is available at https://gitlab.com/grains2/flvn.
[ { "created": "Sat, 29 Jul 2023 16:21:40 GMT", "version": "v1" } ]
2023-08-01
[ [ "Manigrasso", "Francesco", "" ], [ "Morra", "Lia", "" ], [ "Lamberti", "Fabrizio", "" ] ]
Neuro-symbolic integration aims at harnessing the power of symbolic knowledge representation combined with the learning capabilities of deep neural networks. In particular, Logic Tensor Networks (LTNs) make it possible to incorporate background knowledge in the form of logical axioms by grounding a first-order logic language as differentiable operations between real tensors. Yet, few studies have investigated the potential benefits of this approach for improving zero-shot learning (ZSL) classification. In this study, we present the Fuzzy Logic Visual Network (FLVN), which formulates the task of learning a visual-semantic embedding space within a neuro-symbolic LTN framework. FLVN incorporates prior knowledge in the form of class hierarchies (classes and macro-classes) along with robust high-level inductive biases. The latter make it possible, for instance, to handle exceptions in class-level attributes and to enforce similarity between images of the same class, preventing premature overfitting to seen classes and improving overall performance. FLVN reaches state-of-the-art performance on the Generalized ZSL (GZSL) benchmarks AWA2 and CUB, improving by 1.3% and 3%, respectively. Overall, it achieves performance competitive with recent ZSL methods at a lower computational overhead. FLVN is available at https://gitlab.com/grains2/flvn.
2204.00536
Chuhan Wu
Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
Semi-FairVAE: Semi-supervised Fair Representation Learning with Adversarial Variational Autoencoder
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Adversarial learning is a widely used technique in fair representation learning to remove biases on sensitive attributes from data representations. It usually requires incorporating the sensitive attribute labels as prediction targets. However, in many scenarios the sensitive attribute labels of many samples are unknown, and it is difficult to train a strong discriminator on the scarce data with observed attribute labels, which may lead to unfair representations. In this paper, we propose a semi-supervised fair representation learning approach based on an adversarial variational autoencoder, which reduces the dependency of adversarial fair models on data with labeled sensitive attributes. More specifically, we use a bias-aware model to capture inherent bias information on sensitive attributes by accurately predicting them from the input data, and a bias-free model to learn debiased fair representations by using adversarial learning to remove the bias information from them. The hidden representations learned by the two models are regularized to be orthogonal. In addition, the soft labels predicted by the two models are further integrated into a semi-supervised variational autoencoder to reconstruct the input data, and we apply an additional entropy regularization to encourage the attribute labels inferred from the bias-free model to have high entropy. In this way, the bias-aware model can better capture attribute information, while the bias-free model is less discriminative on sensitive attributes if the input data is well reconstructed. Extensive experiments on two datasets for different tasks validate that our approach achieves good fairness in representation learning with limited data labeled with sensitive attributes.
[ { "created": "Fri, 1 Apr 2022 15:57:47 GMT", "version": "v1" } ]
2022-04-04
[ [ "Wu", "Chuhan", "" ], [ "Wu", "Fangzhao", "" ], [ "Qi", "Tao", "" ], [ "Huang", "Yongfeng", "" ] ]
Adversarial learning is a widely used technique in fair representation learning to remove biases on sensitive attributes from data representations. It usually requires incorporating the sensitive attribute labels as prediction targets. However, in many scenarios the sensitive attribute labels of many samples are unknown, and it is difficult to train a strong discriminator on the scarce data with observed attribute labels, which may lead to unfair representations. In this paper, we propose a semi-supervised fair representation learning approach based on an adversarial variational autoencoder, which reduces the dependency of adversarial fair models on data with labeled sensitive attributes. More specifically, we use a bias-aware model to capture inherent bias information on sensitive attributes by accurately predicting them from the input data, and a bias-free model to learn debiased fair representations by using adversarial learning to remove the bias information from them. The hidden representations learned by the two models are regularized to be orthogonal. In addition, the soft labels predicted by the two models are further integrated into a semi-supervised variational autoencoder to reconstruct the input data, and we apply an additional entropy regularization to encourage the attribute labels inferred from the bias-free model to have high entropy. In this way, the bias-aware model can better capture attribute information, while the bias-free model is less discriminative on sensitive attributes if the input data is well reconstructed. Extensive experiments on two datasets for different tasks validate that our approach achieves good fairness in representation learning with limited data labeled with sensitive attributes.
2204.09350
Tiejun Lv
Zhongyu Wang, Tiejun Lv, Jie Zeng, and Wei Ni
Placement and Resource Allocation of Wireless-Powered Multiantenna UAV for Energy-Efficient Multiuser NOMA
15 pages, 11 figures, Accepted by IEEE Transactions on Wireless Communications
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/publicdomain/zero/1.0/
This paper investigates a new downlink nonorthogonal multiple access (NOMA) system, where a multiantenna unmanned aerial vehicle (UAV) is powered by wireless power transfer (WPT) and serves as the base station for multiple pairs of ground users (GUs) running NOMA in each pair. An energy efficiency (EE) maximization problem is formulated to jointly optimize the WPT time and the placement for the UAV, and the allocation of the UAV's transmit power between different NOMA user pairs and within each pair. To efficiently solve this nonconvex problem, we decompose the problem into three subproblems using block coordinate descent. For the subproblem of intra-pair power allocation within each NOMA user pair, we construct a supermodular game with confirmed convergence to a Nash equilibrium. Given the intra-pair power allocation, successive convex approximation is applied to convexify and solve the subproblem of WPT time allocation and inter-pair power allocation between the user pairs. Finally, we solve the subproblem of UAV placement by using the Lagrange multiplier method. Simulations show that our approach can substantially outperform its alternatives that do not use NOMA and WPT techniques or that do not optimize the UAV location.
[ { "created": "Wed, 20 Apr 2022 09:51:28 GMT", "version": "v1" } ]
2022-04-21
[ [ "Wang", "Zhongyu", "" ], [ "Lv", "Tiejun", "" ], [ "Zeng", "Jie", "" ], [ "Ni", "Wei", "" ] ]
This paper investigates a new downlink nonorthogonal multiple access (NOMA) system, where a multiantenna unmanned aerial vehicle (UAV) is powered by wireless power transfer (WPT) and serves as the base station for multiple pairs of ground users (GUs) running NOMA in each pair. An energy efficiency (EE) maximization problem is formulated to jointly optimize the WPT time and the placement for the UAV, and the allocation of the UAV's transmit power between different NOMA user pairs and within each pair. To efficiently solve this nonconvex problem, we decompose the problem into three subproblems using block coordinate descent. For the subproblem of intra-pair power allocation within each NOMA user pair, we construct a supermodular game with confirmed convergence to a Nash equilibrium. Given the intra-pair power allocation, successive convex approximation is applied to convexify and solve the subproblem of WPT time allocation and inter-pair power allocation between the user pairs. Finally, we solve the subproblem of UAV placement by using the Lagrange multiplier method. Simulations show that our approach can substantially outperform its alternatives that do not use NOMA and WPT techniques or that do not optimize the UAV location.
1709.08436
Zhijie Zhang
Xiaoming Sun, Jialin Zhang, Zhijie Zhang
A Linear Algorithm for Finding the Sink of Unique Sink Orientations on Grids
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An orientation of a grid is called a unique sink orientation (USO) if each of its nonempty subgrids has a unique sink. In particular, the original grid itself has a unique global sink. In this work we investigate the problem of finding the global sink using the minimum number of queries to an oracle. There are two different oracle models: the vertex query model, in which the orientations of all edges incident to the queried vertex are provided, and the edge query model, in which the orientation of the queried edge is provided. In the 2-dimensional case, we design an optimal linear deterministic algorithm for the vertex query model and an almost linear deterministic algorithm for the edge query model; previously, the best known algorithms ran in O(N log N) time for the vertex query model and O(N^1.404) time for the edge query model.
[ { "created": "Mon, 25 Sep 2017 11:49:16 GMT", "version": "v1" } ]
2017-09-26
[ [ "Sun", "Xiaoming", "" ], [ "Zhang", "Jialin", "" ], [ "Zhang", "Zhijie", "" ] ]
An orientation of a grid is called a unique sink orientation (USO) if each of its nonempty subgrids has a unique sink. In particular, the original grid itself has a unique global sink. In this work we investigate the problem of finding the global sink using the minimum number of queries to an oracle. There are two different oracle models: the vertex query model, in which the orientations of all edges incident to the queried vertex are provided, and the edge query model, in which the orientation of the queried edge is provided. In the 2-dimensional case, we design an optimal linear deterministic algorithm for the vertex query model and an almost linear deterministic algorithm for the edge query model; previously, the best known algorithms ran in O(N log N) time for the vertex query model and O(N^1.404) time for the edge query model.
1810.01015
Salimeh Yasaei Sekeh
Salimeh Yasaei Sekeh, Morteza Noshad, Kevin R. Moon, Alfred O. Hero
Convergence Rates for Empirical Estimation of Binary Classification Bounds
27 pages, 8 figures
null
null
null
cs.IT cs.LG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bounding the best achievable error probability for binary classification problems is relevant to many applications including machine learning, signal processing, and information theory. Many bounds on the Bayes binary classification error rate depend on information divergences between the pair of class distributions. Recently, the Henze-Penrose (HP) divergence has been proposed for bounding classification error probability. We consider the problem of empirically estimating the HP-divergence from random samples. We derive a bound on the convergence rate for the Friedman-Rafsky (FR) estimator of the HP-divergence, which is related to a multivariate runs statistic for testing between two distributions. The FR estimator is derived from a multicolored Euclidean minimal spanning tree (MST) that spans the merged samples. We obtain a concentration inequality for the Friedman-Rafsky estimator of the Henze-Penrose divergence. We validate our results experimentally and illustrate their application to real datasets.
[ { "created": "Mon, 1 Oct 2018 23:53:54 GMT", "version": "v1" } ]
2018-10-03
[ [ "Sekeh", "Salimeh Yasaei", "" ], [ "Noshad", "Morteza", "" ], [ "Moon", "Kevin R.", "" ], [ "Hero", "Alfred O.", "" ] ]
Bounding the best achievable error probability for binary classification problems is relevant to many applications including machine learning, signal processing, and information theory. Many bounds on the Bayes binary classification error rate depend on information divergences between the pair of class distributions. Recently, the Henze-Penrose (HP) divergence has been proposed for bounding classification error probability. We consider the problem of empirically estimating the HP-divergence from random samples. We derive a bound on the convergence rate for the Friedman-Rafsky (FR) estimator of the HP-divergence, which is related to a multivariate runs statistic for testing between two distributions. The FR estimator is derived from a multicolored Euclidean minimal spanning tree (MST) that spans the merged samples. We obtain a concentration inequality for the Friedman-Rafsky estimator of the Henze-Penrose divergence. We validate our results experimentally and illustrate their application to real datasets.
2407.03821
Bardh Prenkaj
Davide Gabrielli, Bardh Prenkaj, Paola Velardi
Seamless Monitoring of Stress Levels Leveraging a Universal Model for Time Sequences
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Monitoring the stress level in patients with neurodegenerative diseases can help manage symptoms, improve patients' quality of life, and provide insight into disease progression. In the literature, ECG, actigraphy, speech, voice, and facial analysis have proven effective at detecting patients' emotions. On the other hand, these tools are invasive and do not integrate smoothly into the patient's daily life. HRV has also been proven to effectively indicate stress conditions, especially in combination with other signals. However, when HRV is derived from less invasive devices than the ECG, like smartwatches and bracelets, the quality of measurements significantly degrades. This paper presents a methodology for stress detection from a smartwatch based on a universal model for time series, UniTS, which we fine-tuned for the task. We cast the problem as anomaly detection rather than classification to favor model adaptation to individual patients and to allow the clinician to maintain greater control over the system's predictions. We demonstrate that our proposed model considerably surpasses 12 top-performing methods on 3 benchmark datasets. Furthermore, unlike other state-of-the-art systems, UniTS enables seamless monitoring, as it shows comparable performance when using signals from invasive or lightweight devices.
[ { "created": "Thu, 4 Jul 2024 10:46:09 GMT", "version": "v1" } ]
2024-07-08
[ [ "Gabrielli", "Davide", "" ], [ "Prenkaj", "Bardh", "" ], [ "Velardi", "Paola", "" ] ]
Monitoring the stress level in patients with neurodegenerative diseases can help manage symptoms, improve patients' quality of life, and provide insight into disease progression. In the literature, ECG, actigraphy, speech, voice, and facial analysis have proven effective at detecting patients' emotions. On the other hand, these tools are invasive and do not integrate smoothly into the patient's daily life. HRV has also been proven to effectively indicate stress conditions, especially in combination with other signals. However, when HRV is derived from less invasive devices than the ECG, like smartwatches and bracelets, the quality of measurements significantly degrades. This paper presents a methodology for stress detection from a smartwatch based on a universal model for time series, UniTS, which we fine-tuned for the task. We cast the problem as anomaly detection rather than classification to favor model adaptation to individual patients and to allow the clinician to maintain greater control over the system's predictions. We demonstrate that our proposed model considerably surpasses 12 top-performing methods on 3 benchmark datasets. Furthermore, unlike other state-of-the-art systems, UniTS enables seamless monitoring, as it shows comparable performance when using signals from invasive or lightweight devices.
2106.07758
Sahil Verma
Sahil Verma, Aditya Lahiri, John P. Dickerson, Su-In Lee
Pitfalls of Explainable ML: An Industry Perspective
Presented at JOURNE workshop at MLSYS 2021 (https://sites.google.com/view/workshop-journe/home)
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As machine learning (ML) systems take a more prominent and central role in contributing to life-impacting decisions, ensuring their trustworthiness and accountability is of utmost importance. Explanations sit at the core of these desirable attributes of an ML system. The emerging field is frequently called ``Explainable AI (XAI)'' or ``Explainable ML.'' The goal of explainable ML is to intuitively explain the predictions of an ML system, while adhering to the needs of various stakeholders. Many explanation techniques have been developed with contributions from both academia and industry. However, several existing challenges have not garnered enough interest and serve as roadblocks to the widespread adoption of explainable ML. In this short paper, we enumerate challenges in explainable ML from an industry perspective. We hope these challenges will serve as promising future research directions and contribute to democratizing explainable ML.
[ { "created": "Mon, 14 Jun 2021 21:05:05 GMT", "version": "v1" } ]
2021-06-16
[ [ "Verma", "Sahil", "" ], [ "Lahiri", "Aditya", "" ], [ "Dickerson", "John P.", "" ], [ "Lee", "Su-In", "" ] ]
As machine learning (ML) systems take a more prominent and central role in contributing to life-impacting decisions, ensuring their trustworthiness and accountability is of utmost importance. Explanations sit at the core of these desirable attributes of an ML system. The emerging field is frequently called ``Explainable AI (XAI)'' or ``Explainable ML.'' The goal of explainable ML is to intuitively explain the predictions of an ML system, while adhering to the needs of various stakeholders. Many explanation techniques have been developed with contributions from both academia and industry. However, several existing challenges have not garnered enough interest and serve as roadblocks to the widespread adoption of explainable ML. In this short paper, we enumerate challenges in explainable ML from an industry perspective. We hope these challenges will serve as promising future research directions and contribute to democratizing explainable ML.
2106.09051
Paul Henderson
Paul Henderson, Christoph H. Lampert, Bernd Bickel
Unsupervised Video Prediction from a Single Frame by Estimating 3D Dynamic Scene Structure
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our goal in this work is to generate realistic videos given just one initial frame as input. Existing unsupervised approaches to this task do not consider the fact that a video typically shows a 3D environment, and that this should remain coherent from frame to frame even as the camera and objects move. We address this by developing a model that first estimates the latent 3D structure of the scene, including the segmentation of any moving objects. It then predicts future frames by simulating the object and camera dynamics, and rendering the resulting views. Importantly, it is trained end-to-end using only the unsupervised objective of predicting future frames, without any 3D information nor segmentation annotations. Experiments on two challenging datasets of natural videos show that our model can estimate 3D structure and motion segmentation from a single frame, and hence generate plausible and varied predictions.
[ { "created": "Wed, 16 Jun 2021 18:00:12 GMT", "version": "v1" } ]
2021-06-18
[ [ "Henderson", "Paul", "" ], [ "Lampert", "Christoph H.", "" ], [ "Bickel", "Bernd", "" ] ]
Our goal in this work is to generate realistic videos given just one initial frame as input. Existing unsupervised approaches to this task do not consider the fact that a video typically shows a 3D environment, and that this should remain coherent from frame to frame even as the camera and objects move. We address this by developing a model that first estimates the latent 3D structure of the scene, including the segmentation of any moving objects. It then predicts future frames by simulating the object and camera dynamics, and rendering the resulting views. Importantly, it is trained end-to-end using only the unsupervised objective of predicting future frames, without any 3D information nor segmentation annotations. Experiments on two challenging datasets of natural videos show that our model can estimate 3D structure and motion segmentation from a single frame, and hence generate plausible and varied predictions.
2310.09952
Arshia Soltani Moakhar
Arshia Soltani Moakhar, Mohammad Azizmalayeri, Hossein Mirzaei, Mohammad Taghi Manzuri, Mohammad Hossein Rohban
Seeking Next Layer Neurons' Attention for Error-Backpropagation-Like Training in a Multi-Agent Network Framework
null
null
null
null
cs.NE cs.AI cs.GT cs.LG cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite considerable theoretical progress in the training of neural networks viewed as a multi-agent system of neurons, particularly concerning biological plausibility and decentralized training, their applicability to real-world problems remains limited due to scalability issues. In contrast, error-backpropagation has demonstrated its effectiveness for training deep networks in practice. In this study, we propose a local objective for neurons that, when pursued by each neuron individually, aligns them to exhibit similarities to error-backpropagation in terms of efficiency and scalability during training. For this purpose, we examine a neural network comprising decentralized, self-interested neurons seeking to maximize their local objective -- attention from subsequent-layer neurons -- and identify the optimal strategy for neurons. We also analyze the relationship between this strategy and backpropagation, establishing conditions under which the derived strategy is equivalent to error-backpropagation. Lastly, we demonstrate the learning capacity of these multi-agent neural networks through experiments on three datasets and showcase their superior performance relative to error-backpropagation in a catastrophic forgetting benchmark.
[ { "created": "Sun, 15 Oct 2023 21:07:09 GMT", "version": "v1" } ]
2023-10-17
[ [ "Moakhar", "Arshia Soltani", "" ], [ "Azizmalayeri", "Mohammad", "" ], [ "Mirzaei", "Hossein", "" ], [ "Manzuri", "Mohammad Taghi", "" ], [ "Rohban", "Mohammad Hossein", "" ] ]
Despite considerable theoretical progress in the training of neural networks viewed as a multi-agent system of neurons, particularly concerning biological plausibility and decentralized training, their applicability to real-world problems remains limited due to scalability issues. In contrast, error-backpropagation has demonstrated its effectiveness for training deep networks in practice. In this study, we propose a local objective for neurons that, when pursued by each neuron individually, aligns them to exhibit similarities to error-backpropagation in terms of efficiency and scalability during training. For this purpose, we examine a neural network comprising decentralized, self-interested neurons seeking to maximize their local objective -- attention from subsequent-layer neurons -- and identify the optimal strategy for neurons. We also analyze the relationship between this strategy and backpropagation, establishing conditions under which the derived strategy is equivalent to error-backpropagation. Lastly, we demonstrate the learning capacity of these multi-agent neural networks through experiments on three datasets and showcase their superior performance relative to error-backpropagation in a catastrophic forgetting benchmark.
2301.00236
Sandipan Sarma
Sandipan Sarma, Arijit Sur
DiRaC-I: Identifying Diverse and Rare Training Classes for Zero-Shot Learning
22 pages, 10 Figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Inspired by strategies like Active Learning, it is intuitive that intelligently selecting the training classes from a dataset for Zero-Shot Learning (ZSL) can improve the performance of existing ZSL methods. In this work, we propose a framework called Diverse and Rare Class Identifier (DiRaC-I) which, given an attribute-based dataset, can intelligently yield the most suitable "seen classes" for training ZSL models. DiRaC-I has two main goals - constructing a diversified set of seed classes, followed by a visual-semantic mining algorithm initialized by these seed classes that acquires the classes capturing both diversity and rarity in the object domain adequately. These classes can then be used as "seen classes" to train ZSL models for image classification. We adopt a real-world scenario where novel object classes are available to neither DiRaC-I nor the ZSL models during training and conducted extensive experiments on two benchmark data sets for zero-shot image classification - CUB and SUN. Our results demonstrate DiRaC-I helps ZSL models to achieve significant classification accuracy improvements.
[ { "created": "Sat, 31 Dec 2022 16:05:09 GMT", "version": "v1" } ]
2023-01-03
[ [ "Sarma", "Sandipan", "" ], [ "Sur", "Arijit", "" ] ]
Inspired by strategies like Active Learning, it is intuitive that intelligently selecting the training classes from a dataset for Zero-Shot Learning (ZSL) can improve the performance of existing ZSL methods. In this work, we propose a framework called Diverse and Rare Class Identifier (DiRaC-I) which, given an attribute-based dataset, can intelligently yield the most suitable "seen classes" for training ZSL models. DiRaC-I has two main goals - constructing a diversified set of seed classes, followed by a visual-semantic mining algorithm initialized by these seed classes that acquires the classes capturing both diversity and rarity in the object domain adequately. These classes can then be used as "seen classes" to train ZSL models for image classification. We adopt a real-world scenario where novel object classes are available to neither DiRaC-I nor the ZSL models during training and conducted extensive experiments on two benchmark data sets for zero-shot image classification - CUB and SUN. Our results demonstrate DiRaC-I helps ZSL models to achieve significant classification accuracy improvements.
2006.15555
Aviad Aberdam
Aviad Aberdam, Dror Simon, Michael Elad
When and How Can Deep Generative Models be Inverted?
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep generative models (e.g. GANs and VAEs) have been developed quite extensively in recent years. Lately, there has been an increased interest in the inversion of such a model, i.e. given a (possibly corrupted) signal, we wish to recover the latent vector that generated it. Building upon sparse representation theory, we define conditions that are applicable to any inversion algorithm (gradient descent, deep encoder, etc.), under which such generative models are invertible with a unique solution. Importantly, the proposed analysis is applicable to any trained model, and does not depend on Gaussian i.i.d. weights. Furthermore, we introduce two layer-wise inversion pursuit algorithms for trained generative networks of arbitrary depth, and accompany these with recovery guarantees. Finally, we validate our theoretical results numerically and show that our method outperforms gradient descent when inverting such generators, both for clean and corrupted signals.
[ { "created": "Sun, 28 Jun 2020 09:37:52 GMT", "version": "v1" } ]
2020-06-30
[ [ "Aberdam", "Aviad", "" ], [ "Simon", "Dror", "" ], [ "Elad", "Michael", "" ] ]
Deep generative models (e.g. GANs and VAEs) have been developed quite extensively in recent years. Lately, there has been an increased interest in the inversion of such a model, i.e. given a (possibly corrupted) signal, we wish to recover the latent vector that generated it. Building upon sparse representation theory, we define conditions that are applicable to any inversion algorithm (gradient descent, deep encoder, etc.), under which such generative models are invertible with a unique solution. Importantly, the proposed analysis is applicable to any trained model, and does not depend on Gaussian i.i.d. weights. Furthermore, we introduce two layer-wise inversion pursuit algorithms for trained generative networks of arbitrary depth, and accompany these with recovery guarantees. Finally, we validate our theoretical results numerically and show that our method outperforms gradient descent when inverting such generators, both for clean and corrupted signals.
2308.02560
Robin San Roman
Robin San Roman and Yossi Adi and Antoine Deleforge and Romain Serizel and Gabriel Synnaeve and Alexandre D\'efossez
From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion
10 pages
Thirty-seventh Conference on Neural Information Processing Systems (2023)
null
null
cs.SD cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep generative models can generate high-fidelity audio conditioned on various types of representations (e.g., mel-spectrograms, Mel-frequency Cepstral Coefficients (MFCC)). Recently, such models have been used to synthesize audio waveforms conditioned on highly compressed representations. Although such methods produce impressive results, they are prone to generating audible artifacts when the conditioning is flawed or imperfect. An alternative modeling approach is to use diffusion models. However, these have mainly been used as speech vocoders (i.e., conditioned on mel-spectrograms) or for generating relatively low-sampling-rate signals. In this work, we propose a high-fidelity multi-band diffusion-based framework that generates any type of audio modality (e.g., speech, music, environmental sounds) from low-bitrate discrete representations. At an equal bit rate, the proposed approach outperforms state-of-the-art generative techniques in terms of perceptual quality. Training and evaluation code, along with audio samples, are available on the facebookresearch/audiocraft GitHub page.
[ { "created": "Wed, 2 Aug 2023 22:14:29 GMT", "version": "v1" }, { "created": "Wed, 8 Nov 2023 10:04:00 GMT", "version": "v2" } ]
2023-11-09
[ [ "Roman", "Robin San", "" ], [ "Adi", "Yossi", "" ], [ "Deleforge", "Antoine", "" ], [ "Serizel", "Romain", "" ], [ "Synnaeve", "Gabriel", "" ], [ "Défossez", "Alexandre", "" ] ]
Deep generative models can generate high-fidelity audio conditioned on various types of representations (e.g., mel-spectrograms, Mel-frequency Cepstral Coefficients (MFCC)). Recently, such models have been used to synthesize audio waveforms conditioned on highly compressed representations. Although such methods produce impressive results, they are prone to generating audible artifacts when the conditioning is flawed or imperfect. An alternative modeling approach is to use diffusion models. However, these have mainly been used as speech vocoders (i.e., conditioned on mel-spectrograms) or for generating relatively low-sampling-rate signals. In this work, we propose a high-fidelity multi-band diffusion-based framework that generates any type of audio modality (e.g., speech, music, environmental sounds) from low-bitrate discrete representations. At an equal bit rate, the proposed approach outperforms state-of-the-art generative techniques in terms of perceptual quality. Training and evaluation code, along with audio samples, are available on the facebookresearch/audiocraft GitHub page.
2001.06641
Nikita Polyanskii
Andreas Lenz and Nikita Polyanskii
Optimal Codes Correcting a Burst of Deletions of Variable Length
6 pages
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present an efficiently encodable and decodable code construction that is capable of correcting a burst of deletions of length at most $k$. The redundancy of this code is $\log n + k(k+1)/2\log \log n+c_k$ for some constant $c_k$ that only depends on $k$, and is thus scaling-optimal. The code can be split into two main components. First, we impose a constraint that allows us to locate the burst of deletions up to an interval of size roughly $\log n$. Then, with the knowledge of the approximate location of the burst, we use several shifted Varshamov-Tenengolts codes to correct the burst of deletions, which only requires a small amount of redundancy since the location is already known up to an interval of small size. Finally, we show how to efficiently encode and decode the code.
[ { "created": "Sat, 18 Jan 2020 09:59:52 GMT", "version": "v1" } ]
2020-01-22
[ [ "Lenz", "Andreas", "" ], [ "Polyanskii", "Nikita", "" ] ]
In this paper, we present an efficiently encodable and decodable code construction that is capable of correcting a burst of deletions of length at most $k$. The redundancy of this code is $\log n + k(k+1)/2\log \log n+c_k$ for some constant $c_k$ that only depends on $k$, and is thus scaling-optimal. The code can be split into two main components. First, we impose a constraint that allows us to locate the burst of deletions up to an interval of size roughly $\log n$. Then, with the knowledge of the approximate location of the burst, we use several shifted Varshamov-Tenengolts codes to correct the burst of deletions, which only requires a small amount of redundancy since the location is already known up to an interval of small size. Finally, we show how to efficiently encode and decode the code.
2209.14401
Kaiyu Wu
Rathish Das, Meng He, Eitan Kondratovsky, J. Ian Munro, Anurag Murty Naredla, Kaiyu Wu
Shortest Beer Path Queries in Interval Graphs
To appear in ISAAC 2022
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our interest is in paths between pairs of vertices that go through at least one of a subset of the vertices known as beer vertices. Such a path is called a beer path, and the beer distance between two vertices is the length of the shortest beer path. We show that we can represent unweighted interval graphs using $2n \log n + O(n) + O(|B|\log n)$ bits where $|B|$ is the number of beer vertices. This data structure answers beer distance queries in $O(\log^\varepsilon n)$ time for any constant $\varepsilon > 0$ and shortest beer path queries in $O(\log^\varepsilon n + d)$ time, where $d$ is the beer distance between the two nodes. We also show that proper interval graphs may be represented using $3n + o(n)$ bits to support beer distance queries in $O(f(n)\log n)$ time for any $f(n) \in \omega(1)$ and shortest beer path queries in $O(d)$ time. All of these results also have time-space trade-offs. Lastly we show that the information theoretic lower bound for beer proper interval graphs is very close to the space of our structure, namely $\log(4+2\sqrt{3})n - o(n)$ (or about $ 2.9 n$) bits.
[ { "created": "Wed, 28 Sep 2022 19:56:28 GMT", "version": "v1" } ]
2022-09-30
[ [ "Das", "Rathish", "" ], [ "He", "Meng", "" ], [ "Kondratovsky", "Eitan", "" ], [ "Munro", "J. Ian", "" ], [ "Naredla", "Anurag Murty", "" ], [ "Wu", "Kaiyu", "" ] ]
Our interest is in paths between pairs of vertices that go through at least one of a subset of the vertices known as beer vertices. Such a path is called a beer path, and the beer distance between two vertices is the length of the shortest beer path. We show that we can represent unweighted interval graphs using $2n \log n + O(n) + O(|B|\log n)$ bits where $|B|$ is the number of beer vertices. This data structure answers beer distance queries in $O(\log^\varepsilon n)$ time for any constant $\varepsilon > 0$ and shortest beer path queries in $O(\log^\varepsilon n + d)$ time, where $d$ is the beer distance between the two nodes. We also show that proper interval graphs may be represented using $3n + o(n)$ bits to support beer distance queries in $O(f(n)\log n)$ time for any $f(n) \in \omega(1)$ and shortest beer path queries in $O(d)$ time. All of these results also have time-space trade-offs. Lastly we show that the information theoretic lower bound for beer proper interval graphs is very close to the space of our structure, namely $\log(4+2\sqrt{3})n - o(n)$ (or about $ 2.9 n$) bits.
2204.07010
Yue Ning
Jiaxuan Li and Yue Ning
Anti-Asian Hate Speech Detection via Data Augmented Semantic Relation Inference
To appear in Proceedings of the 16th International AAAI Conference on Web and Social Media (ICWSM)
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
With the spread of hate speech on social media in recent years, automatic detection of hate speech has become a crucial task and has attracted attention from various communities. This task aims to recognize online posts (e.g., tweets) that contain hateful information. The peculiarities of language on social media, such as short and poorly written content, make it difficult to learn semantics and capture discriminative features of hate speech. Previous studies have utilized additional useful resources, such as sentiment hashtags, to improve the performance of hate speech detection. Hashtags are added as input features serving either as sentiment lexicons or extra context information. However, our close investigation shows that directly leveraging these features without considering their context may introduce noise to classifiers. In this paper, we propose a novel approach that leverages sentiment hashtags to enhance hate speech detection in a natural language inference framework. We design a novel framework, SRIC, that simultaneously performs two tasks: (1) semantic relation inference between online posts and sentiment hashtags, and (2) sentiment classification on these posts. The semantic relation inference aims to encourage the model to encode sentiment-indicative information into representations of online posts. We conduct extensive experiments on two real-world datasets and demonstrate the effectiveness of our proposed framework compared with state-of-the-art representation learning models.
[ { "created": "Thu, 14 Apr 2022 15:03:35 GMT", "version": "v1" } ]
2022-04-15
[ [ "Li", "Jiaxuan", "" ], [ "Ning", "Yue", "" ] ]
With the spread of hate speech on social media in recent years, automatic detection of hate speech has become a crucial task and has attracted attention from various communities. This task aims to recognize online posts (e.g., tweets) that contain hateful information. The peculiarities of language on social media, such as short and poorly written content, make it difficult to learn semantics and capture discriminative features of hate speech. Previous studies have utilized additional useful resources, such as sentiment hashtags, to improve the performance of hate speech detection. Hashtags are added as input features serving either as sentiment lexicons or extra context information. However, our close investigation shows that directly leveraging these features without considering their context may introduce noise to classifiers. In this paper, we propose a novel approach that leverages sentiment hashtags to enhance hate speech detection in a natural language inference framework. We design a novel framework, SRIC, that simultaneously performs two tasks: (1) semantic relation inference between online posts and sentiment hashtags, and (2) sentiment classification on these posts. The semantic relation inference aims to encourage the model to encode sentiment-indicative information into representations of online posts. We conduct extensive experiments on two real-world datasets and demonstrate the effectiveness of our proposed framework compared with state-of-the-art representation learning models.
1110.1687
Chi-Yao Hong
Ankit Singla, Chi-Yao Hong, Lucian Popa, P. Brighten Godfrey
Jellyfish: Networking Data Centers Randomly
14 pages, 12 figures
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Industry experience indicates that the ability to incrementally expand data centers is essential. However, existing high-bandwidth network designs have rigid structure that interferes with incremental expansion. We present Jellyfish, a high-capacity network interconnect which, by adopting a random graph topology, lends itself naturally to incremental expansion. Somewhat surprisingly, Jellyfish is more cost-efficient than a fat-tree: a Jellyfish interconnect built using the same equipment as a fat-tree supports as many as 25% more servers at full capacity at the scale of a few thousand nodes, and this advantage improves with scale. Jellyfish also allows great flexibility in building networks with different degrees of oversubscription. However, Jellyfish's unstructured design brings new challenges in routing, physical layout, and wiring. We describe and evaluate approaches that resolve these challenges effectively, indicating that Jellyfish could be deployed in today's data centers.
[ { "created": "Sat, 8 Oct 2011 01:24:57 GMT", "version": "v1" }, { "created": "Wed, 12 Oct 2011 15:05:52 GMT", "version": "v2" }, { "created": "Fri, 20 Apr 2012 20:38:43 GMT", "version": "v3" } ]
2012-04-24
[ [ "Singla", "Ankit", "" ], [ "Hong", "Chi-Yao", "" ], [ "Popa", "Lucian", "" ], [ "Godfrey", "P. Brighten", "" ] ]
Industry experience indicates that the ability to incrementally expand data centers is essential. However, existing high-bandwidth network designs have rigid structure that interferes with incremental expansion. We present Jellyfish, a high-capacity network interconnect which, by adopting a random graph topology, lends itself naturally to incremental expansion. Somewhat surprisingly, Jellyfish is more cost-efficient than a fat-tree: a Jellyfish interconnect built using the same equipment as a fat-tree supports as many as 25% more servers at full capacity at the scale of a few thousand nodes, and this advantage improves with scale. Jellyfish also allows great flexibility in building networks with different degrees of oversubscription. However, Jellyfish's unstructured design brings new challenges in routing, physical layout, and wiring. We describe and evaluate approaches that resolve these challenges effectively, indicating that Jellyfish could be deployed in today's data centers.
1601.08059
Nikos Bikakis
Nikos Bikakis, Timos Sellis
Exploration and Visualization in the Web of Big Linked Data: A Survey of the State of the Art
6th International Workshop on Linked Web Data Management (LWDM 2016)
null
null
null
cs.HC cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data exploration and visualization systems are of great importance in the Big Data era. Exploring and visualizing very large datasets has become a major research challenge, of which scalability is a vital requirement. In this survey, we describe the major prerequisites and challenges that should be addressed by modern exploration and visualization systems. Considering these challenges, we present how state-of-the-art approaches from the Database and Information Visualization communities attempt to handle them. Finally, we survey the systems developed by the Semantic Web community in the context of the Web of Linked Data, and discuss to what extent they satisfy the contemporary requirements.
[ { "created": "Fri, 29 Jan 2016 11:30:44 GMT", "version": "v1" } ]
2016-02-01
[ [ "Bikakis", "Nikos", "" ], [ "Sellis", "Timos", "" ] ]
Data exploration and visualization systems are of great importance in the Big Data era. Exploring and visualizing very large datasets has become a major research challenge, of which scalability is a vital requirement. In this survey, we describe the major prerequisites and challenges that should be addressed by modern exploration and visualization systems. Considering these challenges, we present how state-of-the-art approaches from the Database and Information Visualization communities attempt to handle them. Finally, we survey the systems developed by the Semantic Web community in the context of the Web of Linked Data, and discuss to what extent they satisfy the contemporary requirements.
1911.12562
Guozhu Meng
Yingzhe He and Guozhu Meng and Kai Chen and Xingbo Hu and Jinwen He
Towards Security Threats of Deep Learning Systems: A Survey
28 pages, 6 figures
IEEE Transactions on Software Engineering 2020
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has gained tremendous success and great popularity in the past few years. However, deep learning systems suffer from several inherent weaknesses that can threaten the security of learning models. Deep learning's wide use further magnifies the impact and consequences. To this end, much research has been conducted with the purpose of exhaustively identifying intrinsic weaknesses and subsequently proposing feasible mitigations. Yet little is clear about how these weaknesses are incurred and how effective these attack approaches are in assaulting deep learning. In order to unveil the security weaknesses and aid in the development of a robust deep learning system, we undertake an investigation of attacks on deep learning, and analyze these attacks to draw findings from multiple perspectives. In particular, we focus on four types of attacks associated with security threats of deep learning: model extraction attacks, model inversion attacks, poisoning attacks, and adversarial attacks. For each type of attack, we construct its essential workflow as well as the adversary's capabilities and attack goals. Pivot metrics are devised for comparing the attack approaches, by which we perform quantitative and qualitative analyses. From the analysis, we have identified significant and indispensable factors in an attack vector, e.g., how to reduce queries to target models and what distance should be used for measuring perturbation. We shed light on 18 findings covering these approaches' merits and demerits, success probability, deployment complexity, and prospects. Moreover, we discuss other potential security weaknesses and possible mitigations that can inspire relevant research in this area.
[ { "created": "Thu, 28 Nov 2019 07:16:05 GMT", "version": "v1" }, { "created": "Tue, 27 Oct 2020 17:27:53 GMT", "version": "v2" } ]
2020-10-28
[ [ "He", "Yingzhe", "" ], [ "Meng", "Guozhu", "" ], [ "Chen", "Kai", "" ], [ "Hu", "Xingbo", "" ], [ "He", "Jinwen", "" ] ]
Deep learning has gained tremendous success and great popularity in the past few years. However, deep learning systems suffer from several inherent weaknesses that can threaten the security of learning models. Deep learning's wide use further magnifies the impact and consequences. To this end, a great deal of research has been conducted with the purpose of exhaustively identifying intrinsic weaknesses and subsequently proposing feasible mitigations. Yet it remains unclear how these weaknesses arise and how effective these attack approaches are against deep learning. In order to unveil the security weaknesses and aid in the development of robust deep learning systems, we investigate attacks on deep learning and analyze them to draw findings from multiple perspectives. In particular, we focus on four types of attacks associated with security threats to deep learning: model extraction attacks, model inversion attacks, poisoning attacks, and adversarial attacks. For each type of attack, we construct its essential workflow as well as the adversary's capabilities and attack goals. Pivot metrics are devised for comparing the attack approaches, by which we perform quantitative and qualitative analyses. From the analysis, we have identified significant and indispensable factors in an attack vector, e.g., how to reduce the number of queries to target models and which distance metric should be used to measure perturbation. We shed light on 18 findings covering these approaches' merits and demerits, success probability, deployment complexity, and prospects. Moreover, we discuss other potential security weaknesses and possible mitigations that can inspire relevant research in this area.
2007.13723
Claudio Santos
Claudio Filipi Goncalves do Santos, Danilo Colombo, Mateus Roder, Jo\~ao Paulo Papa
MaxDropout: Deep Neural Network Regularization Based on Maximum Output Values
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Different techniques have emerged in the deep learning scenario, such as Convolutional Neural Networks, Deep Belief Networks, and Long Short-Term Memory Networks, to cite a few. In lockstep, regularization methods, which aim to prevent overfitting by penalizing the weight connections or turning off some units, have been widely studied as well. In this paper, we present a novel approach called MaxDropout, a regularizer for deep neural network models that works in a supervised fashion by removing (shutting off) the most prominent (i.e., most active) neurons in each hidden layer. The model forces fewer activated units to learn more representative information, thus providing sparsity. Regarding the experiments, we show that it is possible to improve existing neural networks and obtain better results when Dropout is replaced by MaxDropout. The proposed method was evaluated on image classification, achieving results comparable to existing regularizers such as Cutout and RandomErasing, and also improving the accuracy of neural networks that use Dropout by replacing the existing layer with MaxDropout.
[ { "created": "Mon, 27 Jul 2020 17:55:54 GMT", "version": "v1" } ]
2020-07-28
[ [ "Santos", "Claudio Filipi Goncalves do", "" ], [ "Colombo", "Danilo", "" ], [ "Roder", "Mateus", "" ], [ "Papa", "João Paulo", "" ] ]
Different techniques have emerged in the deep learning scenario, such as Convolutional Neural Networks, Deep Belief Networks, and Long Short-Term Memory Networks, to cite a few. In lockstep, regularization methods, which aim to prevent overfitting by penalizing the weight connections or turning off some units, have been widely studied as well. In this paper, we present a novel approach called MaxDropout, a regularizer for deep neural network models that works in a supervised fashion by removing (shutting off) the most prominent (i.e., most active) neurons in each hidden layer. The model forces fewer activated units to learn more representative information, thus providing sparsity. Regarding the experiments, we show that it is possible to improve existing neural networks and obtain better results when Dropout is replaced by MaxDropout. The proposed method was evaluated on image classification, achieving results comparable to existing regularizers such as Cutout and RandomErasing, and also improving the accuracy of neural networks that use Dropout by replacing the existing layer with MaxDropout.
2002.01218
Eduard Eiben
Eduard Eiben and Daniel Lokshtanov
Removing Connected Obstacles in the Plane is FPT
null
null
null
null
cs.DS cs.CG cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given two points in the plane, a set of obstacles defined by closed curves, and an integer $k$, does there exist a path between the two designated points intersecting at most $k$ of the obstacles? This is a fundamental and well-studied problem arising naturally in computational geometry, graph theory, wireless computing, and motion planning. It remains $\textsf{NP}$-hard even when the obstacles are very simple geometric shapes (e.g., unit-length line segments). In this paper, we show that the problem is fixed-parameter tractable ($\textsf{FPT}$) parameterized by $k$, by giving an algorithm with running time $k^{O(k^3)}n^{O(1)}$. Here $n$ is the number of connected areas in the plane drawing of all the obstacles.
[ { "created": "Tue, 4 Feb 2020 10:50:28 GMT", "version": "v1" } ]
2020-02-05
[ [ "Eiben", "Eduard", "" ], [ "Lokshtanov", "Daniel", "" ] ]
Given two points in the plane, a set of obstacles defined by closed curves, and an integer $k$, does there exist a path between the two designated points intersecting at most $k$ of the obstacles? This is a fundamental and well-studied problem arising naturally in computational geometry, graph theory, wireless computing, and motion planning. It remains $\textsf{NP}$-hard even when the obstacles are very simple geometric shapes (e.g., unit-length line segments). In this paper, we show that the problem is fixed-parameter tractable ($\textsf{FPT}$) parameterized by $k$, by giving an algorithm with running time $k^{O(k^3)}n^{O(1)}$. Here $n$ is the number of connected areas in the plane drawing of all the obstacles.
2110.15701
Chris Reinke
Chris Reinke, Xavier Alameda-Pineda
Successor Feature Representations
published in Transactions on Machine Learning Research (05/2023), source code: https://gitlab.inria.fr/robotlearn/sfr_learning, [v2] added experiments with learned features, [v3] renamed paper and changed scope, [v4] published version
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transfer in Reinforcement Learning aims to improve learning performance on target tasks using knowledge from experienced source tasks. Successor Representations (SR) and their extension Successor Features (SF) are prominent transfer mechanisms in domains where reward functions change between tasks. They reevaluate the expected return of previously learned policies in a new target task to transfer their knowledge. The SF framework extended SR by linearly decomposing rewards into successor features and a reward weight vector, allowing their application in high-dimensional tasks. But this came at the cost of imposing a linear relationship between reward functions and successor features, limiting its application to tasks where such a linear relationship exists. We propose a novel formulation of SR based on learning the cumulative discounted probability of successor features, called Successor Feature Representations (SFR). Crucially, SFR makes it possible to reevaluate the expected return of policies for general reward functions. We introduce different SFR variations, prove their convergence, and provide a guarantee on transfer performance. Experimental evaluations based on SFR with function approximation demonstrate its advantage over SF not only for general reward functions, but also in the case of linearly decomposable reward functions.
[ { "created": "Fri, 29 Oct 2021 12:01:48 GMT", "version": "v1" }, { "created": "Wed, 16 Feb 2022 13:13:18 GMT", "version": "v2" }, { "created": "Fri, 16 Dec 2022 10:11:53 GMT", "version": "v3" }, { "created": "Wed, 2 Aug 2023 09:14:54 GMT", "version": "v4" } ]
2023-08-03
[ [ "Reinke", "Chris", "" ], [ "Alameda-Pineda", "Xavier", "" ] ]
Transfer in Reinforcement Learning aims to improve learning performance on target tasks using knowledge from experienced source tasks. Successor Representations (SR) and their extension Successor Features (SF) are prominent transfer mechanisms in domains where reward functions change between tasks. They reevaluate the expected return of previously learned policies in a new target task to transfer their knowledge. The SF framework extended SR by linearly decomposing rewards into successor features and a reward weight vector, allowing their application in high-dimensional tasks. But this came at the cost of imposing a linear relationship between reward functions and successor features, limiting its application to tasks where such a linear relationship exists. We propose a novel formulation of SR based on learning the cumulative discounted probability of successor features, called Successor Feature Representations (SFR). Crucially, SFR makes it possible to reevaluate the expected return of policies for general reward functions. We introduce different SFR variations, prove their convergence, and provide a guarantee on transfer performance. Experimental evaluations based on SFR with function approximation demonstrate its advantage over SF not only for general reward functions, but also in the case of linearly decomposable reward functions.
1904.02459
Neeru Dubey
Neeru Dubey, Shreya Ghosh, Abhinav Dhall
Unsupervised Learning of Eye Gaze Representation from the Web
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic eye gaze estimation has interested researchers for a while now. In this paper, we propose an unsupervised learning based method for estimating the eye gaze region. To train the proposed network "Ize-Net" in a self-supervised manner, we collect a large `in the wild' dataset containing 154,251 images from the web. For the images in the database, we divide the gaze into three regions using an automatic technique based on pupil-center localization, and then use a feature-based technique to determine the gaze region. The performance is evaluated on the Tablet Gaze and CAVE datasets by fine-tuning Ize-Net for the task of eye gaze estimation. The learned feature representation is also used to train traditional machine learning algorithms for eye gaze estimation. The results demonstrate that the proposed method learns a rich data representation, which can be efficiently fine-tuned for any eye gaze estimation dataset.
[ { "created": "Thu, 4 Apr 2019 10:25:13 GMT", "version": "v1" } ]
2019-04-05
[ [ "Dubey", "Neeru", "" ], [ "Ghosh", "Shreya", "" ], [ "Dhall", "Abhinav", "" ] ]
Automatic eye gaze estimation has interested researchers for a while now. In this paper, we propose an unsupervised learning based method for estimating the eye gaze region. To train the proposed network "Ize-Net" in a self-supervised manner, we collect a large `in the wild' dataset containing 154,251 images from the web. For the images in the database, we divide the gaze into three regions using an automatic technique based on pupil-center localization, and then use a feature-based technique to determine the gaze region. The performance is evaluated on the Tablet Gaze and CAVE datasets by fine-tuning Ize-Net for the task of eye gaze estimation. The learned feature representation is also used to train traditional machine learning algorithms for eye gaze estimation. The results demonstrate that the proposed method learns a rich data representation, which can be efficiently fine-tuned for any eye gaze estimation dataset.
2405.20495
Amrit Singh Bedi
Souradip Chakraborty, Soumya Suvra Ghosal, Ming Yin, Dinesh Manocha, Mengdi Wang, Amrit Singh Bedi, and Furong Huang
Transfer Q Star: Principled Decoding for LLM Alignment
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aligning foundation models is essential for their safe and trustworthy deployment. However, traditional fine-tuning methods are computationally intensive and require updating billions of model parameters. A promising alternative, alignment via decoding, adjusts the response distribution directly without model updates to maximize a target reward $r$, thus providing a lightweight and adaptable framework for alignment. However, principled decoding methods rely on oracle access to an optimal Q-function ($Q^*$), which is often unavailable in practice. Hence, prior SoTA methods either approximate this $Q^*$ using $Q^{\pi_{\texttt{sft}}}$ (derived from the reference $\texttt{SFT}$ model) or rely on short-term rewards, resulting in sub-optimal decoding performance. In this work, we propose Transfer $Q^*$, which implicitly estimates the optimal value function for a target reward $r$ through a baseline model $\rho_{\texttt{BL}}$ aligned with a baseline reward $r_{\texttt{BL}}$ (which can be different from the target reward $r$). Theoretical analyses of Transfer $Q^*$ provide a rigorous characterization of its optimality, deriving an upper bound on the sub-optimality gap and identifying a hyperparameter to control the deviation from the pre-trained reference $\texttt{SFT}$ model based on user needs. Our approach significantly reduces the sub-optimality gap observed in prior SoTA methods and demonstrates superior empirical performance across key metrics such as coherence, diversity, and quality in extensive tests on several synthetic and real datasets.
[ { "created": "Thu, 30 May 2024 21:36:12 GMT", "version": "v1" } ]
2024-06-03
[ [ "Chakraborty", "Souradip", "" ], [ "Ghosal", "Soumya Suvra", "" ], [ "Yin", "Ming", "" ], [ "Manocha", "Dinesh", "" ], [ "Wang", "Mengdi", "" ], [ "Bedi", "Amrit Singh", "" ], [ "Huang", "Furong", "" ] ]
Aligning foundation models is essential for their safe and trustworthy deployment. However, traditional fine-tuning methods are computationally intensive and require updating billions of model parameters. A promising alternative, alignment via decoding, adjusts the response distribution directly without model updates to maximize a target reward $r$, thus providing a lightweight and adaptable framework for alignment. However, principled decoding methods rely on oracle access to an optimal Q-function ($Q^*$), which is often unavailable in practice. Hence, prior SoTA methods either approximate this $Q^*$ using $Q^{\pi_{\texttt{sft}}}$ (derived from the reference $\texttt{SFT}$ model) or rely on short-term rewards, resulting in sub-optimal decoding performance. In this work, we propose Transfer $Q^*$, which implicitly estimates the optimal value function for a target reward $r$ through a baseline model $\rho_{\texttt{BL}}$ aligned with a baseline reward $r_{\texttt{BL}}$ (which can be different from the target reward $r$). Theoretical analyses of Transfer $Q^*$ provide a rigorous characterization of its optimality, deriving an upper bound on the sub-optimality gap and identifying a hyperparameter to control the deviation from the pre-trained reference $\texttt{SFT}$ model based on user needs. Our approach significantly reduces the sub-optimality gap observed in prior SoTA methods and demonstrates superior empirical performance across key metrics such as coherence, diversity, and quality in extensive tests on several synthetic and real datasets.
2310.13073
Parth Padalkar
Parth Padalkar, Gopal Gupta
Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks
arXiv admin note: text overlap with arXiv:2301.12667
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Within the realm of deep learning, the interpretability of Convolutional Neural Networks (CNNs), particularly in the context of image classification tasks, remains a formidable challenge. To this end, we present a neurosymbolic framework, NeSyFOLD-G, that generates a symbolic rule-set using the last layer kernels of the CNN to make its underlying knowledge interpretable. What makes NeSyFOLD-G different from other similar frameworks is that we first find groups of similar kernels in the CNN (kernel-grouping) using the cosine similarity between the feature maps generated by various kernels. Once such kernel groups are found, we binarize each kernel group's output in the CNN and use it to generate a binarization table which serves as input data to FOLD-SE-M, a Rule Based Machine Learning (RBML) algorithm. FOLD-SE-M then generates a rule-set that can be used to make predictions. We present a novel kernel grouping algorithm and show that grouping similar kernels leads to a significant reduction in the size of the rule-set generated by FOLD-SE-M, consequently improving the interpretability. This rule-set symbolically encapsulates the connectionist knowledge of the trained CNN. The rule-set can be viewed as a normal logic program wherein each predicate's truth value depends on a kernel group in the CNN. Each predicate in the rule-set is mapped to a concept using a few semantic segmentation masks of the images used for training, to make it human-understandable. The last layers of the CNN can then be replaced by this rule-set to obtain the NeSy-G model, which can then be used for the image classification task. The goal-directed ASP system s(CASP) can be used to obtain the justification of any prediction made using the NeSy-G model. We also propose a novel algorithm for labeling each predicate in the rule-set with the semantic concept(s) that its corresponding kernel group represents.
[ { "created": "Thu, 19 Oct 2023 18:12:49 GMT", "version": "v1" } ]
2023-10-23
[ [ "Padalkar", "Parth", "" ], [ "Gupta", "Gopal", "" ] ]
Within the realm of deep learning, the interpretability of Convolutional Neural Networks (CNNs), particularly in the context of image classification tasks, remains a formidable challenge. To this end, we present a neurosymbolic framework, NeSyFOLD-G, that generates a symbolic rule-set using the last layer kernels of the CNN to make its underlying knowledge interpretable. What makes NeSyFOLD-G different from other similar frameworks is that we first find groups of similar kernels in the CNN (kernel-grouping) using the cosine similarity between the feature maps generated by various kernels. Once such kernel groups are found, we binarize each kernel group's output in the CNN and use it to generate a binarization table which serves as input data to FOLD-SE-M, a Rule Based Machine Learning (RBML) algorithm. FOLD-SE-M then generates a rule-set that can be used to make predictions. We present a novel kernel grouping algorithm and show that grouping similar kernels leads to a significant reduction in the size of the rule-set generated by FOLD-SE-M, consequently improving the interpretability. This rule-set symbolically encapsulates the connectionist knowledge of the trained CNN. The rule-set can be viewed as a normal logic program wherein each predicate's truth value depends on a kernel group in the CNN. Each predicate in the rule-set is mapped to a concept using a few semantic segmentation masks of the images used for training, to make it human-understandable. The last layers of the CNN can then be replaced by this rule-set to obtain the NeSy-G model, which can then be used for the image classification task. The goal-directed ASP system s(CASP) can be used to obtain the justification of any prediction made using the NeSy-G model. We also propose a novel algorithm for labeling each predicate in the rule-set with the semantic concept(s) that its corresponding kernel group represents.
1811.08531
Kamer Vishi
Nils Gruschka, Vasileios Mavroeidis, Kamer Vishi, Meiko Jensen
Privacy Issues and Data Protection in Big Data: A Case Study Analysis under GDPR
7 pages, 1 figure, GDPR, Privacy, Cyber Threat Intelligence, Biometrics. To appear in the Proceedings of the 2018 IEEE International Conference on Big Data
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Big data has become a great asset for many organizations, promising improved operations and new business opportunities. However, big data has increased access to sensitive information that, when processed, can directly jeopardize the privacy of individuals and violate data protection laws. As a consequence, data controllers and data processors may face tough penalties for non-compliance, which can even result in bankruptcy. In this paper, we discuss the current state of the legal regulations and analyze different data protection and privacy-preserving techniques in the context of big data analysis. In addition, we present and analyze two real-life research projects as case studies dealing with sensitive data and actions for complying with the data regulation laws. We show which types of information might become a privacy risk, the employed privacy-preserving techniques in accordance with the legal requirements, and the influence of these techniques on the data processing phase and the research results.
[ { "created": "Tue, 20 Nov 2018 23:42:12 GMT", "version": "v1" } ]
2018-11-22
[ [ "Gruschka", "Nils", "" ], [ "Mavroeidis", "Vasileios", "" ], [ "Vishi", "Kamer", "" ], [ "Jensen", "Meiko", "" ] ]
Big data has become a great asset for many organizations, promising improved operations and new business opportunities. However, big data has increased access to sensitive information that, when processed, can directly jeopardize the privacy of individuals and violate data protection laws. As a consequence, data controllers and data processors may face tough penalties for non-compliance, which can even result in bankruptcy. In this paper, we discuss the current state of the legal regulations and analyze different data protection and privacy-preserving techniques in the context of big data analysis. In addition, we present and analyze two real-life research projects as case studies dealing with sensitive data and actions for complying with the data regulation laws. We show which types of information might become a privacy risk, the employed privacy-preserving techniques in accordance with the legal requirements, and the influence of these techniques on the data processing phase and the research results.
2105.09217
Gautam K. Das
Pawan K. Mishra and Gautam K. Das
Approximation Algorithms For The Euclidean Dispersion Problems
17
null
null
null
cs.CG cs.DS
http://creativecommons.org/licenses/by/4.0/
In this article, we consider Euclidean dispersion problems. Let $P=\{p_{1}, p_{2}, \ldots, p_{n}\}$ be a set of $n$ points in $\mathbb{R}^2$. For each point $p \in P$ and $S \subseteq P$, we define $cost_{\gamma}(p,S)$ as the sum of the Euclidean distances from $p$ to its $\gamma$ nearest points in $S \setminus \{p\}$. We define $cost_{\gamma}(S)=\min_{p \in S}\{cost_{\gamma}(p,S)\}$ for $S \subseteq P$. In the $\gamma$-dispersion problem, a set $P$ of $n$ points in $\mathbb{R}^2$ and a positive integer $k \in [\gamma+1,n]$ are given. The objective is to find a subset $S\subseteq P$ of size $k$ such that $cost_{\gamma}(S)$ is maximized. We consider both the $2$-dispersion and the $1$-dispersion problem in $\mathbb{R}^2$. Along with these, we also consider the $2$-dispersion problem when points are placed on a line. In this paper, we propose a simple polynomial time $(2\sqrt 3 + \epsilon)$-factor approximation algorithm for the $2$-dispersion problem, for any $\epsilon > 0$, which is an improvement over the best known approximation factor of $4\sqrt3$ [Amano, K. and Nakano, S. I., An approximation algorithm for the $2$-dispersion problem, IEICE Transactions on Information and Systems, Vol. 103(3), pp. 506-508, 2020]. Next, we develop a common framework for designing approximation algorithms for Euclidean dispersion problems. With this common framework, we improve the approximation factor to $2\sqrt 3$ for the $2$-dispersion problem in $\mathbb{R}^2$. Using the same framework, we propose a polynomial time algorithm that returns an optimal solution for the $2$-dispersion problem when points are placed on a line. Moreover, to show the effectiveness of the framework, we also propose a $2$-factor approximation algorithm for the $1$-dispersion problem in $\mathbb{R}^2$.
[ { "created": "Wed, 19 May 2021 15:56:30 GMT", "version": "v1" } ]
2021-05-20
[ [ "Mishra", "Pawan K.", "" ], [ "Das", "Gautam K.", "" ] ]
In this article, we consider Euclidean dispersion problems. Let $P=\{p_{1}, p_{2}, \ldots, p_{n}\}$ be a set of $n$ points in $\mathbb{R}^2$. For each point $p \in P$ and $S \subseteq P$, we define $cost_{\gamma}(p,S)$ as the sum of the Euclidean distances from $p$ to its $\gamma$ nearest points in $S \setminus \{p\}$. We define $cost_{\gamma}(S)=\min_{p \in S}\{cost_{\gamma}(p,S)\}$ for $S \subseteq P$. In the $\gamma$-dispersion problem, a set $P$ of $n$ points in $\mathbb{R}^2$ and a positive integer $k \in [\gamma+1,n]$ are given. The objective is to find a subset $S\subseteq P$ of size $k$ such that $cost_{\gamma}(S)$ is maximized. We consider both the $2$-dispersion and the $1$-dispersion problem in $\mathbb{R}^2$. Along with these, we also consider the $2$-dispersion problem when points are placed on a line. In this paper, we propose a simple polynomial time $(2\sqrt 3 + \epsilon)$-factor approximation algorithm for the $2$-dispersion problem, for any $\epsilon > 0$, which is an improvement over the best known approximation factor of $4\sqrt3$ [Amano, K. and Nakano, S. I., An approximation algorithm for the $2$-dispersion problem, IEICE Transactions on Information and Systems, Vol. 103(3), pp. 506-508, 2020]. Next, we develop a common framework for designing approximation algorithms for Euclidean dispersion problems. With this common framework, we improve the approximation factor to $2\sqrt 3$ for the $2$-dispersion problem in $\mathbb{R}^2$. Using the same framework, we propose a polynomial time algorithm that returns an optimal solution for the $2$-dispersion problem when points are placed on a line. Moreover, to show the effectiveness of the framework, we also propose a $2$-factor approximation algorithm for the $1$-dispersion problem in $\mathbb{R}^2$.
2211.05555
Joon-Ha Kim
Joon-Ha Kim
Real time A* Adaptive Action Set Footstep Planning with Human Locomotion Energy Approximations Considering Angle Difference for Heuristic Function
Master's Degree Thesis
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
The problem of navigating a bipedal robot to a desired destination in various environments is very important. However, it is very difficult to solve the navigation problem in real time because the computation time is very long due to the high degrees of freedom of a bipedal robot. To overcome this, many researchers have suggested navigation through footstep planning. Usually, footstep planning uses the shortest distance or angle as the objective function based on the A* algorithm. Recently, the energy required for human walking, which is widely used in human dynamics and can be approximated by a polynomial function, has been proposed as a better cost function that explains the movement of a bipedal robot. In addition, for real-time navigation, practical methods have been suggested in which the action set of the A* algorithm is not fixed but changes in size according to the situation, so that the computation time does not increase much, together with methods for handling collisions with the external environment. In this thesis, a polynomial function approximating the energy required for human walking is adopted as the cost function, and a heuristic function considering the angular difference between the robot and the destination, which has not appeared in previous studies, is newly proposed and proved. In addition, a new method to integrate the adaptive action set and the energy related to human walking is proposed. Furthermore, an efficient collision avoidance method and a method to reduce the local minimum problem are proposed in this framework. Finally, the footstep planning algorithm with all of these features, integrated into the mapping algorithm and the walking algorithm to solve the navigation problem, is validated with simulation and a real robot.
[ { "created": "Wed, 2 Nov 2022 02:28:16 GMT", "version": "v1" } ]
2022-11-11
[ [ "Kim", "Joon-Ha", "" ] ]
The problem of navigating a bipedal robot to a desired destination in various environments is very important. However, it is very difficult to solve the navigation problem in real time because the computation time is very long due to the high degrees of freedom of a bipedal robot. To overcome this, many researchers have suggested navigation through footstep planning. Usually, footstep planning uses the shortest distance or angle as the objective function based on the A* algorithm. Recently, the energy required for human walking, which is widely used in human dynamics and can be approximated by a polynomial function, has been proposed as a better cost function that explains the movement of a bipedal robot. In addition, for real-time navigation, practical methods have been suggested in which the action set of the A* algorithm is not fixed but changes in size according to the situation, so that the computation time does not increase much, together with methods for handling collisions with the external environment. In this thesis, a polynomial function approximating the energy required for human walking is adopted as the cost function, and a heuristic function considering the angular difference between the robot and the destination, which has not appeared in previous studies, is newly proposed and proved. In addition, a new method to integrate the adaptive action set and the energy related to human walking is proposed. Furthermore, an efficient collision avoidance method and a method to reduce the local minimum problem are proposed in this framework. Finally, the footstep planning algorithm with all of these features, integrated into the mapping algorithm and the walking algorithm to solve the navigation problem, is validated with simulation and a real robot.
1304.5213
Hany SalahEldeen
Hany M. SalahEldeen and Michael L. Nelson
Carbon Dating The Web: Estimating the Age of Web Resources
This work is published at TempWeb03 workshop at WWW 2013 conference in Rio de Janeiro, Brazil
null
null
null
cs.IR cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the course of web research it is often necessary to estimate the creation datetime for web resources (in the general case, this value can only be estimated). While it is feasible to manually establish likely datetime values for small numbers of resources, this becomes infeasible if the collection is large. We present "carbon date", a simple web application that estimates the creation date for a URI by polling a number of sources of evidence and returning a machine-readable structure with their respective values. To establish a likely datetime, we poll bitly for the first time someone shortened the URI, topsy for the first time someone tweeted the URI, a Memento aggregator for the first time it appeared in a public web archive, Google's time of last crawl, and the Last-Modified HTTP response header of the resource itself. We also examine the backlinks of the URI as reported by Google and apply the same techniques to the resources that link to the URI. We evaluated our tool on a gold-standard data set of 1200 URIs in which the creation date was manually verified. We were able to estimate a creation date for 75.90% of the resources, with 32.78% having the correct value. Given the different nature of the URIs, the union of the various methods produces the best results. While the Google last crawl date and topsy account for nearly 66% of the closest answers, eliminating the web archives or Last-Modified from the results produces the largest overall negative impact on the results. The carbon date application is available for download or for use via a web API.
[ { "created": "Thu, 18 Apr 2013 18:42:45 GMT", "version": "v1" } ]
2013-04-19
[ [ "SalahEldeen", "Hany M.", "" ], [ "Nelson", "Michael L.", "" ] ]
In the course of web research it is often necessary to estimate the creation datetime for web resources (in the general case, this value can only be estimated). While it is feasible to manually establish likely datetime values for small numbers of resources, this becomes infeasible if the collection is large. We present "carbon date", a simple web application that estimates the creation date for a URI by polling a number of sources of evidence and returning a machine-readable structure with their respective values. To establish a likely datetime, we poll bitly for the first time someone shortened the URI, topsy for the first time someone tweeted the URI, a Memento aggregator for the first time it appeared in a public web archive, Google's time of last crawl, and the Last-Modified HTTP response header of the resource itself. We also examine the backlinks of the URI as reported by Google and apply the same techniques for the resources that link to the URI. We evaluated our tool on a gold-standard data set of 1200 URIs in which the creation date was manually verified. We were able to estimate a creation date for 75.90% of the resources, with 32.78% having the correct value. Given the different nature of the URIs, the union of the various methods produces the best results. While the Google last crawl date and topsy account for nearly 66% of the closest answers, eliminating the web archives or Last-Modified from the results produces the largest overall negative impact on the results. The carbon date application is available for download or use via a webAPI.
2110.11187
Matteo De Carlo
Matteo De Carlo, Eliseo Ferrante, Daan Zeeuwe, Jacintha Ellers, Gerben Meynen and A. E. Eiben
Heritability in Morphological Robot Evolution
null
null
null
null
cs.NE cs.RO
http://creativecommons.org/licenses/by/4.0/
In the field of evolutionary robotics, choosing the correct encoding is very complicated, especially when robots evolve both behaviours and morphologies at the same time. With the objective of improving our understanding of the mapping process from encodings to functional robots, we introduce the biological notion of heritability, which captures the amount of phenotypic variation caused by genotypic variation. In our analysis we measure the heritability on the first generation of robots evolved from two different encodings, a direct encoding and an indirect encoding. In addition we investigate the interplay between heritability and phenotypic diversity through the course of an entire evolutionary process. In particular, we investigate how direct and indirect genotypes can exhibit preferences for exploration or exploitation throughout the course of evolution. We observe how an exploration or exploitation tradeoff can be more easily understood by examining patterns in heritability and phenotypic diversity. In conclusion, we show how heritability can be a useful tool to better understand the relationship between genotypes and phenotypes, especially helpful when designing more complicated systems where complex individuals and environments can adapt and influence each other.
[ { "created": "Thu, 21 Oct 2021 14:58:17 GMT", "version": "v1" } ]
2021-10-22
[ [ "De Carlo", "Matteo", "" ], [ "Ferrante", "Eliseo", "" ], [ "Zeeuwe", "Daan", "" ], [ "Ellers", "Jacintha", "" ], [ "Meynen", "Gerben", "" ], [ "Eiben", "A. E.", "" ] ]
In the field of evolutionary robotics, choosing the correct encoding is very complicated, especially when robots evolve both behaviours and morphologies at the same time. With the objective of improving our understanding of the mapping process from encodings to functional robots, we introduce the biological notion of heritability, which captures the amount of phenotypic variation caused by genotypic variation. In our analysis we measure the heritability on the first generation of robots evolved from two different encodings, a direct encoding and an indirect encoding. In addition we investigate the interplay between heritability and phenotypic diversity through the course of an entire evolutionary process. In particular, we investigate how direct and indirect genotypes can exhibit preferences for exploration or exploitation throughout the course of evolution. We observe how an exploration or exploitation tradeoff can be more easily understood by examining patterns in heritability and phenotypic diversity. In conclusion, we show how heritability can be a useful tool to better understand the relationship between genotypes and phenotypes, especially helpful when designing more complicated systems where complex individuals and environments can adapt and influence each other.
1806.01106
Alejandro Linares-Barranco
A. Rios-Navarro, R. Tapiador-Morales, A. Jimenez-Fernandez, M. Dominguez-Morales, C. Amaya and A. Linares-Barranco
Performance evaluation over HW/SW co-design SoC memory transfers for a CNN accelerator
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many FPGA vendors have recently included embedded processors in their devices, such as Xilinx with ARM Cortex-A cores, together with programmable logic cells. These devices are known as Programmable Systems on Chip (PSoC). Their ARM cores (embedded in the processing system, or PS) communicate with the programmable logic cells (PL) using ARM-standard AXI buses. In this paper we analyze the performance of exhaustive data transfers between the PS and the PL of a Xilinx Zynq FPGA in a real co-design scenario for a Convolutional Neural Network (CNN) accelerator, which processes, in dedicated hardware, a stream of visual information from a neuromorphic visual sensor for classification. On the PS side, a Linux operating system is running; it collects visual events from the neuromorphic sensor into a normalized frame, transfers these frames to the multi-layer CNN accelerator, and reads back the results, using an AXI-DMA bus in a per-layer fashion. As these kinds of accelerators try to process information as quickly as possible, data bandwidth becomes critical, and maintaining a well-balanced data throughput rate requires some considerations. We present and evaluate several data partitioning techniques to improve the balance between RX and TX transfers, and two different ways of managing the transfers: through a polling routine at the user level of the OS, and through a dedicated interrupt-based kernel-level driver. We demonstrate that for sufficiently long packets, the kernel-level driver solution achieves better timing when computing a CNN classification example. The main advantage of using a kernel-level driver is a safer solution and the ability of the OS task scheduler to manage other processes important for our application, such as frame collection from the sensors and their normalization.
[ { "created": "Wed, 9 May 2018 08:54:15 GMT", "version": "v1" } ]
2018-06-05
[ [ "Rios-Navarro", "A.", "" ], [ "Tapiador-Morales", "R.", "" ], [ "Jimenez-Fernandez", "A.", "" ], [ "Dominguez-Morales", "M.", "" ], [ "Amaya", "C.", "" ], [ "Linares-Barranco", "A.", "" ] ]
Many FPGA vendors have recently included embedded processors in their devices, such as Xilinx with ARM Cortex-A cores, together with programmable logic cells. These devices are known as Programmable Systems on Chip (PSoC). Their ARM cores (embedded in the processing system, or PS) communicate with the programmable logic cells (PL) using ARM-standard AXI buses. In this paper we analyze the performance of exhaustive data transfers between the PS and the PL of a Xilinx Zynq FPGA in a real co-design scenario for a Convolutional Neural Network (CNN) accelerator, which processes, in dedicated hardware, a stream of visual information from a neuromorphic visual sensor for classification. On the PS side, a Linux operating system is running; it collects visual events from the neuromorphic sensor into a normalized frame, transfers these frames to the multi-layer CNN accelerator, and reads back the results, using an AXI-DMA bus in a per-layer fashion. As these kinds of accelerators try to process information as quickly as possible, data bandwidth becomes critical, and maintaining a well-balanced data throughput rate requires some considerations. We present and evaluate several data partitioning techniques to improve the balance between RX and TX transfers, and two different ways of managing the transfers: through a polling routine at the user level of the OS, and through a dedicated interrupt-based kernel-level driver. We demonstrate that for sufficiently long packets, the kernel-level driver solution achieves better timing when computing a CNN classification example. The main advantage of using a kernel-level driver is a safer solution and the ability of the OS task scheduler to manage other processes important for our application, such as frame collection from the sensors and their normalization.
2312.13596
Zhixiang Su
Zhixiang Su, Di Wang, Chunyan Miao and Lizhen Cui
Anchoring Path for Inductive Relation Prediction in Knowledge Graphs
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aiming to accurately predict missing edges representing relations between entities, which are pervasive in real-world Knowledge Graphs (KGs), relation prediction plays a critical role in enhancing the comprehensiveness and utility of KGs. Recent research focuses on path-based methods due to their inductive and explainable properties. However, these methods face a great challenge when many reasoning paths do not form Closed Paths (CPs) in the KG. To address this challenge, we propose the Anchoring Path Sentence Transformer (APST), which introduces Anchoring Paths (APs) to alleviate the reliance on CPs. Specifically, we develop a search-based description retrieval method to enrich entity descriptions and an assessment mechanism to evaluate the rationality of APs. APST takes both APs and CPs as inputs to a unified Sentence Transformer architecture, enabling comprehensive predictions and high-quality explanations. We evaluate APST on three public datasets and achieve state-of-the-art (SOTA) performance in 30 of 36 transductive, inductive, and few-shot experimental settings.
[ { "created": "Thu, 21 Dec 2023 06:02:25 GMT", "version": "v1" } ]
2023-12-22
[ [ "Su", "Zhixiang", "" ], [ "Wang", "Di", "" ], [ "Miao", "Chunyan", "" ], [ "Cui", "Lizhen", "" ] ]
Aiming to accurately predict missing edges representing relations between entities, which are pervasive in real-world Knowledge Graphs (KGs), relation prediction plays a critical role in enhancing the comprehensiveness and utility of KGs. Recent research focuses on path-based methods due to their inductive and explainable properties. However, these methods face a great challenge when many reasoning paths do not form Closed Paths (CPs) in the KG. To address this challenge, we propose the Anchoring Path Sentence Transformer (APST), which introduces Anchoring Paths (APs) to alleviate the reliance on CPs. Specifically, we develop a search-based description retrieval method to enrich entity descriptions and an assessment mechanism to evaluate the rationality of APs. APST takes both APs and CPs as inputs to a unified Sentence Transformer architecture, enabling comprehensive predictions and high-quality explanations. We evaluate APST on three public datasets and achieve state-of-the-art (SOTA) performance in 30 of 36 transductive, inductive, and few-shot experimental settings.
2305.10411
Leonel Rozo
Hanna Ziesche and Leonel Rozo
Wasserstein Gradient Flows for Optimizing Gaussian Mixture Policies
null
null
null
null
cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robots often rely on a repertoire of previously-learned motion policies for performing tasks of diverse complexities. When facing unseen task conditions or when new task requirements arise, robots must adapt their motion policies accordingly. In this context, policy optimization is the \emph{de facto} paradigm to adapt robot policies as a function of task-specific objectives. Most commonly-used motion policies carry particular structures that are often overlooked in policy optimization algorithms. We instead propose to leverage the structure of probabilistic policies by casting the policy optimization as an optimal transport problem. Specifically, we focus on robot motion policies that build on Gaussian mixture models (GMMs) and formulate the policy optimization as a Wasserstein gradient flow over the space of GMMs. This naturally allows us to constrain the policy updates via the $L^2$-Wasserstein distance between GMMs to enhance the stability of the policy optimization process. Furthermore, we leverage the geometry of the Bures-Wasserstein manifold to optimize the Gaussian distributions of the GMM policy via Riemannian optimization. We evaluate our approach on common robotic settings: reaching motions, collision-avoidance behaviors, and multi-goal tasks. Our results show that our method outperforms common policy optimization baselines in terms of task success rate and low-variance solutions.
[ { "created": "Wed, 17 May 2023 17:48:24 GMT", "version": "v1" } ]
2023-05-18
[ [ "Ziesche", "Hanna", "" ], [ "Rozo", "Leonel", "" ] ]
Robots often rely on a repertoire of previously-learned motion policies for performing tasks of diverse complexities. When facing unseen task conditions or when new task requirements arise, robots must adapt their motion policies accordingly. In this context, policy optimization is the \emph{de facto} paradigm to adapt robot policies as a function of task-specific objectives. Most commonly-used motion policies carry particular structures that are often overlooked in policy optimization algorithms. We instead propose to leverage the structure of probabilistic policies by casting the policy optimization as an optimal transport problem. Specifically, we focus on robot motion policies that build on Gaussian mixture models (GMMs) and formulate the policy optimization as a Wasserstein gradient flow over the space of GMMs. This naturally allows us to constrain the policy updates via the $L^2$-Wasserstein distance between GMMs to enhance the stability of the policy optimization process. Furthermore, we leverage the geometry of the Bures-Wasserstein manifold to optimize the Gaussian distributions of the GMM policy via Riemannian optimization. We evaluate our approach on common robotic settings: reaching motions, collision-avoidance behaviors, and multi-goal tasks. Our results show that our method outperforms common policy optimization baselines in terms of task success rate and low-variance solutions.
1801.09390
Yanning Shen
Yanning Shen, Panagiotis A. Traganitis, Georgios B. Giannakis
Nonlinear Dimensionality Reduction on Graphs
Dimensionality reduction, nonlinear modeling, signal processing over graphs
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this era of data deluge, many signal processing and machine learning tasks are faced with high-dimensional datasets, including images, videos, as well as time series generated from social, commercial and brain network interactions. Their efficient processing calls for dimensionality reduction techniques capable of properly compressing the data while preserving task-related characteristics, going beyond pairwise data correlations. The present paper puts forth a nonlinear dimensionality reduction framework that accounts for data lying on known graphs. The novel framework encompasses most of the existing dimensionality reduction methods, but it is also capable of capturing and preserving possibly nonlinear correlations that are ignored by linear methods. Furthermore, it can take into account information from multiple graphs. The proposed algorithms were tested on synthetic as well as real datasets to corroborate their effectiveness.
[ { "created": "Mon, 29 Jan 2018 08:11:04 GMT", "version": "v1" }, { "created": "Thu, 29 Mar 2018 06:56:17 GMT", "version": "v2" } ]
2018-03-30
[ [ "Shen", "Yanning", "" ], [ "Traganitis", "Panagiotis A.", "" ], [ "Giannakis", "Georgios B.", "" ] ]
In this era of data deluge, many signal processing and machine learning tasks are faced with high-dimensional datasets, including images, videos, as well as time series generated from social, commercial and brain network interactions. Their efficient processing calls for dimensionality reduction techniques capable of properly compressing the data while preserving task-related characteristics, going beyond pairwise data correlations. The present paper puts forth a nonlinear dimensionality reduction framework that accounts for data lying on known graphs. The novel framework encompasses most of the existing dimensionality reduction methods, but it is also capable of capturing and preserving possibly nonlinear correlations that are ignored by linear methods. Furthermore, it can take into account information from multiple graphs. The proposed algorithms were tested on synthetic as well as real datasets to corroborate their effectiveness.
1609.02965
Ali Fatih Demir
A. Fatih Demir, Qammer H. Abbasi, Z. Esad Ankarali, Erchin Serpedin, Huseyin Arslan
Numerical Characterization of In Vivo Wireless Communication Channels
2014 IEEE MTT-S International Microwave Workshop Series on RF and Wireless Technologies for Biomedical and Healthcare Applications (IMWS-Bio)
2014 IEEE MTT-S International Microwave Workshop Series (IMWS-Bio), London, 2014, pp. 1-3
10.1109/IMWS-BIO.2014.7032392
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we numerically investigated the in vivo wireless communication channel for the human male torso at 915 MHz. The results show that the in vivo channel is different from the classical communication channel, and that location dependency is very critical for link budget calculations. A statistical path loss model based on angle, depth, and body region is introduced for the near- and far-field regions. Furthermore, multipath characteristics are investigated using a power delay profile as well.
[ { "created": "Fri, 9 Sep 2016 22:53:23 GMT", "version": "v1" } ]
2016-09-13
[ [ "Demir", "A. Fatih", "" ], [ "Abbasi", "Qammer H.", "" ], [ "Ankarali", "Z. Esad", "" ], [ "Serpedin", "Erchin", "" ], [ "Arslan", "Huseyin", "" ] ]
In this paper, we numerically investigated the in vivo wireless communication channel for the human male torso at 915 MHz. The results show that the in vivo channel is different from the classical communication channel, and that location dependency is very critical for link budget calculations. A statistical path loss model based on angle, depth, and body region is introduced for the near- and far-field regions. Furthermore, multipath characteristics are investigated using a power delay profile as well.
2205.06811
Quanquan Gu
Jiafan He and Dongruo Zhou and Tong Zhang and Quanquan Gu
Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions
25 pages, 1 table. This version simplifies the proof of the regret upper bound in Version 1, and provides a stronger result for the lower bound
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the linear contextual bandit problem in the presence of adversarial corruption, where the reward at each round is corrupted by an adversary, and the corruption level (i.e., the sum of corruption magnitudes over the horizon) is $C\geq 0$. The best-known algorithms in this setting are limited in that they either are computationally inefficient or require a strong assumption on the corruption, or their regret is at least $C$ times worse than the regret without corruption. In this paper, to overcome these limitations, we propose a new algorithm based on the principle of optimism in the face of uncertainty. At the core of our algorithm is a weighted ridge regression where the weight of each chosen action depends on its confidence up to some threshold. We show that for both known $C$ and unknown $C$ cases, our algorithm with proper choice of hyperparameter achieves a regret that nearly matches the lower bounds. Thus, our algorithm is nearly optimal up to logarithmic factors for both cases. Notably, our algorithm achieves the near-optimal regret for both corrupted and uncorrupted cases ($C=0$) simultaneously.
[ { "created": "Fri, 13 May 2022 17:58:58 GMT", "version": "v1" }, { "created": "Sun, 10 Jul 2022 02:02:58 GMT", "version": "v2" } ]
2022-07-12
[ [ "He", "Jiafan", "" ], [ "Zhou", "Dongruo", "" ], [ "Zhang", "Tong", "" ], [ "Gu", "Quanquan", "" ] ]
We study the linear contextual bandit problem in the presence of adversarial corruption, where the reward at each round is corrupted by an adversary, and the corruption level (i.e., the sum of corruption magnitudes over the horizon) is $C\geq 0$. The best-known algorithms in this setting are limited in that they either are computationally inefficient or require a strong assumption on the corruption, or their regret is at least $C$ times worse than the regret without corruption. In this paper, to overcome these limitations, we propose a new algorithm based on the principle of optimism in the face of uncertainty. At the core of our algorithm is a weighted ridge regression where the weight of each chosen action depends on its confidence up to some threshold. We show that for both known $C$ and unknown $C$ cases, our algorithm with proper choice of hyperparameter achieves a regret that nearly matches the lower bounds. Thus, our algorithm is nearly optimal up to logarithmic factors for both cases. Notably, our algorithm achieves the near-optimal regret for both corrupted and uncorrupted cases ($C=0$) simultaneously.
2312.06581
Stella Biderman
Dashiell Stander and Qinan Yu and Honglu Fan and Stella Biderman
Grokking Group Multiplication with Cosets
null
null
null
null
cs.LG cs.AI math.RT
http://creativecommons.org/licenses/by/4.0/
The complex and unpredictable nature of deep neural networks prevents their safe use in many high-stakes applications. There have been many techniques developed to interpret deep neural networks, but all have substantial limitations. Algorithmic tasks have proven to be a fruitful test ground for interpreting a neural network end-to-end. Building on previous work, we completely reverse engineer fully connected one-hidden layer networks that have ``grokked'' the arithmetic of the permutation groups $S_5$ and $S_6$. The models discover the true subgroup structure of the full group and converge on neural circuits that decompose the group arithmetic using the permutation group's subgroups. We relate how we reverse engineered the model's mechanisms and confirmed our theory was a faithful description of the circuit's functionality. We also draw attention to current challenges in conducting interpretability research by comparing our work to Chughtai et al. [4] which alleges to find a different algorithm for this same problem.
[ { "created": "Mon, 11 Dec 2023 18:12:18 GMT", "version": "v1" }, { "created": "Mon, 17 Jun 2024 17:44:44 GMT", "version": "v2" } ]
2024-06-18
[ [ "Stander", "Dashiell", "" ], [ "Yu", "Qinan", "" ], [ "Fan", "Honglu", "" ], [ "Biderman", "Stella", "" ] ]
The complex and unpredictable nature of deep neural networks prevents their safe use in many high-stakes applications. There have been many techniques developed to interpret deep neural networks, but all have substantial limitations. Algorithmic tasks have proven to be a fruitful test ground for interpreting a neural network end-to-end. Building on previous work, we completely reverse engineer fully connected one-hidden layer networks that have ``grokked'' the arithmetic of the permutation groups $S_5$ and $S_6$. The models discover the true subgroup structure of the full group and converge on neural circuits that decompose the group arithmetic using the permutation group's subgroups. We relate how we reverse engineered the model's mechanisms and confirmed our theory was a faithful description of the circuit's functionality. We also draw attention to current challenges in conducting interpretability research by comparing our work to Chughtai et al. [4] which alleges to find a different algorithm for this same problem.
1601.03783
Duygu Altinok
Duygu Altinok
Towards Turkish ASR: Anatomy of a rule-based Turkish g2p
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper describes the architecture and implementation of a rule-based grapheme-to-phoneme converter for Turkish. The system accepts a surface form as input and outputs the SAMPA mappings of all parallel pronunciations, according to the morphological analysis, together with stress positions. The system has been implemented in Python.
[ { "created": "Fri, 15 Jan 2016 00:09:52 GMT", "version": "v1" } ]
2016-01-18
[ [ "Altinok", "Duygu", "" ] ]
This paper describes the architecture and implementation of a rule-based grapheme-to-phoneme converter for Turkish. The system accepts a surface form as input and outputs the SAMPA mappings of all parallel pronunciations, according to the morphological analysis, together with stress positions. The system has been implemented in Python.
1910.05728
Badri Narayana Patro
Badri N. Patro, Shivansh Patel and Vinay P. Namboodiri
Granular Multimodal Attention Networks for Visual Dialog
ICCV Workshop
null
null
null
cs.CV cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Vision and language tasks have benefited from attention. There have been a number of different attention models proposed. However, the scale at which attention needs to be applied has not been well examined. Particularly, in this work, we propose a new method Granular Multi-modal Attention, where we aim to particularly address the question of the right granularity at which one needs to attend while solving the Visual Dialog task. The proposed method shows improvement in both image and text attention networks. We then propose a granular Multi-modal Attention network that jointly attends on the image and text granules and shows the best performance. With this work, we observe that obtaining granular attention and doing exhaustive Multi-modal Attention appears to be the best way to attend while solving visual dialog.
[ { "created": "Sun, 13 Oct 2019 10:49:41 GMT", "version": "v1" } ]
2019-10-15
[ [ "Patro", "Badri N.", "" ], [ "Patel", "Shivansh", "" ], [ "Namboodiri", "Vinay P.", "" ] ]
Vision and language tasks have benefited from attention. There have been a number of different attention models proposed. However, the scale at which attention needs to be applied has not been well examined. Particularly, in this work, we propose a new method Granular Multi-modal Attention, where we aim to particularly address the question of the right granularity at which one needs to attend while solving the Visual Dialog task. The proposed method shows improvement in both image and text attention networks. We then propose a granular Multi-modal Attention network that jointly attends on the image and text granules and shows the best performance. With this work, we observe that obtaining granular attention and doing exhaustive Multi-modal Attention appears to be the best way to attend while solving visual dialog.
1104.4646
Nissim Halabi
Nissim Halabi and Guy Even
Local Optimality Certificates for LP Decoding of Tanner Codes
null
null
null
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new combinatorial characterization for local optimality of a codeword in an irregular Tanner code. The main novelty in this characterization is that it is based on a linear combination of subtrees in the computation trees. These subtrees may have any degree in the local code nodes and may have any height (even greater than the girth). We expect this new characterization to lead to improvements in bounds for successful decoding. We prove that local optimality in this new characterization implies ML-optimality and LP-optimality, as one would expect. Finally, we show that it is possible to efficiently compute a certificate for the local optimality of a codeword given an LLR vector.
[ { "created": "Sun, 24 Apr 2011 19:27:55 GMT", "version": "v1" } ]
2011-04-26
[ [ "Halabi", "Nissim", "" ], [ "Even", "Guy", "" ] ]
We present a new combinatorial characterization for local optimality of a codeword in an irregular Tanner code. The main novelty in this characterization is that it is based on a linear combination of subtrees in the computation trees. These subtrees may have any degree in the local code nodes and may have any height (even greater than the girth). We expect this new characterization to lead to improvements in bounds for successful decoding. We prove that local optimality in this new characterization implies ML-optimality and LP-optimality, as one would expect. Finally, we show that it is possible to efficiently compute a certificate for the local optimality of a codeword given an LLR vector.
1804.10188
Sahil Garg
Sahil Garg, Irina Rish, Guillermo Cecchi, Palash Goyal, Sarik Ghazarian, Shuyang Gao, Greg Ver Steeg, Aram Galstyan
Modeling Psychotherapy Dialogues with Kernelized Hashcode Representations: A Nonparametric Information-Theoretic Approach
Response generative based model added, along with human evaluation
null
null
null
cs.LG cs.AI cs.CL cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel dialogue modeling framework, the first-ever nonparametric kernel-function-based approach for dialogue modeling, which learns kernelized hashcodes as compressed text representations; unlike traditional deep learning models, it handles relatively small datasets well, while also scaling to large ones. We also derive a novel lower bound on mutual information, used as a model-selection criterion favoring representations with better alignment between the utterances of participants in a collaborative dialogue setting, as well as higher predictability of the generated responses. As demonstrated on three real-life datasets, including prominently psychotherapy sessions, the proposed approach significantly outperforms several state-of-the-art neural network based dialogue systems, both in terms of computational efficiency, reducing training time from days or weeks to hours, and response quality, achieving an order of magnitude improvement over competitors in frequency of being chosen as the best model by human evaluators.
[ { "created": "Thu, 26 Apr 2018 17:39:28 GMT", "version": "v1" }, { "created": "Fri, 18 May 2018 00:32:09 GMT", "version": "v2" }, { "created": "Wed, 30 May 2018 03:58:19 GMT", "version": "v3" }, { "created": "Fri, 6 Jul 2018 14:54:22 GMT", "version": "v4" }, { "created": "Thu, 18 Oct 2018 15:23:28 GMT", "version": "v5" }, { "created": "Fri, 8 Mar 2019 02:16:21 GMT", "version": "v6" }, { "created": "Mon, 9 Sep 2019 19:43:38 GMT", "version": "v7" } ]
2019-09-11
[ [ "Garg", "Sahil", "" ], [ "Rish", "Irina", "" ], [ "Cecchi", "Guillermo", "" ], [ "Goyal", "Palash", "" ], [ "Ghazarian", "Sarik", "" ], [ "Gao", "Shuyang", "" ], [ "Steeg", "Greg Ver", "" ], [ "Galstyan", "Aram", "" ] ]
We propose a novel dialogue modeling framework, the first-ever nonparametric kernel-function-based approach for dialogue modeling, which learns kernelized hashcodes as compressed text representations; unlike traditional deep learning models, it handles relatively small datasets well, while also scaling to large ones. We also derive a novel lower bound on mutual information, used as a model-selection criterion favoring representations with better alignment between the utterances of participants in a collaborative dialogue setting, as well as higher predictability of the generated responses. As demonstrated on three real-life datasets, including prominently psychotherapy sessions, the proposed approach significantly outperforms several state-of-the-art neural network based dialogue systems, both in terms of computational efficiency, reducing training time from days or weeks to hours, and response quality, achieving an order of magnitude improvement over competitors in frequency of being chosen as the best model by human evaluators.
2405.14847
Sai Bi
Liwen Wu, Sai Bi, Zexiang Xu, Fujun Luan, Kai Zhang, Iliyan Georgiev, Kalyan Sunkavalli, Ravi Ramamoorthi
Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling
Accepted to CVPR 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Novel-view synthesis of specular objects like shiny metals or glossy paints remains a significant challenge. Not only the glossy appearance but also global illumination effects, including reflections of other objects in the environment, are critical components to faithfully reproduce a scene. In this paper, we present Neural Directional Encoding (NDE), a view-dependent appearance encoding of neural radiance fields (NeRF) for rendering specular objects. NDE transfers the concept of feature-grid-based spatial encoding to the angular domain, significantly improving the ability to model high-frequency angular signals. In contrast to previous methods that use encoding functions with only angular input, we additionally cone-trace spatial features to obtain a spatially varying directional encoding, which addresses the challenging interreflection effects. Extensive experiments on both synthetic and real datasets show that a NeRF model with NDE (1) outperforms the state of the art on view synthesis of specular objects, and (2) works with small networks to allow fast (real-time) inference. The project webpage and source code are available at: \url{https://lwwu2.github.io/nde/}.
[ { "created": "Thu, 23 May 2024 17:56:34 GMT", "version": "v1" } ]
2024-05-24
[ [ "Wu", "Liwen", "" ], [ "Bi", "Sai", "" ], [ "Xu", "Zexiang", "" ], [ "Luan", "Fujun", "" ], [ "Zhang", "Kai", "" ], [ "Georgiev", "Iliyan", "" ], [ "Sunkavalli", "Kalyan", "" ], [ "Ramamoorthi", "Ravi", "" ] ]
Novel-view synthesis of specular objects like shiny metals or glossy paints remains a significant challenge. Not only the glossy appearance but also global illumination effects, including reflections of other objects in the environment, are critical components to faithfully reproduce a scene. In this paper, we present Neural Directional Encoding (NDE), a view-dependent appearance encoding of neural radiance fields (NeRF) for rendering specular objects. NDE transfers the concept of feature-grid-based spatial encoding to the angular domain, significantly improving the ability to model high-frequency angular signals. In contrast to previous methods that use encoding functions with only angular input, we additionally cone-trace spatial features to obtain a spatially varying directional encoding, which addresses the challenging interreflection effects. Extensive experiments on both synthetic and real datasets show that a NeRF model with NDE (1) outperforms the state of the art on view synthesis of specular objects, and (2) works with small networks to allow fast (real-time) inference. The project webpage and source code are available at: \url{https://lwwu2.github.io/nde/}.
1909.09248
Wei-Hung Weng
Wei-Hung Weng, Peter Szolovits
Representation Learning for Electronic Health Records
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information in electronic health records (EHR), such as clinical narratives, examination reports, lab measurements, demographics, and other patient encounter entries, can be transformed into appropriate data representations that can be used for downstream clinical machine learning tasks using representation learning. Learning better representations is critical to improve the performance of downstream tasks. Due to the advances in machine learning, we now can learn better and meaningful representations from EHR through disentangling the underlying factors inside data and distilling large amounts of information and knowledge from heterogeneous EHR sources. In this chapter, we first introduce the background of learning representations and reasons why we need good EHR representations in machine learning for medicine and healthcare in Section 1. Next, we explain the commonly-used machine learning and evaluation methods for representation learning using a deep learning approach in Section 2. Following that, we review recent related studies of learning patient state representation from EHR for clinical machine learning tasks in Section 3. Finally, in Section 4 we discuss more techniques, studies, and challenges for learning natural language representations when free texts, such as clinical notes, examination reports, or biomedical literature are used. We also discuss challenges and opportunities in these rapidly growing research fields.
[ { "created": "Thu, 19 Sep 2019 22:12:30 GMT", "version": "v1" } ]
2019-09-23
[ [ "Weng", "Wei-Hung", "" ], [ "Szolovits", "Peter", "" ] ]
Information in electronic health records (EHR), such as clinical narratives, examination reports, lab measurements, demographics, and other patient encounter entries, can be transformed into appropriate data representations that can be used for downstream clinical machine learning tasks using representation learning. Learning better representations is critical to improve the performance of downstream tasks. Due to the advances in machine learning, we now can learn better and meaningful representations from EHR through disentangling the underlying factors inside data and distilling large amounts of information and knowledge from heterogeneous EHR sources. In this chapter, we first introduce the background of learning representations and reasons why we need good EHR representations in machine learning for medicine and healthcare in Section 1. Next, we explain the commonly-used machine learning and evaluation methods for representation learning using a deep learning approach in Section 2. Following that, we review recent related studies of learning patient state representation from EHR for clinical machine learning tasks in Section 3. Finally, in Section 4 we discuss more techniques, studies, and challenges for learning natural language representations when free texts, such as clinical notes, examination reports, or biomedical literature are used. We also discuss challenges and opportunities in these rapidly growing research fields.
1409.5583
Mohamed Khalil
Mohamed Amir, Tamer Khattab, Tarek Elfouly, Amr Mohamed
Secure Degrees of Freedom of the Gaussian MIMO Wiretap and MIMO Broadcast Channels with Unknown Eavesdroppers
arXiv admin note: text overlap with arXiv:1404.5007
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the secure degrees of freedom (SDoF) of the wiretap and the K-user Gaussian broadcast channels with multiple antennas at the transmitter and the legitimate receivers, and an unknown number of eavesdroppers, each with a number of antennas less than or equal to a known value NE. The channel matrices between the legitimate transmitter and the receivers are available everywhere, while the legitimate pair has no information about the eavesdroppers' channels. We provide the exact sum SDoF for the considered system. A new comprehensive upper bound is derived, and a new achievable scheme based on jamming is presented. We prove that cooperative jamming is SDoF optimal even without eavesdropper CSI available at the transmitters.
[ { "created": "Fri, 19 Sep 2014 10:29:58 GMT", "version": "v1" }, { "created": "Fri, 26 Sep 2014 11:28:51 GMT", "version": "v2" }, { "created": "Thu, 26 Feb 2015 15:10:00 GMT", "version": "v3" }, { "created": "Wed, 30 Mar 2016 20:38:15 GMT", "version": "v4" } ]
2016-04-01
[ [ "Amir", "Mohamed", "" ], [ "Khattab", "Tamer", "" ], [ "Elfouly", "Tarek", "" ], [ "Mohamed", "Amr", "" ] ]
We investigate the secure degrees of freedom (SDoF) of the wiretap and the K-user Gaussian broadcast channels with multiple antennas at the transmitter and the legitimate receivers, and an unknown number of eavesdroppers, each with a number of antennas less than or equal to a known value NE. The channel matrices between the legitimate transmitter and the receivers are available everywhere, while the legitimate pair has no information about the eavesdroppers' channels. We provide the exact sum SDoF for the considered system. A new comprehensive upper bound is derived, and a new achievable scheme based on jamming is presented. We prove that cooperative jamming is SDoF optimal even without eavesdropper CSI available at the transmitters.
2211.11870
Fengyi Shen
Fengyi Shen, Zador Pataki, Akhil Gurram, Ziyuan Liu, He Wang, Alois Knoll
LoopDA: Constructing Self-loops to Adapt Nighttime Semantic Segmentation
Accepted to WACV2023
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Due to the lack of training labels and the difficulty of annotation, adverse driving conditions such as nighttime pose a huge challenge to the perception system of autonomous vehicles. Therefore, adapting knowledge from a labelled daytime domain to an unlabelled nighttime domain has been widely researched. In addition to labelled daytime datasets, existing nighttime datasets usually provide nighttime images with corresponding daytime reference images captured at nearby locations. The key challenge is to minimize the performance gap between the two domains. In this paper, we propose LoopDA for domain adaptive nighttime semantic segmentation. It consists of self-loops that reconstruct the input data from predicted semantic maps by rendering them into the encoded features. In a warm-up training stage, the self-loops comprise an inner loop and an outer loop, which are responsible for intra-domain refinement and inter-domain alignment, respectively. To reduce the impact of day-night pose shifts, in the later self-training stage we propose a co-teaching pipeline that involves an offline pseudo-supervision signal and an online reference-guided signal, `DNA' (Day-Night Agreement), bringing substantial benefits to nighttime segmentation. Our model outperforms prior methods on the Dark Zurich and Nighttime Driving datasets for semantic segmentation. Code and pretrained models are available at https://github.com/fy-vision/LoopDA.
[ { "created": "Mon, 21 Nov 2022 21:46:05 GMT", "version": "v1" } ]
2022-11-23
[ [ "Shen", "Fengyi", "" ], [ "Pataki", "Zador", "" ], [ "Gurram", "Akhil", "" ], [ "Liu", "Ziyuan", "" ], [ "Wang", "He", "" ], [ "Knoll", "Alois", "" ] ]
Due to the lack of training labels and the difficulty of annotation, adverse driving conditions such as nighttime pose a huge challenge to the perception system of autonomous vehicles. Therefore, adapting knowledge from a labelled daytime domain to an unlabelled nighttime domain has been widely researched. In addition to labelled daytime datasets, existing nighttime datasets usually provide nighttime images with corresponding daytime reference images captured at nearby locations. The key challenge is to minimize the performance gap between the two domains. In this paper, we propose LoopDA for domain adaptive nighttime semantic segmentation. It consists of self-loops that reconstruct the input data from predicted semantic maps by rendering them into the encoded features. In a warm-up training stage, the self-loops comprise an inner loop and an outer loop, which are responsible for intra-domain refinement and inter-domain alignment, respectively. To reduce the impact of day-night pose shifts, in the later self-training stage we propose a co-teaching pipeline that involves an offline pseudo-supervision signal and an online reference-guided signal, `DNA' (Day-Night Agreement), bringing substantial benefits to nighttime segmentation. Our model outperforms prior methods on the Dark Zurich and Nighttime Driving datasets for semantic segmentation. Code and pretrained models are available at https://github.com/fy-vision/LoopDA.
1604.07319
Mehrdad Gangeh
Mehrdad J. Gangeh, Safaa M.A. Bedawi, Ali Ghodsi, Fakhri Karray
Semi-supervised Dictionary Learning Based on Hilbert-Schmidt Independence Criterion
Accepted at International conference on Image analysis and Recognition (ICIAR) 2016
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a novel semi-supervised dictionary learning and sparse representation (SS-DLSR) method is proposed. The proposed method benefits from supervisory information by learning the dictionary in a space where the dependency between the data and class labels is maximized. This maximization is performed using the Hilbert-Schmidt independence criterion (HSIC). On the other hand, the global distribution of the underlying manifolds is learned from the unlabeled data by minimizing the distances between the unlabeled data and the corresponding nearest labeled data in the space of the learned dictionary. The proposed SS-DLSR algorithm has closed-form solutions for both the dictionary and the sparse coefficients, and therefore does not have to learn the two iteratively and alternately, as is common in the DLSR literature. This makes the solution for the proposed algorithm very fast. The experiments confirm the improvement in classification performance on benchmark datasets obtained by including the information from both labeled and unlabeled data, particularly when there is a large amount of unlabeled data.
[ { "created": "Mon, 25 Apr 2016 16:25:38 GMT", "version": "v1" } ]
2016-04-26
[ [ "Gangeh", "Mehrdad J.", "" ], [ "Bedawi", "Safaa M. A.", "" ], [ "Ghodsi", "Ali", "" ], [ "Karray", "Fakhri", "" ] ]
In this paper, a novel semi-supervised dictionary learning and sparse representation (SS-DLSR) method is proposed. The proposed method benefits from supervisory information by learning the dictionary in a space where the dependency between the data and class labels is maximized. This maximization is performed using the Hilbert-Schmidt independence criterion (HSIC). On the other hand, the global distribution of the underlying manifolds is learned from the unlabeled data by minimizing the distances between the unlabeled data and the corresponding nearest labeled data in the space of the learned dictionary. The proposed SS-DLSR algorithm has closed-form solutions for both the dictionary and the sparse coefficients, and therefore does not have to learn the two iteratively and alternately, as is common in the DLSR literature. This makes the solution for the proposed algorithm very fast. The experiments confirm the improvement in classification performance on benchmark datasets obtained by including the information from both labeled and unlabeled data, particularly when there is a large amount of unlabeled data.
2311.03382
Hangtong Xu
Hangtong Xu and Yuanbo Xu and Yongjian Yang
Causal Structure Representation Learning of Confounders in Latent Space for Recommendation
null
null
null
null
cs.IR cs.AI cs.LG stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inferring user preferences from the historical feedback of users is a valuable problem in recommender systems. Conventional approaches often rely on the assumption that user preferences in the feedback data are equivalent to the real user preferences without additional noise, which simplifies the problem modeling. However, there are various confounders during user-item interactions, such as weather and even the recommendation system itself. Therefore, neglecting the influence of confounders will result in inaccurate user preferences and suboptimal performance of the model. Furthermore, the unobservability of confounders poses a challenge in further addressing the problem. To address these issues, we refine the problem and propose a more rational solution. Specifically, we consider the influence of confounders, disentangle them from user preferences in the latent space, and employ causal graphs to model their interdependencies without specific labels. By cleverly combining local and global causal graphs, we capture the user-specificity of confounders on user preferences. We theoretically demonstrate the identifiability of the obtained causal graph. Finally, we propose our model based on Variational Autoencoders, named Causal Structure representation learning of Confounders in latent space (CSC). We conducted extensive experiments on one synthetic dataset and five real-world datasets, demonstrating the superiority of our model. Furthermore, we demonstrate that the learned causal representations of confounders are controllable, potentially offering users fine-grained control over the objectives of their recommendation lists with the learned causal graphs.
[ { "created": "Thu, 2 Nov 2023 08:46:07 GMT", "version": "v1" } ]
2023-11-08
[ [ "Xu", "Hangtong", "" ], [ "Xu", "Yuanbo", "" ], [ "Yang", "Yongjian", "" ] ]
Inferring user preferences from the historical feedback of users is a valuable problem in recommender systems. Conventional approaches often rely on the assumption that user preferences in the feedback data are equivalent to the real user preferences without additional noise, which simplifies the problem modeling. However, there are various confounders during user-item interactions, such as weather and even the recommendation system itself. Therefore, neglecting the influence of confounders will result in inaccurate user preferences and suboptimal performance of the model. Furthermore, the unobservability of confounders poses a challenge in further addressing the problem. To address these issues, we refine the problem and propose a more rational solution. Specifically, we consider the influence of confounders, disentangle them from user preferences in the latent space, and employ causal graphs to model their interdependencies without specific labels. By cleverly combining local and global causal graphs, we capture the user-specificity of confounders on user preferences. We theoretically demonstrate the identifiability of the obtained causal graph. Finally, we propose our model based on Variational Autoencoders, named Causal Structure representation learning of Confounders in latent space (CSC). We conducted extensive experiments on one synthetic dataset and five real-world datasets, demonstrating the superiority of our model. Furthermore, we demonstrate that the learned causal representations of confounders are controllable, potentially offering users fine-grained control over the objectives of their recommendation lists with the learned causal graphs.
2403.03707
Yajie Liu
Yajie Liu, Pu Ge, Qingjie Liu, Di Huang
Multi-Grained Cross-modal Alignment for Learning Open-vocabulary Semantic Segmentation from Text Supervision
17 pages, 8 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recently, learning open-vocabulary semantic segmentation from text supervision has achieved promising downstream performance. Nevertheless, current approaches encounter an alignment granularity gap owing to the absence of dense annotations, wherein they learn coarse image/region-text alignment during training yet perform group/pixel-level predictions at inference. Such discrepancy leads to suboptimal learning efficiency and inferior zero-shot segmentation results. In this paper, we introduce a Multi-Grained Cross-modal Alignment (MGCA) framework, which explicitly learns pixel-level alignment along with object- and region-level alignment to bridge the granularity gap without any dense annotations. Specifically, MGCA ingeniously constructs pseudo multi-granular semantic correspondences upon image-text pairs and collaborates with hard sampling strategies to facilitate fine-grained cross-modal contrastive learning. Further, we point out the defects of existing group and pixel prediction units in downstream segmentation and develop an adaptive semantic unit which effectively mitigates their dilemmas including under- and over-segmentation. Training solely on CC3M, our method achieves significant advancements over state-of-the-art methods, demonstrating its effectiveness and efficiency.
[ { "created": "Wed, 6 Mar 2024 13:43:36 GMT", "version": "v1" } ]
2024-03-07
[ [ "Liu", "Yajie", "" ], [ "Ge", "Pu", "" ], [ "Liu", "Qingjie", "" ], [ "Huang", "Di", "" ] ]
Recently, learning open-vocabulary semantic segmentation from text supervision has achieved promising downstream performance. Nevertheless, current approaches encounter an alignment granularity gap owing to the absence of dense annotations, wherein they learn coarse image/region-text alignment during training yet perform group/pixel-level predictions at inference. Such discrepancy leads to suboptimal learning efficiency and inferior zero-shot segmentation results. In this paper, we introduce a Multi-Grained Cross-modal Alignment (MGCA) framework, which explicitly learns pixel-level alignment along with object- and region-level alignment to bridge the granularity gap without any dense annotations. Specifically, MGCA ingeniously constructs pseudo multi-granular semantic correspondences upon image-text pairs and collaborates with hard sampling strategies to facilitate fine-grained cross-modal contrastive learning. Further, we point out the defects of existing group and pixel prediction units in downstream segmentation and develop an adaptive semantic unit which effectively mitigates their dilemmas including under- and over-segmentation. Training solely on CC3M, our method achieves significant advancements over state-of-the-art methods, demonstrating its effectiveness and efficiency.
2104.08899
Decky Aspandi
Decky Aspandi-Latif, Sally Goldin, Preesan Rakwatin, Kurt Rudahl
Texture Based Classification of High Resolution Remotely Sensed Imagery using Weber Local Descriptor
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Traditional image classification techniques often produce unsatisfactory results when applied to high spatial resolution data because classes in high resolution images are not spectrally homogeneous. Texture offers an alternative source of information for classifying these images. This paper evaluates a recently developed, computationally simple texture metric called the Weber Local Descriptor (WLD) for use in classifying high resolution QuickBird panchromatic data. We compared WLD with state-of-the-art texture descriptors (TDs) including Local Binary Pattern (LBP) and its rotation-invariant version LBPRIU. We also investigated whether incorporating VAR, a TD that captures brightness variation, would improve the accuracy of LBPRIU and WLD. We found that WLD generally produces more accurate classification results than the other TDs we examined, and is also more robust to varying parameters. We have implemented an optimised algorithm for calculating WLD which makes the technique practical in terms of computation time. Overall, our results indicate that WLD is a promising approach for classifying high resolution remote sensing data.
[ { "created": "Sun, 18 Apr 2021 16:37:34 GMT", "version": "v1" } ]
2021-04-20
[ [ "Aspandi-Latif", "Decky", "" ], [ "Goldin", "Sally", "" ], [ "Rakwatin", "Preesan", "" ], [ "Rudahl", "Kurt", "" ] ]
Traditional image classification techniques often produce unsatisfactory results when applied to high spatial resolution data because classes in high resolution images are not spectrally homogeneous. Texture offers an alternative source of information for classifying these images. This paper evaluates a recently developed, computationally simple texture metric called the Weber Local Descriptor (WLD) for use in classifying high resolution QuickBird panchromatic data. We compared WLD with state-of-the-art texture descriptors (TDs) including Local Binary Pattern (LBP) and its rotation-invariant version LBPRIU. We also investigated whether incorporating VAR, a TD that captures brightness variation, would improve the accuracy of LBPRIU and WLD. We found that WLD generally produces more accurate classification results than the other TDs we examined, and is also more robust to varying parameters. We have implemented an optimised algorithm for calculating WLD which makes the technique practical in terms of computation time. Overall, our results indicate that WLD is a promising approach for classifying high resolution remote sensing data.
2211.00519
Yanran Guan
Yanran Guan, Andrei Chubarau, Ruby Rao, Derek Nowrouzezahrai
Learning Neural Implicit Representations with Surface Signal Parameterizations
null
null
null
null
cs.GR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural implicit surface representations have recently emerged as a popular alternative to explicit 3D object encodings, such as polygonal meshes, tabulated points, or voxels. While significant work has improved the geometric fidelity of these representations, much less attention has been given to their final appearance. Traditional explicit object representations commonly couple the 3D shape data with auxiliary surface-mapped image data, such as diffuse color textures and fine-scale geometric details in normal maps, which typically require a mapping of the 3D surface onto a plane, i.e., a surface parameterization; implicit representations, on the other hand, cannot be easily textured due to the lack of a configurable surface parameterization. Inspired by this digital content authoring methodology, we design a neural network architecture that implicitly encodes the underlying surface parameterization suitable for appearance data. As such, our model remains compatible with existing mesh-based digital content with appearance data. Motivated by recent work that overfits compact networks to individual 3D objects, we present a new weight-encoded neural implicit representation that extends the capability of neural implicit surfaces to enable various common and important applications of texture mapping. Our method outperforms reasonable baselines and state-of-the-art alternatives.
[ { "created": "Tue, 1 Nov 2022 15:10:58 GMT", "version": "v1" }, { "created": "Mon, 26 Jun 2023 00:32:56 GMT", "version": "v2" } ]
2023-06-27
[ [ "Guan", "Yanran", "" ], [ "Chubarau", "Andrei", "" ], [ "Rao", "Ruby", "" ], [ "Nowrouzezahrai", "Derek", "" ] ]
Neural implicit surface representations have recently emerged as a popular alternative to explicit 3D object encodings, such as polygonal meshes, tabulated points, or voxels. While significant work has improved the geometric fidelity of these representations, much less attention has been given to their final appearance. Traditional explicit object representations commonly couple the 3D shape data with auxiliary surface-mapped image data, such as diffuse color textures and fine-scale geometric details in normal maps, which typically require a mapping of the 3D surface onto a plane, i.e., a surface parameterization; implicit representations, on the other hand, cannot be easily textured due to the lack of a configurable surface parameterization. Inspired by this digital content authoring methodology, we design a neural network architecture that implicitly encodes the underlying surface parameterization suitable for appearance data. As such, our model remains compatible with existing mesh-based digital content with appearance data. Motivated by recent work that overfits compact networks to individual 3D objects, we present a new weight-encoded neural implicit representation that extends the capability of neural implicit surfaces to enable various common and important applications of texture mapping. Our method outperforms reasonable baselines and state-of-the-art alternatives.
1411.3716
Mahmood Mohassel Feghhi
Mahmood Mohassel Feghhi, Mahtab Mirmohseni, Aliazam Abbasfar
Power Allocation in the Energy Harvesting Full-Duplex Gaussian Relay Channels
Accepted for publication in International Journal of Communication Systems (Special Issue on Energy Efficient Wireless Communication Networks with QoS), October 2014
International Journal of Communication Systems, Vol. 30, No. 2, Jan 2017
10.1002/dac.2903
null
cs.NI cs.ET cs.IT math.IT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a general model to study the full-duplex non-coherent decode-and-forward Gaussian relay channel with energy harvesting (EH) nodes, called NC-EH-$\mathcal{RC}$, in three cases: $i)$ no energy transfer (ET), $ii)$ one-way ET from the source (S) to the relay (R), and $iii)$ two-way ET. We consider the problem of optimal power allocation in NC-EH-$\mathcal{RC}$ in order to maximize the total number of bits transmitted from S to the destination in a given time duration. General stochastic energy arrivals at S and R with known EH times and values are assumed. In NC-EH-$\mathcal{RC}$ with no ET, the complicated min-max optimization form along with its constraints makes the problem intractable. It is shown that this problem can be transformed into a solvable convex form; however, the convex optimization solution does not reveal the structural properties of the optimal solution. Therefore, following an alternative perspective, we investigate conditions on the harvesting processes of S and R under which we find an optimal algorithmic solution. Further, we propose some suboptimal algorithms and provide some examples in which the algorithms are optimal. Moreover, we find a class of problems for NC-EH-$\mathcal{RC}$ with one-way ET from S to R for which an optimal algorithmic solution is devised. For NC-EH-$\mathcal{RC}$ with two-way ET, we propose a \emph{general} optimal algorithmic solution. Furthermore, the performance of the proposed algorithms is evaluated numerically and compared with optimal numerical convex optimization tools.
[ { "created": "Fri, 14 Nov 2014 08:17:31 GMT", "version": "v1" } ]
2021-04-07
[ [ "Feghhi", "Mahmood Mohassel", "" ], [ "Mirmohseni", "Mahtab", "" ], [ "Abbasfar", "Aliazam", "" ] ]
In this paper, we propose a general model to study the full-duplex non-coherent decode-and-forward Gaussian relay channel with energy harvesting (EH) nodes, called NC-EH-$\mathcal{RC}$, in three cases: $i)$ no energy transfer (ET), $ii)$ one-way ET from the source (S) to the relay (R), and $iii)$ two-way ET. We consider the problem of optimal power allocation in NC-EH-$\mathcal{RC}$ in order to maximize the total number of bits transmitted from S to the destination in a given time duration. General stochastic energy arrivals at S and R with known EH times and values are assumed. In NC-EH-$\mathcal{RC}$ with no ET, the complicated min-max optimization form along with its constraints makes the problem intractable. It is shown that this problem can be transformed into a solvable convex form; however, the convex optimization solution does not reveal the structural properties of the optimal solution. Therefore, following an alternative perspective, we investigate conditions on the harvesting processes of S and R under which we find an optimal algorithmic solution. Further, we propose some suboptimal algorithms and provide some examples in which the algorithms are optimal. Moreover, we find a class of problems for NC-EH-$\mathcal{RC}$ with one-way ET from S to R for which an optimal algorithmic solution is devised. For NC-EH-$\mathcal{RC}$ with two-way ET, we propose a \emph{general} optimal algorithmic solution. Furthermore, the performance of the proposed algorithms is evaluated numerically and compared with optimal numerical convex optimization tools.
2008.01558
Yanmin Gong
Rui Hu and Yanmin Gong and Yuanxiong Guo
Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization
Accepted in IJCAI 2021, this is the full version with appendix
null
null
null
cs.LG cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other. However, data locality does not provide sufficient privacy protection, and it is desirable to facilitate FL with a rigorous differential privacy (DP) guarantee. Existing DP mechanisms would introduce random noise with magnitude proportional to the model size, which can be quite large in deep neural networks. In this paper, we propose a new FL framework with sparsification-amplified privacy. Our approach integrates random sparsification with gradient perturbation on each agent to amplify the privacy guarantee. Since sparsification would increase the number of communication rounds required to achieve a certain target accuracy, which is unfavorable for the DP guarantee, we further introduce acceleration techniques to help reduce the privacy cost. We rigorously analyze the convergence of our approach and utilize Renyi DP to tightly account for the end-to-end DP guarantee. Extensive experiments on benchmark datasets validate that our approach outperforms previous differentially private FL approaches in both privacy guarantee and communication efficiency.
[ { "created": "Sat, 1 Aug 2020 20:22:57 GMT", "version": "v1" }, { "created": "Tue, 8 Jun 2021 20:18:08 GMT", "version": "v2" } ]
2021-06-15
[ [ "Hu", "Rui", "" ], [ "Gong", "Yanmin", "" ], [ "Guo", "Yuanxiong", "" ] ]
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other. However, data locality does not provide sufficient privacy protection, and it is desirable to facilitate FL with a rigorous differential privacy (DP) guarantee. Existing DP mechanisms would introduce random noise with magnitude proportional to the model size, which can be quite large in deep neural networks. In this paper, we propose a new FL framework with sparsification-amplified privacy. Our approach integrates random sparsification with gradient perturbation on each agent to amplify the privacy guarantee. Since sparsification would increase the number of communication rounds required to achieve a certain target accuracy, which is unfavorable for the DP guarantee, we further introduce acceleration techniques to help reduce the privacy cost. We rigorously analyze the convergence of our approach and utilize Renyi DP to tightly account for the end-to-end DP guarantee. Extensive experiments on benchmark datasets validate that our approach outperforms previous differentially-private FL approaches in both privacy guarantee and communication efficiency.
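The sparsification-plus-perturbation idea described in the abstract above can be sketched as follows. The function name, keep fraction, and noise scale are illustrative assumptions for a minimal sketch, not the paper's exact mechanism or calibration:

```python
import numpy as np

def sparsify_and_perturb(grad, keep_frac=0.1, noise_std=0.5, seed=None):
    """Randomly keep a fraction of the gradient coordinates, zero the rest,
    and add Gaussian noise only to the kept coordinates, so the injected
    noise scales with the sparsified size rather than the full model size."""
    rng = np.random.default_rng(seed)
    d = grad.size
    k = max(1, int(keep_frac * d))
    kept = rng.choice(d, size=k, replace=False)   # random sparsification mask
    out = np.zeros_like(grad)
    out[kept] = grad[kept] + rng.normal(0.0, noise_std, size=k)  # perturb kept coords
    return out

g = np.ones(1000)
priv = sparsify_and_perturb(g, keep_frac=0.1, seed=0)  # at most 100 nonzero entries
```

In an FL round, each agent would apply such a mechanism to its local update before communication; the formal privacy accounting (via Renyi DP in the paper) is omitted here.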
1209.1960
M. Emre Celebi
M. Emre Celebi, Hassan A. Kingravi, Patricio A. Vela
A Comparative Study of Efficient Initialization Methods for the K-Means Clustering Algorithm
17 pages, 1 figure, 7 tables
Expert Systems with Applications 40 (2013) 200-210
10.1016/j.eswa.2012.07.021
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
K-means is undoubtedly the most widely used partitional clustering algorithm. Unfortunately, due to its gradient descent nature, this algorithm is highly sensitive to the initial placement of the cluster centers. Numerous initialization methods have been proposed to address this problem. In this paper, we first present an overview of these methods with an emphasis on their computational efficiency. We then compare eight commonly used linear time complexity initialization methods on a large and diverse collection of data sets using various performance criteria. Finally, we analyze the experimental results using non-parametric statistical tests and provide recommendations for practitioners. We demonstrate that popular initialization methods often perform poorly and that there are in fact strong alternatives to these methods.
[ { "created": "Mon, 10 Sep 2012 12:22:06 GMT", "version": "v1" } ]
2012-09-11
[ [ "Celebi", "M. Emre", "" ], [ "Kingravi", "Hassan A.", "" ], [ "Vela", "Patricio A.", "" ] ]
K-means is undoubtedly the most widely used partitional clustering algorithm. Unfortunately, due to its gradient descent nature, this algorithm is highly sensitive to the initial placement of the cluster centers. Numerous initialization methods have been proposed to address this problem. In this paper, we first present an overview of these methods with an emphasis on their computational efficiency. We then compare eight commonly used linear time complexity initialization methods on a large and diverse collection of data sets using various performance criteria. Finally, we analyze the experimental results using non-parametric statistical tests and provide recommendations for practitioners. We demonstrate that popular initialization methods often perform poorly and that there are in fact strong alternatives to these methods.
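As a concrete example of the linear-time initializers this kind of survey compares, here is a minimal sketch of D^2-weighted seeding (k-means++). This is a well-known method from the initialization literature, shown for illustration; it is not code from the paper:

```python
import numpy as np

def kmeans_pp_init(X, k, seed=None):
    """D^2-weighted seeding: pick the first center uniformly at random,
    then sample each further center with probability proportional to the
    squared distance to its nearest already-chosen center."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen center
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(n, p=probs)])
    return np.array(centers)

# Two well-separated point clouds: the seeding places one center in each.
X = np.vstack([np.zeros((50, 2)), np.full((50, 2), 10.0)])
C = kmeans_pp_init(X, 2, seed=0)
```

Because the D^2 weights vanish on points coinciding with a chosen center, the second draw necessarily lands in the other cloud, which is exactly the spreading behavior that makes such seedings strong baselines.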
2203.16648
Abigail Lee
Abigail J. Lee, Grace E. Chesmore, Kyle A. Rocha, Amanda Farah, Maryum Sayeed, Justin Myles
Predicting Winners of the Reality TV Dating Show $\textit{The Bachelor}$ Using Machine Learning Algorithms
6 Pages, 5 Figures. Submitted to Acta Prima Aprila. Code used in this work available at http://github.com/chesmore/bach-stats/
null
null
null
cs.LG astro-ph.IM physics.pop-ph
http://creativecommons.org/licenses/by/4.0/
$\textit{The Bachelor}$ is a reality TV dating show in which a single bachelor selects his wife from a pool of approximately 30 female contestants over eight weeks of filming (American Broadcasting Company 2002). We collected the following data on all 422 contestants that participated in seasons 11 through 25: their Age, Hometown, Career, Race, Week they got their first 1-on-1 date, whether they got the first impression rose, and what "place" they ended up getting. We then trained three machine learning models to predict the ideal characteristics of a successful contestant on $\textit{The Bachelor}$. The three algorithms that we tested were: random forest classification, neural networks, and linear regression. We found consistency across all three models, although the neural network performed the best overall. Our models found that a woman has the highest probability of progressing far on $\textit{The Bachelor}$ if she is: 26 years old, white, from the Northwest, works as a dancer, received a 1-on-1 in week 6, and did not receive the First Impression Rose. Our methodology is broadly applicable to all romantic reality television, and our results will inform future $\textit{The Bachelor}$ production and contestant strategies. While our models were relatively successful, we still encountered high misclassification rates. This may be because: (1) our training dataset had fewer than 400 points, or (2) our models were too simple to parameterize the complex romantic connections contestants forge over the course of a season.
[ { "created": "Wed, 30 Mar 2022 20:00:31 GMT", "version": "v1" } ]
2022-04-01
[ [ "Lee", "Abigail J.", "" ], [ "Chesmore", "Grace E.", "" ], [ "Rocha", "Kyle A.", "" ], [ "Farah", "Amanda", "" ], [ "Sayeed", "Maryum", "" ], [ "Myles", "Justin", "" ] ]
$\textit{The Bachelor}$ is a reality TV dating show in which a single bachelor selects his wife from a pool of approximately 30 female contestants over eight weeks of filming (American Broadcasting Company 2002). We collected the following data on all 422 contestants that participated in seasons 11 through 25: their Age, Hometown, Career, Race, Week they got their first 1-on-1 date, whether they got the first impression rose, and what "place" they ended up getting. We then trained three machine learning models to predict the ideal characteristics of a successful contestant on $\textit{The Bachelor}$. The three algorithms that we tested were: random forest classification, neural networks, and linear regression. We found consistency across all three models, although the neural network performed the best overall. Our models found that a woman has the highest probability of progressing far on $\textit{The Bachelor}$ if she is: 26 years old, white, from the Northwest, works as a dancer, received a 1-on-1 in week 6, and did not receive the First Impression Rose. Our methodology is broadly applicable to all romantic reality television, and our results will inform future $\textit{The Bachelor}$ production and contestant strategies. While our models were relatively successful, we still encountered high misclassification rates. This may be because: (1) our training dataset had fewer than 400 points, or (2) our models were too simple to parameterize the complex romantic connections contestants forge over the course of a season.
1801.03595
Junting Chen
Junting Chen and David Gesbert
Efficient Local Map Search Algorithms for the Placement of Flying Relays
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the optimal unmanned aerial vehicle (UAV) placement problem for wireless networking. The UAV operates as a flying wireless relay to provide coverage extension for a base station (BS) and deliver a capacity boost to a user shadowed by obstacles. While existing methods rely on statistical models for potential blockage of a direct propagation link, we propose an approach capable of leveraging local terrain information to offer performance guarantees. The proposed method allows us to strike the best trade-off between minimizing propagation distances to ground terminals and discovering good propagation conditions. The algorithm only requires several propagation parameters, but it is capable of avoiding deep propagation shadowing and is proven to find the globally optimal UAV position. Only a local exploration over the target area is required, and the maximum length of the search trajectory is linear in the geographical scale. Hence, it lends itself to online search. Significant throughput gains are found when compared to other positioning approaches based on statistical propagation models.
[ { "created": "Thu, 11 Jan 2018 00:26:11 GMT", "version": "v1" }, { "created": "Thu, 31 Oct 2019 06:33:09 GMT", "version": "v2" } ]
2019-11-01
[ [ "Chen", "Junting", "" ], [ "Gesbert", "David", "" ] ]
This paper studies the optimal unmanned aerial vehicle (UAV) placement problem for wireless networking. The UAV operates as a flying wireless relay to provide coverage extension for a base station (BS) and deliver a capacity boost to a user shadowed by obstacles. While existing methods rely on statistical models for potential blockage of a direct propagation link, we propose an approach capable of leveraging local terrain information to offer performance guarantees. The proposed method allows us to strike the best trade-off between minimizing propagation distances to ground terminals and discovering good propagation conditions. The algorithm only requires several propagation parameters, but it is capable of avoiding deep propagation shadowing and is proven to find the globally optimal UAV position. Only a local exploration over the target area is required, and the maximum length of the search trajectory is linear in the geographical scale. Hence, it lends itself to online search. Significant throughput gains are found when compared to other positioning approaches based on statistical propagation models.
1710.04133
Emanuele Massaro Ph.D.
Umberto Fugiglando, Emanuele Massaro, Paolo Santi, Sebastiano Milardo, Kacem Abida, Rainer Stahlmann, Florian Netter, Carlo Ratti
Driving Behavior Analysis through CAN Bus Data in an Uncontrolled Environment
null
null
null
null
cs.LG cs.CY physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cars can nowadays record several thousands of signals through the CAN bus technology and potentially provide real-time information on the car, the driver and the surrounding environment. This paper proposes a new method for the analysis and classification of driver behavior using a selected subset of CAN bus signals, specifically gas pedal position, brake pedal pressure, steering wheel angle, steering wheel momentum, velocity, RPM, frontal and lateral acceleration. Data has been collected in a completely uncontrolled experiment, where 64 people drove 10 cars for a total of over 2000 driving trips without any type of pre-determined driving instruction on a wide variety of road scenarios. We propose an unsupervised learning technique that clusters drivers into different groups, and offers a validation method to test the robustness of clustering in a wide range of experimental settings. The minimal amount of data needed to preserve robust driver clustering is also computed. The presented study provides a new methodology for near-real-time classification of driver behavior in uncontrolled environments.
[ { "created": "Mon, 9 Oct 2017 09:58:23 GMT", "version": "v1" } ]
2017-10-12
[ [ "Fugiglando", "Umberto", "" ], [ "Massaro", "Emanuele", "" ], [ "Santi", "Paolo", "" ], [ "Milardo", "Sebastiano", "" ], [ "Abida", "Kacem", "" ], [ "Stahlmann", "Rainer", "" ], [ "Netter", "Florian", "" ], [ "Ratti", "Carlo", "" ] ]
Cars can nowadays record several thousands of signals through the CAN bus technology and potentially provide real-time information on the car, the driver and the surrounding environment. This paper proposes a new method for the analysis and classification of driver behavior using a selected subset of CAN bus signals, specifically gas pedal position, brake pedal pressure, steering wheel angle, steering wheel momentum, velocity, RPM, frontal and lateral acceleration. Data has been collected in a completely uncontrolled experiment, where 64 people drove 10 cars for a total of over 2000 driving trips without any type of pre-determined driving instruction on a wide variety of road scenarios. We propose an unsupervised learning technique that clusters drivers into different groups, and offers a validation method to test the robustness of clustering in a wide range of experimental settings. The minimal amount of data needed to preserve robust driver clustering is also computed. The presented study provides a new methodology for near-real-time classification of driver behavior in uncontrolled environments.
2202.02790
Fabio Ferreira
Fabio Ferreira and Thomas Nierhoff and Andreas Saelinger and Frank Hutter
Learning Synthetic Environments and Reward Networks for Reinforcement Learning
null
International Conference on Learning Representations (ICLR 2022)
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
We introduce Synthetic Environments (SEs) and Reward Networks (RNs), represented by neural networks, as proxy environment models for training Reinforcement Learning (RL) agents. We show that an agent, after being trained exclusively on the SE, is able to solve the corresponding real environment. While an SE acts as a full proxy to a real environment by learning about its state dynamics and rewards, an RN is a partial proxy that learns to augment or replace rewards. We use bi-level optimization to evolve SEs and RNs: the inner loop trains the RL agent, and the outer loop trains the parameters of the SE / RN via an evolution strategy. We evaluate our proposed new concept on a broad range of RL algorithms and classic control environments. In a one-to-one comparison, learning an SE proxy requires more interactions with the real environment than training agents only on the real environment. However, once such an SE has been learned, we do not need any interactions with the real environment to train new agents. Moreover, the learned SE proxies allow us to train agents with fewer interactions while maintaining the original task performance. Our empirical results suggest that SEs achieve this result by learning informed representations that bias the agents towards relevant states. Moreover, we find that these proxies are robust against hyperparameter variation and can also transfer to unseen agents.
[ { "created": "Sun, 6 Feb 2022 14:55:59 GMT", "version": "v1" } ]
2022-02-08
[ [ "Ferreira", "Fabio", "" ], [ "Nierhoff", "Thomas", "" ], [ "Saelinger", "Andreas", "" ], [ "Hutter", "Frank", "" ] ]
We introduce Synthetic Environments (SEs) and Reward Networks (RNs), represented by neural networks, as proxy environment models for training Reinforcement Learning (RL) agents. We show that an agent, after being trained exclusively on the SE, is able to solve the corresponding real environment. While an SE acts as a full proxy to a real environment by learning about its state dynamics and rewards, an RN is a partial proxy that learns to augment or replace rewards. We use bi-level optimization to evolve SEs and RNs: the inner loop trains the RL agent, and the outer loop trains the parameters of the SE / RN via an evolution strategy. We evaluate our proposed new concept on a broad range of RL algorithms and classic control environments. In a one-to-one comparison, learning an SE proxy requires more interactions with the real environment than training agents only on the real environment. However, once such an SE has been learned, we do not need any interactions with the real environment to train new agents. Moreover, the learned SE proxies allow us to train agents with fewer interactions while maintaining the original task performance. Our empirical results suggest that SEs achieve this result by learning informed representations that bias the agents towards relevant states. Moreover, we find that these proxies are robust against hyperparameter variation and can also transfer to unseen agents.
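The outer evolution-strategy loop described in the abstract above can be sketched generically. Here the `fitness` callable stands in for the inner RL training-and-evaluation loop, and the function name, population size, and step sizes are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np

def es_step(theta, fitness, sigma=0.1, lr=0.01, pop=20, seed=None):
    """One outer-loop update: sample a population of parameter perturbations,
    score each with the (inner-loop) fitness, and move theta along the
    fitness-weighted average perturbation."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((pop, theta.size))
    scores = np.array([fitness(theta + sigma * e) for e in eps])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # normalize fitness
    return theta + (lr / (pop * sigma)) * (eps.T @ scores)

# Toy check: maximizing -||x||^2 drives theta toward the origin.
theta = np.array([1.0, 1.0])
for step in range(200):
    theta = es_step(theta, lambda x: -np.sum(x ** 2), seed=step)
```

In the bi-level setting, `theta` would parameterize the SE/RN and `fitness` would train a fresh RL agent on the proxy and evaluate it on the real environment.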
2405.05736
Shashank Gupta
Shashank Gupta, Olivier Jeunen, Harrie Oosterhuis, and Maarten de Rijke
Optimal Baseline Corrections for Off-Policy Contextual Bandits
null
null
10.1145/3640457.3688105
null
cs.LG cs.IR
http://creativecommons.org/licenses/by/4.0/
The off-policy learning paradigm allows for recommender systems and general ranking applications to be framed as decision-making problems, where we aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric. With unbiasedness comes potentially high variance, and prevalent methods exist to reduce estimation variance. These methods typically make use of control variates, either additive (i.e., baseline corrections or doubly robust methods) or multiplicative (i.e., self-normalisation). Our work unifies these approaches by proposing a single framework built on their equivalence in learning scenarios. The foundation of our framework is the derivation of an equivalent baseline correction for all of the existing control variates. Consequently, our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it. This optimal estimator brings significantly improved performance in both evaluation and learning, and minimizes data requirements. Empirical observations corroborate our theoretical findings.
[ { "created": "Thu, 9 May 2024 12:52:22 GMT", "version": "v1" }, { "created": "Wed, 14 Aug 2024 14:14:02 GMT", "version": "v2" } ]
2024-08-15
[ [ "Gupta", "Shashank", "" ], [ "Jeunen", "Olivier", "" ], [ "Oosterhuis", "Harrie", "" ], [ "de Rijke", "Maarten", "" ] ]
The off-policy learning paradigm allows for recommender systems and general ranking applications to be framed as decision-making problems, where we aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric. With unbiasedness comes potentially high variance, and prevalent methods exist to reduce estimation variance. These methods typically make use of control variates, either additive (i.e., baseline corrections or doubly robust methods) or multiplicative (i.e., self-normalisation). Our work unifies these approaches by proposing a single framework built on their equivalence in learning scenarios. The foundation of our framework is the derivation of an equivalent baseline correction for all of the existing control variates. Consequently, our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it. This optimal estimator brings significantly improved performance in both evaluation and learning, and minimizes data requirements. Empirical observations corroborate our theoretical findings.
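The additive baseline correction (control variate) discussed in the abstract above has a simple textbook form, sketched below with a hand-picked constant baseline. This generic estimator is an illustrative assumption, not the paper's variance-optimal closed-form correction:

```python
import numpy as np

def ips(rewards, target_p, logging_p, baseline=0.0):
    """Inverse-propensity scoring with an additive baseline correction:
    subtract a constant from the reward inside the importance-weighted term
    and add it back outside. Since the importance weights have expectation 1
    under the logging policy, the estimate stays unbiased while the variance
    can shrink for a well-chosen baseline."""
    w = target_p / logging_p                       # importance weights
    return float(np.mean(w * (rewards - baseline)) + baseline)

# Logged bandit data: uniform logging over two actions; the target policy
# always plays the first action, which earns reward 1 when it was logged.
rewards   = np.array([1.0, 0.0, 1.0, 0.0])
target_p  = np.array([1.0, 0.0, 1.0, 0.0])   # target prob. of the logged action
logging_p = np.array([0.5, 0.5, 0.5, 0.5])
v_plain     = ips(rewards, target_p, logging_p)
v_corrected = ips(rewards, target_p, logging_p, baseline=0.5)
```

On this toy log both estimates agree (the target policy's true value is 1), but across resampled logs the baseline-corrected version can have lower variance, which is the effect the paper optimizes exactly.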
1708.05682
Lu Huang
Lu Huang, Jiasong Sun, Ji Xu and Yi Yang
An Improved Residual LSTM Architecture for Acoustic Modeling
5 pages, 2 figures
null
null
null
cs.CL cs.AI cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long Short-Term Memory (LSTM) is the primary recurrent neural network architecture for acoustic modeling in automatic speech recognition systems. Residual learning is an efficient method to help neural networks converge more easily and faster. In this paper, we propose several types of residual LSTM methods for our acoustic modeling. Our experiments indicate that, compared with classic LSTM, our architecture shows more than 8% relative reduction in Phone Error Rate (PER) on TIMIT tasks. At the same time, our residual fast LSTM approach shows a 4% relative reduction in PER on the same task. Besides, we find that all of these architectures achieve good results on the THCHS-30, Librispeech and Switchboard corpora.
[ { "created": "Thu, 17 Aug 2017 01:37:21 GMT", "version": "v1" } ]
2017-08-21
[ [ "Huang", "Lu", "" ], [ "Sun", "Jiasong", "" ], [ "Xu", "Ji", "" ], [ "Yang", "Yi", "" ] ]
Long Short-Term Memory (LSTM) is the primary recurrent neural network architecture for acoustic modeling in automatic speech recognition systems. Residual learning is an efficient method to help neural networks converge more easily and faster. In this paper, we propose several types of residual LSTM methods for our acoustic modeling. Our experiments indicate that, compared with classic LSTM, our architecture shows more than 8% relative reduction in Phone Error Rate (PER) on TIMIT tasks. At the same time, our residual fast LSTM approach shows a 4% relative reduction in PER on the same task. Besides, we find that all of these architectures achieve good results on the THCHS-30, Librispeech and Switchboard corpora.
2403.08002
Juan Manuel Zambrano Chaves
Juan Manuel Zambrano Chaves, Shih-Cheng Huang, Yanbo Xu, Hanwen Xu, Naoto Usuyama, Sheng Zhang, Fei Wang, Yujia Xie, Mahmoud Khademi, Ziyi Yang, Hany Awadalla, Julia Gong, Houdong Hu, Jianwei Yang, Chunyuan Li, Jianfeng Gao, Yu Gu, Cliff Wong, Mu Wei, Tristan Naumann, Muhao Chen, Matthew P. Lungren, Akshay Chaudhari, Serena Yeung-Levy, Curtis P. Langlotz, Sheng Wang, Hoifung Poon
Towards a clinically accessible radiology foundation model: open-access and lightweight, with automated evaluation
null
null
null
null
cs.CL cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The scaling laws and extraordinary performance of large foundation models motivate the development and utilization of such models in biomedicine. However, despite early promising results on some biomedical benchmarks, there are still major challenges that need to be addressed before these models can be used in real-world clinics. Frontier general-domain models such as GPT-4V still have significant performance gaps in multimodal biomedical applications. More importantly, less-acknowledged pragmatic issues, including accessibility, model cost, and tedious manual evaluation, make it hard for clinicians to use state-of-the-art large models directly on private patient data. Here, we explore training open-source small multimodal models (SMMs) to bridge competency gaps for unmet clinical needs in radiology. To maximize data efficiency, we adopt a modular approach by incorporating state-of-the-art pre-trained models for image and text modalities, and focusing on training a lightweight adapter to ground each modality to the text embedding space, as exemplified by LLaVA-Med. For training, we assemble a large dataset of over 697 thousand radiology image-text pairs. For evaluation, we propose CheXprompt, a GPT-4-based metric for factuality evaluation, and demonstrate its parity with expert evaluation. For best practice, we conduct a systematic ablation study on various choices in data engineering and multimodal training. The resulting LLaVA-Rad (7B) model attains state-of-the-art results on standard radiology tasks such as report generation and cross-modal retrieval, even outperforming much larger models such as GPT-4V and Med-PaLM M (84B). The inference of LLaVA-Rad is fast and can be performed on a single V100 GPU in private settings, offering a promising state-of-the-art tool for real-world clinical applications.
[ { "created": "Tue, 12 Mar 2024 18:12:02 GMT", "version": "v1" }, { "created": "Wed, 20 Mar 2024 23:31:22 GMT", "version": "v2" }, { "created": "Sat, 4 May 2024 00:35:01 GMT", "version": "v3" }, { "created": "Fri, 10 May 2024 23:46:33 GMT", "version": "v4" }, { "created": "Thu, 27 Jun 2024 02:51:29 GMT", "version": "v5" } ]
2024-06-28
[ [ "Chaves", "Juan Manuel Zambrano", "" ], [ "Huang", "Shih-Cheng", "" ], [ "Xu", "Yanbo", "" ], [ "Xu", "Hanwen", "" ], [ "Usuyama", "Naoto", "" ], [ "Zhang", "Sheng", "" ], [ "Wang", "Fei", "" ], [ "Xie", "Yujia", "" ], [ "Khademi", "Mahmoud", "" ], [ "Yang", "Ziyi", "" ], [ "Awadalla", "Hany", "" ], [ "Gong", "Julia", "" ], [ "Hu", "Houdong", "" ], [ "Yang", "Jianwei", "" ], [ "Li", "Chunyuan", "" ], [ "Gao", "Jianfeng", "" ], [ "Gu", "Yu", "" ], [ "Wong", "Cliff", "" ], [ "Wei", "Mu", "" ], [ "Naumann", "Tristan", "" ], [ "Chen", "Muhao", "" ], [ "Lungren", "Matthew P.", "" ], [ "Chaudhari", "Akshay", "" ], [ "Yeung-Levy", "Serena", "" ], [ "Langlotz", "Curtis P.", "" ], [ "Wang", "Sheng", "" ], [ "Poon", "Hoifung", "" ] ]
The scaling laws and extraordinary performance of large foundation models motivate the development and utilization of such models in biomedicine. However, despite early promising results on some biomedical benchmarks, there are still major challenges that need to be addressed before these models can be used in real-world clinics. Frontier general-domain models such as GPT-4V still have significant performance gaps in multimodal biomedical applications. More importantly, less-acknowledged pragmatic issues, including accessibility, model cost, and tedious manual evaluation, make it hard for clinicians to use state-of-the-art large models directly on private patient data. Here, we explore training open-source small multimodal models (SMMs) to bridge competency gaps for unmet clinical needs in radiology. To maximize data efficiency, we adopt a modular approach by incorporating state-of-the-art pre-trained models for image and text modalities, and focusing on training a lightweight adapter to ground each modality to the text embedding space, as exemplified by LLaVA-Med. For training, we assemble a large dataset of over 697 thousand radiology image-text pairs. For evaluation, we propose CheXprompt, a GPT-4-based metric for factuality evaluation, and demonstrate its parity with expert evaluation. For best practice, we conduct a systematic ablation study on various choices in data engineering and multimodal training. The resulting LLaVA-Rad (7B) model attains state-of-the-art results on standard radiology tasks such as report generation and cross-modal retrieval, even outperforming much larger models such as GPT-4V and Med-PaLM M (84B). The inference of LLaVA-Rad is fast and can be performed on a single V100 GPU in private settings, offering a promising state-of-the-art tool for real-world clinical applications.
2201.09329
Olga Kononova
Zheren Wang, Kevin Cruse, Yuxing Fei, Ann Chia, Yan Zeng, Haoyan Huo, Tanjin He, Bowen Deng, Olga Kononova and Gerbrand Ceder
ULSA: Unified Language of Synthesis Actions for Representation of Synthesis Protocols
null
null
null
null
cs.LG cond-mat.mtrl-sci
http://creativecommons.org/licenses/by/4.0/
Applying AI power to predict syntheses of novel materials requires high-quality, large-scale datasets. Extraction of synthesis information from scientific publications is still challenging, especially for extracting synthesis actions, because of the lack of a comprehensive labeled dataset using a solid, robust, and well-established ontology for describing synthesis procedures. In this work, we propose the first Unified Language of Synthesis Actions (ULSA) for describing ceramics synthesis procedures. We created a dataset of 3,040 synthesis procedures annotated by domain experts according to the proposed ULSA scheme. To demonstrate the capabilities of ULSA, we built a neural network-based model to map arbitrary ceramics synthesis paragraphs into ULSA and used it to construct synthesis flowcharts for synthesis procedures. Analysis of the flowcharts showed that (a) ULSA covers essential vocabulary used by researchers when describing synthesis procedures and (b) it can capture important features of synthesis protocols. This work is an important step towards creating a synthesis ontology and a solid foundation for autonomous robotic synthesis.
[ { "created": "Sun, 23 Jan 2022 17:44:48 GMT", "version": "v1" } ]
2022-01-25
[ [ "Wang", "Zheren", "" ], [ "Cruse", "Kevin", "" ], [ "Fei", "Yuxing", "" ], [ "Chia", "Ann", "" ], [ "Zeng", "Yan", "" ], [ "Huo", "Haoyan", "" ], [ "He", "Tanjin", "" ], [ "Deng", "Bowen", "" ], [ "Kononova", "Olga", "" ], [ "Ceder", "Gerbrand", "" ] ]
Applying AI power to predict syntheses of novel materials requires high-quality, large-scale datasets. Extraction of synthesis information from scientific publications is still challenging, especially for extracting synthesis actions, because of the lack of a comprehensive labeled dataset using a solid, robust, and well-established ontology for describing synthesis procedures. In this work, we propose the first Unified Language of Synthesis Actions (ULSA) for describing ceramics synthesis procedures. We created a dataset of 3,040 synthesis procedures annotated by domain experts according to the proposed ULSA scheme. To demonstrate the capabilities of ULSA, we built a neural network-based model to map arbitrary ceramics synthesis paragraphs into ULSA and used it to construct synthesis flowcharts for synthesis procedures. Analysis of the flowcharts showed that (a) ULSA covers essential vocabulary used by researchers when describing synthesis procedures and (b) it can capture important features of synthesis protocols. This work is an important step towards creating a synthesis ontology and a solid foundation for autonomous robotic synthesis.
2107.14050
Ali Shahaab
Ali Shahaab, Chaminda Hewage, Imtiaz Khan
Preventing Spoliation of Evidence with Blockchain: A Perspective from South Asia
ICBCT21, March 26-28, 2021, Shanghai, China
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Evidence destruction and tampering are time-tested tactics to protect powerful perpetrators, criminals, and corrupt officials. In countries where law-enforcement institutions and the judicial system can be compromised, and evidence destroyed or tampered with, ordinary citizens feel disengaged from the investigation or prosecution process and, in some instances, intimidated due to their vulnerability to exposure and retribution. Using Distributed Ledger Technologies (DLT), such as blockchain, as the underpinning technology, here we propose a conceptual model - 'EvidenceChain' - through which citizens can anonymously upload digital evidence, with assurance that the integrity of the evidence will be preserved in an immutable and indestructible manner. The person uploading the evidence can anonymously share it with investigating authorities or openly with the public if coerced by the perpetrators or authorities. Transferring the ownership of evidence from authorities to ordinary citizens, and the custodianship of evidence from a susceptible centralized repository to an immutable and indestructible distributed one, can cause a paradigm shift of power that can minimize not only spoliation of evidence but human rights abuse too. Here the conceptual model was theoretically tested against some high-profile spoliation-of-evidence cases from four South Asian developing countries that often rank high on global corruption indices and low on human rights indices.
[ { "created": "Wed, 14 Jul 2021 11:59:12 GMT", "version": "v1" } ]
2021-07-30
[ [ "Shahaab", "Ali", "" ], [ "Hewage", "Chaminda", "" ], [ "Khan", "Imtiaz", "" ] ]
Evidence destruction and tampering are time-tested tactics to protect powerful perpetrators, criminals, and corrupt officials. In countries where law-enforcement institutions and the judicial system can be compromised, and evidence destroyed or tampered with, ordinary citizens feel disengaged from the investigation or prosecution process and, in some instances, intimidated due to their vulnerability to exposure and retribution. Using Distributed Ledger Technologies (DLT), such as blockchain, as the underpinning technology, here we propose a conceptual model - 'EvidenceChain' - through which citizens can anonymously upload digital evidence, with assurance that the integrity of the evidence will be preserved in an immutable and indestructible manner. The person uploading the evidence can anonymously share it with investigating authorities or openly with the public if coerced by the perpetrators or authorities. Transferring the ownership of evidence from authorities to ordinary citizens, and the custodianship of evidence from a susceptible centralized repository to an immutable and indestructible distributed one, can cause a paradigm shift of power that can minimize not only spoliation of evidence but human rights abuse too. Here the conceptual model was theoretically tested against some high-profile spoliation-of-evidence cases from four South Asian developing countries that often rank high on global corruption indices and low on human rights indices.
1812.04647
Ankur Gandhe
Ankur Gandhe, Ariya Rastrow, Bjorn Hoffmeister
Scalable language model adaptation for spoken dialogue systems
Accepted at SLT 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Language models (LMs) for interactive speech recognition systems are trained on large amounts of data, and the model parameters are optimized on past user data. New application intents and interaction types are released for these systems over time, imposing challenges to adapt the LMs since the existing training data is no longer sufficient to model future user interactions. It is unclear how to adapt LMs to new application intents without degrading the performance on existing applications. In this paper, we propose a solution to (a) estimate n-gram counts directly from the hand-written grammar for training LMs and (b) use constrained optimization to optimize the system parameters for future use cases, while not degrading the performance on past usage. We evaluate our approach on new application intents for a personal assistant system and find that the adaptation improves the word error rate by up to 15% on new applications even when there is no adaptation data available for an application.
[ { "created": "Tue, 11 Dec 2018 19:02:05 GMT", "version": "v1" } ]
2018-12-13
[ [ "Gandhe", "Ankur", "" ], [ "Rastrow", "Ariya", "" ], [ "Hoffmeister", "Bjorn", "" ] ]
Language models (LMs) for interactive speech recognition systems are trained on large amounts of data, and the model parameters are optimized on past user data. New application intents and interaction types are released for these systems over time, imposing challenges to adapt the LMs since the existing training data is no longer sufficient to model future user interactions. It is unclear how to adapt LMs to new application intents without degrading the performance on existing applications. In this paper, we propose a solution to (a) estimate n-gram counts directly from the hand-written grammar for training LMs and (b) use constrained optimization to optimize the system parameters for future use cases, while not degrading the performance on past usage. We evaluate our approach on new application intents for a personal assistant system and find that the adaptation improves the word error rate by up to 15% on new applications even when there is no adaptation data available for an application.
0805.0648
Mathieu Bredif
Mathieu Br\'edif, Dider Boldo, Marc Pierrot-Deseilligny, Henri Ma\^itre
3D Building Model Fitting Using A New Kinetic Framework
null
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a new approach to fitting the polyhedron describing a 3D building model to the point cloud of a Digital Elevation Model (DEM). We introduce a new kinetic framework that hides from its user the combinatorial complexity of determining or maintaining the polyhedron topology, allowing the design of a simple variational optimization. This new kinetic framework allows the manipulation of a bounded polyhedron with simple faces by specifying the target plane equations of each of its faces. It proceeds by evolving continuously from the polyhedron defined by its initial topology and its initial plane equations to a polyhedron that is as topologically close as possible to the initial polyhedron but with the new plane equations. This kinetic framework handles internally the topological changes that may be required to keep the faces simple and the polyhedron bounded. For each intermediate configuration where the polyhedron loses the simplicity of its faces or its boundedness, the simplest topological modification able to reestablish simplicity and boundedness is performed.
[ { "created": "Tue, 6 May 2008 06:34:31 GMT", "version": "v1" } ]
2008-12-18
[ [ "Brédif", "Mathieu", "" ], [ "Boldo", "Dider", "" ], [ "Pierrot-Deseilligny", "Marc", "" ], [ "Maître", "Henri", "" ] ]
We describe a new approach to fitting the polyhedron describing a 3D building model to the point cloud of a Digital Elevation Model (DEM). We introduce a new kinetic framework that hides from its user the combinatorial complexity of determining or maintaining the polyhedron topology, allowing the design of a simple variational optimization. This new kinetic framework allows the manipulation of a bounded polyhedron with simple faces by specifying the target plane equations of each of its faces. It proceeds by evolving continuously from the polyhedron defined by its initial topology and its initial plane equations to a polyhedron that is as topologically close as possible to the initial polyhedron but with the new plane equations. This kinetic framework handles internally the topological changes that may be required to keep the faces simple and the polyhedron bounded. For each intermediate configuration where the polyhedron loses the simplicity of its faces or its boundedness, the simplest topological modification able to reestablish simplicity and boundedness is performed.
2205.10822
Li Du
Li Du, Xiao Ding, Yue Zhang, Kai Xiong, Ting Liu, Bing Qin
A Graph Enhanced BERT Model for Event Prediction
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Predicting the subsequent event for an existing event context is an important but challenging task, as it requires understanding the underlying relationship between events. Previous methods propose to retrieve relational features from an event graph to enhance the modeling of event correlation. However, the sparsity of the event graph may restrict the acquisition of relevant graph information and hence influence model performance. To address this issue, we consider automatically building the event graph using a BERT model. To this end, we incorporate an additional structured variable into BERT to learn to predict event connections during training. Hence, at test time, the connection relationships for unseen events can be predicted by the structured variable. Results on two event prediction tasks, script event prediction and story ending prediction, show that our approach can outperform state-of-the-art baseline methods.
[ { "created": "Sun, 22 May 2022 13:37:38 GMT", "version": "v1" } ]
2022-05-24
[ [ "Du", "Li", "" ], [ "Ding", "Xiao", "" ], [ "Zhang", "Yue", "" ], [ "Xiong", "Kai", "" ], [ "Liu", "Ting", "" ], [ "Qin", "Bing", "" ] ]
Predicting the subsequent event for an existing event context is an important but challenging task, as it requires understanding the underlying relationship between events. Previous methods propose to retrieve relational features from an event graph to enhance the modeling of event correlation. However, the sparsity of the event graph may restrict the acquisition of relevant graph information and hence influence model performance. To address this issue, we consider automatically building the event graph using a BERT model. To this end, we incorporate an additional structured variable into BERT to learn to predict event connections during training. Hence, at test time, the connection relationships for unseen events can be predicted by the structured variable. Results on two event prediction tasks, script event prediction and story ending prediction, show that our approach can outperform state-of-the-art baseline methods.
2108.09604
Lili Su
Lili Su, Quanquan C. Liu, Neha Narula
The Power of Random Symmetry-Breaking in Nakamoto Consensus
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nakamoto consensus underlies the security of many of the world's largest cryptocurrencies, such as Bitcoin and Ethereum. Common lore is that Nakamoto consensus only achieves consistency and liveness under a regime where the difficulty of its underlying mining puzzle is very high, negatively impacting overall throughput and latency. In this work, we study Nakamoto consensus under a wide range of puzzle difficulties, including very easy puzzles. We first analyze an adversary-free setting and show that, surprisingly, the common prefix of the blockchain grows quickly even with easy puzzles. In a setting with adversaries, we provide a small backwards-compatible change to Nakamoto consensus to achieve consistency and liveness with easy puzzles. Our insight relies on a careful choice of \emph{symmetry-breaking strategy}, which was significantly underestimated in prior work. We introduce a new method -- \emph{coalescing random walks} -- for analyzing the correctness of Nakamoto consensus under the uniformly-at-random symmetry-breaking strategy. This method is more powerful than existing analysis methods that focus on bounding the number of {\it convergence opportunities}.
[ { "created": "Sun, 22 Aug 2021 00:18:44 GMT", "version": "v1" } ]
2021-08-24
[ [ "Su", "Lili", "" ], [ "Liu", "Quanquan C.", "" ], [ "Narula", "Neha", "" ] ]
Nakamoto consensus underlies the security of many of the world's largest cryptocurrencies, such as Bitcoin and Ethereum. Common lore is that Nakamoto consensus only achieves consistency and liveness under a regime where the difficulty of its underlying mining puzzle is very high, negatively impacting overall throughput and latency. In this work, we study Nakamoto consensus under a wide range of puzzle difficulties, including very easy puzzles. We first analyze an adversary-free setting and show that, surprisingly, the common prefix of the blockchain grows quickly even with easy puzzles. In a setting with adversaries, we provide a small backwards-compatible change to Nakamoto consensus to achieve consistency and liveness with easy puzzles. Our insight relies on a careful choice of \emph{symmetry-breaking strategy}, which was significantly underestimated in prior work. We introduce a new method -- \emph{coalescing random walks} -- for analyzing the correctness of Nakamoto consensus under the uniformly-at-random symmetry-breaking strategy. This method is more powerful than existing analysis methods that focus on bounding the number of {\it convergence opportunities}.
1808.08573
Zied Elloumi
Zied Elloumi, Laurent Besacier, Olivier Galibert, Benjamin Lecouteux
Analyzing Learned Representations of a Deep ASR Performance Prediction Model
EMNLP 2018 Workshop
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper addresses a relatively new task: prediction of ASR performance on unseen broadcast programs. In a previous paper, we presented an ASR performance prediction system using CNNs that encode both text (ASR transcript) and speech, in order to predict word error rate. This work is dedicated to the analysis of the speech signal embeddings and text embeddings learnt by the CNN while training our prediction model. We try to better understand which information is captured by the deep model and its relation to different conditioning factors. It is shown that hidden layers convey a clear signal about speech style, accent, and broadcast type. We then try to leverage these three types of information at training time through multi-task learning. Our experiments show that this allows training slightly more efficient ASR performance prediction systems that, in addition, simultaneously tag the analyzed utterances according to their speech style, accent, and broadcast program origin.
[ { "created": "Sun, 26 Aug 2018 15:10:47 GMT", "version": "v1" }, { "created": "Tue, 28 Aug 2018 09:59:05 GMT", "version": "v2" } ]
2018-08-29
[ [ "Elloumi", "Zied", "" ], [ "Besacier", "Laurent", "" ], [ "Galibert", "Olivier", "" ], [ "Lecouteux", "Benjamin", "" ] ]
This paper addresses a relatively new task: prediction of ASR performance on unseen broadcast programs. In a previous paper, we presented an ASR performance prediction system using CNNs that encode both text (ASR transcript) and speech, in order to predict word error rate. This work is dedicated to the analysis of the speech signal embeddings and text embeddings learnt by the CNN while training our prediction model. We try to better understand which information is captured by the deep model and its relation to different conditioning factors. It is shown that hidden layers convey a clear signal about speech style, accent, and broadcast type. We then try to leverage these three types of information at training time through multi-task learning. Our experiments show that this allows training slightly more efficient ASR performance prediction systems that, in addition, simultaneously tag the analyzed utterances according to their speech style, accent, and broadcast program origin.
2407.14108
Florian Chabot
Florian Chabot, Nicolas Granger, Guillaume Lapouge
GaussianBeV: 3D Gaussian Representation meets Perception Models for BeV Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Bird's-eye View (BeV) representation is widely used for 3D perception from multi-view camera images. It allows features from different cameras to be merged into a common space, providing a unified representation of the 3D scene. The key component is the view transformer, which transforms image views into the BeV. However, current view transformer methods based on geometry or cross-attention do not provide a sufficiently detailed representation of the scene, as they use a sub-sampling of the 3D space that is non-optimal for modeling the fine structures of the environment. In this paper, we propose GaussianBeV, a novel method for transforming image features to BeV by finely representing the scene using a set of 3D gaussians located and oriented in 3D space. This representation is then splatted to produce the BeV feature map by adapting recent advances in 3D representation rendering based on gaussian splatting. GaussianBeV is the first approach to use this 3D gaussian modeling and 3D scene rendering process online, i.e. without optimizing it on a specific scene, directly integrated into a single-stage model for BeV scene understanding. Experiments show that the proposed representation is highly effective and places GaussianBeV as the new state of the art on the BeV semantic segmentation task on the nuScenes dataset.
[ { "created": "Fri, 19 Jul 2024 08:24:36 GMT", "version": "v1" } ]
2024-07-22
[ [ "Chabot", "Florian", "" ], [ "Granger", "Nicolas", "" ], [ "Lapouge", "Guillaume", "" ] ]
The Bird's-eye View (BeV) representation is widely used for 3D perception from multi-view camera images. It allows features from different cameras to be merged into a common space, providing a unified representation of the 3D scene. The key component is the view transformer, which transforms image views into the BeV. However, current view transformer methods based on geometry or cross-attention do not provide a sufficiently detailed representation of the scene, as they use a sub-sampling of the 3D space that is non-optimal for modeling the fine structures of the environment. In this paper, we propose GaussianBeV, a novel method for transforming image features to BeV by finely representing the scene using a set of 3D gaussians located and oriented in 3D space. This representation is then splatted to produce the BeV feature map by adapting recent advances in 3D representation rendering based on gaussian splatting. GaussianBeV is the first approach to use this 3D gaussian modeling and 3D scene rendering process online, i.e. without optimizing it on a specific scene, directly integrated into a single-stage model for BeV scene understanding. Experiments show that the proposed representation is highly effective and places GaussianBeV as the new state of the art on the BeV semantic segmentation task on the nuScenes dataset.
cs/0602075
Peter Jonsson
Vladimir Deineko, Peter Jonsson, Mikael Klasson, and Andrei Krokhin
The approximability of MAX CSP with fixed-value constraints
null
null
null
null
cs.CC
null
In the maximum constraint satisfaction problem (MAX CSP), one is given a finite collection of (possibly weighted) constraints on overlapping sets of variables, and the goal is to assign values from a given finite domain to the variables so as to maximize the number (or the total weight, for the weighted case) of satisfied constraints. This problem is NP-hard in general, and, therefore, it is natural to study how restricting the allowed types of constraints affects the approximability of the problem. In this paper, we show that any MAX CSP problem with a finite set of allowed constraint types, which includes all fixed-value constraints (i.e., constraints of the form x=a), is either solvable exactly in polynomial time or else is APX-complete, even if the number of occurrences of variables in instances is bounded. Moreover, we present a simple description of all polynomial-time solvable cases of our problem. This description relies on the well-known algebraic combinatorial property of supermodularity.
[ { "created": "Tue, 21 Feb 2006 14:13:37 GMT", "version": "v1" } ]
2007-05-23
[ [ "Deineko", "Vladimir", "" ], [ "Jonsson", "Peter", "" ], [ "Klasson", "Mikael", "" ], [ "Krokhin", "Andrei", "" ] ]
In the maximum constraint satisfaction problem (MAX CSP), one is given a finite collection of (possibly weighted) constraints on overlapping sets of variables, and the goal is to assign values from a given finite domain to the variables so as to maximize the number (or the total weight, for the weighted case) of satisfied constraints. This problem is NP-hard in general, and, therefore, it is natural to study how restricting the allowed types of constraints affects the approximability of the problem. In this paper, we show that any MAX CSP problem with a finite set of allowed constraint types, which includes all fixed-value constraints (i.e., constraints of the form x=a), is either solvable exactly in polynomial time or else is APX-complete, even if the number of occurrences of variables in instances is bounded. Moreover, we present a simple description of all polynomial-time solvable cases of our problem. This description relies on the well-known algebraic combinatorial property of supermodularity.
2406.07209
Siming Fu
X. Wang, Siming Fu, Qihan Huang, Wanggui He, Hao Jiang
MS-Diffusion: Multi-subject Zero-shot Image Personalization with Layout Guidance
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent advancements in text-to-image generation models have dramatically enhanced the generation of photorealistic images from textual prompts, leading to an increased interest in personalized text-to-image applications, particularly in multi-subject scenarios. However, these advances are hindered by two main challenges: firstly, the need to accurately maintain the details of each referenced subject in accordance with the textual descriptions; and secondly, the difficulty of achieving a cohesive representation of multiple subjects in a single image without introducing inconsistencies. To address these concerns, our research introduces the MS-Diffusion framework for layout-guided zero-shot image personalization with multiple subjects. This innovative approach integrates grounding tokens with the feature resampler to maintain detail fidelity among subjects. With the layout guidance, MS-Diffusion further improves the cross-attention to adapt to the multi-subject inputs, ensuring that each subject condition acts on specific areas. The proposed multi-subject cross-attention orchestrates harmonious inter-subject compositions while preserving the control of texts. Comprehensive quantitative and qualitative experiments affirm that this method surpasses existing models in both image and text fidelity, promoting the development of personalized text-to-image generation.
[ { "created": "Tue, 11 Jun 2024 12:32:53 GMT", "version": "v1" } ]
2024-06-12
[ [ "Wang", "X.", "" ], [ "Fu", "Siming", "" ], [ "Huang", "Qihan", "" ], [ "He", "Wanggui", "" ], [ "Jiang", "Hao", "" ] ]
Recent advancements in text-to-image generation models have dramatically enhanced the generation of photorealistic images from textual prompts, leading to an increased interest in personalized text-to-image applications, particularly in multi-subject scenarios. However, these advances are hindered by two main challenges: firstly, the need to accurately maintain the details of each referenced subject in accordance with the textual descriptions; and secondly, the difficulty of achieving a cohesive representation of multiple subjects in a single image without introducing inconsistencies. To address these concerns, our research introduces the MS-Diffusion framework for layout-guided zero-shot image personalization with multiple subjects. This innovative approach integrates grounding tokens with the feature resampler to maintain detail fidelity among subjects. With the layout guidance, MS-Diffusion further improves the cross-attention to adapt to the multi-subject inputs, ensuring that each subject condition acts on specific areas. The proposed multi-subject cross-attention orchestrates harmonious inter-subject compositions while preserving the control of texts. Comprehensive quantitative and qualitative experiments affirm that this method surpasses existing models in both image and text fidelity, promoting the development of personalized text-to-image generation.
2203.12647
Nicky Zimmerman
Nicky Zimmerman, Louis Wiesmann, Tiziano Guadagnino, Thomas L\"abe, Jens Behley, Cyrill Stachniss
Robust Onboard Localization in Changing Environments Exploiting Text Spotting
This work has been accepted to IROS 2022. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust localization in a given map is a crucial component of most autonomous robots. In this paper, we address the problem of localizing in an indoor environment that changes and where prominent structures have no correspondence in a map built at a different point in time. To overcome the discrepancy between the map and the observed environment caused by such changes, we exploit human-readable localization cues to assist localization. These cues are readily available in most facilities and can be detected from RGB camera images by utilizing text spotting. We integrate these cues into a Monte Carlo localization framework using a particle filter that operates on 2D LiDAR scans and camera data. By this, we provide a robust localization solution for environments with structural changes and dynamics caused by walking humans. We evaluate our localization framework on multiple challenging indoor scenarios in an office environment. The experiments suggest that our approach is robust to structural changes and can run on an onboard computer. We release an open source implementation of our approach (upon paper acceptance), which uses off-the-shelf text spotting, written in C++ with a ROS wrapper.
[ { "created": "Wed, 23 Mar 2022 18:12:48 GMT", "version": "v1" }, { "created": "Sat, 23 Jul 2022 09:40:34 GMT", "version": "v2" } ]
2022-07-26
[ [ "Zimmerman", "Nicky", "" ], [ "Wiesmann", "Louis", "" ], [ "Guadagnino", "Tiziano", "" ], [ "Läbe", "Thomas", "" ], [ "Behley", "Jens", "" ], [ "Stachniss", "Cyrill", "" ] ]
Robust localization in a given map is a crucial component of most autonomous robots. In this paper, we address the problem of localizing in an indoor environment that changes and where prominent structures have no correspondence in a map built at a different point in time. To overcome the discrepancy between the map and the observed environment caused by such changes, we exploit human-readable localization cues to assist localization. These cues are readily available in most facilities and can be detected from RGB camera images by utilizing text spotting. We integrate these cues into a Monte Carlo localization framework using a particle filter that operates on 2D LiDAR scans and camera data. By this, we provide a robust localization solution for environments with structural changes and dynamics caused by walking humans. We evaluate our localization framework on multiple challenging indoor scenarios in an office environment. The experiments suggest that our approach is robust to structural changes and can run on an onboard computer. We release an open source implementation of our approach (upon paper acceptance), which uses off-the-shelf text spotting, written in C++ with a ROS wrapper.
1606.05675
Chang Liu
Chang Liu, Yu Cao, Yan Luo, Guanling Chen, Vinod Vokkarane and Yunsheng Ma
DeepFood: Deep Learning-Based Food Image Recognition for Computer-Aided Dietary Assessment
12 pages, 2 figures, 6 tables, ICOST 2016
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Worldwide, in 2014, more than 1.9 billion adults, 18 years and older, were overweight. Of these, over 600 million were obese. Accurately documenting dietary caloric intake is crucial to managing weight loss, but it also presents challenges because most current methods for dietary assessment must rely on memory to recall foods eaten. The ultimate goal of our research is to develop computer-aided technical solutions to enhance and improve the accuracy of current measurements of dietary intake. Our proposed system in this paper aims to improve the accuracy of dietary assessment by analyzing food images captured by mobile devices (e.g., smartphones). The key technical innovation in this paper is the deep learning-based food image recognition algorithms. Substantial research has demonstrated that digital imaging accurately estimates dietary intake in many environments and has many advantages over other methods. However, how to derive the food information (e.g., food type and portion size) from a food image effectively and efficiently remains a challenging and open research problem. We propose a new Convolutional Neural Network (CNN)-based food image recognition algorithm to address this problem. We applied our proposed approach to two real-world food image data sets (UEC-256 and Food-101) and achieved impressive results. To the best of our knowledge, these results outperformed all other reported work using these two data sets. Our experiments have demonstrated that the proposed approach is a promising solution for addressing the food image recognition problem. Our future work includes further improving the performance of the algorithms and integrating our system into a real-world mobile and cloud computing-based system to enhance the accuracy of current measurements of dietary intake.
[ { "created": "Fri, 17 Jun 2016 21:03:19 GMT", "version": "v1" } ]
2016-06-21
[ [ "Liu", "Chang", "" ], [ "Cao", "Yu", "" ], [ "Luo", "Yan", "" ], [ "Chen", "Guanling", "" ], [ "Vokkarane", "Vinod", "" ], [ "Ma", "Yunsheng", "" ] ]
Worldwide, in 2014, more than 1.9 billion adults, 18 years and older, were overweight. Of these, over 600 million were obese. Accurately documenting dietary caloric intake is crucial to managing weight loss, but it also presents challenges because most current methods for dietary assessment must rely on memory to recall foods eaten. The ultimate goal of our research is to develop computer-aided technical solutions to enhance and improve the accuracy of current measurements of dietary intake. Our proposed system in this paper aims to improve the accuracy of dietary assessment by analyzing food images captured by mobile devices (e.g., smartphones). The key technical innovation in this paper is the deep learning-based food image recognition algorithms. Substantial research has demonstrated that digital imaging accurately estimates dietary intake in many environments and has many advantages over other methods. However, how to derive the food information (e.g., food type and portion size) from a food image effectively and efficiently remains a challenging and open research problem. We propose a new Convolutional Neural Network (CNN)-based food image recognition algorithm to address this problem. We applied our proposed approach to two real-world food image data sets (UEC-256 and Food-101) and achieved impressive results. To the best of our knowledge, these results outperformed all other reported work using these two data sets. Our experiments have demonstrated that the proposed approach is a promising solution for addressing the food image recognition problem. Our future work includes further improving the performance of the algorithms and integrating our system into a real-world mobile and cloud computing-based system to enhance the accuracy of current measurements of dietary intake.