Dataset schema (column: type, observed value lengths):

id: string (length 9–10)
submitter: string (length 1–64)
authors: string (length 4–20.7k)
title: string (length 4–246)
comments: string (length 1–523)
journal-ref: string (length 4–404)
doi: string (length 11–153)
report-no: string (length 2–254)
categories: string (length 5–98)
license: string (9 classes)
orig_abstract: string (length 14–3.35k)
versions: list (length 1–60)
update_date: string (length 10–10)
authors_parsed: list (length 1–1.35k)
abstract: string (length 11–3.34k)
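As a minimal sketch of working with records in this schema, the first record of the preview can be expressed as a typed object. The `ArxivRecord` class, the `Optional` typing for nullable columns, and the truncated abstract string are illustrative assumptions, not part of the dataset itself:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArxivRecord:
    # Field names mirror the dataset schema above; Optional marks
    # columns that are null in some records (comments, journal-ref, ...).
    id: str
    submitter: str
    authors: str
    title: str
    comments: Optional[str]
    journal_ref: Optional[str]
    doi: Optional[str]
    report_no: Optional[str]
    categories: str
    license: str
    abstract: str
    versions: list
    update_date: str
    authors_parsed: list

# First record of the preview (abstract truncated here for brevity).
rec = ArxivRecord(
    id="2210.17152",
    submitter="Ernie Chu",
    authors="Ernie Chu, Ju-Ting Chen, Chia-Ping Chen",
    title="Audio Time-Scale Modification with Temporal Compressing Networks",
    comments=None, journal_ref=None, doi=None, report_no=None,
    categories="cs.SD eess.AS",
    license="http://creativecommons.org/licenses/by/4.0/",
    abstract="We propose a novel approach for time-scale modification...",
    versions=[{"created": "Mon, 31 Oct 2022 09:04:33 GMT", "version": "v1"}],
    update_date="2023-10-09",
    authors_parsed=[["Chu", "Ernie", ""], ["Chen", "Ju-Ting", ""],
                    ["Chen", "Chia-Ping", ""]],
)

# The space-separated categories column splits into a list of arXiv tags.
print(rec.categories.split())
```

Note that `categories` is a single space-separated string rather than a list, while `versions` and `authors_parsed` are genuine list columns; splitting `categories` is a common first preprocessing step.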
id: 2210.17152
submitter: Ernie Chu
authors: Ernie Chu, Ju-Ting Chen, Chia-Ping Chen
title: Audio Time-Scale Modification with Temporal Compressing Networks
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.SD eess.AS
license: http://creativecommons.org/licenses/by/4.0/
abstract: We propose a novel approach for time-scale modification of audio signals. Unlike traditional methods that rely on the framing technique or the short-time Fourier transform to preserve frequency content during temporal stretching, our neural network model encodes the raw audio into a high-level latent representation, dubbed Neuralgram, where each vector represents 1024 audio sample points. Thanks to a sufficient compression ratio, we can apply arbitrary spatial interpolation of the Neuralgram to perform temporal stretching. Finally, a learned neural decoder synthesizes the time-scaled audio samples from the stretched Neuralgram representation. Both the encoder and decoder are trained with latent regression losses and adversarial losses to obtain high-fidelity audio samples. Despite its simplicity, our method performs comparably to existing baselines and opens new possibilities for research into modern time-scale modification. Audio samples can be found at https://tsmnet-mmasia23.github.io
versions: [ { "created": "Mon, 31 Oct 2022 09:04:33 GMT", "version": "v1" }, { "created": "Sun, 28 May 2023 10:50:04 GMT", "version": "v2" }, { "created": "Fri, 6 Oct 2023 04:32:16 GMT", "version": "v3" } ]
update_date: 2023-10-09
authors_parsed: [ [ "Chu", "Ernie", "" ], [ "Chen", "Ju-Ting", "" ], [ "Chen", "Chia-Ping", "" ] ]
id: 2406.00851
submitter: Ratip Emin Berker
authors: Ratip Emin Berker and Vincent Conitzer
title: Computing Optimal Equilibria in Repeated Games with Restarts
comments: 13 pages, 2 figures, main body to be published in Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24), Jeju, South Korea, 2024
journal-ref: null
doi: null
report-no: null
categories: cs.GT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Infinitely repeated games can support cooperative outcomes that are not equilibria in the one-shot game. The idea is to make sure that any gains from deviating will be offset by retaliation in future rounds. However, this model of cooperation fails in anonymous settings with many strategic agents that interact in pairs. Here, a player can defect and then avoid penalization by immediately switching partners. In this paper, we focus on a specific set of equilibria that avoids this pitfall. In them, agents follow a designated sequence of actions, and restart if their opponent ever deviates. We show that the socially-optimal sequence of actions consists of an infinitely repeating goal value, preceded by a hazing period. We introduce an equivalence relation on sequences and prove that the computational problem of finding a representative from the optimal equivalence class is (weakly) NP-hard. Nevertheless, we present a pseudo-polynomial time dynamic program for this problem, as well as an integer linear program, and show they are efficient in practice. Lastly, we introduce a fully polynomial-time approximation scheme that outputs a hazing sequence with arbitrarily small approximation ratio.
versions: [ { "created": "Sun, 2 Jun 2024 20:07:05 GMT", "version": "v1" } ]
update_date: 2024-06-04
authors_parsed: [ [ "Berker", "Ratip Emin", "" ], [ "Conitzer", "Vincent", "" ] ]
id: 2102.03206
submitter: Veljko Milutinovic Prof
authors: Miroslav Kosanic and Veljko Milutinovic
title: A Survey on Mathematical Aspects of Machine Learning in GeoPhysics: The Cases of Weather Forecast, Wind Energy, Wave Energy, Oil and Gas Exploration
comments: 10 pages, 3 figures, review paper
journal-ref: null
doi: null
report-no: null
categories: cs.LG physics.geo-ph
license: http://creativecommons.org/licenses/by/4.0/
abstract: This paper reviews the most notable works applying machine learning (ML) techniques in the context of geophysics and its subbranches. We showcase both the progress achieved to date and important directions for further research, while providing adequate background in the fields of weather forecasting, wind energy, wave energy, and oil and gas exploration. The objective is to reflect on previous successes and provide a comprehensive review of the synergy between these two fields in order to speed up the adoption of novel machine learning approaches in geophysics. Last but not least, we point out possible improvements, some of which are related to the implementation of ML algorithms using the DataFlow paradigm as a means of performance acceleration.
versions: [ { "created": "Fri, 5 Feb 2021 14:44:34 GMT", "version": "v1" } ]
update_date: 2021-02-08
authors_parsed: [ [ "Kosanic", "Miroslav", "" ], [ "Milutinovic", "Veljko", "" ] ]
id: 2406.01698
submitter: Abhimanyu Rajeshkumar Bambhaniya
authors: Abhimanyu Bambhaniya, Ritik Raj, Geonhwa Jeong, Souvik Kundu, Sudarshan Srinivasan, Midhilesh Elavazhagan, Madhu Kumar and Tushar Krishna
title: Demystifying Platform Requirements for Diverse LLM Inference Use Cases
comments: 12 Pages, https://github.com/abhibambhaniya/GenZ-LLM-Analyzer
journal-ref: null
doi: null
report-no: null
categories: cs.AR cs.AI cs.DC cs.LG
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract: Large language models (LLMs) have shown remarkable performance across a wide range of applications, often outperforming human experts. However, deploying these parameter-heavy models efficiently for diverse inference use cases requires carefully designed hardware platforms with ample computing, memory, and network resources. With LLM deployment scenarios and models evolving at breakneck speed, the hardware requirements to meet SLOs remain an open research question. In this work, we present an analytical tool, GenZ, to study the relationship between LLM inference performance and various platform design parameters. Our analysis provides insights into configuring platforms for different LLM workloads and use cases. We quantify the platform requirements to support SOTA LLMs such as LLaMA and GPT-4 under diverse serving settings. Furthermore, we project the hardware capabilities needed to enable future LLMs potentially exceeding hundreds of trillions of parameters. The trends and insights derived from GenZ can guide AI engineers deploying LLMs as well as computer architects designing next-generation hardware accelerators and platforms. Ultimately, this work sheds light on the platform design considerations for unlocking the full potential of large language models across a spectrum of applications. The source code is available at https://github.com/abhibambhaniya/GenZ-LLM-Analyzer.
versions: [ { "created": "Mon, 3 Jun 2024 18:00:50 GMT", "version": "v1" } ]
update_date: 2024-06-05
authors_parsed: [ [ "Bambhaniya", "Abhimanyu", "" ], [ "Raj", "Ritik", "" ], [ "Jeong", "Geonhwa", "" ], [ "Kundu", "Souvik", "" ], [ "Srinivasan", "Sudarshan", "" ], [ "Elavazhagan", "Midhilesh", "" ], [ "Kumar", "Madhu", "" ], [ "Krishna", "Tushar", "" ] ]
id: 2307.08347
submitter: Che Liu
authors: Che Liu, Sibo Cheng, Chen Chen, Mengyun Qiao, Weitong Zhang, Anand Shah, Wenjia Bai, Rossella Arcucci
title: M-FLAG: Medical Vision-Language Pre-training with Frozen Language Models and Latent Space Geometry Optimization
comments: Accepted by MICCAI 2023
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI cs.LG
license: http://creativecommons.org/licenses/by-sa/4.0/
abstract: Medical vision-language models enable co-learning and integrating features from medical imaging and clinical text. However, these models are not easy to train and the latent representation space can be complex. Here we propose a novel way for pre-training and regularising medical vision-language models. The proposed method, named Medical vision-language pre-training with Frozen language models and Latent spAce Geometry optimization (M-FLAG), leverages a frozen language model for training stability and efficiency and introduces a novel orthogonality loss to harmonize the latent space geometry. We demonstrate the potential of the pre-trained model on three downstream tasks: medical image classification, segmentation, and object detection. Extensive experiments across five public datasets demonstrate that M-FLAG significantly outperforms existing medical vision-language pre-training approaches and reduces the number of parameters by 78%. Notably, M-FLAG achieves outstanding performance on the segmentation task while using only 1% of the RSNA dataset, even outperforming ImageNet pre-trained models that have been fine-tuned using 100% of the data.
versions: [ { "created": "Mon, 17 Jul 2023 09:38:41 GMT", "version": "v1" }, { "created": "Wed, 19 Jul 2023 13:55:32 GMT", "version": "v2" } ]
update_date: 2023-07-20
authors_parsed: [ [ "Liu", "Che", "" ], [ "Cheng", "Sibo", "" ], [ "Chen", "Chen", "" ], [ "Qiao", "Mengyun", "" ], [ "Zhang", "Weitong", "" ], [ "Shah", "Anand", "" ], [ "Bai", "Wenjia", "" ], [ "Arcucci", "Rossella", "" ] ]
id: 1905.00851
submitter: Thomas Möllenhoff
authors: Thomas Möllenhoff, Daniel Cremers
title: Lifting Vectorial Variational Problems: A Natural Formulation based on Geometric Measure Theory and Discrete Exterior Calculus
comments: Oral presentation at CVPR 2019
journal-ref: null
doi: null
report-no: null
categories: cs.CV eess.IV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Numerous tasks in imaging and vision can be formulated as variational problems over vector-valued maps. We approach the relaxation and convexification of such vectorial variational problems via a lifting to the space of currents. To that end, we recall that functionals with polyconvex Lagrangians can be reparametrized as convex one-homogeneous functionals on the graph of the function. This leads to an equivalent shape optimization problem over oriented surfaces in the product space of domain and codomain. A convex formulation is then obtained by relaxing the search space from oriented surfaces to more general currents. We propose a discretization of the resulting infinite-dimensional optimization problem using Whitney forms, which also generalizes recent "sublabel-accurate" multilabeling approaches.
versions: [ { "created": "Thu, 2 May 2019 16:54:58 GMT", "version": "v1" } ]
update_date: 2019-05-03
authors_parsed: [ [ "Möllenhoff", "Thomas", "" ], [ "Cremers", "Daniel", "" ] ]
id: 2405.11523
submitter: Youmin Xu
authors: Youmin Xu, Xuanyu Zhang, Jiwen Yu, Chong Mou, Xiandong Meng, Jian Zhang
title: Diffusion-Based Hierarchical Image Steganography
comments: arXiv admin note: text overlap with arXiv:2305.16936
journal-ref: null
doi: null
report-no: A-01
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract: This paper introduces Hierarchical Image Steganography (HIS), a novel method that enhances the security and capacity of embedding multiple images into a single container using diffusion models. HIS assigns varying levels of robustness to images based on their importance, ensuring enhanced protection against manipulation. It adaptively exploits the robustness of the Diffusion Model alongside the reversibility of the Flow Model. The integration of Embed-Flow and Enhance-Flow improves embedding efficiency and image recovery quality, respectively, setting HIS apart from conventional multi-image steganography techniques. This innovative structure can autonomously generate a container image, thereby securely and efficiently concealing multiple images and text. Rigorous subjective and objective evaluations underscore our advantage in analytical resistance, robustness, and capacity, illustrating its expansive applicability in content safeguarding and privacy fortification.
versions: [ { "created": "Sun, 19 May 2024 11:29:52 GMT", "version": "v1" } ]
update_date: 2024-05-21
authors_parsed: [ [ "Xu", "Youmin", "" ], [ "Zhang", "Xuanyu", "" ], [ "Yu", "Jiwen", "" ], [ "Mou", "Chong", "" ], [ "Meng", "Xiandong", "" ], [ "Zhang", "Jian", "" ] ]
id: 1912.12616
submitter: Sherif Tarabishy
authors: Sherif Tarabishy, Stamatios Psarras, Marcin Kosicki, Martha Tsigkari
title: Deep learning surrogate models for spatial and visual connectivity
comments: Accepted manuscript in the International Journal of Architectural Computing (2019)
journal-ref: null
doi: 10.1177/1478077119894483
report-no: null
categories: cs.LG cs.CV eess.IV stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Spatial and visual connectivity are important metrics when developing workplace layouts. Calculating those metrics in real time can be difficult, depending on the size of the floor plan being analysed and the resolution of the analyses. This paper investigates the possibility of considerably speeding up the outcomes of such computationally intensive simulations by using machine learning to create models capable of identifying the spatial and visual connectivity potential of a space. To that end, we present the entire process of investigating different machine learning models and a pipeline for training them on such a task, from the incorporation of a bespoke spatial and visual connectivity analysis engine through a distributed computation pipeline, to the process of synthesizing training data and evaluating the performance of different neural networks.
versions: [ { "created": "Sun, 29 Dec 2019 09:17:19 GMT", "version": "v1" } ]
update_date: 2020-01-01
authors_parsed: [ [ "Tarabishy", "Sherif", "" ], [ "Psarras", "Stamatios", "" ], [ "Kosicki", "Marcin", "" ], [ "Tsigkari", "Martha", "" ] ]
id: 2401.12266
submitter: Yawen Zhang
authors: Yawen Zhang
title: An Exploratory Study of Multimodal Physiological Data in Jazz Improvisation Using Basic Machine Learning Techniques
comments: Master's thesis
journal-ref: null
doi: null
report-no: null
categories: cs.SD eess.AS
license: http://creativecommons.org/licenses/by/4.0/
abstract: Our study delves into the "Embodied Musicking Dataset," exploring the intertwined relationships and correlations between physiological and psychological dimensions during improvisational music performances. The primary objective is to ascertain the presence of a definitive causal or correlational relationship between these states and comprehend their manifestation in musical compositions. This rich dataset provides a perspective on how musicians coordinate their physicality with sonic events in real-time improvisational scenarios, emphasizing the concept of "Embodied Musicking."
versions: [ { "created": "Mon, 22 Jan 2024 10:32:18 GMT", "version": "v1" } ]
update_date: 2024-01-24
authors_parsed: [ [ "Zhang", "Yawen", "" ] ]
id: 1805.10407
submitter: Sang Michael Xie
authors: Neal Jean, Sang Michael Xie, Stefano Ermon
title: Semi-supervised Deep Kernel Learning: Regression with Unlabeled Data by Minimizing Predictive Variance
comments: In Proceedings of Neural Information Processing Systems (NeurIPS) 2018
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AI stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Large amounts of labeled data are typically required to train deep learning models. For many real-world problems, however, acquiring additional data can be expensive or even impossible. We present semi-supervised deep kernel learning (SSDKL), a semi-supervised regression model based on minimizing predictive variance in the posterior regularization framework. SSDKL combines the hierarchical representation learning of neural networks with the probabilistic modeling capabilities of Gaussian processes. By leveraging unlabeled data, we show improvements on a diverse set of real-world regression tasks over supervised deep kernel learning and semi-supervised methods such as VAT and mean teacher adapted for regression.
versions: [ { "created": "Sat, 26 May 2018 00:47:14 GMT", "version": "v1" }, { "created": "Mon, 26 Nov 2018 00:36:05 GMT", "version": "v2" }, { "created": "Sat, 5 Jan 2019 18:41:06 GMT", "version": "v3" }, { "created": "Mon, 4 Mar 2019 18:55:13 GMT", "version": "v4" } ]
update_date: 2019-03-05
authors_parsed: [ [ "Jean", "Neal", "" ], [ "Xie", "Sang Michael", "" ], [ "Ermon", "Stefano", "" ] ]
id: 1107.4940
submitter: Dejan Kovachev
authors: Dejan Kovachev, Yiwei Cao and Ralf Klamma
title: Mobile Cloud Computing: A Comparison of Application Models
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.NI cs.DC cs.MM
license: http://creativecommons.org/licenses/by/3.0/
abstract: Cloud computing is an emerging concept combining many fields of computing. The foundation of cloud computing is the delivery of services, software and processing capacity over the Internet, reducing cost, increasing storage, automating systems, decoupling service delivery from the underlying technology, and providing flexibility and mobility of information. However, the actual realization of these benefits is far from being achieved for mobile applications and opens many new research questions. In order to better understand how to facilitate the building of mobile cloud-based applications, we have surveyed existing work in mobile computing through the prism of cloud computing principles. We give a definition of mobile cloud computing and provide an overview of the results from this review, in particular, models of mobile cloud applications. We also highlight research challenges in the area of mobile cloud computing. We conclude with recommendations for how this better understanding of mobile cloud computing can help build more powerful mobile applications.
versions: [ { "created": "Mon, 25 Jul 2011 13:17:13 GMT", "version": "v1" } ]
update_date: 2011-07-26
authors_parsed: [ [ "Kovachev", "Dejan", "" ], [ "Cao", "Yiwei", "" ], [ "Klamma", "Ralf", "" ] ]
id: 2104.08353
submitter: Pablo Barros
authors: Pablo Barros, Alessandra Sciutti
title: I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition
comments: Accepted at the LXCV Workshop @ CVPR2021
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.NE
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract: The current COVID-19 pandemic has shown us that we are still facing unpredictable challenges in our society. The necessary constraints on social interaction have heavily affected how we envision and prepare for the future of social robots and artificial agents in general. Adapting current affective perception models towards constrained perception, based on the hard separation between facial perception and affective understanding, would help us provide robust systems. In this paper, we perform an in-depth analysis of how recognizing affect from persons with masks differs from general facial expression perception. We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks. In our analysis, we evaluate different training and fine-tuning schemes to better understand the impact of masked facial expressions. We also perform specific feature-level visualization to demonstrate how the inherent capabilities of the FaceChannel to learn and combine facial features change when in a constrained social interaction scenario.
versions: [ { "created": "Fri, 16 Apr 2021 20:03:30 GMT", "version": "v1" } ]
update_date: 2021-04-20
authors_parsed: [ [ "Barros", "Pablo", "" ], [ "Sciutti", "Alessandra", "" ] ]
id: 2111.14673
submitter: Muhammad Ferjad Naeem
authors: Muhammad Ferjad Naeem, Evin Pınar Örnek, Yongqin Xian, Luc Van Gool, Federico Tombari
title: 3D Compositional Zero-shot Learning with DeCompositional Consensus
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Parts represent a basic unit of geometric and semantic similarity across different objects. We argue that part knowledge should be composable beyond the observed object classes. Towards this, we present 3D Compositional Zero-shot Learning as a problem of part generalization from seen to unseen object classes for semantic segmentation. We provide a structured study through benchmarking the task with the proposed Compositional-PartNet dataset. This dataset is created by processing the original PartNet to maximize part overlap across different objects. The existing point cloud part segmentation methods fail to generalize to unseen object classes in this setting. As a solution, we propose DeCompositional Consensus, which combines a part segmentation network with a part scoring network. The key intuition to our approach is that a segmentation mask over some parts should have a consensus with its part scores when each part is taken apart. The two networks reason over different part combinations defined in a per-object part prior to generate the most suitable segmentation mask. We demonstrate that our method allows compositional zero-shot segmentation and generalized zero-shot classification, and establishes the state of the art on both tasks.
versions: [ { "created": "Mon, 29 Nov 2021 16:34:53 GMT", "version": "v1" }, { "created": "Fri, 15 Apr 2022 13:38:37 GMT", "version": "v2" } ]
update_date: 2022-04-18
authors_parsed: [ [ "Naeem", "Muhammad Ferjad", "" ], [ "Örnek", "Evin Pınar", "" ], [ "Xian", "Yongqin", "" ], [ "Van Gool", "Luc", "" ], [ "Tombari", "Federico", "" ] ]
2406.01435
Fan He
Fan He, Mingzhen He, Lei Shi, Xiaolin Huang, Johan A.K. Suykens
Learning Analysis of Kernel Ridgeless Regression with Asymmetric Kernel Learning
arXiv admin note: text overlap with arXiv:2310.05236
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Ridgeless regression has garnered attention among researchers, particularly in light of the ``Benign Overfitting'' phenomenon, where models interpolating noisy samples demonstrate robust generalization. However, kernel ridgeless regression does not always perform well due to the lack of flexibility. This paper enhances kernel ridgeless regression with Locally-Adaptive-Bandwidths (LAB) RBF kernels, incorporating kernel learning techniques to improve performance in both experiments and theory. For the first time, we demonstrate that functions learned from LAB RBF kernels belong to an integral space of Reproducing Kernel Hilbert Spaces (RKHSs). Despite the absence of explicit regularization in the proposed model, its optimization is equivalent to solving an $\ell_0$-regularized problem in the integral space of RKHSs, elucidating the origin of its generalization ability. Taking an approximation analysis viewpoint, we introduce an $\ell_q$-norm analysis technique (with $0<q<1$) to derive the learning rate for the proposed model under mild conditions. This result deepens our theoretical understanding, explaining that our algorithm's robust approximation ability arises from the large capacity of the integral space of RKHSs, while its generalization ability is ensured by sparsity, controlled by the number of support vectors. Experimental results on both synthetic and real datasets validate our theoretical conclusions.
[ { "created": "Mon, 3 Jun 2024 15:28:12 GMT", "version": "v1" } ]
2024-06-04
[ [ "He", "Fan", "" ], [ "He", "Mingzhen", "" ], [ "Shi", "Lei", "" ], [ "Huang", "Xiaolin", "" ], [ "Suykens", "Johan A. K.", "" ] ]
Ridgeless regression has garnered attention among researchers, particularly in light of the ``Benign Overfitting'' phenomenon, where models interpolating noisy samples demonstrate robust generalization. However, kernel ridgeless regression does not always perform well due to the lack of flexibility. This paper enhances kernel ridgeless regression with Locally-Adaptive-Bandwidths (LAB) RBF kernels, incorporating kernel learning techniques to improve performance in both experiments and theory. For the first time, we demonstrate that functions learned from LAB RBF kernels belong to an integral space of Reproducing Kernel Hilbert Spaces (RKHSs). Despite the absence of explicit regularization in the proposed model, its optimization is equivalent to solving an $\ell_0$-regularized problem in the integral space of RKHSs, elucidating the origin of its generalization ability. Taking an approximation analysis viewpoint, we introduce an $\ell_q$-norm analysis technique (with $0<q<1$) to derive the learning rate for the proposed model under mild conditions. This result deepens our theoretical understanding, explaining that our algorithm's robust approximation ability arises from the large capacity of the integral space of RKHSs, while its generalization ability is ensured by sparsity, controlled by the number of support vectors. Experimental results on both synthetic and real datasets validate our theoretical conclusions.
2303.05455
Bartosz Minch
Bartosz Minch
In search of the most efficient and memory-saving visualization of high dimensional data
PhD thesis on searching the most efficient and memory-saving visualization of high dimensional data. arXiv admin note: substantial text overlap with arXiv:1902.01108, arXiv:1602.00370 by other authors; text overlap with arXiv:2109.02508 by other authors
null
null
null
cs.LG cs.HC
http://creativecommons.org/licenses/by/4.0/
Interactive exploration of large, multidimensional datasets plays a very important role in various scientific fields. It makes it possible not only to identify important structural features and forms, such as clusters of vertices and their connection patterns, but also to evaluate their interrelationships in terms of position, distance, shape and connection density. We argue that the visualization of multidimensional data is well approximated by the problem of two-dimensional embedding of undirected nearest-neighbor graphs. The size of complex networks is a major challenge for today's computer systems and still requires more efficient data embedding algorithms. Existing reduction methods are too slow and do not allow interactive manipulation. We show that high-quality embeddings are produced with minimal time and memory complexity. We present very efficient IVHD algorithms (CPU and GPU) and compare them with the latest and most popular dimensionality reduction methods. We show that the memory and time requirements are dramatically lower than for base codes. At the cost of a slight degradation in embedding quality, IVHD preserves the main structural properties of the data well with a much lower time budget. We also present a meta-algorithm that allows the use of any unsupervised data embedding method in a supervised manner.
[ { "created": "Mon, 27 Feb 2023 20:56:13 GMT", "version": "v1" } ]
2023-03-10
[ [ "Minch", "Bartosz", "" ] ]
Interactive exploration of large, multidimensional datasets plays a very important role in various scientific fields. It makes it possible not only to identify important structural features and forms, such as clusters of vertices and their connection patterns, but also to evaluate their interrelationships in terms of position, distance, shape and connection density. We argue that the visualization of multidimensional data is well approximated by the problem of two-dimensional embedding of undirected nearest-neighbor graphs. The size of complex networks is a major challenge for today's computer systems and still requires more efficient data embedding algorithms. Existing reduction methods are too slow and do not allow interactive manipulation. We show that high-quality embeddings are produced with minimal time and memory complexity. We present very efficient IVHD algorithms (CPU and GPU) and compare them with the latest and most popular dimensionality reduction methods. We show that the memory and time requirements are dramatically lower than for base codes. At the cost of a slight degradation in embedding quality, IVHD preserves the main structural properties of the data well with a much lower time budget. We also present a meta-algorithm that allows the use of any unsupervised data embedding method in a supervised manner.
2311.17728
Patrick Lambein-Monette
Bernadette Charron-Bost and Patrick Lambein-Monette
Know your audience
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Distributed function computation is the problem, for a networked system of $n$ autonomous agents, to collectively compute the value $f(v_1, \ldots, v_n)$ of some input values, each initially private to one agent in the network. Here, we study and organize results pertaining to distributed function computation in anonymous networks, both for the static and the dynamic case, under a communication model of directed and synchronous message exchanges, but with varying assumptions in the degree of awareness or control that a single agent has over its outneighbors. Our main argument is three-fold. First, in the "blind broadcast" model, where in each round an agent merely casts out a unique message without any knowledge or control over its addressees, the computable functions are those that only depend on the set of the input values, but not on their multiplicities or relative frequencies in the input. Second, in contrast, when we assume either that a) in each round, the agents know how many outneighbors they have; b) all communications links in the network are bidirectional; or c) the agents may address each of their outneighbors individually, then the set of computable functions grows to contain all functions that depend on the relative frequencies of each value in the input - such as the average - but not on their multiplicities - thus, not the sum. Third, however, if one or several agents are distinguished as leaders, or if the cardinality of the network is known, then under any of the above three assumptions it becomes possible to recover the complete multiset of the input values, and thus compute any function of the distributed input as long as it is invariant under permutation of its arguments. In the case of dynamic networks, we also discuss the impact of multiple connectivity assumptions.
[ { "created": "Wed, 29 Nov 2023 15:34:55 GMT", "version": "v1" } ]
2023-11-30
[ [ "Charron-Bost", "Bernadette", "" ], [ "Lambein-Monette", "Patrick", "" ] ]
Distributed function computation is the problem, for a networked system of $n$ autonomous agents, to collectively compute the value $f(v_1, \ldots, v_n)$ of some input values, each initially private to one agent in the network. Here, we study and organize results pertaining to distributed function computation in anonymous networks, both for the static and the dynamic case, under a communication model of directed and synchronous message exchanges, but with varying assumptions in the degree of awareness or control that a single agent has over its outneighbors. Our main argument is three-fold. First, in the "blind broadcast" model, where in each round an agent merely casts out a unique message without any knowledge or control over its addressees, the computable functions are those that only depend on the set of the input values, but not on their multiplicities or relative frequencies in the input. Second, in contrast, when we assume either that a) in each round, the agents know how many outneighbors they have; b) all communications links in the network are bidirectional; or c) the agents may address each of their outneighbors individually, then the set of computable functions grows to contain all functions that depend on the relative frequencies of each value in the input - such as the average - but not on their multiplicities - thus, not the sum. Third, however, if one or several agents are distinguished as leaders, or if the cardinality of the network is known, then under any of the above three assumptions it becomes possible to recover the complete multiset of the input values, and thus compute any function of the distributed input as long as it is invariant under permutation of its arguments. In the case of dynamic networks, we also discuss the impact of multiple connectivity assumptions.
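The abstract's first claim, that under blind broadcast the computable functions depend only on the set of input values and not on their multiplicities, can be illustrated with a toy synchronous simulation. This is a hedged sketch, not code from the paper: the function name, the directed 3-cycle, and the round count are all hypothetical choices for illustration.

```python
def blind_broadcast(edges, inputs, rounds):
    """Simulate synchronous blind broadcast on a directed graph.

    Each round, every agent casts its currently known set of input
    values to its out-neighbours, without any knowledge or control
    over who (or how many) those addressees are.
    """
    # Each agent initially knows only its own private input value.
    known = {agent: {value} for agent, value in inputs.items()}
    for _ in range(rounds):
        # Collect the messages arriving at each agent this round.
        incoming = {agent: set() for agent in inputs}
        for src, dst in edges:
            incoming[dst] |= known[src]
        # Merge received value-sets into each agent's knowledge.
        for agent in inputs:
            known[agent] |= incoming[agent]
    return known
```

On a directed 3-cycle with inputs 1, 1, 2, every agent converges to the set {1, 2}: the two agents holding the value 1 contribute indistinguishable messages, so the multiplicity of 1 is irrecoverable, exactly as the abstract states for functions like the sum or the average.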
1902.02164
Md Mehedi Hassan Onik
Nasr Al-Zaben, Md Mehedi Hassan Onik, Chul-Soo Kim, Jinhong Yang
Communication Interface Identifier Protocol (CIIP): An Energy Efficient Protocol for smaller IoT Sensor
Korea Institute of Information and Telecommunication Technology, 2018 Spring General conference, Kongju, South Korea
2018, Vol.1, Issue No. 1
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today we can easily use technologies such as switched Ethernet, TCP/IP, high-speed wide area networks, and high-performance low-cost computers. However, the protocols designed for such communication are not energy efficient. Smart home, smart grid, blockchain, and Internet of Things (IoT) technologies are arriving rapidly, and their growing communication demands call for energy-efficient networking. Controllers and network equipment consume a large amount of energy, and layer-to-layer communication makes our communication methods more complex and costly. In this work, we propose an architecture that simplifies the communication of sensor devices with the outside world. Our proposed system removes certain layers from the TCP/IP stack and uses a Communication Interface Identifier Protocol (CIIP) suited to smaller IoT sensors.
[ { "created": "Fri, 18 Jan 2019 14:47:07 GMT", "version": "v1" } ]
2019-02-07
[ [ "Al-Zaben", "Nasr", "" ], [ "Onik", "Md Mehedi Hassan", "" ], [ "Kim", "Chul-Soo", "" ], [ "Yang", "Jinhong", "" ] ]
Today we can easily use technologies such as switched Ethernet, TCP/IP, high-speed wide area networks, and high-performance low-cost computers. However, the protocols designed for such communication are not energy efficient. Smart home, smart grid, blockchain, and Internet of Things (IoT) technologies are arriving rapidly, and their growing communication demands call for energy-efficient networking. Controllers and network equipment consume a large amount of energy, and layer-to-layer communication makes our communication methods more complex and costly. In this work, we propose an architecture that simplifies the communication of sensor devices with the outside world. Our proposed system removes certain layers from the TCP/IP stack and uses a Communication Interface Identifier Protocol (CIIP) suited to smaller IoT sensors.
1512.05486
Serhii Dyshko
Dyshko Serhii
When the extension property does not hold
11 pages
null
10.1142/S0219498817500980
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A complete extension theorem for linear codes over a module alphabet and the symmetrized weight composition is proved. It is shown that an extension property with respect to an arbitrary weight function does not hold for module alphabets with a noncyclic socle.
[ { "created": "Thu, 17 Dec 2015 07:54:30 GMT", "version": "v1" } ]
2016-07-19
[ [ "Serhii", "Dyshko", "" ] ]
A complete extension theorem for linear codes over a module alphabet and the symmetrized weight composition is proved. It is shown that an extension property with respect to an arbitrary weight function does not hold for module alphabets with a noncyclic socle.
2402.09745
Weike Fang
Xinyue Liu, Zihe Song, Weike Fang, Wei Yang, Weihang Wang
WEFix: Intelligent Automatic Generation of Explicit Waits for Efficient Web End-to-End Flaky Tests
8 pages. Accepted for publication in the proceedings of the ACM Web Conference 2024 (WWW 24)
null
10.1145/3589334.3645628
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Web end-to-end (e2e) testing evaluates the workflow of a web application. It simulates real-world user scenarios to ensure the application flows behave as expected. However, web e2e tests are notorious for being flaky, i.e., the tests can produce inconsistent results despite no changes to the code. One common type of flakiness is caused by nondeterministic execution orders between the test code and the client-side code under test. In particular, UI-based flakiness emerges as a notably prevalent and challenging issue to fix because the test code has limited knowledge about the client-side code execution. In this paper, we propose WEFix, a technique that can automatically generate fix code for UI-based flakiness in web e2e testing. The core of our approach is to leverage browser UI changes to predict the client-side code execution and generate proper wait oracles. We evaluate the effectiveness and efficiency of WEFix against 122 web e2e flaky tests from seven popular real-world projects. Our results show that WEFix dramatically reduces the overhead (from 3.7$\times$ to 1.25$\times$) while achieving a high correctness (98%).
[ { "created": "Thu, 15 Feb 2024 06:51:53 GMT", "version": "v1" } ]
2024-05-21
[ [ "Liu", "Xinyue", "" ], [ "Song", "Zihe", "" ], [ "Fang", "Weike", "" ], [ "Yang", "Wei", "" ], [ "Wang", "Weihang", "" ] ]
Web end-to-end (e2e) testing evaluates the workflow of a web application. It simulates real-world user scenarios to ensure the application flows behave as expected. However, web e2e tests are notorious for being flaky, i.e., the tests can produce inconsistent results despite no changes to the code. One common type of flakiness is caused by nondeterministic execution orders between the test code and the client-side code under test. In particular, UI-based flakiness emerges as a notably prevalent and challenging issue to fix because the test code has limited knowledge about the client-side code execution. In this paper, we propose WEFix, a technique that can automatically generate fix code for UI-based flakiness in web e2e testing. The core of our approach is to leverage browser UI changes to predict the client-side code execution and generate proper wait oracles. We evaluate the effectiveness and efficiency of WEFix against 122 web e2e flaky tests from seven popular real-world projects. Our results show that WEFix dramatically reduces the overhead (from 3.7$\times$ to 1.25$\times$) while achieving a high correctness (98%).
2302.07344
Levi Cai
Levi Cai and Nathan E. McGuire and Roger Hanlon and T. Aran Mooney and Yogesh Girdhar
Semi-Supervised Visual Tracking of Marine Animals using Autonomous Underwater Vehicles
To appear in IJCV SI: Animal Tracking
Cai, Levi, Nathan E. McGuire, Roger Hanlon, T. Aran Mooney, and Yogesh Girdhar. "Semi-supervised Visual Tracking of Marine Animals Using Autonomous Underwater Vehicles." International Journal of Computer Vision (2023): 1-22
10.1007/s11263-023-01762-5
null
cs.CV cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In-situ visual observations of marine organisms are crucial to developing behavioural understandings and their relations to their surrounding ecosystem. Typically, these observations are collected via divers, tags, and remotely-operated or human-piloted vehicles. Recently, however, autonomous underwater vehicles equipped with cameras and embedded computers with GPU capabilities are being developed for a variety of applications, and in particular, can be used to supplement these existing data collection mechanisms where human operation or tags are more difficult. Existing approaches have focused on using fully-supervised tracking methods, but labelled data for many underwater species are severely lacking. Semi-supervised trackers may offer alternative tracking solutions because they require less data than fully-supervised counterparts. However, because realistic underwater tracking datasets do not exist, the performance of semi-supervised tracking algorithms in the marine domain is not well understood. To better evaluate their performance and utility, in this paper we provide (1) a novel dataset specific to marine animals located at http://warp.whoi.edu/vmat/, (2) an evaluation of state-of-the-art semi-supervised algorithms in the context of underwater animal tracking, and (3) an evaluation of real-world performance through demonstrations using a semi-supervised algorithm on-board an autonomous underwater vehicle to track marine animals in the wild.
[ { "created": "Tue, 14 Feb 2023 21:08:52 GMT", "version": "v1" } ]
2023-05-04
[ [ "Cai", "Levi", "" ], [ "McGuire", "Nathan E.", "" ], [ "Hanlon", "Roger", "" ], [ "Mooney", "T. Aran", "" ], [ "Girdhar", "Yogesh", "" ] ]
In-situ visual observations of marine organisms are crucial to developing behavioural understandings and their relations to their surrounding ecosystem. Typically, these observations are collected via divers, tags, and remotely-operated or human-piloted vehicles. Recently, however, autonomous underwater vehicles equipped with cameras and embedded computers with GPU capabilities are being developed for a variety of applications, and in particular, can be used to supplement these existing data collection mechanisms where human operation or tags are more difficult. Existing approaches have focused on using fully-supervised tracking methods, but labelled data for many underwater species are severely lacking. Semi-supervised trackers may offer alternative tracking solutions because they require less data than fully-supervised counterparts. However, because realistic underwater tracking datasets do not exist, the performance of semi-supervised tracking algorithms in the marine domain is not well understood. To better evaluate their performance and utility, in this paper we provide (1) a novel dataset specific to marine animals located at http://warp.whoi.edu/vmat/, (2) an evaluation of state-of-the-art semi-supervised algorithms in the context of underwater animal tracking, and (3) an evaluation of real-world performance through demonstrations using a semi-supervised algorithm on-board an autonomous underwater vehicle to track marine animals in the wild.
2012.03519
Lin Song
Lin Song, Yanwei Li, Zhengkai Jiang, Zeming Li, Hongbin Sun, Jian Sun, Nanning Zheng
Fine-Grained Dynamic Head for Object Detection
Accepted by NeurIPS-2020
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Feature Pyramid Network (FPN) presents a remarkable approach to alleviate the scale variance in object representation by performing instance-level assignments. Nevertheless, this strategy ignores the distinct characteristics of different sub-regions in an instance. To this end, we propose a fine-grained dynamic head to conditionally select a pixel-level combination of FPN features from different scales for each instance, which further releases the ability of multi-scale feature representation. Moreover, we design a spatial gate with the new activation function to reduce computational complexity dramatically through spatially sparse convolutions. Extensive experiments demonstrate the effectiveness and efficiency of the proposed method on several state-of-the-art detection benchmarks. Code is available at https://github.com/StevenGrove/DynamicHead.
[ { "created": "Mon, 7 Dec 2020 08:16:32 GMT", "version": "v1" } ]
2020-12-08
[ [ "Song", "Lin", "" ], [ "Li", "Yanwei", "" ], [ "Jiang", "Zhengkai", "" ], [ "Li", "Zeming", "" ], [ "Sun", "Hongbin", "" ], [ "Sun", "Jian", "" ], [ "Zheng", "Nanning", "" ] ]
The Feature Pyramid Network (FPN) presents a remarkable approach to alleviate the scale variance in object representation by performing instance-level assignments. Nevertheless, this strategy ignores the distinct characteristics of different sub-regions in an instance. To this end, we propose a fine-grained dynamic head to conditionally select a pixel-level combination of FPN features from different scales for each instance, which further releases the ability of multi-scale feature representation. Moreover, we design a spatial gate with the new activation function to reduce computational complexity dramatically through spatially sparse convolutions. Extensive experiments demonstrate the effectiveness and efficiency of the proposed method on several state-of-the-art detection benchmarks. Code is available at https://github.com/StevenGrove/DynamicHead.
1402.2710
Rodrigo de Lamare
L. Wang, R. C. de Lamare and M. Haardt
Direction Finding Algorithms with Joint Iterative Subspace Optimization
11 figures, 4 tables. IEEE Transactions on Aerospace and Electronic Systems, 2014
null
10.1109/TAES.2014.120395
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a reduced-rank scheme with joint iterative optimization is presented for direction of arrival estimation. A rank-reduction matrix and an auxiliary reduced-rank parameter vector are jointly optimized to calculate the output power with respect to each scanning angle. Subspace algorithms to estimate the rank-reduction matrix and the auxiliary vector are proposed. Simulations are performed to show that the proposed algorithms achieve an enhanced performance over existing algorithms in the studied scenarios.
[ { "created": "Wed, 12 Feb 2014 01:13:12 GMT", "version": "v1" } ]
2016-11-17
[ [ "Wang", "L.", "" ], [ "de Lamare", "R. C.", "" ], [ "Haardt", "M.", "" ] ]
In this paper, a reduced-rank scheme with joint iterative optimization is presented for direction of arrival estimation. A rank-reduction matrix and an auxiliary reduced-rank parameter vector are jointly optimized to calculate the output power with respect to each scanning angle. Subspace algorithms to estimate the rank-reduction matrix and the auxiliary vector are proposed. Simulations are performed to show that the proposed algorithms achieve an enhanced performance over existing algorithms in the studied scenarios.
1811.02508
Jonathan Le Roux
Jonathan Le Roux, Scott Wisdom, Hakan Erdogan, John R. Hershey
SDR - half-baked or well done?
null
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In speech enhancement and source separation, signal-to-noise ratio is a ubiquitous objective measure of denoising/separation quality. A decade ago, the BSS_eval toolkit was developed to give researchers worldwide a way to evaluate the quality of their algorithms in a simple, fair, and hopefully insightful way: it attempted to account for channel variations, and to not only evaluate the total distortion in the estimated signal but also split it in terms of various factors such as remaining interference, newly added artifacts, and channel errors. In recent years, hundreds of papers have been relying on this toolkit to evaluate their proposed methods and compare them to previous works, often arguing that differences on the order of 0.1 dB proved the effectiveness of a method over others. We argue here that the signal-to-distortion ratio (SDR) implemented in the BSS_eval toolkit has generally been improperly used and abused, especially in the case of single-channel separation, resulting in misleading results. We propose to use a slightly modified definition, resulting in a simpler, more robust measure, called scale-invariant SDR (SI-SDR). We present various examples of critical failure of the original SDR that SI-SDR overcomes.
[ { "created": "Tue, 6 Nov 2018 17:20:05 GMT", "version": "v1" } ]
2018-11-07
[ [ "Roux", "Jonathan Le", "" ], [ "Wisdom", "Scott", "" ], [ "Erdogan", "Hakan", "" ], [ "Hershey", "John R.", "" ] ]
In speech enhancement and source separation, signal-to-noise ratio is a ubiquitous objective measure of denoising/separation quality. A decade ago, the BSS_eval toolkit was developed to give researchers worldwide a way to evaluate the quality of their algorithms in a simple, fair, and hopefully insightful way: it attempted to account for channel variations, and to not only evaluate the total distortion in the estimated signal but also split it in terms of various factors such as remaining interference, newly added artifacts, and channel errors. In recent years, hundreds of papers have been relying on this toolkit to evaluate their proposed methods and compare them to previous works, often arguing that differences on the order of 0.1 dB proved the effectiveness of a method over others. We argue here that the signal-to-distortion ratio (SDR) implemented in the BSS_eval toolkit has generally been improperly used and abused, especially in the case of single-channel separation, resulting in misleading results. We propose to use a slightly modified definition, resulting in a simpler, more robust measure, called scale-invariant SDR (SI-SDR). We present various examples of critical failure of the original SDR that SI-SDR overcomes.
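The scale-invariant SDR this abstract proposes has a short closed form: project the estimate onto the reference to obtain an optimally scaled target component, then take the energy ratio of target to residual in dB. A minimal stdlib-only Python sketch follows; the function name and sample signals are illustrative, not taken from the paper's code.

```python
import math

def si_sdr(reference, estimate):
    """Scale-invariant SDR in dB between a reference and an estimate.

    The reference is rescaled by the least-squares factor alpha so that
    any global gain on the estimate cancels out of the ratio.
    """
    alpha = (sum(e * r for e, r in zip(estimate, reference))
             / sum(r * r for r in reference))
    target = [alpha * r for r in reference]          # scaled target component
    error = [e - t for e, t in zip(estimate, target)]  # residual distortion
    return 10 * math.log10(sum(t * t for t in target)
                           / sum(x * x for x in error))
```

Because the optimal scaling factor absorbs any global gain on the estimate, rescaling the estimate leaves the score unchanged, which is precisely the scale invariance the abstract argues a robust measure should have.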
2211.10105
Bicheng Guo
Bicheng Guo, Shuxuan Guo, Miaojing Shi, Peng Chen, Shibo He, Jiming Chen, Kaicheng Yu
$\alpha$ DARTS Once More: Enhancing Differentiable Architecture Search by Masked Image Modeling
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differentiable architecture search (DARTS) has been a mainstream direction in automatic machine learning. Since the discovery that original DARTS will inevitably converge to poor architectures, recent works alleviate this by either designing rule-based architecture selection techniques or incorporating complex regularization techniques, abandoning the simplicity of the original DARTS that selects architectures based on the largest parametric value, namely $\alpha$. Moreover, we find that all the previous attempts only rely on classification labels, hence learning only single modal information and limiting the representation power of the shared network. To this end, we propose to additionally inject semantic information by formulating a patch recovery approach. Specifically, we exploit the recently trending masked image modeling and do not abandon the guidance from the downstream tasks during the search phase. Our method surpasses all previous DARTS variants and achieves state-of-the-art results on CIFAR-10, CIFAR-100, and ImageNet without complex manually designed strategies.
[ { "created": "Fri, 18 Nov 2022 09:07:19 GMT", "version": "v1" } ]
2022-11-21
[ [ "Guo", "Bicheng", "" ], [ "Guo", "Shuxuan", "" ], [ "Shi", "Miaojing", "" ], [ "Chen", "Peng", "" ], [ "He", "Shibo", "" ], [ "Chen", "Jiming", "" ], [ "Yu", "Kaicheng", "" ] ]
Differentiable architecture search (DARTS) has been a mainstream direction in automatic machine learning. Since the discovery that original DARTS will inevitably converge to poor architectures, recent works alleviate this by either designing rule-based architecture selection techniques or incorporating complex regularization techniques, abandoning the simplicity of the original DARTS that selects architectures based on the largest parametric value, namely $\alpha$. Moreover, we find that all the previous attempts only rely on classification labels, hence learning only single modal information and limiting the representation power of the shared network. To this end, we propose to additionally inject semantic information by formulating a patch recovery approach. Specifically, we exploit the recently trending masked image modeling and do not abandon the guidance from the downstream tasks during the search phase. Our method surpasses all previous DARTS variants and achieves state-of-the-art results on CIFAR-10, CIFAR-100, and ImageNet without complex manually designed strategies.
2404.15386
Huy Cuong Truong
Andres Tello, Huy Truong, Alexander Lazovik and Victoria Degeler
Large-Scale Multipurpose Benchmark Datasets For Assessing Data-Driven Deep Learning Approaches For Water Distribution Networks
Presented at WDSA CCWI, Ferrara, Italy, July 2024
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Currently, the number of common benchmark datasets that researchers can use straight away for assessing data-driven deep learning approaches is very limited. Most studies provide data as configuration files. It is still up to each practitioner to follow a particular data generation method and run computationally intensive simulations to obtain usable data for model training and evaluation. In this work, we provide a collection of datasets that includes several small and medium-sized publicly available Water Distribution Networks (WDNs), including Anytown, Modena, Balerma, C-Town, D-Town, L-Town, Ky1, Ky6, Ky8, and Ky13. In total, 1,394,400 hours of WDN data operating under normal conditions are made available to the community.
[ { "created": "Tue, 23 Apr 2024 11:58:40 GMT", "version": "v1" } ]
2024-04-25
[ [ "Tello", "Andres", "" ], [ "Truong", "Huy", "" ], [ "Lazovik", "Alexander", "" ], [ "Degeler", "Victoria", "" ] ]
Currently, the number of common benchmark datasets that researchers can use straight away for assessing data-driven deep learning approaches is very limited. Most studies provide data as configuration files. It is still up to each practitioner to follow a particular data generation method and run computationally intensive simulations to obtain usable data for model training and evaluation. In this work, we provide a collection of datasets that includes several small and medium-sized publicly available Water Distribution Networks (WDNs), including Anytown, Modena, Balerma, C-Town, D-Town, L-Town, Ky1, Ky6, Ky8, and Ky13. In total, 1,394,400 hours of WDN data operating under normal conditions are made available to the community.
1708.05811
Adi Akavia
Adi Akavia, Dan Feldman, Hayim Shaul
Secure Search on the Cloud via Coresets and Sketches
25 pages, 2 figures
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
\emph{Secure Search} is the problem of retrieving from a database table (or any unsorted array) the records matching specified attributes, as in SQL SELECT queries, but where the database and the query are encrypted. Secure search has been the leading example for practical applications of Fully Homomorphic Encryption (FHE) starting in Gentry's seminal work; however, to the best of our knowledge all state-of-the-art secure search algorithms to date are realized by a polynomial of degree $\Omega(m)$ for $m$ the number of records, which is typically too slow in practice even for moderate size $m$. In this work we present the first algorithm for secure search that is realized by a polynomial of degree polynomial in $\log m$. We implemented our algorithm in an open source library based on the HElib implementation of the Brakerski-Gentry-Vaikuntanathan FHE scheme, and ran experiments on Amazon's EC2 cloud. Our experiments show that we can retrieve the first match in a database of millions of entries in less than an hour using a single machine; the time reduced almost linearly with the number of machines. Our result utilizes a new paradigm of employing coresets and sketches, which are modern data summarization techniques common in computational geometry and machine learning, for efficiency enhancement for homomorphic encryption. As a central tool we design a novel sketch that returns the first positive entry in a (not necessarily sparse) array; this sketch may be of independent interest.
[ { "created": "Sat, 19 Aug 2017 06:36:11 GMT", "version": "v1" } ]
2017-08-22
[ [ "Akavia", "Adi", "" ], [ "Feldman", "Dan", "" ], [ "Shaul", "Hayim", "" ] ]
\emph{Secure Search} is the problem of retrieving from a database table (or any unsorted array) the records matching specified attributes, as in SQL SELECT queries, but where the database and the query are encrypted. Secure search has been the leading example for practical applications of Fully Homomorphic Encryption (FHE) starting in Gentry's seminal work; however, to the best of our knowledge all state-of-the-art secure search algorithms to date are realized by a polynomial of degree $\Omega(m)$ for $m$ the number of records, which is typically too slow in practice even for moderate size $m$. In this work we present the first algorithm for secure search that is realized by a polynomial of degree polynomial in $\log m$. We implemented our algorithm in an open source library based on the HElib implementation of the Brakerski-Gentry-Vaikuntanathan FHE scheme, and ran experiments on Amazon's EC2 cloud. Our experiments show that we can retrieve the first match in a database of millions of entries in less than an hour using a single machine; the time decreased almost linearly with the number of machines. Our result utilizes a new paradigm of employing coresets and sketches, which are modern data summarization techniques common in computational geometry and machine learning, for efficiency enhancement for homomorphic encryption. As a central tool we design a novel sketch that returns the first positive entry in a (not necessarily sparse) array; this sketch may be of independent interest.
1705.00583
Thomas Strasser
Arjen A. van der Meer and Peter Palensky and Kai Heussen and Daniel Esteban Morales Bondy and Oliver Gehrke and Cornelius Steinbrink and Marita Blank and Sebastian Lehnhoff and Edmund Widl and Cyndi Moyo and Thomas I. Strasser and Van Hoa Nguyen and Nabil Akroud and Mazheruddin H. Syed and Abdullah Emhemed and Sebastian Rohjans and Ron Brandl and Ata M. Khavari
Cyber-Physical Energy Systems Modeling, Test Specification, and Co-Simulation Based Testing
2017 Workshop on Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES)
null
10.1109/MSCPES.2017.8064528
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The gradual deployment of intelligent and coordinated devices in the electrical power system needs careful investigation of the interactions between the various domains involved. Especially due to the coupling between ICT and power systems, a holistic approach to testing and validation is required. Taking existing (quasi-) standardised smart grid system and test specification methods as a starting point, we are developing a holistic testing and validation approach that allows a very flexible way of assessing the system level aspects by various types of experiments (including virtual, real, and mixed lab settings). This paper describes the formal holistic test case specification method and applies it to a particular co-simulation experimental setup. The various building blocks of such a simulation (i.e., FMI, mosaik, domain-specific simulation federates) are covered in more detail. The presented method addresses most modeling and specification challenges in cyber-physical energy systems and is extensible for future additions such as uncertainty quantification.
[ { "created": "Mon, 1 May 2017 16:32:45 GMT", "version": "v1" } ]
2018-12-27
[ [ "van der Meer", "Arjen A.", "" ], [ "Palensky", "Peter", "" ], [ "Heussen", "Kai", "" ], [ "Bondy", "Daniel Esteban Morales", "" ], [ "Gehrke", "Oliver", "" ], [ "Steinbrink", "Cornelius", "" ], [ "Blank", "Marita", "" ], [ "Lehnhoff", "Sebastian", "" ], [ "Widl", "Edmund", "" ], [ "Moyo", "Cyndi", "" ], [ "Strasser", "Thomas I.", "" ], [ "Nguyen", "Van Hoa", "" ], [ "Akroud", "Nabil", "" ], [ "Syed", "Mazheruddin H.", "" ], [ "Emhemed", "Abdullah", "" ], [ "Rohjans", "Sebastian", "" ], [ "Brandl", "Ron", "" ], [ "Khavari", "Ata M.", "" ] ]
The gradual deployment of intelligent and coordinated devices in the electrical power system needs careful investigation of the interactions between the various domains involved. Especially due to the coupling between ICT and power systems, a holistic approach to testing and validation is required. Taking existing (quasi-) standardised smart grid system and test specification methods as a starting point, we are developing a holistic testing and validation approach that allows a very flexible way of assessing the system level aspects by various types of experiments (including virtual, real, and mixed lab settings). This paper describes the formal holistic test case specification method and applies it to a particular co-simulation experimental setup. The various building blocks of such a simulation (i.e., FMI, mosaik, domain-specific simulation federates) are covered in more detail. The presented method addresses most modeling and specification challenges in cyber-physical energy systems and is extensible for future additions such as uncertainty quantification.
2210.01597
Eleonora Giunchiglia
Eleonora Giunchiglia and Mihaela C\u{a}t\u{a}lina Stoian and Salman Khan and Fabio Cuzzolin and Thomas Lukasiewicz
ROAD-R: The Autonomous Driving Dataset with Logical Requirements
null
null
10.1007/s10994-023-06322-z
null
cs.LG cs.AI cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural networks have proven to be very powerful at computer vision tasks. However, they often exhibit unexpected behaviours, violating known requirements expressing background knowledge. This calls for models (i) able to learn from the requirements, and (ii) guaranteed to be compliant with the requirements themselves. Unfortunately, the development of such models is hampered by the lack of datasets equipped with formally specified requirements. In this paper, we introduce the ROad event Awareness Dataset with logical Requirements (ROAD-R), the first publicly available dataset for autonomous driving with requirements expressed as logical constraints. Given ROAD-R, we show that current state-of-the-art models often violate its logical constraints, and that it is possible to exploit them to create models that (i) have a better performance, and (ii) are guaranteed to be compliant with the requirements themselves.
[ { "created": "Tue, 4 Oct 2022 13:22:19 GMT", "version": "v1" }, { "created": "Wed, 5 Oct 2022 11:42:42 GMT", "version": "v2" } ]
2023-06-21
[ [ "Giunchiglia", "Eleonora", "" ], [ "Stoian", "Mihaela Cătălina", "" ], [ "Khan", "Salman", "" ], [ "Cuzzolin", "Fabio", "" ], [ "Lukasiewicz", "Thomas", "" ] ]
Neural networks have proven to be very powerful at computer vision tasks. However, they often exhibit unexpected behaviours, violating known requirements expressing background knowledge. This calls for models (i) able to learn from the requirements, and (ii) guaranteed to be compliant with the requirements themselves. Unfortunately, the development of such models is hampered by the lack of datasets equipped with formally specified requirements. In this paper, we introduce the ROad event Awareness Dataset with logical Requirements (ROAD-R), the first publicly available dataset for autonomous driving with requirements expressed as logical constraints. Given ROAD-R, we show that current state-of-the-art models often violate its logical constraints, and that it is possible to exploit them to create models that (i) have a better performance, and (ii) are guaranteed to be compliant with the requirements themselves.
2211.16712
Nan Zhang
Hao Zhang, Nan Zhang, Ruixin Zhang, Lei Shen, Yingyi Zhang, and Meng Liu
Coordinating Cross-modal Distillation for Molecular Property Prediction
null
null
null
null
cs.LG q-bio.QM
http://creativecommons.org/licenses/by/4.0/
In recent years, molecular graph representation learning (GRL) has drawn much more attention in molecular property prediction (MPP) problems. The existing graph methods have demonstrated that 3D geometric information is significant for better performance in MPP. However, accurate 3D structures are often costly and time-consuming to obtain, limiting the large-scale application of GRL. It is an intuitive solution to train with 3D to 2D knowledge distillation and predict with only 2D inputs. But some challenging problems remain open for 3D to 2D distillation. One is that the 3D view is quite distinct from the 2D view, and the other is that the gradient magnitudes of atoms in distillation are discrepant and unstable due to the variable molecular size. To address these challenging problems, we propose a distillation framework that contains global molecular distillation and local atom distillation. We also provide a theoretical insight to justify how to coordinate atom and molecular information, which tackles the drawback of variable molecular size for atom information distillation. Experimental results on two popular molecular datasets demonstrate that our proposed model achieves superior performance over other methods. Specifically, on the largest MPP dataset, PCQM4Mv2, which serves as an "ImageNet Large Scale Visual Recognition Challenge" in the field of graph ML, the proposed method achieved a 6.9% improvement compared with the best previous works. We also obtained fourth place with an MAE of 0.0734 on the test-challenge set of the OGB-LSC 2022 Graph Regression Task. We will release the code soon.
[ { "created": "Wed, 30 Nov 2022 03:19:34 GMT", "version": "v1" } ]
2022-12-01
[ [ "Zhang", "Hao", "" ], [ "Zhang", "Nan", "" ], [ "Zhang", "Ruixin", "" ], [ "Shen", "Lei", "" ], [ "Zhang", "Yingyi", "" ], [ "Liu", "Meng", "" ] ]
In recent years, molecular graph representation learning (GRL) has drawn much more attention in molecular property prediction (MPP) problems. The existing graph methods have demonstrated that 3D geometric information is significant for better performance in MPP. However, accurate 3D structures are often costly and time-consuming to obtain, limiting the large-scale application of GRL. It is an intuitive solution to train with 3D to 2D knowledge distillation and predict with only 2D inputs. But some challenging problems remain open for 3D to 2D distillation. One is that the 3D view is quite distinct from the 2D view, and the other is that the gradient magnitudes of atoms in distillation are discrepant and unstable due to the variable molecular size. To address these challenging problems, we propose a distillation framework that contains global molecular distillation and local atom distillation. We also provide a theoretical insight to justify how to coordinate atom and molecular information, which tackles the drawback of variable molecular size for atom information distillation. Experimental results on two popular molecular datasets demonstrate that our proposed model achieves superior performance over other methods. Specifically, on the largest MPP dataset, PCQM4Mv2, which serves as an "ImageNet Large Scale Visual Recognition Challenge" in the field of graph ML, the proposed method achieved a 6.9% improvement compared with the best previous works. We also obtained fourth place with an MAE of 0.0734 on the test-challenge set of the OGB-LSC 2022 Graph Regression Task. We will release the code soon.
1402.3067
Tobias Fritz
John C. Baez and Tobias Fritz
A Bayesian Characterization of Relative Entropy
32 pages, minor revision
Theory and Applications of Categories, Vol. 29 No. 16 (2014), 421-456
null
null
cs.IT math-ph math.IT math.MP math.PR quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a new characterization of relative entropy, also known as the Kullback-Leibler divergence. We use a number of interesting categories related to probability theory. In particular, we consider a category FinStat where an object is a finite set equipped with a probability distribution, while a morphism is a measure-preserving function $f: X \to Y$ together with a stochastic right inverse $s: Y \to X$. The function $f$ can be thought of as a measurement process, while $s$ provides a hypothesis about the state of the measured system given the result of a measurement. Given this data we can define the entropy of the probability distribution on $X$ relative to the "prior" given by pushing the probability distribution on $Y$ forwards along $s$. We say that $s$ is "optimal" if these distributions agree. We show that any convex linear, lower semicontinuous functor from FinStat to the additive monoid $[0,\infty]$ which vanishes when $s$ is optimal must be a scalar multiple of this relative entropy. Our proof is independent of all earlier characterizations, but inspired by the work of Petz.
[ { "created": "Thu, 13 Feb 2014 09:02:27 GMT", "version": "v1" }, { "created": "Fri, 11 Jul 2014 12:24:57 GMT", "version": "v2" } ]
2017-08-22
[ [ "Baez", "John C.", "" ], [ "Fritz", "Tobias", "" ] ]
We give a new characterization of relative entropy, also known as the Kullback-Leibler divergence. We use a number of interesting categories related to probability theory. In particular, we consider a category FinStat where an object is a finite set equipped with a probability distribution, while a morphism is a measure-preserving function $f: X \to Y$ together with a stochastic right inverse $s: Y \to X$. The function $f$ can be thought of as a measurement process, while $s$ provides a hypothesis about the state of the measured system given the result of a measurement. Given this data we can define the entropy of the probability distribution on $X$ relative to the "prior" given by pushing the probability distribution on $Y$ forwards along $s$. We say that $s$ is "optimal" if these distributions agree. We show that any convex linear, lower semicontinuous functor from FinStat to the additive monoid $[0,\infty]$ which vanishes when $s$ is optimal must be a scalar multiple of this relative entropy. Our proof is independent of all earlier characterizations, but inspired by the work of Petz.
1501.00158
Zhengli Xing
Zhengli Xing, Jie Zhou, Jiangfeng Ye, Jun Yan, Jifeng Zou, Lin Zou, Qun Wan
Automatic Modulation Recognition of PSK Signals with Sub-Nyquist Sampling Based on High Order Statistics
7 pages, 8 figures, submitted to IEEE International Symposium on Signal Processing and Information Technology
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The sampling rate required in the Nth Power Nonlinear Transformation (NPT) method is typically much greater than the Nyquist rate, which places a heavy burden on the Analog-to-Digital Converter (ADC). Taking advantage of the sparse property of PSK signals' spectrum under NPT, we develop the NPT method for PSK signals with sub-Nyquist rate samples. In this paper, combining the NPT method with Compressive Sensing (CS) theory, we present frequency spectrum reconstruction of the Nth power nonlinear transformation of PSK signals, which can be further used for AMR and rough estimations of the unknown carrier frequency and symbol rate.
[ { "created": "Wed, 31 Dec 2014 15:54:32 GMT", "version": "v1" } ]
2015-01-05
[ [ "Xing", "Zhengli", "" ], [ "Zhou", "Jie", "" ], [ "Ye", "Jiangfeng", "" ], [ "Yan", "Jun", "" ], [ "Zou", "Jifeng", "" ], [ "Zou", "Lin", "" ], [ "Wan", "Qun", "" ] ]
The sampling rate required in the Nth Power Nonlinear Transformation (NPT) method is typically much greater than the Nyquist rate, which places a heavy burden on the Analog-to-Digital Converter (ADC). Taking advantage of the sparse property of PSK signals' spectrum under NPT, we develop the NPT method for PSK signals with sub-Nyquist rate samples. In this paper, combining the NPT method with Compressive Sensing (CS) theory, we present frequency spectrum reconstruction of the Nth power nonlinear transformation of PSK signals, which can be further used for AMR and rough estimations of the unknown carrier frequency and symbol rate.
2111.02058
Dawei Dai
Dawei Dai and Yutang Li and Huanan Bao and Sy Xia and Guoyin Wang and Xiaoli Ma
Rethinking the Image Feature Biases Exhibited by Deep CNN Models
15 pages, 15 figures
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, convolutional neural networks (CNNs) have been applied successfully in many fields. However, such deep neural models are still regarded as a black box in most tasks. One of the fundamental issues underlying this problem is understanding which features are most influential in image recognition tasks and how they are processed by CNNs. It is widely accepted that CNN models combine low-level features to form complex shapes until the object can be readily classified; however, several recent studies have argued that texture features are more important than other features. In this paper, we assume that the importance of certain features varies depending on specific tasks, i.e., specific tasks exhibit a feature bias. We designed two classification tasks based on human intuition to train deep neural models to identify anticipated biases. We devised experiments comprising many tasks to test these biases for the ResNet and DenseNet models. From the results, we conclude that (1) the combined effect of certain features is typically far more influential than any single feature; (2) in different tasks, neural models can exhibit different biases, that is, we can design a specific task to make a neural model biased toward a specific anticipated feature.
[ { "created": "Wed, 3 Nov 2021 08:04:06 GMT", "version": "v1" } ]
2021-11-04
[ [ "Dai", "Dawei", "" ], [ "Li", "Yutang", "" ], [ "Bao", "Huanan", "" ], [ "Xia", "Sy", "" ], [ "Wang", "Guoyin", "" ], [ "Ma", "Xiaoli", "" ] ]
In recent years, convolutional neural networks (CNNs) have been applied successfully in many fields. However, such deep neural models are still regarded as a black box in most tasks. One of the fundamental issues underlying this problem is understanding which features are most influential in image recognition tasks and how they are processed by CNNs. It is widely accepted that CNN models combine low-level features to form complex shapes until the object can be readily classified; however, several recent studies have argued that texture features are more important than other features. In this paper, we assume that the importance of certain features varies depending on specific tasks, i.e., specific tasks exhibit a feature bias. We designed two classification tasks based on human intuition to train deep neural models to identify anticipated biases. We devised experiments comprising many tasks to test these biases for the ResNet and DenseNet models. From the results, we conclude that (1) the combined effect of certain features is typically far more influential than any single feature; (2) in different tasks, neural models can exhibit different biases, that is, we can design a specific task to make a neural model biased toward a specific anticipated feature.
1802.02562
David Garc\'ia-Soriano
David Garc\'ia-Soriano and Francesco Bonchi
Fair-by-design matching
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Matching algorithms are used routinely to match donors to recipients for solid organ transplantation, for the assignment of medical residents to hospitals, record linkage in databases, scheduling jobs on machines, network switching, online advertising, and image recognition, among others. Although many optimal solutions may exist to a given matching problem, when the elements that shall or shall not be included in a solution correspond to individuals, it becomes of paramount importance that the solution be selected fairly. In this paper we study individual fairness in matching problems. Given that many maximum matchings may exist, each one satisfying a different set of individuals, the only way to guarantee fairness is through randomization. Hence we introduce the distributional maxmin fairness framework which provides, for any given input instance, the strongest guarantee possible simultaneously for all individuals in terms of satisfaction probability (the probability of being matched in the solution). Specifically, a probability distribution over feasible solutions is maxmin-fair if it is not possible to improve the satisfaction probability of any individual without decreasing it for some other individual which is no better off. In the special case of matchings in bipartite graphs, our framework is equivalent to the egalitarian mechanism of Bogomolnaia and Moulin. Our main contribution is a polynomial-time algorithm for fair matching building on techniques from minimum cuts, edge-coloring algorithms for regular bipartite graphs, and transversal theory. For bipartite graphs, our algorithm runs in $O((|V|^2 + |E||V|^{2/3}) \cdot (\log |V|)^2)$ expected time and scales to graphs with tens of millions of vertices and hundreds of millions of edges. To the best of our knowledge, this provides the first large-scale implementation of the egalitarian mechanism.
[ { "created": "Wed, 7 Feb 2018 18:44:43 GMT", "version": "v1" }, { "created": "Wed, 8 Jan 2020 11:09:47 GMT", "version": "v2" } ]
2020-01-09
[ [ "García-Soriano", "David", "" ], [ "Bonchi", "Francesco", "" ] ]
Matching algorithms are used routinely to match donors to recipients for solid organ transplantation, for the assignment of medical residents to hospitals, record linkage in databases, scheduling jobs on machines, network switching, online advertising, and image recognition, among others. Although many optimal solutions may exist to a given matching problem, when the elements that shall or shall not be included in a solution correspond to individuals, it becomes of paramount importance that the solution be selected fairly. In this paper we study individual fairness in matching problems. Given that many maximum matchings may exist, each one satisfying a different set of individuals, the only way to guarantee fairness is through randomization. Hence we introduce the distributional maxmin fairness framework which provides, for any given input instance, the strongest guarantee possible simultaneously for all individuals in terms of satisfaction probability (the probability of being matched in the solution). Specifically, a probability distribution over feasible solutions is maxmin-fair if it is not possible to improve the satisfaction probability of any individual without decreasing it for some other individual which is no better off. In the special case of matchings in bipartite graphs, our framework is equivalent to the egalitarian mechanism of Bogomolnaia and Moulin. Our main contribution is a polynomial-time algorithm for fair matching building on techniques from minimum cuts, edge-coloring algorithms for regular bipartite graphs, and transversal theory. For bipartite graphs, our algorithm runs in $O((|V|^2 + |E||V|^{2/3}) \cdot (\log |V|)^2)$ expected time and scales to graphs with tens of millions of vertices and hundreds of millions of edges. To the best of our knowledge, this provides the first large-scale implementation of the egalitarian mechanism.
1804.06870
Hao Tan
Hao Tan, Mohit Bansal
Object Ordering with Bidirectional Matchings for Visual Reasoning
NAACL 2018 (8 pages; added pointer-ordering examples)
null
null
null
cs.CL cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual reasoning with compositional natural language instructions, e.g., based on the newly-released Cornell Natural Language Visual Reasoning (NLVR) dataset, is a challenging task, where the model needs to have the ability to create an accurate mapping between the diverse phrases and the several objects placed in complex arrangements in the image. Further, this mapping needs to be processed to answer the question in the statement given the ordering and relationship of the objects across three similar images. In this paper, we propose a novel end-to-end neural model for the NLVR task, where we first use joint bidirectional attention to build a two-way conditioning between the visual information and the language phrases. Next, we use an RL-based pointer network to sort and process the varying number of unordered objects (so as to match the order of the statement phrases) in each of the three images and then pool over the three decisions. Our model achieves strong improvements (of 4-6% absolute) over the state-of-the-art on both the structured representation and raw image versions of the dataset.
[ { "created": "Wed, 18 Apr 2018 18:39:17 GMT", "version": "v1" }, { "created": "Thu, 6 Sep 2018 16:56:32 GMT", "version": "v2" } ]
2018-09-07
[ [ "Tan", "Hao", "" ], [ "Bansal", "Mohit", "" ] ]
Visual reasoning with compositional natural language instructions, e.g., based on the newly-released Cornell Natural Language Visual Reasoning (NLVR) dataset, is a challenging task, where the model needs to have the ability to create an accurate mapping between the diverse phrases and the several objects placed in complex arrangements in the image. Further, this mapping needs to be processed to answer the question in the statement given the ordering and relationship of the objects across three similar images. In this paper, we propose a novel end-to-end neural model for the NLVR task, where we first use joint bidirectional attention to build a two-way conditioning between the visual information and the language phrases. Next, we use an RL-based pointer network to sort and process the varying number of unordered objects (so as to match the order of the statement phrases) in each of the three images and then pool over the three decisions. Our model achieves strong improvements (of 4-6% absolute) over the state-of-the-art on both the structured representation and raw image versions of the dataset.
2110.05910
Nathaniel Tye
Nathaniel Tye, Stephan Hofmann, Phillip Stanley-Marbell
Bridging the Band Gap: What Device Physicists Need to Know About Machine Learning
null
null
null
null
cs.ET physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article surveys the landscape of semiconductor materials and devices research for the acceleration of machine learning (ML) algorithms. We observe a disconnect between the semiconductor and device physics and engineering communities, and the digital logic and computer hardware architecture communities. The article first provides an overview of the principles of computational complexity and fundamental physical limits to computing and their relation to physical systems. The article then provides an introduction to ML by presenting three key components of ML systems: representation, evaluation, and optimisation. The article then discusses and provides examples of the application of emerging technologies from the semiconductor and device physics domains as solutions to computational problems, alongside a brief overview of emerging devices for computing applications. The article then reviews the landscape of ML accelerators, comparing fixed-function and reprogrammable digital logic with novel devices such as memristors, resistive memories, magnetic memories, and probabilistic bits. We observe broadly lower performance of ML accelerators based on novel devices and materials when compared to those based on digital complementary metal-oxide semiconductor (CMOS) technology, particularly in the MNIST optical character recognition task, a common ML benchmark, and also highlight the lack of a trend of progress in approaches based on novel materials and devices. Lastly, the article proposes figures of merit for meaningful evaluation and comparison of different ML implementations in the hope of fostering a dialogue between the materials science, device physics, digital logic, and computer architecture communities by providing a common frame of reference for their work.
[ { "created": "Tue, 12 Oct 2021 11:43:19 GMT", "version": "v1" }, { "created": "Sat, 16 Oct 2021 19:49:09 GMT", "version": "v2" } ]
2021-10-19
[ [ "Tye", "Nathaniel", "" ], [ "Hofmann", "Stephan", "" ], [ "Stanley-Marbell", "Phillip", "" ] ]
This article surveys the landscape of semiconductor materials and devices research for the acceleration of machine learning (ML) algorithms. We observe a disconnect between the semiconductor and device physics and engineering communities, and the digital logic and computer hardware architecture communities. The article first provides an overview of the principles of computational complexity and fundamental physical limits to computing and their relation to physical systems. The article then provides an introduction to ML by presenting three key components of ML systems: representation, evaluation, and optimisation. The article then discusses and provides examples of the application of emerging technologies from the semiconductor and device physics domains as solutions to computational problems, alongside a brief overview of emerging devices for computing applications. The article then reviews the landscape of ML accelerators, comparing fixed-function and reprogrammable digital logic with novel devices such as memristors, resistive memories, magnetic memories, and probabilistic bits. We observe broadly lower performance of ML accelerators based on novel devices and materials when compared to those based on digital complementary metal-oxide semiconductor (CMOS) technology, particularly in the MNIST optical character recognition task, a common ML benchmark, and also highlight the lack of a trend of progress in approaches based on novel materials and devices. Lastly, the article proposes figures of merit for meaningful evaluation and comparison of different ML implementations in the hope of fostering a dialogue between the materials science, device physics, digital logic, and computer architecture communities by providing a common frame of reference for their work.
2203.05918
Junhua Ma
Junhua Ma, Jiajun Li, Yuxuan Liu, Shangbo Zhou, Xue Li
Integrating Dependency Tree Into Self-attention for Sentence Representation
ICASSP 2022
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent progress on parse tree encoders for sentence representation learning is notable. However, these works mainly encode tree structures recursively, which is not conducive to parallelization. On the other hand, these works rarely take into account the labels of arcs in dependency trees. To address both issues, we propose Dependency-Transformer, which applies a relation-attention mechanism that works in concert with the self-attention mechanism. This mechanism aims to encode the dependency and the spatial positional relations between nodes in the dependency tree of sentences. By a score-based method, we successfully inject the syntax information without affecting Transformer's parallelizability. Our model outperforms or is comparable to the state-of-the-art methods on four tasks for sentence representation and has obvious advantages in computational efficiency.
[ { "created": "Fri, 11 Mar 2022 13:44:41 GMT", "version": "v1" }, { "created": "Sun, 24 Apr 2022 09:33:25 GMT", "version": "v2" }, { "created": "Sat, 7 May 2022 01:55:59 GMT", "version": "v3" } ]
2022-05-10
[ [ "Ma", "Junhua", "" ], [ "Li", "Jiajun", "" ], [ "Liu", "Yuxuan", "" ], [ "Zhou", "Shangbo", "" ], [ "Li", "Xue", "" ] ]
Recent progress on parse tree encoders for sentence representation learning is notable. However, these works mainly encode tree structures recursively, which is not conducive to parallelization. On the other hand, these works rarely take into account the labels of arcs in dependency trees. To address both issues, we propose Dependency-Transformer, which applies a relation-attention mechanism that works in concert with the self-attention mechanism. This mechanism aims to encode the dependency and the spatial positional relations between nodes in the dependency tree of sentences. By a score-based method, we successfully inject the syntax information without affecting Transformer's parallelizability. Our model outperforms or is comparable to the state-of-the-art methods on four tasks for sentence representation and has obvious advantages in computational efficiency.
2102.06924
Lior Shani
Lior Shani, Tom Zahavy and Shie Mannor
Online Apprenticeship Learning
AAAI 2022
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Apprenticeship Learning (AL), we are given a Markov Decision Process (MDP) without access to the cost function. Instead, we observe trajectories sampled by an expert that acts according to some policy. The goal is to find a policy that matches the expert's performance on some predefined set of cost functions. We introduce an online variant of AL (Online Apprenticeship Learning; OAL), where the agent is expected to perform comparably to the expert while interacting with the environment. We show that the OAL problem can be effectively solved by combining two mirror descent based no-regret algorithms: one for policy optimization and another for learning the worst case cost. By employing optimistic exploration, we derive a convergent algorithm with $O(\sqrt{K})$ regret, where $K$ is the number of interactions with the MDP, and an additional linear error term that depends on the amount of expert trajectories available. Importantly, our algorithm avoids the need to solve an MDP at each iteration, making it more practical compared to prior AL methods. Finally, we implement a deep variant of our algorithm which shares some similarities to GAIL \cite{ho2016generative}, but where the discriminator is replaced with the costs learned by the OAL problem. Our simulations suggest that OAL performs well in high dimensional control problems.
[ { "created": "Sat, 13 Feb 2021 12:57:51 GMT", "version": "v1" }, { "created": "Wed, 29 Dec 2021 09:31:02 GMT", "version": "v2" } ]
2021-12-30
[ [ "Shani", "Lior", "" ], [ "Zahavy", "Tom", "" ], [ "Mannor", "Shie", "" ] ]
In Apprenticeship Learning (AL), we are given a Markov Decision Process (MDP) without access to the cost function. Instead, we observe trajectories sampled by an expert that acts according to some policy. The goal is to find a policy that matches the expert's performance on some predefined set of cost functions. We introduce an online variant of AL (Online Apprenticeship Learning; OAL), where the agent is expected to perform comparably to the expert while interacting with the environment. We show that the OAL problem can be effectively solved by combining two mirror descent based no-regret algorithms: one for policy optimization and another for learning the worst case cost. By employing optimistic exploration, we derive a convergent algorithm with $O(\sqrt{K})$ regret, where $K$ is the number of interactions with the MDP, and an additional linear error term that depends on the amount of expert trajectories available. Importantly, our algorithm avoids the need to solve an MDP at each iteration, making it more practical compared to prior AL methods. Finally, we implement a deep variant of our algorithm which shares some similarities to GAIL \cite{ho2016generative}, but where the discriminator is replaced with the costs learned by the OAL problem. Our simulations suggest that OAL performs well in high dimensional control problems.
2403.07532
Matteo Sodano
Matteo Sodano, Federico Magistri, Lucas Nunes, Jens Behley, Cyrill Stachniss
Open-World Semantic Segmentation Including Class Similarity
Accepted at CVPR 2024. Code at: https://github.com/PRBonn/ContMAV
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interpreting camera data is key for autonomously acting systems, such as autonomous vehicles. Vision systems that operate in real-world environments must be able to understand their surroundings and need the ability to deal with novel situations. This paper tackles open-world semantic segmentation, i.e., the variant of interpreting image data in which objects occur that have not been seen during training. We propose a novel approach that performs accurate closed-world semantic segmentation and, at the same time, can identify new categories without requiring any additional training data. Our approach additionally provides a similarity measure for every newly discovered class in an image to a known category, which can be useful information in downstream tasks such as planning or mapping. Through extensive experiments, we show that our model achieves state-of-the-art results on classes known from training data as well as for anomaly segmentation and can distinguish between different unknown classes.
[ { "created": "Tue, 12 Mar 2024 11:11:19 GMT", "version": "v1" } ]
2024-03-13
[ [ "Sodano", "Matteo", "" ], [ "Magistri", "Federico", "" ], [ "Nunes", "Lucas", "" ], [ "Behley", "Jens", "" ], [ "Stachniss", "Cyrill", "" ] ]
Interpreting camera data is key for autonomously acting systems, such as autonomous vehicles. Vision systems that operate in real-world environments must be able to understand their surroundings and need the ability to deal with novel situations. This paper tackles open-world semantic segmentation, i.e., the variant of interpreting image data in which objects occur that have not been seen during training. We propose a novel approach that performs accurate closed-world semantic segmentation and, at the same time, can identify new categories without requiring any additional training data. Our approach additionally provides a similarity measure for every newly discovered class in an image to a known category, which can be useful information in downstream tasks such as planning or mapping. Through extensive experiments, we show that our model achieves state-of-the-art results on classes known from training data as well as for anomaly segmentation and can distinguish between different unknown classes.
1705.09886
Yang Yuan
Yuanzhi Li, Yang Yuan
Convergence Analysis of Two-layer Neural Networks with ReLU Activation
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, stochastic gradient descent (SGD) based techniques have become the standard tools for training neural networks. However, a formal theoretical understanding of why SGD can train neural networks in practice is largely missing. In this paper, we make progress on understanding this mystery by providing a convergence analysis for SGD on a rich subset of two-layer feedforward networks with ReLU activations. This subset is characterized by a special structure called "identity mapping". We prove that, if the input follows a Gaussian distribution, with standard $O(1/\sqrt{d})$ initialization of the weights, SGD converges to the global minimum in a polynomial number of steps. Unlike normal vanilla networks, the "identity mapping" makes our network asymmetric and thus the global minimum is unique. To complement our theory, we also show experimentally that multi-layer networks with this mapping have better performance compared with normal vanilla networks. Our convergence theorem differs from traditional non-convex optimization techniques. We show that SGD converges to the optimum in "two phases": in phase I, the gradient points in the wrong direction, yet a potential function $g$ gradually decreases. Then in phase II, SGD enters a nice one-point convex region and converges. We also show that the identity mapping is necessary for convergence, as it moves the initial point to a better place for optimization. Experiments verify our claims.
[ { "created": "Sun, 28 May 2017 02:11:10 GMT", "version": "v1" }, { "created": "Wed, 1 Nov 2017 21:42:23 GMT", "version": "v2" } ]
2017-11-03
[ [ "Li", "Yuanzhi", "" ], [ "Yuan", "Yang", "" ] ]
In recent years, stochastic gradient descent (SGD) based techniques have become the standard tools for training neural networks. However, a formal theoretical understanding of why SGD can train neural networks in practice is largely missing. In this paper, we make progress on understanding this mystery by providing a convergence analysis for SGD on a rich subset of two-layer feedforward networks with ReLU activations. This subset is characterized by a special structure called "identity mapping". We prove that, if the input follows a Gaussian distribution, with standard $O(1/\sqrt{d})$ initialization of the weights, SGD converges to the global minimum in a polynomial number of steps. Unlike normal vanilla networks, the "identity mapping" makes our network asymmetric and thus the global minimum is unique. To complement our theory, we also show experimentally that multi-layer networks with this mapping have better performance compared with normal vanilla networks. Our convergence theorem differs from traditional non-convex optimization techniques. We show that SGD converges to the optimum in "two phases": in phase I, the gradient points in the wrong direction, yet a potential function $g$ gradually decreases. Then in phase II, SGD enters a nice one-point convex region and converges. We also show that the identity mapping is necessary for convergence, as it moves the initial point to a better place for optimization. Experiments verify our claims.
2306.12768
Edvin Listo Zec
Marcus Toft{\aa}s, Emilie Klefbom, Edvin Listo Zec, Martin Willbo, Olof Mogren
Concept-aware clustering for decentralized deep learning under temporal shift
4 pages, 2 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Decentralized deep learning requires dealing with non-iid data across clients, which may also change over time due to temporal shifts. While non-iid data has been extensively studied in distributed settings, temporal shifts have received no attention. To the best of our knowledge, we are the first to tackle the novel and challenging problem of decentralized learning with non-iid and dynamic data. We propose a novel algorithm that can automatically discover and adapt to the evolving concepts in the network, without any prior knowledge or estimation of the number of concepts. We evaluate our algorithm on standard benchmark datasets and demonstrate that it outperforms previous methods for decentralized learning.
[ { "created": "Thu, 22 Jun 2023 09:45:40 GMT", "version": "v1" } ]
2023-06-23
[ [ "Toftås", "Marcus", "" ], [ "Klefbom", "Emilie", "" ], [ "Zec", "Edvin Listo", "" ], [ "Willbo", "Martin", "" ], [ "Mogren", "Olof", "" ] ]
Decentralized deep learning requires dealing with non-iid data across clients, which may also change over time due to temporal shifts. While non-iid data has been extensively studied in distributed settings, temporal shifts have received no attention. To the best of our knowledge, we are the first to tackle the novel and challenging problem of decentralized learning with non-iid and dynamic data. We propose a novel algorithm that can automatically discover and adapt to the evolving concepts in the network, without any prior knowledge or estimation of the number of concepts. We evaluate our algorithm on standard benchmark datasets and demonstrate that it outperforms previous methods for decentralized learning.
2305.05389
John Conroy
John M. Conroy, Neil P Molino, Brian Baughman, Rod Gomez, Ryan Kaliszewski, and Nicholas A. Lines
Two to Five Truths in Non-Negative Matrix Factorization
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we explore the role of matrix scaling on a matrix of counts when building a topic model using non-negative matrix factorization. We present a scaling inspired by the normalized Laplacian (NL) for graphs that can greatly improve the quality of a non-negative matrix factorization. The results parallel those in the spectral graph clustering work of \cite{Priebe:2019}, where the authors proved that adjacency spectral embedding (ASE) spectral clustering was more likely to discover core-periphery partitions and Laplacian Spectral Embedding (LSE) was more likely to discover affinity partitions. In text analysis, non-negative matrix factorization (NMF) is typically used on a matrix of co-occurrence ``contexts'' and ``terms'' counts. The matrix scaling inspired by LSE gives significant improvement for text topic models in a variety of datasets. We illustrate the dramatic difference that matrix scaling in NMF can make to the quality of a topic model on three datasets where human annotation is available. Using the adjusted Rand index (ARI), a measure of cluster similarity, we see an increase of 50\% for Twitter data and over 200\% for a newsgroup dataset versus using counts, which is the analogue of ASE. For clean data, such as those from the Document Understanding Conference, NL gives over 40\% improvement over ASE. We conclude with some analysis of this phenomenon and some connections of this scaling with other matrix scaling methods.
[ { "created": "Sat, 6 May 2023 14:40:20 GMT", "version": "v1" }, { "created": "Tue, 5 Sep 2023 16:14:56 GMT", "version": "v2" } ]
2023-09-06
[ [ "Conroy", "John M.", "" ], [ "Molino", "Neil P", "" ], [ "Baughman", "Brian", "" ], [ "Gomez", "Rod", "" ], [ "Kaliszewski", "Ryan", "" ], [ "Lines", "Nicholas A.", "" ] ]
In this paper, we explore the role of matrix scaling on a matrix of counts when building a topic model using non-negative matrix factorization. We present a scaling inspired by the normalized Laplacian (NL) for graphs that can greatly improve the quality of a non-negative matrix factorization. The results parallel those in the spectral graph clustering work of \cite{Priebe:2019}, where the authors proved that adjacency spectral embedding (ASE) spectral clustering was more likely to discover core-periphery partitions and Laplacian Spectral Embedding (LSE) was more likely to discover affinity partitions. In text analysis, non-negative matrix factorization (NMF) is typically used on a matrix of co-occurrence ``contexts'' and ``terms'' counts. The matrix scaling inspired by LSE gives significant improvement for text topic models in a variety of datasets. We illustrate the dramatic difference that matrix scaling in NMF can make to the quality of a topic model on three datasets where human annotation is available. Using the adjusted Rand index (ARI), a measure of cluster similarity, we see an increase of 50\% for Twitter data and over 200\% for a newsgroup dataset versus using counts, which is the analogue of ASE. For clean data, such as those from the Document Understanding Conference, NL gives over 40\% improvement over ASE. We conclude with some analysis of this phenomenon and some connections of this scaling with other matrix scaling methods.
1504.03912
Lin Jianbiao
Hui Lin, Jianbiao Lin, Ke Ji, Jingjie Wang, Feng Lin
Promote the Industry Standard of Smart Home in China by Intelligent Router Technology
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by-nc-sa/3.0/
The reason why the smart home remains unpopularized lies in poor product user experience, high purchasing cost, poor compatibility, and the lack of an industry standard [1]. To address the problems above, and after relentless devotion to software and hardware innovation and practice, we have independently developed a solution based on the innovation and integration of router technology, mobile Internet technology, Internet of Things technology, communication technology, digital-to-analog conversion and codec technology, and P2P technology, among others. We have also established the relevant protocols (without applying protocols from abroad). By doing this, we managed to establish a system with a low-to-moderate price, superior performance, all-inclusive functions, easy installation, convenient portability, real-time reliability, security encryption, and the capability to manage home furniture in an intelligent way. Only a new smart home system like this can inject new ideas and energy into the smart home industry and thus vigorously promote the establishment of a smart home industry standard.
[ { "created": "Wed, 15 Apr 2015 13:52:40 GMT", "version": "v1" } ]
2015-04-16
[ [ "Lin", "Hui", "" ], [ "Lin", "Jianbiao", "" ], [ "Ji", "Ke", "" ], [ "Wang", "Jingjie", "" ], [ "Lin", "Feng", "" ] ]
The reason why the smart home remains unpopularized lies in poor product user experience, high purchasing cost, poor compatibility, and the lack of an industry standard [1]. To address the problems above, and after relentless devotion to software and hardware innovation and practice, we have independently developed a solution based on the innovation and integration of router technology, mobile Internet technology, Internet of Things technology, communication technology, digital-to-analog conversion and codec technology, and P2P technology, among others. We have also established the relevant protocols (without applying protocols from abroad). By doing this, we managed to establish a system with a low-to-moderate price, superior performance, all-inclusive functions, easy installation, convenient portability, real-time reliability, security encryption, and the capability to manage home furniture in an intelligent way. Only a new smart home system like this can inject new ideas and energy into the smart home industry and thus vigorously promote the establishment of a smart home industry standard.
2309.16231
Hanqing Zhang
Hanqing Zhang, Sun Si, Haiming Wu, Dawei Song
Controllable Text Generation with Residual Memory Transformer
github:https://github.com/littlehacker26/Residual_Memory_Transformer
ACL 2024
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Large-scale Causal Language Models (CLMs), e.g., GPT3 and ChatGPT, have brought great success in text generation. However, it is still an open challenge to control the generation process of a CLM while balancing flexibility, control granularity, and generation efficiency. In this paper, we provide a new alternative for controllable text generation (CTG), by designing a non-intrusive, lightweight control plugin to accompany the generation of a CLM at arbitrary time steps. The proposed control plugin, namely Residual Memory Transformer (RMT), has an encoder-decoder setup, which can accept any type of control condition and cooperate with the CLM through a residual learning paradigm, to achieve a more flexible, general, and efficient CTG. Extensive experiments are carried out on various control tasks, in the form of both automatic and human evaluations. The results show the superiority of RMT over a range of state-of-the-art approaches, proving the effectiveness and versatility of our approach.
[ { "created": "Thu, 28 Sep 2023 08:13:33 GMT", "version": "v1" } ]
2024-06-27
[ [ "Zhang", "Hanqing", "" ], [ "Si", "Sun", "" ], [ "Wu", "Haiming", "" ], [ "Song", "Dawei", "" ] ]
Large-scale Causal Language Models (CLMs), e.g., GPT3 and ChatGPT, have brought great success in text generation. However, it is still an open challenge to control the generation process of a CLM while balancing flexibility, control granularity, and generation efficiency. In this paper, we provide a new alternative for controllable text generation (CTG), by designing a non-intrusive, lightweight control plugin to accompany the generation of a CLM at arbitrary time steps. The proposed control plugin, namely Residual Memory Transformer (RMT), has an encoder-decoder setup, which can accept any type of control condition and cooperate with the CLM through a residual learning paradigm, to achieve a more flexible, general, and efficient CTG. Extensive experiments are carried out on various control tasks, in the form of both automatic and human evaluations. The results show the superiority of RMT over a range of state-of-the-art approaches, proving the effectiveness and versatility of our approach.
1710.07909
Bing Zhu
Bing Zhu, Kenneth W. Shum, and Hui Li
On the Duality of Fractional Repetition Codes
Accepted by the 2017 IEEE Information Theory Workshop (ITW 2017)
null
10.1109/ITW.2017.8277971
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Erasure codes have emerged as an efficient technology for providing data redundancy in distributed storage systems. However, it is a challenging task to repair the failed storage nodes in erasure-coded storage systems, which requires large quantities of network resources. In this paper, we study fractional repetition (FR) codes, which enable the minimal repair complexity and also minimum repair bandwidth during node repair. We focus on the duality of FR codes, and investigate the relationship between the supported file size of an FR code and its dual code. Furthermore, we present a dual bound on the supported file size of FR codes.
[ { "created": "Sun, 22 Oct 2017 09:03:01 GMT", "version": "v1" } ]
2020-05-15
[ [ "Zhu", "Bing", "" ], [ "Shum", "Kenneth W.", "" ], [ "Li", "Hui", "" ] ]
Erasure codes have emerged as an efficient technology for providing data redundancy in distributed storage systems. However, it is a challenging task to repair the failed storage nodes in erasure-coded storage systems, which requires large quantities of network resources. In this paper, we study fractional repetition (FR) codes, which enable the minimal repair complexity and also minimum repair bandwidth during node repair. We focus on the duality of FR codes, and investigate the relationship between the supported file size of an FR code and its dual code. Furthermore, we present a dual bound on the supported file size of FR codes.
1111.1797
Shipra Agrawal
Shipra Agrawal, Navin Goyal
Analysis of Thompson Sampling for the multi-armed bandit problem
This version corrects some minor errors, and reorganizes some content
null
null
null
cs.LG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The multi-armed bandit problem is a popular model for studying the exploration/exploitation trade-off in sequential decision problems. Many algorithms are now available for this well-studied problem. One of the earliest algorithms, given by W. R. Thompson, dates back to 1933. This algorithm, referred to as Thompson Sampling, is a natural Bayesian algorithm. The basic idea is to choose an arm to play according to its probability of being the best arm. The Thompson Sampling algorithm has experimentally been shown to be close to optimal. In addition, it is efficient to implement and exhibits several desirable properties such as small regret for delayed feedback. However, theoretical understanding of this algorithm was quite limited. In this paper, for the first time, we show that the Thompson Sampling algorithm achieves logarithmic expected regret for the multi-armed bandit problem. More precisely, for the two-armed bandit problem, the expected regret in time $T$ is $O(\frac{\ln T}{\Delta} + \frac{1}{\Delta^3})$. And, for the $N$-armed bandit problem, the expected regret in time $T$ is $O([(\sum_{i=2}^N \frac{1}{\Delta_i^2})^2] \ln T)$. Our bounds are optimal except for the dependence on $\Delta_i$ and the constant factors in big-Oh.
[ { "created": "Tue, 8 Nov 2011 04:27:01 GMT", "version": "v1" }, { "created": "Tue, 27 Dec 2011 08:27:25 GMT", "version": "v2" }, { "created": "Mon, 9 Apr 2012 10:43:05 GMT", "version": "v3" } ]
2012-04-10
[ [ "Agrawal", "Shipra", "" ], [ "Goyal", "Navin", "" ] ]
The multi-armed bandit problem is a popular model for studying the exploration/exploitation trade-off in sequential decision problems. Many algorithms are now available for this well-studied problem. One of the earliest algorithms, given by W. R. Thompson, dates back to 1933. This algorithm, referred to as Thompson Sampling, is a natural Bayesian algorithm. The basic idea is to choose an arm to play according to its probability of being the best arm. The Thompson Sampling algorithm has experimentally been shown to be close to optimal. In addition, it is efficient to implement and exhibits several desirable properties such as small regret for delayed feedback. However, theoretical understanding of this algorithm was quite limited. In this paper, for the first time, we show that the Thompson Sampling algorithm achieves logarithmic expected regret for the multi-armed bandit problem. More precisely, for the two-armed bandit problem, the expected regret in time $T$ is $O(\frac{\ln T}{\Delta} + \frac{1}{\Delta^3})$. And, for the $N$-armed bandit problem, the expected regret in time $T$ is $O([(\sum_{i=2}^N \frac{1}{\Delta_i^2})^2] \ln T)$. Our bounds are optimal except for the dependence on $\Delta_i$ and the constant factors in big-Oh.
0710.1254
Hua Li
Hua Li and Edwin K.P. Chong
A Group Theoretic Model for Information
Submitted to IEEE Transactions on Information Theory
null
null
null
cs.IT math.IT
null
In this paper we formalize the notions of information elements and information lattices, first proposed by Shannon. Exploiting this formalization, we identify a comprehensive parallelism between information lattices and subgroup lattices. Qualitatively, we demonstrate isomorphisms between information lattices and subgroup lattices. Quantitatively, we establish a decisive approximation relation between the entropy structures of information lattices and the log-index structures of the corresponding subgroup lattices. This approximation extends the approximation for joint entropies carried out previously by Chan and Yeung. As a consequence of our approximation result, we show that any continuous law holds in general for the entropies of information elements if and only if the same law holds in general for the log-indices of subgroups. As an application, by constructing subgroup counterexamples we find surprisingly that common information, unlike joint information, obeys neither the submodularity nor the supermodularity law. We emphasize that the notion of information elements is conceptually significant--formalizing it helps to reveal the deep connection between information theory and group theory. The parallelism established in this paper admits an appealing group-action explanation and provides useful insights into the intrinsic structure among information elements from a group-theoretic perspective.
[ { "created": "Fri, 5 Oct 2007 18:37:21 GMT", "version": "v1" } ]
2007-10-08
[ [ "Li", "Hua", "" ], [ "Chong", "Edwin K. P.", "" ] ]
In this paper we formalize the notions of information elements and information lattices, first proposed by Shannon. Exploiting this formalization, we identify a comprehensive parallelism between information lattices and subgroup lattices. Qualitatively, we demonstrate isomorphisms between information lattices and subgroup lattices. Quantitatively, we establish a decisive approximation relation between the entropy structures of information lattices and the log-index structures of the corresponding subgroup lattices. This approximation extends the approximation for joint entropies carried out previously by Chan and Yeung. As a consequence of our approximation result, we show that any continuous law holds in general for the entropies of information elements if and only if the same law holds in general for the log-indices of subgroups. As an application, by constructing subgroup counterexamples we find surprisingly that common information, unlike joint information, obeys neither the submodularity nor the supermodularity law. We emphasize that the notion of information elements is conceptually significant--formalizing it helps to reveal the deep connection between information theory and group theory. The parallelism established in this paper admits an appealing group-action explanation and provides useful insights into the intrinsic structure among information elements from a group-theoretic perspective.
1901.10050
Emilia Ciupan
Emilia Ciupan, Mihai Ciupan, Daniela-Corina Jucan
Determining the Mechanical Properties of a New Composite Material Using Artificial Neural Networks
6 pages, 4 figures, Published with International Journal of Engineering Trends and Technology (IJETT)
International Journal of Engineering Trends and Technology 66.2 (2018): 103-108
10.14445/22315381/IJETT-V66P218
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper studies the possibility of using artificial neural networks (ANN) to determine certain mechanical properties of a new composite material. This new material is obtained by a mixture of hemp and polypropylene fibres. The material was developed for the industry of upholstered furniture. Specifically, it is intended for the making of elements of the support structure of some upholstered goods (chairs, armchairs, sofa sides) with the objective of replacing wood. The paper aims to calculate the following mechanical properties: maximum tensile strength and maximum elongation.
[ { "created": "Fri, 11 Jan 2019 17:34:30 GMT", "version": "v1" } ]
2019-01-30
[ [ "Ciupan", "Emilia", "" ], [ "Ciupan", "Mihai", "" ], [ "Jucan", "Daniela-Corina", "" ] ]
The paper studies the possibility of using artificial neural networks (ANN) to determine certain mechanical properties of a new composite material. This new material is obtained by a mixture of hemp and polypropylene fibres. The material was developed for the industry of upholstered furniture. Specifically, it is intended for the making of elements of the support structure of some upholstered goods (chairs, armchairs, sofa sides) with the objective of replacing wood. The paper aims to calculate the following mechanical properties: maximum tensile strength and maximum elongation.
2404.04739
Robert Schneider
Maxwell Schneider, Cody McCarthy, Michael G. Maxwell, Joshua Pfeffer, Robert Schneider and Andrew V. Sills
Mathematics of the MML functional quantizer modules for VCV Rack software synthesizer
4 pages, published in Infinite Loop: an online journal for undergraduate research and applied computing projects (2024)
null
null
null
cs.SD eess.AS math.HO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We detail the mathematical formulation of the line of "functional quantizer" modules developed by the Mathematics and Music Lab (MML) at Michigan Technological University, for the VCV Rack software modular synthesizer platform, which allow synthesizer players to tune oscillators to new musical scales based on mathematical functions. For example, we describe the recently-released MML Logarithmic Quantizer (LOG QNT) module that tunes synthesizer oscillators to the non-Pythagorean musical scale introduced by indie band The Apples in Stereo.
[ { "created": "Sat, 6 Apr 2024 21:56:16 GMT", "version": "v1" }, { "created": "Sat, 20 Apr 2024 00:00:07 GMT", "version": "v2" }, { "created": "Sun, 28 Apr 2024 04:56:45 GMT", "version": "v3" } ]
2024-04-30
[ [ "Schneider", "Maxwell", "" ], [ "McCarthy", "Cody", "" ], [ "Maxwell", "Michael G.", "" ], [ "Pfeffer", "Joshua", "" ], [ "Schneider", "Robert", "" ], [ "Sills", "Andrew V.", "" ] ]
We detail the mathematical formulation of the line of "functional quantizer" modules developed by the Mathematics and Music Lab (MML) at Michigan Technological University, for the VCV Rack software modular synthesizer platform, which allow synthesizer players to tune oscillators to new musical scales based on mathematical functions. For example, we describe the recently-released MML Logarithmic Quantizer (LOG QNT) module that tunes synthesizer oscillators to the non-Pythagorean musical scale introduced by indie band The Apples in Stereo.
0801.3550
Uwe Aickelin
Uwe Aickelin and Larry Bull
Partnering Strategies for Fitness Evaluation in a Pyramidal Evolutionary Algorithm
null
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2002), pp 263-270, New York, USA, 2002
null
null
cs.NE cs.AI
null
This paper combines the idea of a hierarchical distributed genetic algorithm with different inter-agent partnering strategies. Cascading clusters of sub-populations are built from bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence higher-level sub-populations search a larger search space with a lower resolution whilst lower-level sub-populations search a smaller search space with a higher resolution. The effects of different partner selection schemes for (sub-)fitness evaluation purposes are examined for two multiple-choice optimisation problems. It is shown that random partnering strategies perform best by providing better sampling and more diversity.
[ { "created": "Wed, 23 Jan 2008 11:12:39 GMT", "version": "v1" }, { "created": "Mon, 3 Mar 2008 17:08:00 GMT", "version": "v2" } ]
2010-07-05
[ [ "Aickelin", "Uwe", "" ], [ "Bull", "Larry", "" ] ]
This paper combines the idea of a hierarchical distributed genetic algorithm with different inter-agent partnering strategies. Cascading clusters of sub-populations are built from the bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence higher-level sub-populations search a larger search space with a lower resolution whilst lower-level sub-populations search a smaller search space with a higher resolution. The effects of different partner selection schemes for (sub-)fitness evaluation purposes are examined for two multiple-choice optimisation problems. It is shown that random partnering strategies perform best by providing better sampling and more diversity.
2303.00458
Manos Kamarianakis
Manos Kamarianakis, Antonis Protopsaltis, George Papagiannakis
AR-Assisted Surgical Care via 5G networks for First Aid Responders
3 pages, 2 figures, presented at IEEE International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD) 2022, 2-3 November 2022
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Surgeons should play a central role in disaster planning and management due to the overwhelming number of bodily injuries that are typically involved during most forms of disaster. In fact, various types of surgical procedures are performed by emergency medical teams after sudden-onset disasters, such as soft tissue wounds, orthopaedic traumas, abdominal surgeries, etc. HMD-based Augmented Reality (AR), using state-of-the-art hardware such as the Magic Leap or the Microsoft HoloLens, has long been foreseen as a key enabler for clinicians in surgical use cases, especially for procedures performed outside of the operating room. This paper describes the Use Case (UC) "AR-assisted emergency surgical care", identified in the context of the 5G-EPICENTRE EU-funded project. Specifically, the UC will experiment with holographic AR technology for emergency medical surgery teams, by overlaying deformable medical models directly on top of the patient body parts, effectively enabling surgeons to see inside (visualizing bones, blood vessels, etc.) and perform surgical actions following step-by-step instructions. The goal is to combine the computational and data-intensive nature of AR and Computer Vision algorithms with upcoming 5G network architectures deployed for edge computing so as to satisfy real-time interaction requirements and provide an efficient and powerful platform for the pervasive promotion of such applications. By developing the necessary Virtual Network Functions (VNFs) to manage data-intensive services (e.g., prerendering, caching, compression) and by exploiting available network resources and Multi-access Edge Computing (MEC) support, provided by the 5G-EPICENTRE infrastructure, this UC aims to provide powerful AR-based tools, usable on site, to first-aid responders.
[ { "created": "Wed, 1 Mar 2023 12:33:31 GMT", "version": "v1" } ]
2023-03-02
[ [ "Kamarianakis", "Manos", "" ], [ "Protopsaltis", "Antonis", "" ], [ "Papagiannakis", "George", "" ] ]
Surgeons should play a central role in disaster planning and management due to the overwhelming number of bodily injuries that are typically involved during most forms of disaster. In fact, various types of surgical procedures are performed by emergency medical teams after sudden-onset disasters, such as soft tissue wounds, orthopaedic traumas, abdominal surgeries, etc. HMD-based Augmented Reality (AR), using state-of-the-art hardware such as the Magic Leap or the Microsoft HoloLens, has long been foreseen as a key enabler for clinicians in surgical use cases, especially for procedures performed outside of the operating room. This paper describes the Use Case (UC) "AR-assisted emergency surgical care", identified in the context of the 5G-EPICENTRE EU-funded project. Specifically, the UC will experiment with holographic AR technology for emergency medical surgery teams, by overlaying deformable medical models directly on top of the patient body parts, effectively enabling surgeons to see inside (visualizing bones, blood vessels, etc.) and perform surgical actions following step-by-step instructions. The goal is to combine the computational and data-intensive nature of AR and Computer Vision algorithms with upcoming 5G network architectures deployed for edge computing so as to satisfy real-time interaction requirements and provide an efficient and powerful platform for the pervasive promotion of such applications. By developing the necessary Virtual Network Functions (VNFs) to manage data-intensive services (e.g., prerendering, caching, compression) and by exploiting available network resources and Multi-access Edge Computing (MEC) support, provided by the 5G-EPICENTRE infrastructure, this UC aims to provide powerful AR-based tools, usable on site, to first-aid responders.
2111.05791
Xuan Bi
Xuan Bi and Xiaotong Shen
Distribution-Invariant Differential Privacy
null
null
null
null
cs.CR cs.LG stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differential privacy is becoming a gold standard for protecting the privacy of publicly shared data. It has been widely used in social science, data science, public health, information technology, and the U.S. decennial census. Nevertheless, to guarantee differential privacy, existing methods may unavoidably alter the conclusion of the original data analysis, as privatization often changes the sample distribution. This phenomenon is known as the trade-off between privacy protection and statistical accuracy. In this work, we mitigate this trade-off by developing a distribution-invariant privatization (DIP) method to reconcile both high statistical accuracy and strict differential privacy. As a result, any downstream statistical or machine learning task yields essentially the same conclusion as if one used the original data. Numerically, under the same strictness of privacy protection, DIP achieves superior statistical accuracy in a wide range of simulation studies and real-world benchmarks.
[ { "created": "Mon, 8 Nov 2021 22:26:50 GMT", "version": "v1" }, { "created": "Mon, 6 Jun 2022 16:28:56 GMT", "version": "v2" } ]
2022-06-07
[ [ "Bi", "Xuan", "" ], [ "Shen", "Xiaotong", "" ] ]
Differential privacy is becoming a gold standard for protecting the privacy of publicly shared data. It has been widely used in social science, data science, public health, information technology, and the U.S. decennial census. Nevertheless, to guarantee differential privacy, existing methods may unavoidably alter the conclusion of the original data analysis, as privatization often changes the sample distribution. This phenomenon is known as the trade-off between privacy protection and statistical accuracy. In this work, we mitigate this trade-off by developing a distribution-invariant privatization (DIP) method to reconcile both high statistical accuracy and strict differential privacy. As a result, any downstream statistical or machine learning task yields essentially the same conclusion as if one used the original data. Numerically, under the same strictness of privacy protection, DIP achieves superior statistical accuracy in a wide range of simulation studies and real-world benchmarks.
2205.05888
Hang Li
Hang Li and Ahmed Mourad and Bevan Koopman and Guido Zuccon
How does Feedback Signal Quality Impact Effectiveness of Pseudo Relevance Feedback for Passage Retrieval?
Accepted at SIGIR 2022
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pseudo-Relevance Feedback (PRF) assumes that the top results retrieved by a first-stage ranker are relevant to the original query and uses them to improve the query representation for a second round of retrieval. This assumption however is often not correct: some or even all of the feedback documents may be irrelevant. Indeed, the effectiveness of PRF methods may well depend on the quality of the feedback signal and thus on the effectiveness of the first-stage ranker. This aspect however has received little attention before. In this paper we control the quality of the feedback signal and measure its impact on a range of PRF methods, including traditional bag-of-words methods (Rocchio), and dense vector-based methods (learnt and not learnt). Our results show the important role the quality of the feedback signal plays on the effectiveness of PRF methods. Importantly, and surprisingly, our analysis reveals that not all PRF methods are the same when dealing with feedback signals of varying quality. These findings are critical to gain a better understanding of the PRF methods and of which methods should be used and when, depending on the feedback signal quality, and set the basis for future research in this area.
[ { "created": "Thu, 12 May 2022 05:47:57 GMT", "version": "v1" } ]
2022-05-13
[ [ "Li", "Hang", "" ], [ "Mourad", "Ahmed", "" ], [ "Koopman", "Bevan", "" ], [ "Zuccon", "Guido", "" ] ]
Pseudo-Relevance Feedback (PRF) assumes that the top results retrieved by a first-stage ranker are relevant to the original query and uses them to improve the query representation for a second round of retrieval. This assumption however is often not correct: some or even all of the feedback documents may be irrelevant. Indeed, the effectiveness of PRF methods may well depend on the quality of the feedback signal and thus on the effectiveness of the first-stage ranker. This aspect however has received little attention before. In this paper we control the quality of the feedback signal and measure its impact on a range of PRF methods, including traditional bag-of-words methods (Rocchio), and dense vector-based methods (learnt and not learnt). Our results show the important role the quality of the feedback signal plays on the effectiveness of PRF methods. Importantly, and surprisingly, our analysis reveals that not all PRF methods are the same when dealing with feedback signals of varying quality. These findings are critical to gain a better understanding of the PRF methods and of which methods should be used and when, depending on the feedback signal quality, and set the basis for future research in this area.
2108.01887
Machel Reid
Machel Reid, Mikel Artetxe
PARADISE: Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining
Preprint
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the success of multilingual sequence-to-sequence pretraining, most existing approaches rely on monolingual corpora, and do not make use of the strong cross-lingual signal contained in parallel data. In this paper, we present PARADISE (PARAllel & Denoising Integration in SEquence-to-sequence models), which extends the conventional denoising objective used to train these models by (i) replacing words in the noised sequence according to a multilingual dictionary, and (ii) predicting the reference translation according to a parallel corpus instead of recovering the original sequence. Our experiments on machine translation and cross-lingual natural language inference show an average improvement of 2.0 BLEU points and 6.7 accuracy points from integrating parallel data into pretraining, respectively, obtaining results that are competitive with several popular models at a fraction of their computational cost.
[ { "created": "Wed, 4 Aug 2021 07:32:56 GMT", "version": "v1" } ]
2021-08-05
[ [ "Reid", "Machel", "" ], [ "Artetxe", "Mikel", "" ] ]
Despite the success of multilingual sequence-to-sequence pretraining, most existing approaches rely on monolingual corpora, and do not make use of the strong cross-lingual signal contained in parallel data. In this paper, we present PARADISE (PARAllel & Denoising Integration in SEquence-to-sequence models), which extends the conventional denoising objective used to train these models by (i) replacing words in the noised sequence according to a multilingual dictionary, and (ii) predicting the reference translation according to a parallel corpus instead of recovering the original sequence. Our experiments on machine translation and cross-lingual natural language inference show an average improvement of 2.0 BLEU points and 6.7 accuracy points from integrating parallel data into pretraining, respectively, obtaining results that are competitive with several popular models at a fraction of their computational cost.
2106.02283
Maximilian Hils
Maximilian Hils, Daniel W. Woods, Rainer B\"ohme (University of Innsbruck)
Privacy Preference Signals: Past, Present and Future
null
Proceedings on Privacy Enhancing Technologies 2021
10.2478/popets-2021-0069
null
cs.HC cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Privacy preference signals are digital representations of how users want their personal data to be processed. Such signals must be adopted by both the sender (users) and intended recipients (data processors). Adoption represents a coordination problem that remains unsolved despite efforts dating back to the 1990s. Browsers implemented standards like the Platform for Privacy Preferences (P3P) and Do Not Track (DNT), but vendors profiting from personal data faced few incentives to receive and respect the expressed wishes of data subjects. In the wake of recent privacy laws, a coalition of AdTech firms published the Transparency and Consent Framework (TCF), which defines an opt-in consent signal. This paper integrates post-GDPR developments into the wider history of privacy preference signals. Our main contribution is a high-frequency longitudinal study describing how the TCF signal gained dominance as of February 2021. We explore which factors correlate with adoption at the website level. Both the number of third parties on a website and the presence of Google Ads are associated with higher adoption of TCF. Further, we show that vendors acted as early adopters of TCF 2.0 and provide two case studies describing how Consent Management Providers shifted existing customers to TCF 2.0. We sketch ways forward for a pro-privacy signal.
[ { "created": "Fri, 4 Jun 2021 06:39:20 GMT", "version": "v1" }, { "created": "Wed, 16 Jun 2021 00:22:35 GMT", "version": "v2" }, { "created": "Thu, 17 Jun 2021 08:53:05 GMT", "version": "v3" }, { "created": "Wed, 14 Jul 2021 10:48:17 GMT", "version": "v4" } ]
2021-07-15
[ [ "Hils", "Maximilian", "", "University of\n Innsbruck" ], [ "Woods", "Daniel W.", "", "University of\n Innsbruck" ], [ "Böhme", "Rainer", "", "University of\n Innsbruck" ] ]
Privacy preference signals are digital representations of how users want their personal data to be processed. Such signals must be adopted by both the sender (users) and intended recipients (data processors). Adoption represents a coordination problem that remains unsolved despite efforts dating back to the 1990s. Browsers implemented standards like the Platform for Privacy Preferences (P3P) and Do Not Track (DNT), but vendors profiting from personal data faced few incentives to receive and respect the expressed wishes of data subjects. In the wake of recent privacy laws, a coalition of AdTech firms published the Transparency and Consent Framework (TCF), which defines an opt-in consent signal. This paper integrates post-GDPR developments into the wider history of privacy preference signals. Our main contribution is a high-frequency longitudinal study describing how the TCF signal gained dominance as of February 2021. We explore which factors correlate with adoption at the website level. Both the number of third parties on a website and the presence of Google Ads are associated with higher adoption of TCF. Further, we show that vendors acted as early adopters of TCF 2.0 and provide two case studies describing how Consent Management Providers shifted existing customers to TCF 2.0. We sketch ways forward for a pro-privacy signal.
2403.14253
Kyuhee Kim
Kyuhee Kim, Surin Lee and Sangah Lee
K-Act2Emo: Korean Commonsense Knowledge Graph for Indirect Emotional Expression
10 pages
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many literary texts, emotions are indirectly conveyed through descriptions of actions, facial expressions, and appearances, necessitating emotion inference for narrative understanding. In this paper, we introduce K-Act2Emo, a Korean commonsense knowledge graph (CSKG) comprising 1,900 indirect emotional expressions and the emotions inferable from them. We categorize reasoning types into inferences in positive situations, inferences in negative situations, and inferences when expressions do not serve as emotional cues. Unlike existing CSKGs, K-Act2Emo specializes in emotional contexts, and experimental results validate its effectiveness for training emotion inference models. Significantly, the BART-based knowledge model fine-tuned with K-Act2Emo outperforms various existing Korean large language models, achieving performance levels comparable to GPT-4 Turbo.
[ { "created": "Thu, 21 Mar 2024 09:26:04 GMT", "version": "v1" }, { "created": "Sat, 23 Mar 2024 15:53:50 GMT", "version": "v2" } ]
2024-03-26
[ [ "Kim", "Kyuhee", "" ], [ "Lee", "Surin", "" ], [ "Lee", "Sangah", "" ] ]
In many literary texts, emotions are indirectly conveyed through descriptions of actions, facial expressions, and appearances, necessitating emotion inference for narrative understanding. In this paper, we introduce K-Act2Emo, a Korean commonsense knowledge graph (CSKG) comprising 1,900 indirect emotional expressions and the emotions inferable from them. We categorize reasoning types into inferences in positive situations, inferences in negative situations, and inferences when expressions do not serve as emotional cues. Unlike existing CSKGs, K-Act2Emo specializes in emotional contexts, and experimental results validate its effectiveness for training emotion inference models. Significantly, the BART-based knowledge model fine-tuned with K-Act2Emo outperforms various existing Korean large language models, achieving performance levels comparable to GPT-4 Turbo.
0810.4658
Keqin Liu
Keqin Liu, Qing Zhao
Indexability of Restless Bandit Problems and Optimality of Whittle's Index for Dynamic Multichannel Access
submitted to IEEE Transactions on Information Theory
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a class of restless multi-armed bandit problems (RMBP) that arises in dynamic multichannel access, user/server scheduling, and optimal activation in multi-agent systems. For this class of RMBP, we establish the indexability and obtain Whittle's index in closed-form for both discounted and average reward criteria. These results lead to a direct implementation of Whittle's index policy with remarkably low complexity. When these Markov chains are stochastically identical, we show that Whittle's index policy is optimal under certain conditions. Furthermore, it has a semi-universal structure that obviates the need to know the Markov transition probabilities. The optimality and the semi-universal structure result from the equivalency between Whittle's index policy and the myopic policy established in this work. For non-identical channels, we develop efficient algorithms for computing a performance upper bound given by Lagrangian relaxation. The tightness of the upper bound and the near-optimal performance of Whittle's index policy are illustrated with simulation examples.
[ { "created": "Sun, 26 Oct 2008 01:58:35 GMT", "version": "v1" }, { "created": "Wed, 12 Nov 2008 16:02:40 GMT", "version": "v2" }, { "created": "Thu, 13 Nov 2008 02:42:59 GMT", "version": "v3" } ]
2008-11-13
[ [ "Liu", "Keqin", "" ], [ "Zhao", "Qing", "" ] ]
We consider a class of restless multi-armed bandit problems (RMBP) that arises in dynamic multichannel access, user/server scheduling, and optimal activation in multi-agent systems. For this class of RMBP, we establish the indexability and obtain Whittle's index in closed-form for both discounted and average reward criteria. These results lead to a direct implementation of Whittle's index policy with remarkably low complexity. When these Markov chains are stochastically identical, we show that Whittle's index policy is optimal under certain conditions. Furthermore, it has a semi-universal structure that obviates the need to know the Markov transition probabilities. The optimality and the semi-universal structure result from the equivalency between Whittle's index policy and the myopic policy established in this work. For non-identical channels, we develop efficient algorithms for computing a performance upper bound given by Lagrangian relaxation. The tightness of the upper bound and the near-optimal performance of Whittle's index policy are illustrated with simulation examples.
cs/0606096
Hendrik Feddes
Lea Cyrus
Building a resource for studying translation shifts
6 pages, 1 figure
Proc. LREC 2006, Genoa, May 24-26, 2006; pp. 1240-1245
null
null
cs.CL
null
This paper describes an interdisciplinary approach which brings together the fields of corpus linguistics and translation studies. It presents ongoing work on the creation of a corpus resource in which translation shifts are explicitly annotated. Translation shifts denote departures from formal correspondence between source and target text, i.e. deviations that have occurred during the translation process. A resource in which such shifts are annotated in a systematic way will make it possible to study those phenomena that need to be addressed if machine translation output is to resemble human translation. The resource described in this paper contains English source texts (parliamentary proceedings) and their German translations. The shift annotation is based on predicate-argument structures and proceeds in two steps: first, predicates and their arguments are annotated monolingually in a straightforward manner. Then, the corresponding English and German predicates and arguments are aligned with each other. Whenever a shift - mainly grammatical or semantic - has occurred, the alignment is tagged accordingly.
[ { "created": "Thu, 22 Jun 2006 13:26:52 GMT", "version": "v1" } ]
2007-05-23
[ [ "Cyrus", "Lea", "" ] ]
This paper describes an interdisciplinary approach which brings together the fields of corpus linguistics and translation studies. It presents ongoing work on the creation of a corpus resource in which translation shifts are explicitly annotated. Translation shifts denote departures from formal correspondence between source and target text, i.e. deviations that have occurred during the translation process. A resource in which such shifts are annotated in a systematic way will make it possible to study those phenomena that need to be addressed if machine translation output is to resemble human translation. The resource described in this paper contains English source texts (parliamentary proceedings) and their German translations. The shift annotation is based on predicate-argument structures and proceeds in two steps: first, predicates and their arguments are annotated monolingually in a straightforward manner. Then, the corresponding English and German predicates and arguments are aligned with each other. Whenever a shift - mainly grammatical or semantic - has occurred, the alignment is tagged accordingly.
1310.7469
Vanessa Burke
Feng Jiang, Jiemin Wang, Abram Hindle and Mario A. Nascimento
Mining the Temporal Evolution of the Android Bug Reporting Community via Sliding Windows
null
null
null
TR13-07
cs.SE
http://creativecommons.org/licenses/by/3.0/
The open source development community consists of both paid and volunteer developers as well as new and experienced users. Previous work has applied social network analysis (SNA) to open source communities and has demonstrated value in expertise discovery and triaging. One problem with applying SNA directly to the data of the entire project lifetime is that the impact of local activities will be drowned out. In this paper we provide a method for aggregating, analyzing, and visualizing local (small time periods) interactions of bug reporting participants by using the SNA to measure the betweenness centrality of these participants. In particular we mined the Android bug repository by producing social networks from overlapping 30-day windows of bug reports, each sliding over by one day. In this paper we define three patterns of participant behaviour based on their local centrality. We propose a method of analyzing the centrality of bug report participants both locally and globally, then we conduct a thorough case study of the bug reporter's activity within the Android bug repository. Furthermore, we validate the conclusions of our method by mining the Android version control system and inspecting the Android release history. We found that windowed SNA analysis elicited local behaviours that were invisible during global analysis.
[ { "created": "Mon, 28 Oct 2013 15:56:25 GMT", "version": "v1" } ]
2013-10-29
[ [ "Jiang", "Feng", "" ], [ "Wang", "Jiemin", "" ], [ "Hindle", "Abram", "" ], [ "Nascimento", "Mario A.", "" ] ]
The open source development community consists of both paid and volunteer developers as well as new and experienced users. Previous work has applied social network analysis (SNA) to open source communities and has demonstrated value in expertise discovery and triaging. One problem with applying SNA directly to the data of the entire project lifetime is that the impact of local activities will be drowned out. In this paper we provide a method for aggregating, analyzing, and visualizing local (small time periods) interactions of bug reporting participants by using the SNA to measure the betweenness centrality of these participants. In particular we mined the Android bug repository by producing social networks from overlapping 30-day windows of bug reports, each sliding over by one day. In this paper we define three patterns of participant behaviour based on their local centrality. We propose a method of analyzing the centrality of bug report participants both locally and globally, then we conduct a thorough case study of the bug reporter's activity within the Android bug repository. Furthermore, we validate the conclusions of our method by mining the Android version control system and inspecting the Android release history. We found that windowed SNA analysis elicited local behaviours that were invisible during global analysis.
2311.08167
Amit Chaudhary
Naveen Sahu, Mitul Gajera, Amit Chaudhary and Hamish Ivey-Law
SeDe: Balancing Blockchain Privacy and Regulatory Compliance by Selective De-Anonymization
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Privacy is one of the essential pillars for the widespread adoption of blockchains, but public blockchains are transparent by nature. Modern analytics techniques can easily subdue the pseudonymity feature of a blockchain user. Some applications have been able to provide practical privacy protections using privacy-preserving cryptography techniques. However, malicious actors have abused them illicitly, discouraging honest actors from using privacy-preserving applications, since "mixing" their interactions and funds with those of anonymous bad actors raises compliance and regulatory concerns. In this paper, we propose a framework that balances privacy-preserving features by establishing a regulatory and compliant framework called Selective De-Anonymization (SeDe). The adoption of this framework allows privacy-preserving applications on blockchains to de-anonymize illicit transactions by recursive traversal of subgraphs of linked transactions. Our technique achieves this without leaving de-anonymization decisions or control in the hands of a single entity but distributing it among multiple entities while holding them accountable for their respective actions. To instantiate, our framework uses threshold encryption schemes and Zero-Knowledge Proofs (ZKPs).
[ { "created": "Tue, 14 Nov 2023 13:49:13 GMT", "version": "v1" }, { "created": "Thu, 16 Nov 2023 12:38:12 GMT", "version": "v2" }, { "created": "Sat, 9 Mar 2024 16:01:27 GMT", "version": "v3" }, { "created": "Fri, 24 May 2024 09:18:10 GMT", "version": "v4" } ]
2024-05-27
[ [ "Sahu", "Naveen", "" ], [ "Gajera", "Mitul", "" ], [ "Chaudhary", "Amit", "" ], [ "Ivey-Law", "Hamish", "" ] ]
Privacy is one of the essential pillars for the widespread adoption of blockchains, but public blockchains are transparent by nature. Modern analytics techniques can easily subdue the pseudonymity feature of a blockchain user. Some applications have been able to provide practical privacy protections using privacy-preserving cryptography techniques. However, malicious actors have abused them illicitly, discouraging honest actors from using privacy-preserving applications, since "mixing" their interactions and funds with those of anonymous bad actors raises compliance and regulatory concerns. In this paper, we propose a framework that balances privacy-preserving features by establishing a regulatory and compliant framework called Selective De-Anonymization (SeDe). The adoption of this framework allows privacy-preserving applications on blockchains to de-anonymize illicit transactions by recursive traversal of subgraphs of linked transactions. Our technique achieves this without leaving de-anonymization decisions or control in the hands of a single entity but distributing it among multiple entities while holding them accountable for their respective actions. To instantiate, our framework uses threshold encryption schemes and Zero-Knowledge Proofs (ZKPs).
1901.08991
Luis Armando P\'erez Rey
Luis A. P\'erez Rey, Vlado Menkovski, Jacobus W. Portegies
Diffusion Variational Autoencoders
10 pages, 8 figures Added an appendix with derivation of asymptotic expansion of KL divergence for heat kernel on arbitrary Riemannian manifolds, and an appendix with new experiments on binarized MNIST. Added a previously missing factor in the asymptotic expansion of the heat kernel and corrected a coefficient in asymptotic expansion KL divergence; further minor edits
International Joint Conferences on Artificial Intelligence (IJCAI) 2020
10.24963/ijcai.2020/375
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A standard Variational Autoencoder, with a Euclidean latent space, is structurally incapable of capturing topological properties of certain datasets. To remove topological obstructions, we introduce Diffusion Variational Autoencoders with arbitrary manifolds as a latent space. A Diffusion Variational Autoencoder uses transition kernels of Brownian motion on the manifold. In particular, it uses properties of the Brownian motion to implement the reparametrization trick and fast approximations to the KL divergence. We show that the Diffusion Variational Autoencoder is capable of capturing topological properties of synthetic datasets. Additionally, we train MNIST on spheres, tori, projective spaces, SO(3), and a torus embedded in R3. Although a natural dataset like MNIST does not have latent variables with a clear-cut topological structure, training it on a manifold can still highlight topological and geometrical properties.
[ { "created": "Fri, 25 Jan 2019 17:10:25 GMT", "version": "v1" }, { "created": "Thu, 28 Mar 2019 09:10:12 GMT", "version": "v2" } ]
2022-04-07
[ [ "Rey", "Luis A. Pérez", "" ], [ "Menkovski", "Vlado", "" ], [ "Portegies", "Jacobus W.", "" ] ]
A standard Variational Autoencoder, with a Euclidean latent space, is structurally incapable of capturing topological properties of certain datasets. To remove topological obstructions, we introduce Diffusion Variational Autoencoders with arbitrary manifolds as a latent space. A Diffusion Variational Autoencoder uses transition kernels of Brownian motion on the manifold. In particular, it uses properties of the Brownian motion to implement the reparametrization trick and fast approximations to the KL divergence. We show that the Diffusion Variational Autoencoder is capable of capturing topological properties of synthetic datasets. Additionally, we train MNIST on spheres, tori, projective spaces, SO(3), and a torus embedded in R3. Although a natural dataset like MNIST does not have latent variables with a clear-cut topological structure, training it on a manifold can still highlight topological and geometrical properties.
2208.00002
Zijue Chen
Zijue Chen, Keenan Granland, Rhys Newbury, Chao Chen
HOB-CNN: Hallucination of Occluded Branches with a Convolutional Neural Network for 2D Fruit Trees
null
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Orchard automation has attracted the attention of researchers recently due to the global labor shortage. To automate tasks in orchards such as pruning, thinning, and harvesting, a detailed understanding of the tree structure is required. However, occlusions from foliage and fruits can make it challenging to predict the position of occluded trunks and branches. This work proposes a regression-based deep learning model, Hallucination of Occluded Branch Convolutional Neural Network (HOB-CNN), for tree branch position prediction in varying occluded conditions. We formulate tree branch position prediction as a regression problem towards the horizontal locations of the branch along the vertical direction or vice versa. We present comparative experiments on Y-shaped trees with two state-of-the-art baselines, representing common approaches to the problem. Experiments show that HOB-CNN outperforms the baselines at predicting branch position and shows robustness against varying levels of occlusion. We further validated HOB-CNN against two different types of 2D trees; HOB-CNN generalizes across different trees and remains robust under different occluded conditions.
[ { "created": "Thu, 28 Jul 2022 06:12:02 GMT", "version": "v1" } ]
2022-08-02
[ [ "Chen", "Zijue", "" ], [ "Granland", "Keenan", "" ], [ "Newbury", "Rhys", "" ], [ "Chen", "Chao", "" ] ]
Orchard automation has attracted the attention of researchers recently due to the global labor shortage. To automate tasks in orchards such as pruning, thinning, and harvesting, a detailed understanding of the tree structure is required. However, occlusions from foliage and fruits can make it challenging to predict the position of occluded trunks and branches. This work proposes a regression-based deep learning model, Hallucination of Occluded Branch Convolutional Neural Network (HOB-CNN), for tree branch position prediction in varying occluded conditions. We formulate tree branch position prediction as a regression problem towards the horizontal locations of the branch along the vertical direction or vice versa. We present comparative experiments on Y-shaped trees with two state-of-the-art baselines, representing common approaches to the problem. Experiments show that HOB-CNN outperforms the baselines at predicting branch position and shows robustness against varying levels of occlusion. We further validated HOB-CNN against two different types of 2D trees; HOB-CNN generalizes across different trees and remains robust under different occluded conditions.
2311.10275
Sandeep Kumar
Alan Nair, Sandeep Kumar, Aravinda Prasad, Andy Rudoff, and Sreenivas Subramoney
Telescope: Telemetry at Terabyte Scale
null
null
null
null
cs.OS cs.AR cs.DB cs.DC
http://creativecommons.org/licenses/by/4.0/
Data-hungry applications that require terabytes of memory have become widespread in recent years. To meet the memory needs of these applications, data centers are embracing tiered memory architectures with near and far memory tiers. Precise, efficient, and timely identification of hot and cold data and their placement in appropriate tiers is critical for performance in such systems. Unfortunately, the existing state-of-the-art telemetry techniques for hot and cold data detection are ineffective at the terabyte scale. We propose Telescope, a novel technique that profiles different levels of the application's page table tree for fast and efficient identification of hot and cold data. Telescope is based on the observation that, for a memory- and TLB-intensive workload, higher levels of a page table tree are also frequently accessed during a hardware page table walk. Hence, the hotness of the higher levels of the page table tree essentially captures the hotness of its subtrees or address space sub-regions at a coarser granularity. We exploit this insight to quickly converge on even a few megabytes of hot data and efficiently identify several gigabytes of cold data in terabyte-scale applications. Importantly, such a technique can seamlessly scale to petabyte-scale applications. Telescope's telemetry achieves 90%+ precision and recall at just 0.009% single CPU utilization for microbenchmarks with a 5 TB memory footprint. Memory tiering based on Telescope results in 5.6% to 34% throughput improvement for real-world benchmarks with a 1-2 TB memory footprint compared to other state-of-the-art telemetry techniques.
[ { "created": "Fri, 17 Nov 2023 01:44:14 GMT", "version": "v1" }, { "created": "Thu, 30 Nov 2023 04:14:30 GMT", "version": "v2" } ]
2023-12-01
[ [ "Nair", "Alan", "" ], [ "Kumar", "Sandeep", "" ], [ "Prasad", "Aravinda", "" ], [ "Rudoff", "Andy", "" ], [ "Subramoney", "Sreenivas", "" ] ]
Data-hungry applications that require terabytes of memory have become widespread in recent years. To meet the memory needs of these applications, data centers are embracing tiered memory architectures with near and far memory tiers. Precise, efficient, and timely identification of hot and cold data and their placement in appropriate tiers is critical for performance in such systems. Unfortunately, the existing state-of-the-art telemetry techniques for hot and cold data detection are ineffective at the terabyte scale. We propose Telescope, a novel technique that profiles different levels of the application's page table tree for fast and efficient identification of hot and cold data. Telescope is based on the observation that, for a memory- and TLB-intensive workload, higher levels of a page table tree are also frequently accessed during a hardware page table walk. Hence, the hotness of the higher levels of the page table tree essentially captures the hotness of its subtrees or address space sub-regions at a coarser granularity. We exploit this insight to quickly converge on even a few megabytes of hot data and efficiently identify several gigabytes of cold data in terabyte-scale applications. Importantly, such a technique can seamlessly scale to petabyte-scale applications. Telescope's telemetry achieves 90%+ precision and recall at just 0.009% single CPU utilization for microbenchmarks with a 5 TB memory footprint. Memory tiering based on Telescope results in 5.6% to 34% throughput improvement for real-world benchmarks with a 1-2 TB memory footprint compared to other state-of-the-art telemetry techniques.
2009.04441
Diego Antognini
Kirtan Padh, Diego Antognini, Emma Lejal Glaude, Boi Faltings, Claudiu Musat
Addressing Fairness in Classification with a Model-Agnostic Multi-Objective Algorithm
Accepted at UAI 2021. 14 pages, 5 figures, 4 tables
null
null
null
cs.LG cs.AI cs.IR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of fairness in classification is to learn a classifier that does not discriminate against groups of individuals based on sensitive attributes, such as race and gender. One approach to designing fair algorithms is to use relaxations of fairness notions as regularization terms or in a constrained optimization problem. We observe that the hyperbolic tangent function can approximate the indicator function. We leverage this property to define a differentiable relaxation that approximates fairness notions provably better than existing relaxations. In addition, we propose a model-agnostic multi-objective architecture that can simultaneously optimize for multiple fairness notions and multiple sensitive attributes and supports all statistical parity-based notions of fairness. We use our relaxation with the multi-objective architecture to learn fair classifiers. Experiments on public datasets show that our method suffers a significantly lower loss of accuracy than current debiasing algorithms relative to the unconstrained model.
[ { "created": "Wed, 9 Sep 2020 17:40:24 GMT", "version": "v1" }, { "created": "Mon, 14 Sep 2020 17:17:00 GMT", "version": "v2" }, { "created": "Tue, 8 Jun 2021 12:39:26 GMT", "version": "v3" } ]
2021-06-09
[ [ "Padh", "Kirtan", "" ], [ "Antognini", "Diego", "" ], [ "Glaude", "Emma Lejal", "" ], [ "Faltings", "Boi", "" ], [ "Musat", "Claudiu", "" ] ]
The goal of fairness in classification is to learn a classifier that does not discriminate against groups of individuals based on sensitive attributes, such as race and gender. One approach to designing fair algorithms is to use relaxations of fairness notions as regularization terms or in a constrained optimization problem. We observe that the hyperbolic tangent function can approximate the indicator function. We leverage this property to define a differentiable relaxation that approximates fairness notions provably better than existing relaxations. In addition, we propose a model-agnostic multi-objective architecture that can simultaneously optimize for multiple fairness notions and multiple sensitive attributes and supports all statistical parity-based notions of fairness. We use our relaxation with the multi-objective architecture to learn fair classifiers. Experiments on public datasets show that our method suffers a significantly lower loss of accuracy than current debiasing algorithms relative to the unconstrained model.
1410.5370
Eric Seidel
Eric L. Seidel, Niki Vazou, Ranjit Jhala
Type Targeted Testing
null
null
10.1007/978-3-662-46669-8_33
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new technique called type targeted testing, which translates precise refinement types into comprehensive test-suites. The key insight behind our approach is that through the lens of SMT solvers, refinement types can also be viewed as a high-level, declarative, test generation technique, wherein types are converted to SMT queries whose models can be decoded into concrete program inputs. Our approach enables the systematic and exhaustive testing of implementations from high-level declarative specifications, and furthermore, provides a gradual path from testing to full verification. We have implemented our approach as a Haskell testing tool called TARGET, and present an evaluation that shows how TARGET can be used to test a wide variety of properties and how it compares against state-of-the-art testing approaches.
[ { "created": "Mon, 20 Oct 2014 17:48:20 GMT", "version": "v1" }, { "created": "Fri, 16 Jan 2015 03:55:38 GMT", "version": "v2" } ]
2017-08-29
[ [ "Seidel", "Eric L.", "" ], [ "Vazou", "Niki", "" ], [ "Jhala", "Ranjit", "" ] ]
We present a new technique called type targeted testing, which translates precise refinement types into comprehensive test-suites. The key insight behind our approach is that through the lens of SMT solvers, refinement types can also be viewed as a high-level, declarative, test generation technique, wherein types are converted to SMT queries whose models can be decoded into concrete program inputs. Our approach enables the systematic and exhaustive testing of implementations from high-level declarative specifications, and furthermore, provides a gradual path from testing to full verification. We have implemented our approach as a Haskell testing tool called TARGET, and present an evaluation that shows how TARGET can be used to test a wide variety of properties and how it compares against state-of-the-art testing approaches.
2104.11079
Tamara Kolda
Aydin Buluc, Tamara G. Kolda, Stefan M. Wild, Mihai Anitescu, Anthony DeGennaro, John Jakeman, Chandrika Kamath, Ramakrishnan Kannan, Miles E. Lopes, Per-Gunnar Martinsson, Kary Myers, Jelani Nelson, Juan M. Restrepo, C. Seshadhri, Draguna Vrabie, Brendt Wohlberg, Stephen J. Wright, Chao Yang, Peter Zwart
Randomized Algorithms for Scientific Computing (RASC)
null
null
10.2172/1807223
null
cs.AI cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Randomized algorithms have propelled advances in artificial intelligence and represent a foundational research area in advancing AI for Science. Future advancements in DOE Office of Science priority areas such as climate science, astrophysics, fusion, advanced materials, combustion, and quantum computing all require randomized algorithms for surmounting challenges of complexity, robustness, and scalability. This report summarizes the outcomes of the workshop "Randomized Algorithms for Scientific Computing (RASC)," held virtually across four days in December 2020 and January 2021.
[ { "created": "Mon, 19 Apr 2021 18:59:26 GMT", "version": "v1" }, { "created": "Tue, 28 Sep 2021 15:27:52 GMT", "version": "v2" }, { "created": "Mon, 21 Mar 2022 21:29:54 GMT", "version": "v3" } ]
2022-03-23
[ [ "Buluc", "Aydin", "" ], [ "Kolda", "Tamara G.", "" ], [ "Wild", "Stefan M.", "" ], [ "Anitescu", "Mihai", "" ], [ "DeGennaro", "Anthony", "" ], [ "Jakeman", "John", "" ], [ "Kamath", "Chandrika", "" ], [ "Kannan", "Ramakrishnan", "" ], [ "Lopes", "Miles E.", "" ], [ "Martinsson", "Per-Gunnar", "" ], [ "Myers", "Kary", "" ], [ "Nelson", "Jelani", "" ], [ "Restrepo", "Juan M.", "" ], [ "Seshadhri", "C.", "" ], [ "Vrabie", "Draguna", "" ], [ "Wohlberg", "Brendt", "" ], [ "Wright", "Stephen J.", "" ], [ "Yang", "Chao", "" ], [ "Zwart", "Peter", "" ] ]
Randomized algorithms have propelled advances in artificial intelligence and represent a foundational research area in advancing AI for Science. Future advancements in DOE Office of Science priority areas such as climate science, astrophysics, fusion, advanced materials, combustion, and quantum computing all require randomized algorithms for surmounting challenges of complexity, robustness, and scalability. This report summarizes the outcomes of the workshop "Randomized Algorithms for Scientific Computing (RASC)," held virtually across four days in December 2020 and January 2021.
2103.14915
Shengliang Lu
Shengliang Lu, Shixuan Sun, Johns Paul, Yuchen Li, Bingsheng He
Cache-Efficient Fork-Processing Patterns on Large Graphs
in SIGMOD 2021
null
10.1145/3448016.3457253
null
cs.DB cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As large graph processing emerges, we observe a costly fork-processing pattern (FPP) that is common in many graph algorithms. The unique feature of the FPP is that it launches many independent queries from different source vertices on the same graph. For example, an algorithm in analyzing the network community profile can execute Personalized PageRanks that start from tens of thousands of source vertices at the same time. We study the efficiency of handling FPPs in state-of-the-art graph processing systems on multi-core architectures. We find that those systems suffer from severe cache miss penalties because of the irregular and uncoordinated memory accesses in processing FPPs. In this paper, we propose ForkGraph, a cache-efficient FPP processing system on multi-core architectures. To improve cache reuse, we divide the graph into partitions, each sized to the LLC capacity, and the queries in an FPP are buffered and executed on a per-partition basis. We further develop efficient intra- and inter-partition execution strategies. For intra-partition processing, since a graph partition fits into the LLC, we propose to execute each graph query with efficient sequential algorithms (in contrast with the parallel algorithms in existing parallel graph processing systems) and present atomic-free query processing by consolidating contending operations on the cache-resident graph partition. For inter-partition processing, we propose yielding and priority-based scheduling to reduce redundant work in processing. Besides, we theoretically prove that ForkGraph performs the same amount of work, to within a constant factor, as the fastest known sequential algorithms in FPP query processing, i.e., it is work efficient. Our evaluations on real-world graphs show that ForkGraph significantly outperforms state-of-the-art graph processing systems with two orders of magnitude speedups.
[ { "created": "Sat, 27 Mar 2021 14:29:04 GMT", "version": "v1" }, { "created": "Sun, 11 Apr 2021 01:05:52 GMT", "version": "v2" } ]
2021-04-13
[ [ "Lu", "Shengliang", "" ], [ "Sun", "Shixuan", "" ], [ "Paul", "Johns", "" ], [ "Li", "Yuchen", "" ], [ "He", "Bingsheng", "" ] ]
As large graph processing emerges, we observe a costly fork-processing pattern (FPP) that is common in many graph algorithms. The unique feature of the FPP is that it launches many independent queries from different source vertices on the same graph. For example, an algorithm in analyzing the network community profile can execute Personalized PageRanks that start from tens of thousands of source vertices at the same time. We study the efficiency of handling FPPs in state-of-the-art graph processing systems on multi-core architectures. We find that those systems suffer from severe cache miss penalties because of the irregular and uncoordinated memory accesses in processing FPPs. In this paper, we propose ForkGraph, a cache-efficient FPP processing system on multi-core architectures. To improve cache reuse, we divide the graph into partitions, each sized to the LLC capacity, and the queries in an FPP are buffered and executed on a per-partition basis. We further develop efficient intra- and inter-partition execution strategies. For intra-partition processing, since a graph partition fits into the LLC, we propose to execute each graph query with efficient sequential algorithms (in contrast with the parallel algorithms in existing parallel graph processing systems) and present atomic-free query processing by consolidating contending operations on the cache-resident graph partition. For inter-partition processing, we propose yielding and priority-based scheduling to reduce redundant work in processing. Besides, we theoretically prove that ForkGraph performs the same amount of work, to within a constant factor, as the fastest known sequential algorithms in FPP query processing, i.e., it is work efficient. Our evaluations on real-world graphs show that ForkGraph significantly outperforms state-of-the-art graph processing systems with two orders of magnitude speedups.
2405.10153
Moyi Li
Moyi Li, Dzmitry Katsiuba, Mateusz Dolata and Gerhard Schwabe
Firefighters' Perceptions on Collaboration and Interaction with Autonomous Drones: Results of a Field Trial
This is authors' copy of the manuscript accepted for ACM CHI Conference on Human Factors in Computing Systems 2024. Please, refer to the published article at https://doi.org/10.1145/3613904.3642061 for further information
CHI '24: Proceedings of the CHI Conference on Human Factors in Computing Systems (2024), Article No.: 265, 1-19
10.1145/3613904.3642061
null
cs.HC
http://creativecommons.org/licenses/by-sa/4.0/
Applications of drones in emergency response, like firefighting, have been promoted in the past decade. As the autonomy of drones continues to improve, the ways in which they are integrated into firefighting teams and their impact on crews are changing. This demands more understanding of how firefighters perceive and interact with autonomous drones. This paper presents a drone-based system for emergency operations with which firefighters can interact through sound, lights, and a graphical user interface. We use interviews with stakeholders collected in two field trials to explore their perceptions of the interaction and collaboration with drones. Our result shows that firefighters perceived visual interaction as adequate. However, for audio instructions and interfaces, information overload emerges as an essential problem. The potential impact of drones on current work configurations may involve shifting the position of humans closer to supervisory decision-makers and changing the training structure and content.
[ { "created": "Thu, 16 May 2024 14:48:24 GMT", "version": "v1" } ]
2024-05-17
[ [ "Li", "Moyi", "" ], [ "Katsiuba", "Dzmitry", "" ], [ "Dolata", "Mateusz", "" ], [ "Schwabe", "Gerhard", "" ] ]
Applications of drones in emergency response, like firefighting, have been promoted in the past decade. As the autonomy of drones continues to improve, the ways in which they are integrated into firefighting teams and their impact on crews are changing. This demands more understanding of how firefighters perceive and interact with autonomous drones. This paper presents a drone-based system for emergency operations with which firefighters can interact through sound, lights, and a graphical user interface. We use interviews with stakeholders collected in two field trials to explore their perceptions of the interaction and collaboration with drones. Our result shows that firefighters perceived visual interaction as adequate. However, for audio instructions and interfaces, information overload emerges as an essential problem. The potential impact of drones on current work configurations may involve shifting the position of humans closer to supervisory decision-makers and changing the training structure and content.
2003.00863
Haotian Zhang
Haotian Zhang, Jianyong Sun and Zongben Xu
Adaptive Structural Hyper-Parameter Configuration by Q-Learning
null
2020 IEEE Congress on Evolutionary Computation (CEC)
10.1109/CEC48606.2020.9185665
null
cs.NE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tuning hyper-parameters for evolutionary algorithms is an important issue in computational intelligence. The performance of an evolutionary algorithm depends not only on its operation strategy design, but also on its hyper-parameters. Hyper-parameters can be categorized along two dimensions: structural/numerical and time-invariant/time-variant. In particular, structural hyper-parameters in existing studies are usually tuned in advance for time-invariant parameters, or with hand-crafted scheduling for time-variant parameters. In this paper, we make the first attempt to model the tuning of structural hyper-parameters as a reinforcement learning problem, and use Q-learning to tune the structural hyper-parameter which controls computational resource allocation in the CEC 2018 winner algorithm. Experimental results compare favorably against the winner algorithm on the CEC 2018 test functions.
[ { "created": "Mon, 2 Mar 2020 13:10:13 GMT", "version": "v1" } ]
2020-11-24
[ [ "Zhang", "Haotian", "" ], [ "Sun", "Jianyong", "" ], [ "Xu", "Zongben", "" ] ]
Tuning hyper-parameters for evolutionary algorithms is an important issue in computational intelligence. The performance of an evolutionary algorithm depends not only on its operation strategy design, but also on its hyper-parameters. Hyper-parameters can be categorized along two dimensions: structural/numerical and time-invariant/time-variant. In particular, structural hyper-parameters in existing studies are usually tuned in advance for time-invariant parameters, or with hand-crafted scheduling for time-variant parameters. In this paper, we make the first attempt to model the tuning of structural hyper-parameters as a reinforcement learning problem, and use Q-learning to tune the structural hyper-parameter which controls computational resource allocation in the CEC 2018 winner algorithm. Experimental results compare favorably against the winner algorithm on the CEC 2018 test functions.
1707.03186
Remy Cazabet
Giulio Rossetti and R\'emy Cazabet
Community Discovery in Dynamic Networks: a Survey
null
null
10.1145/3172867
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Networks built to model real-world phenomena are characterised by some properties that have attracted the attention of the scientific community: (i) they are organised according to community structure and (ii) their structure evolves with time. Many researchers have worked on methods that can efficiently unveil substructures in complex networks, giving birth to the field of community discovery. A novel and challenging problem started capturing researcher interest recently: the identification of evolving communities. To model the evolution of a system, dynamic networks can be used: nodes and edges are mutable and their presence, or absence, deeply impacts the community structure that composes them. The aim of this survey is to present the distinctive features and challenges of dynamic community discovery, and propose a classification of published approaches. As a "user manual", this work organizes state-of-the-art methodologies into a taxonomy, based on their rationale and their specific instantiation. Given a desired definition of network dynamics, community characteristics and analytical needs, this survey will help researchers identify the set of approaches that best fit their needs. The proposed classification could also help researchers to choose in which direction future research should be oriented.
[ { "created": "Tue, 11 Jul 2017 09:25:20 GMT", "version": "v1" }, { "created": "Wed, 6 Dec 2017 08:14:13 GMT", "version": "v2" }, { "created": "Tue, 3 Sep 2019 12:42:25 GMT", "version": "v3" } ]
2019-09-04
[ [ "Rossetti", "Giulio", "" ], [ "Cazabet", "Rémy", "" ] ]
Networks built to model real-world phenomena are characterised by some properties that have attracted the attention of the scientific community: (i) they are organised according to community structure and (ii) their structure evolves with time. Many researchers have worked on methods that can efficiently unveil substructures in complex networks, giving birth to the field of community discovery. A novel and challenging problem started capturing researcher interest recently: the identification of evolving communities. To model the evolution of a system, dynamic networks can be used: nodes and edges are mutable and their presence, or absence, deeply impacts the community structure that composes them. The aim of this survey is to present the distinctive features and challenges of dynamic community discovery, and propose a classification of published approaches. As a "user manual", this work organizes state-of-the-art methodologies into a taxonomy, based on their rationale and their specific instantiation. Given a desired definition of network dynamics, community characteristics and analytical needs, this survey will help researchers identify the set of approaches that best fit their needs. The proposed classification could also help researchers to choose in which direction future research should be oriented.
2303.09700
Mihaela Curmei
Han Zhang, Shangen Lu, Yixin Wang, Mihaela Curmei
Delayed and Indirect Impacts of Link Recommendations
null
null
null
null
cs.SI cs.AI cs.LG stat.AP
http://creativecommons.org/licenses/by/4.0/
The impacts of link recommendations on social networks are challenging to evaluate, and so far they have been studied in limited settings. Observational studies are restricted in the kinds of causal questions they can answer and naive A/B tests often lead to biased evaluations due to unaccounted network interference. Furthermore, evaluations in simulation settings are often limited to static network models that do not take into account the potential feedback loops between link recommendation and organic network evolution. To this end, we study the impacts of recommendations on social networks in dynamic settings. Adopting a simulation-based approach, we consider an explicit dynamic formation model -- an extension of the celebrated Jackson-Rogers model -- and investigate how link recommendations affect network evolution over time. Empirically, we find that link recommendations have surprising delayed and indirect effects on the structural properties of networks. Specifically, we find that link recommendations can exhibit considerably different impacts in the immediate term and in the long term. For instance, we observe that friend-of-friend recommendations can have an immediate effect in decreasing degree inequality, but in the long term, they can make the degree distribution substantially more unequal. Moreover, we show that the effects of recommendations can persist in networks, in part due to their indirect impacts on natural dynamics even after recommendations are turned off. We show that, in counterfactual simulations, removing the indirect effects of link recommendations can make the network trend faster toward what it would have been under natural growth dynamics.
[ { "created": "Fri, 17 Mar 2023 00:09:19 GMT", "version": "v1" } ]
2023-03-20
[ [ "Zhang", "Han", "" ], [ "Lu", "Shangen", "" ], [ "Wang", "Yixin", "" ], [ "Curmei", "Mihaela", "" ] ]
The impacts of link recommendations on social networks are challenging to evaluate, and so far they have been studied in limited settings. Observational studies are restricted in the kinds of causal questions they can answer and naive A/B tests often lead to biased evaluations due to unaccounted network interference. Furthermore, evaluations in simulation settings are often limited to static network models that do not take into account the potential feedback loops between link recommendation and organic network evolution. To this end, we study the impacts of recommendations on social networks in dynamic settings. Adopting a simulation-based approach, we consider an explicit dynamic formation model -- an extension of the celebrated Jackson-Rogers model -- and investigate how link recommendations affect network evolution over time. Empirically, we find that link recommendations have surprising delayed and indirect effects on the structural properties of networks. Specifically, we find that link recommendations can exhibit considerably different impacts in the immediate term and in the long term. For instance, we observe that friend-of-friend recommendations can have an immediate effect in decreasing degree inequality, but in the long term, they can make the degree distribution substantially more unequal. Moreover, we show that the effects of recommendations can persist in networks, in part due to their indirect impacts on natural dynamics even after recommendations are turned off. We show that, in counterfactual simulations, removing the indirect effects of link recommendations can make the network trend faster toward what it would have been under natural growth dynamics.
1907.02218
Ofir Geri
Edith Cohen and Ofir Geri
Sampling Sketches for Concave Sublinear Functions of Frequencies
Full version of a NeurIPS 2019 paper
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider massive distributed datasets that consist of elements modeled as key-value pairs and the task of computing statistics or aggregates where the contribution of each key is weighted by a function of its frequency (sum of values of its elements). This fundamental problem has a wealth of applications in data analytics and machine learning, in particular, with concave sublinear functions of the frequencies that mitigate the disproportionate effect of keys with high frequency. The family of concave sublinear functions includes low frequency moments ($p \leq 1$), capping, logarithms, and their compositions. A common approach is to sample keys, ideally, proportionally to their contributions and estimate statistics from the sample. A simple but costly way to do this is to aggregate the data to produce a table of keys and their frequencies, apply our function to the frequency values, and then apply a weighted sampling scheme. Our main contribution is the design of composable sampling sketches that can be tailored to any concave sublinear function of the frequencies. Our sketch structure size is very close to the desired sample size and our samples provide statistical guarantees on the estimation quality that are very close to that of an ideal sample of the same size computed over aggregated data. Finally, we demonstrate experimentally the simplicity and effectiveness of our methods.
[ { "created": "Thu, 4 Jul 2019 04:55:21 GMT", "version": "v1" }, { "created": "Fri, 6 Dec 2019 00:12:09 GMT", "version": "v2" }, { "created": "Sun, 22 Dec 2019 16:28:45 GMT", "version": "v3" } ]
2019-12-24
[ [ "Cohen", "Edith", "" ], [ "Geri", "Ofir", "" ] ]
We consider massive distributed datasets that consist of elements modeled as key-value pairs and the task of computing statistics or aggregates where the contribution of each key is weighted by a function of its frequency (sum of values of its elements). This fundamental problem has a wealth of applications in data analytics and machine learning, in particular, with concave sublinear functions of the frequencies that mitigate the disproportionate effect of keys with high frequency. The family of concave sublinear functions includes low frequency moments ($p \leq 1$), capping, logarithms, and their compositions. A common approach is to sample keys, ideally proportionally to their contributions, and estimate statistics from the sample. A simple but costly way to do this is by aggregating the data to produce a table of keys and their frequencies, applying our function to the frequency values, and then applying a weighted sampling scheme. Our main contribution is the design of composable sampling sketches that can be tailored to any concave sublinear function of the frequencies. Our sketch structure size is very close to the desired sample size, and our samples provide statistical guarantees on the estimation quality that are very close to those of an ideal sample of the same size computed over aggregated data. Finally, we demonstrate experimentally the simplicity and effectiveness of our methods.
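The "simple but costly" baseline described in the abstract can be made concrete in a few lines. This is an illustrative sketch, not the paper's sketch-based algorithm: it aggregates key-value pairs into a frequency table and applies two members of the concave sublinear family (capping and a logarithm). All names here are mine.

```python
from collections import defaultdict
import math

def aggregate_statistic(elements, f):
    """Costly baseline: aggregate the data into a key -> frequency
    table, apply f to each frequency, and sum the results."""
    freq = defaultdict(float)
    for key, value in elements:
        freq[key] += value
    return sum(f(v) for v in freq.values())

# Two members of the concave sublinear family named in the abstract.
cap5 = lambda x: min(x, 5.0)          # capping at 5
log1p = lambda x: math.log(1.0 + x)   # logarithm

data = [("a", 3), ("b", 1), ("a", 4), ("c", 2), ("b", 1)]
# Frequencies: a -> 7, b -> 2, c -> 2.
print(aggregate_statistic(data, cap5))   # 5 + 2 + 2 = 9.0
print(aggregate_statistic(data, log1p))  # log(8) + 2*log(3)
```

The sketches in the paper avoid materializing the full frequency table; this baseline is only the reference point the sketches are compared against.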
2010.10176
Markus J. Hofmann
Markus J. Hofmann, Lara M\"uller, Andre R\"olke, Ralph Radach and Chris Biemann
Individual corpora predict fast memory retrieval during reading
Proceedings of the 6th workshop on Cognitive Aspects of the Lexicon (CogALex-VI), Barcelona, Spain, December 12, 2020; accepted manuscript; 11 pages, 2 figures, 4 Tables
null
null
null
cs.CL cs.IR
http://creativecommons.org/licenses/by/4.0/
The corpus from which a predictive language model is trained can be considered the experience of a semantic system. We recorded the everyday reading of two participants for two months on a tablet, generating individual corpus samples of 300/500K tokens. We then trained word2vec models from the individual corpora and from a 70 million-sentence newspaper corpus to obtain individual and norm-based long-term memory structure. To test whether individual corpora can make better predictions for a cognitive task of long-term memory retrieval, we generated stimulus materials consisting of 134 sentences with uncorrelated individual and norm-based word probabilities. In the subsequent eye tracking study 1-2 months later, our regression analyses revealed that individual, but not norm-corpus-based, word probabilities can account for first-fixation duration and first-pass gaze duration. Word length additionally affected gaze duration and total viewing duration. The results suggest that corpora representative of an individual's long-term memory structure can better explain reading performance than a norm corpus, and that recently acquired information is lexically accessed rapidly.
[ { "created": "Tue, 20 Oct 2020 10:18:20 GMT", "version": "v1" } ]
2020-10-21
[ [ "Hofmann", "Markus J.", "" ], [ "Müller", "Lara", "" ], [ "Rölke", "Andre", "" ], [ "Radach", "Ralph", "" ], [ "Biemann", "Chris", "" ] ]
The corpus from which a predictive language model is trained can be considered the experience of a semantic system. We recorded the everyday reading of two participants for two months on a tablet, generating individual corpus samples of 300/500K tokens. We then trained word2vec models from the individual corpora and from a 70 million-sentence newspaper corpus to obtain individual and norm-based long-term memory structure. To test whether individual corpora can make better predictions for a cognitive task of long-term memory retrieval, we generated stimulus materials consisting of 134 sentences with uncorrelated individual and norm-based word probabilities. In the subsequent eye tracking study 1-2 months later, our regression analyses revealed that individual, but not norm-corpus-based, word probabilities can account for first-fixation duration and first-pass gaze duration. Word length additionally affected gaze duration and total viewing duration. The results suggest that corpora representative of an individual's long-term memory structure can better explain reading performance than a norm corpus, and that recently acquired information is lexically accessed rapidly.
2407.00371
Linjiang Zhou
Linjiang Zhou, Xiaochuan Shi, Chao Ma, Zepeng Wang
Axiomatization of Gradient Smoothing in Neural Networks
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Gradients play a pivotal role in neural network explanation. The inherent high dimensionality and structural complexity of neural networks result in the original gradients containing a significant amount of noise. While several approaches have been proposed to reduce this noise via smoothing, there has been little discussion of the rationale behind smoothing gradients in neural networks. In this work, we propose a theoretical framework for gradient smoothing in neural networks based on function mollification and Monte Carlo integration. The framework intrinsically axiomatizes gradient smoothing and reveals the rationale behind existing methods. Furthermore, we provide an approach to designing new smoothing methods derived from the framework. By experimentally measuring several newly designed smoothing methods, we demonstrate the research potential of our framework.
[ { "created": "Sat, 29 Jun 2024 08:43:38 GMT", "version": "v1" } ]
2024-07-02
[ [ "Zhou", "Linjiang", "" ], [ "Shi", "Xiaochuan", "" ], [ "Ma", "Chao", "" ], [ "Wang", "Zepeng", "" ] ]
Gradients play a pivotal role in neural network explanation. The inherent high dimensionality and structural complexity of neural networks result in the original gradients containing a significant amount of noise. While several approaches have been proposed to reduce this noise via smoothing, there has been little discussion of the rationale behind smoothing gradients in neural networks. In this work, we propose a theoretical framework for gradient smoothing in neural networks based on function mollification and Monte Carlo integration. The framework intrinsically axiomatizes gradient smoothing and reveals the rationale behind existing methods. Furthermore, we provide an approach to designing new smoothing methods derived from the framework. By experimentally measuring several newly designed smoothing methods, we demonstrate the research potential of our framework.
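The mollification-plus-Monte-Carlo view of gradient smoothing described in the abstract amounts to averaging gradients under random perturbations (the idea behind methods such as SmoothGrad). A minimal sketch, assuming a toy analytic gradient in place of a real network:

```python
import numpy as np

def smoothed_gradient(grad, x, sigma=0.5, n_samples=2000, rng=None):
    """Monte Carlo estimate of the mollified gradient
    E_{eps ~ N(0, sigma^2 I)}[grad(x + eps)], i.e. the gradient
    convolved with a Gaussian mollifier."""
    rng = np.random.default_rng(0) if rng is None else rng
    eps = rng.normal(0.0, sigma, size=(n_samples,) + np.shape(x))
    return np.mean([grad(x + e) for e in eps], axis=0)

# Toy "model": f(x) = sum(x**2), whose exact gradient is 2x.
grad_f = lambda x: 2.0 * x

x0 = np.array([1.0, -2.0])
g = smoothed_gradient(grad_f, x0)
# For a linear gradient field, Gaussian smoothing leaves it (almost)
# unchanged; for a noisy network gradient it averages the noise away.
print(g)  # close to [2., -4.]
```

Different choices of mollifier and sampling scheme give different smoothing methods, which is the design space the framework is meant to organize.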
2302.10975
Felix Fiedler
Felix Fiedler and Sergio Lucia
Improved uncertainty quantification for neural networks with Bayesian last layer
This work has been published at IEEE Access with Digital Object Identifier 10.1109/ACCESS.2023.3329685 under a Creative Commons Attribution 4.0 License
IEEE Access, vol. 11, 2023
10.1109/ACCESS.2023.3329685
null
cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Uncertainty quantification is an important task in machine learning - a task in which standard neural networks (NNs) have traditionally not excelled. This can be a limitation for safety-critical applications, where uncertainty-aware methods like Gaussian processes or Bayesian linear regression are often preferred. Bayesian neural networks are an approach to address this limitation. They assume probability distributions for all parameters and yield distributed predictions. However, training and inference are typically intractable and approximations must be employed. A promising approximation is NNs with Bayesian last layer (BLL). They assume distributed weights only in the linear output layer and yield a normally distributed prediction. To approximate the intractable Bayesian neural network, point estimates of the distributed weights in all but the last layer should be obtained by maximizing the marginal likelihood. This has previously been challenging, as the marginal likelihood is expensive to evaluate in this setting. We present a reformulation of the log-marginal likelihood of a NN with BLL which allows for efficient training using backpropagation. Furthermore, we address the challenge of uncertainty quantification for extrapolation points. We provide a metric to quantify the degree of extrapolation and derive a method to improve the uncertainty quantification for these points. Our methods are derived for the multivariate case and demonstrated in a simulation study. In comparison to Bayesian linear regression with fixed features, and a Bayesian neural network trained with variational inference, our proposed method achieves the highest log-predictive density on test data.
[ { "created": "Tue, 21 Feb 2023 20:23:56 GMT", "version": "v1" }, { "created": "Wed, 12 Jul 2023 07:39:28 GMT", "version": "v2" }, { "created": "Wed, 3 Jan 2024 19:40:07 GMT", "version": "v3" } ]
2024-01-05
[ [ "Fiedler", "Felix", "" ], [ "Lucia", "Sergio", "" ] ]
Uncertainty quantification is an important task in machine learning - a task in which standard neural networks (NNs) have traditionally not excelled. This can be a limitation for safety-critical applications, where uncertainty-aware methods like Gaussian processes or Bayesian linear regression are often preferred. Bayesian neural networks are an approach to address this limitation. They assume probability distributions for all parameters and yield distributed predictions. However, training and inference are typically intractable and approximations must be employed. A promising approximation is NNs with Bayesian last layer (BLL). They assume distributed weights only in the linear output layer and yield a normally distributed prediction. To approximate the intractable Bayesian neural network, point estimates of the distributed weights in all but the last layer should be obtained by maximizing the marginal likelihood. This has previously been challenging, as the marginal likelihood is expensive to evaluate in this setting. We present a reformulation of the log-marginal likelihood of a NN with BLL which allows for efficient training using backpropagation. Furthermore, we address the challenge of uncertainty quantification for extrapolation points. We provide a metric to quantify the degree of extrapolation and derive a method to improve the uncertainty quantification for these points. Our methods are derived for the multivariate case and demonstrated in a simulation study. In comparison to Bayesian linear regression with fixed features, and a Bayesian neural network trained with variational inference, our proposed method achieves the highest log-predictive density on test data.
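For fixed last-layer features, the normally distributed BLL prediction the abstract describes reduces to closed-form Bayesian linear regression. A minimal numpy sketch (my notation, not the paper's code), treating the feature map as given:

```python
import numpy as np

def bll_posterior(Phi, y, noise_var=0.1, prior_var=1.0):
    """Closed-form Bayesian linear regression on fixed last-layer
    features Phi (n x d): posterior mean and covariance of the
    output weights under a Gaussian prior and Gaussian noise."""
    d = Phi.shape[1]
    S_inv = Phi.T @ Phi / noise_var + np.eye(d) / prior_var
    S = np.linalg.inv(S_inv)
    m = S @ Phi.T @ y / noise_var
    return m, S

def bll_predict(phi, m, S, noise_var=0.1):
    """Normally distributed prediction at feature vector phi:
    mean phi'm and variance phi'S phi + noise variance."""
    return phi @ m, phi @ S @ phi + noise_var

rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 3))            # stand-in for network features
w_true = np.array([1.0, -0.5, 2.0])
y = Phi @ w_true + rng.normal(scale=np.sqrt(0.1), size=200)

m, S = bll_posterior(Phi, y)
mean, var = bll_predict(np.array([1.0, 1.0, 1.0]), m, S)
print(mean, var)  # mean near 2.5; var slightly above the noise floor 0.1
```

The paper's contribution sits upstream of this: a reformulated log-marginal likelihood that lets the feature-extracting layers be trained by backpropagation, plus a correction for extrapolation points.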
2003.02976
Dinislam Abdulgalimov
Dinislam Abdulgalimov, Reuben Kirkham, James Nicholson, Vasilis Vlachokyriakos, Pam Briggs, Patrick Olivier
Designing for Employee Voice
10 pages, 4 figures, CHI 2020 Proceedings
null
10.1145/3313831.3376284
null
cs.HC cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Employee voice and workplace democracy have a positive impact on employee wellbeing and the performance of organizations. In this paper, we conducted interviews with employees to identify facilitators and inhibitors of voice within the workplace, along with a corresponding set of appropriate qualities: Civility, Validity, Safety and Egalitarianism. We then operationalised these qualities as a set of design goals - Assured Anonymity, Constructive Moderation, Adequate Slowness and Controlled Access - in the design and development of a secure anonymous employee voice system. Our novel take on the Enterprise Social Network aims to foster good citizenship whilst also promoting frank yet constructive discussion. We reflect on a two-week deployment of our system, the diverse range of candid discussions that emerged around important workplace issues, and the potential for change within the host organization. We conclude by reflecting on the ways in which our approach shaped the discourse and supported the creation of a trusted environment for employee voice.
[ { "created": "Fri, 6 Mar 2020 00:28:59 GMT", "version": "v1" } ]
2020-03-09
[ [ "Abdulgalimov", "Dinislam", "" ], [ "Kirkham", "Reuben", "" ], [ "Nicholson", "James", "" ], [ "Vlachokyriakos", "Vasilis", "" ], [ "Briggs", "Pam", "" ], [ "Olivier", "Patrick", "" ] ]
Employee voice and workplace democracy have a positive impact on employee wellbeing and the performance of organizations. In this paper, we conducted interviews with employees to identify facilitators and inhibitors of voice within the workplace, along with a corresponding set of appropriate qualities: Civility, Validity, Safety and Egalitarianism. We then operationalised these qualities as a set of design goals - Assured Anonymity, Constructive Moderation, Adequate Slowness and Controlled Access - in the design and development of a secure anonymous employee voice system. Our novel take on the Enterprise Social Network aims to foster good citizenship whilst also promoting frank yet constructive discussion. We reflect on a two-week deployment of our system, the diverse range of candid discussions that emerged around important workplace issues, and the potential for change within the host organization. We conclude by reflecting on the ways in which our approach shaped the discourse and supported the creation of a trusted environment for employee voice.
2407.20283
Fuling Chen
Fuling Chen, Kevin Vinsen, Arthur Filoche
Spatial Temporal Approach for High-Resolution Gridded Wind Forecasting across Southwest Western Australia
null
null
null
null
cs.LG physics.ao-ph
http://creativecommons.org/licenses/by/4.0/
Accurate wind speed and direction forecasting is paramount across many sectors, spanning agriculture, renewable energy generation, and bushfire management. However, conventional forecasting models encounter significant challenges in precisely predicting wind conditions at high spatial resolutions for individual locations or small geographical areas (< 20 km2) and in capturing medium- to long-range temporal trends and comprehensive spatio-temporal patterns. This study focuses on a spatial temporal approach for high-resolution gridded wind forecasting at heights of 3 and 10 metres across large areas of the Southwest of Western Australia to overcome these challenges. The model utilises data covering a broad geographic area and harnesses a diverse array of meteorological factors, including terrain characteristics, air pressure, 10-metre wind forecasts from the European Centre for Medium-Range Weather Forecasts, and limited observation data from sparsely distributed weather stations (such as 3-metre wind profiles, humidity, and temperature), and it demonstrates promising advancements in wind forecasting accuracy and reliability across the entire region of interest. This paper shows the potential of our machine learning model for wind forecasts across various prediction horizons and spatial coverage. It can help facilitate more informed decision-making and enhance resilience across critical sectors.
[ { "created": "Fri, 26 Jul 2024 05:44:27 GMT", "version": "v1" } ]
2024-07-31
[ [ "Chen", "Fuling", "" ], [ "Vinsen", "Kevin", "" ], [ "Filoche", "Arthur", "" ] ]
Accurate wind speed and direction forecasting is paramount across many sectors, spanning agriculture, renewable energy generation, and bushfire management. However, conventional forecasting models encounter significant challenges in precisely predicting wind conditions at high spatial resolutions for individual locations or small geographical areas (< 20 km2) and in capturing medium- to long-range temporal trends and comprehensive spatio-temporal patterns. This study focuses on a spatial temporal approach for high-resolution gridded wind forecasting at heights of 3 and 10 metres across large areas of the Southwest of Western Australia to overcome these challenges. The model utilises data covering a broad geographic area and harnesses a diverse array of meteorological factors, including terrain characteristics, air pressure, 10-metre wind forecasts from the European Centre for Medium-Range Weather Forecasts, and limited observation data from sparsely distributed weather stations (such as 3-metre wind profiles, humidity, and temperature), and it demonstrates promising advancements in wind forecasting accuracy and reliability across the entire region of interest. This paper shows the potential of our machine learning model for wind forecasts across various prediction horizons and spatial coverage. It can help facilitate more informed decision-making and enhance resilience across critical sectors.
2408.04919
Chaofan Li
Chaofan Li, Yingxia Shao, Zheng Liu
SEA-SQL: Semantic-Enhanced Text-to-SQL with Adaptive Refinement
null
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
Recent advancements in large language models (LLMs) have significantly contributed to the progress of the Text-to-SQL task. A common requirement in many of these works is the post-correction of SQL queries. However, the majority of this process entails analyzing error cases to develop prompts with rules that eliminate model bias, and there is an absence of execution verification for SQL queries. In addition, the prevalent techniques primarily depend on GPT-4 and few-shot prompts, resulting in high costs. To investigate effective methods for SQL refinement in a cost-efficient manner, we introduce Semantic-Enhanced Text-to-SQL with Adaptive Refinement (SEA-SQL), which includes Adaptive Bias Elimination and Dynamic Execution Adjustment and aims to improve performance while minimizing resource expenditure with zero-shot prompts. Specifically, SEA-SQL employs a semantic-enhanced schema to augment database information and optimize SQL queries. During SQL query generation, a fine-tuned adaptive bias eliminator is applied to mitigate inherent biases caused by the LLM. The dynamic execution adjustment is utilized to guarantee the executability of the bias-eliminated SQL query. We conduct experiments on the Spider and BIRD datasets to demonstrate the effectiveness of this framework. The results demonstrate that SEA-SQL achieves state-of-the-art performance in the GPT-3.5 scenario with 9%-58% of the generation cost. Furthermore, SEA-SQL is comparable to GPT-4 with only 0.9%-5.3% of the generation cost.
[ { "created": "Fri, 9 Aug 2024 08:01:37 GMT", "version": "v1" } ]
2024-08-12
[ [ "Li", "Chaofan", "" ], [ "Shao", "Yingxia", "" ], [ "Liu", "Zheng", "" ] ]
Recent advancements in large language models (LLMs) have significantly contributed to the progress of the Text-to-SQL task. A common requirement in many of these works is the post-correction of SQL queries. However, the majority of this process entails analyzing error cases to develop prompts with rules that eliminate model bias, and there is an absence of execution verification for SQL queries. In addition, the prevalent techniques primarily depend on GPT-4 and few-shot prompts, resulting in high costs. To investigate effective methods for SQL refinement in a cost-efficient manner, we introduce Semantic-Enhanced Text-to-SQL with Adaptive Refinement (SEA-SQL), which includes Adaptive Bias Elimination and Dynamic Execution Adjustment and aims to improve performance while minimizing resource expenditure with zero-shot prompts. Specifically, SEA-SQL employs a semantic-enhanced schema to augment database information and optimize SQL queries. During SQL query generation, a fine-tuned adaptive bias eliminator is applied to mitigate inherent biases caused by the LLM. The dynamic execution adjustment is utilized to guarantee the executability of the bias-eliminated SQL query. We conduct experiments on the Spider and BIRD datasets to demonstrate the effectiveness of this framework. The results demonstrate that SEA-SQL achieves state-of-the-art performance in the GPT-3.5 scenario with 9%-58% of the generation cost. Furthermore, SEA-SQL is comparable to GPT-4 with only 0.9%-5.3% of the generation cost.
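The execution-verification idea behind Dynamic Execution Adjustment (check that a generated query actually runs, and retry otherwise) can be sketched as follows. This is a hedged illustration, not the paper's pipeline: `candidates` is a plain list standing in for the LLM's successive generations, and the schema and queries are invented for the example.

```python
import sqlite3

def dynamic_execution_adjustment(candidates, conn, max_rounds=3):
    """Try candidate SQL queries in order and return the first one
    that executes successfully. In SEA-SQL the next candidate would
    come from the LLM, conditioned on the captured error message."""
    last_error = None
    for sql in candidates[:max_rounds]:
        try:
            rows = conn.execute(sql).fetchall()
            return sql, rows
        except sqlite3.Error as e:
            last_error = str(e)  # would be fed back to the model
    raise RuntimeError(f"no executable query found: {last_error}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada'), (2, 'bob')")

# The first candidate references a missing column; the loop falls
# through to a valid query.
sql, rows = dynamic_execution_adjustment(
    ["SELECT username FROM users", "SELECT name FROM users"], conn)
print(sql, rows)  # SELECT name FROM users [('ada',), ('bob',)]
```

Executability is only a necessary condition for correctness, which is why the paper pairs this loop with the bias-elimination step rather than relying on it alone.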
1808.01614
Rick Salay
Rick Salay, Krzysztof Czarnecki
Using Machine Learning Safely in Automotive Software: An Assessment and Adaption of Software Process Requirements in ISO 26262
null
null
null
null
cs.LG cs.SE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of machine learning (ML) is on the rise in many sectors of software development, and automotive software development is no different. In particular, Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS) are two areas where ML plays a significant role. In automotive development, safety is a critical objective, and the emergence of standards such as ISO 26262 has helped focus industry practices to address safety in a systematic and consistent way. Unfortunately, these standards were not designed to accommodate technologies such as ML or the type of functionality that is provided by an ADS and this has created a conflict between the need to innovate and the need to improve safety. In this report, we take steps to address this conflict by doing a detailed assessment and adaption of ISO 26262 for ML, specifically in the context of supervised learning. First we analyze the key factors that are the source of the conflict. Then we assess each software development process requirement (Part 6 of ISO 26262) for applicability to ML. Where there are gaps, we propose new requirements to address the gaps. Finally we discuss the application of this adapted and extended variant of Part 6 to ML development scenarios.
[ { "created": "Sun, 5 Aug 2018 13:40:22 GMT", "version": "v1" } ]
2018-08-07
[ [ "Salay", "Rick", "" ], [ "Czarnecki", "Krzysztof", "" ] ]
The use of machine learning (ML) is on the rise in many sectors of software development, and automotive software development is no different. In particular, Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS) are two areas where ML plays a significant role. In automotive development, safety is a critical objective, and the emergence of standards such as ISO 26262 has helped focus industry practices to address safety in a systematic and consistent way. Unfortunately, these standards were not designed to accommodate technologies such as ML or the type of functionality that is provided by an ADS and this has created a conflict between the need to innovate and the need to improve safety. In this report, we take steps to address this conflict by doing a detailed assessment and adaption of ISO 26262 for ML, specifically in the context of supervised learning. First we analyze the key factors that are the source of the conflict. Then we assess each software development process requirement (Part 6 of ISO 26262) for applicability to ML. Where there are gaps, we propose new requirements to address the gaps. Finally we discuss the application of this adapted and extended variant of Part 6 to ML development scenarios.
2007.14570
Long Cheng
Song Liao, Christin Wilson, Long Cheng, Hongxin Hu, Huixing Deng
Measuring the Effectiveness of Privacy Policies for Voice Assistant Applications
null
null
null
null
cs.CR cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Voice Assistants (VA) such as Amazon Alexa and Google Assistant are quickly and seamlessly integrating into people's daily lives. The increased reliance on VA services raises privacy concerns such as the leakage of private conversations and sensitive information. Privacy policies play an important role in addressing users' privacy concerns and informing them about the data collection, storage, and sharing practices. VA platforms (both Amazon Alexa and Google Assistant) allow third-party developers to build new voice-apps and publish them to the app store. Voice-app developers are required to provide privacy policies to disclose their apps' data practices. However, little is known about whether these privacy policies are informative and trustworthy on emerging VA platforms. On the other hand, many users invoke voice-apps through voice, and thus there exists a usability challenge for users to access these privacy policies. In this paper, we conduct the first large-scale data analytics study to systematically measure the effectiveness of privacy policies provided by voice-app developers on two mainstream VA platforms. We seek to understand the quality and usability issues of privacy policies provided by developers in the current app stores. We analyzed 64,720 Amazon Alexa skills and 2,201 Google Assistant actions. Our work also includes a user study to understand users' perspectives on VAs' privacy policies. Our findings reveal a worrisome reality of privacy policies in two mainstream voice-app stores, where there exists a substantial number of problematic privacy policies. Surprisingly, Google and Amazon even have official voice-apps violating their own requirements regarding the privacy policy.
[ { "created": "Wed, 29 Jul 2020 03:17:51 GMT", "version": "v1" } ]
2020-07-30
[ [ "Liao", "Song", "" ], [ "Wilson", "Christin", "" ], [ "Cheng", "Long", "" ], [ "Hu", "Hongxin", "" ], [ "Deng", "Huixing", "" ] ]
Voice Assistants (VA) such as Amazon Alexa and Google Assistant are quickly and seamlessly integrating into people's daily lives. The increased reliance on VA services raises privacy concerns such as the leakage of private conversations and sensitive information. Privacy policies play an important role in addressing users' privacy concerns and informing them about the data collection, storage, and sharing practices. VA platforms (both Amazon Alexa and Google Assistant) allow third-party developers to build new voice-apps and publish them to the app store. Voice-app developers are required to provide privacy policies to disclose their apps' data practices. However, little is known about whether these privacy policies are informative and trustworthy on emerging VA platforms. On the other hand, many users invoke voice-apps through voice, and thus there exists a usability challenge for users to access these privacy policies. In this paper, we conduct the first large-scale data analytics study to systematically measure the effectiveness of privacy policies provided by voice-app developers on two mainstream VA platforms. We seek to understand the quality and usability issues of privacy policies provided by developers in the current app stores. We analyzed 64,720 Amazon Alexa skills and 2,201 Google Assistant actions. Our work also includes a user study to understand users' perspectives on VAs' privacy policies. Our findings reveal a worrisome reality of privacy policies in two mainstream voice-app stores, where there exists a substantial number of problematic privacy policies. Surprisingly, Google and Amazon even have official voice-apps violating their own requirements regarding the privacy policy.
1907.01739
Charu Sharma
Charu Sharma, Deepak Nathani, Manohar Kaul
Solving Partial Assignment Problems using Random Clique Complexes
Accepted as a long talk at ICML 2018
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an alternate formulation of the partial assignment problem as matching random clique complexes, that are higher-order analogues of random graphs, designed to provide a set of invariants that better detect higher-order structure. The proposed method creates random clique adjacency matrices for each k-skeleton of the random clique complexes and matches them, taking into account each point as the affine combination of its geometric neighbourhood. We justify our solution theoretically, by analyzing the runtime and storage complexity of our algorithm along with the asymptotic behaviour of the quadratic assignment problem (QAP) that is associated with the underlying random clique adjacency matrices. Experiments on both synthetic and real-world datasets, containing severe occlusions and distortions, provide insight into the accuracy, efficiency, and robustness of our approach. We outperform diverse matching algorithms by a significant margin.
[ { "created": "Wed, 3 Jul 2019 04:56:34 GMT", "version": "v1" }, { "created": "Wed, 8 Jul 2020 12:28:06 GMT", "version": "v2" }, { "created": "Wed, 29 Jul 2020 15:12:50 GMT", "version": "v3" } ]
2020-07-30
[ [ "Sharma", "Charu", "" ], [ "Nathani", "Deepak", "" ], [ "Kaul", "Manohar", "" ] ]
We present an alternate formulation of the partial assignment problem as matching random clique complexes, that are higher-order analogues of random graphs, designed to provide a set of invariants that better detect higher-order structure. The proposed method creates random clique adjacency matrices for each k-skeleton of the random clique complexes and matches them, taking into account each point as the affine combination of its geometric neighbourhood. We justify our solution theoretically, by analyzing the runtime and storage complexity of our algorithm along with the asymptotic behaviour of the quadratic assignment problem (QAP) that is associated with the underlying random clique adjacency matrices. Experiments on both synthetic and real-world datasets, containing severe occlusions and distortions, provide insight into the accuracy, efficiency, and robustness of our approach. We outperform diverse matching algorithms by a significant margin.
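As a rough illustration of the objects the abstract names, the following sketch samples an Erdos-Renyi graph, enumerates the k-vertex cliques that form the (k-1)-simplices of its clique complex, and builds one plausible "clique adjacency matrix" (cliques adjacent when they share a facet, i.e. all but one vertex). This is my reading of the construction for illustration only, not the authors' code, and it omits the affine-combination weighting and the QAP matching step.

```python
import itertools
import random

def random_graph(n, p, seed=0):
    """Erdos-Renyi G(n, p) as a set of frozenset edges."""
    rng = random.Random(seed)
    return {frozenset(e) for e in itertools.combinations(range(n), 2)
            if rng.random() < p}

def k_cliques(n, edges, k):
    """All k-vertex cliques: the (k-1)-simplices of the clique complex."""
    return [c for c in itertools.combinations(range(n), k)
            if all(frozenset(e) in edges
                   for e in itertools.combinations(c, 2))]

def clique_adjacency(cliques):
    """Adjacency between cliques that share a facet (all but one vertex)."""
    m = len(cliques)
    A = [[0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            if len(set(cliques[i]) & set(cliques[j])) == len(cliques[i]) - 1:
                A[i][j] = A[j][i] = 1
    return A

edges = random_graph(8, 0.6)
tris = k_cliques(8, edges, 3)   # 2-skeleton simplices (triangles)
A = clique_adjacency(tris)
print(len(tris), sum(map(sum, A)) // 2)  # simplex count, adjacency count
```

In the partial assignment setting, one such matrix is built per k-skeleton for each point set and the matrices are then matched, which is where the QAP analysis in the abstract enters.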
2312.13208
Yingji Zhang
Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, Andr\'e Freitas
LlaMaVAE: Guiding Large Language Model Generation via Continuous Latent Sentence Spaces
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Deep generative neural networks, such as Variational AutoEncoders (VAEs), offer an opportunity to better understand and control language models from the perspective of sentence-level latent spaces. To combine the controllability of VAE latent spaces with the state-of-the-art performance of recent large language models (LLMs), we present in this work LlaMaVAE, which combines expressive encoder and decoder models (sentenceT5 and LlaMA) with a VAE architecture, aiming to provide better text generation control to LLMs. In addition, to conditionally guide the VAE generation, we investigate a new approach based on flow-based invertible neural networks (INNs) named Invertible CVAE. Experimental results reveal that LlaMaVAE can outperform the previous state-of-the-art VAE language model, Optimus, across various tasks, including language modelling, semantic textual similarity and definition modelling. Qualitative analysis on interpolation and traversal experiments also indicates an increased degree of semantic clustering and geometric consistency, which enables better generation control.
[ { "created": "Wed, 20 Dec 2023 17:25:23 GMT", "version": "v1" } ]
2023-12-21
[ [ "Zhang", "Yingji", "" ], [ "Carvalho", "Danilo S.", "" ], [ "Pratt-Hartmann", "Ian", "" ], [ "Freitas", "André", "" ] ]
Deep generative neural networks, such as Variational AutoEncoders (VAEs), offer an opportunity to better understand and control language models from the perspective of sentence-level latent spaces. To combine the controllability of VAE latent spaces with the state-of-the-art performance of recent large language models (LLMs), we present in this work LlaMaVAE, which combines expressive encoder and decoder models (sentenceT5 and LlaMA) with a VAE architecture, aiming to provide better text generation control to LLMs. In addition, to conditionally guide the VAE generation, we investigate a new approach based on flow-based invertible neural networks (INNs) named Invertible CVAE. Experimental results reveal that LlaMaVAE can outperform the previous state-of-the-art VAE language model, Optimus, across various tasks, including language modelling, semantic textual similarity and definition modelling. Qualitative analysis on interpolation and traversal experiments also indicates an increased degree of semantic clustering and geometric consistency, which enables better generation control.
1509.07968
Takuya Ikeda
Takuya Ikeda, Masaaki Nagahara, Shunsuke Ono
Discrete-Valued Control by Sum-of-Absolute-Values Optimization
submitted to IEEE Transactions on Automatic Control; 11 pages with 2 figures
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a new design method of discrete-valued control for continuous-time linear time-invariant systems based on sum-of-absolute-values (SOAV) optimization. We first formulate the discrete-valued control design as a finite-horizon SOAV optimal control, which is an extended version of L1 optimal control. We then give simple conditions that guarantee the existence, discreteness, and uniqueness of the SOAV optimal control. Also, we give the continuity property of the value function, by which we prove the stability of infinite-horizon model predictive SOAV control systems. We provide a fast algorithm for the SOAV optimization based on the alternating direction method of multipliers (ADMM), which has an important advantage in real-time control computation. A simulation result shows the effectiveness of the proposed method.
[ { "created": "Sat, 26 Sep 2015 12:04:40 GMT", "version": "v1" } ]
2015-09-29
[ [ "Ikeda", "Takuya", "" ], [ "Nagahara", "Masaaki", "" ], [ "Ono", "Shunsuke", "" ] ]
In this paper, we propose a new design method of discrete-valued control for continuous-time linear time-invariant systems based on sum-of-absolute-values (SOAV) optimization. We first formulate the discrete-valued control design as a finite-horizon SOAV optimal control, which is an extended version of L1 optimal control. We then give simple conditions that guarantee the existence, discreteness, and uniqueness of the SOAV optimal control. Also, we give the continuity property of the value function, by which we prove the stability of infinite-horizon model predictive SOAV control systems. We provide a fast algorithm for the SOAV optimization based on the alternating direction method of multipliers (ADMM), which has an important advantage in real-time control computation. A simulation result shows the effectiveness of the proposed method.
1312.1421
Mostafa Khoshnevisan
Mostafa Khoshnevisan and J Nicholas Laneman
Intermittent Communication
Submitted to IEEE Trans. Inform. Theory
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We formulate a model for intermittent communication that can capture bursty transmissions or a sporadically available channel, where in either case the receiver does not know a priori when the transmissions will occur. Focusing on the point-to-point case, we develop a decoding structure, decoding from pattern detection, and its achievable rate for such communication scenarios. Decoding from pattern detection first detects the locations of codeword symbols and then uses them to decode. We introduce the concept of partial divergence and study some of its properties in order to obtain stronger achievability results. As the system becomes more intermittent, the achievable rates decrease due to the additional uncertainty about the positions of the codeword symbols at the decoder. Additionally, we provide upper bounds on the capacity of binary noiseless intermittent communication with the help of a genie-aided encoder and decoder. The upper bounds imply a tradeoff between the capacity and the intermittency rate of the communication system, even if the receive window scales linearly with the codeword length.
[ { "created": "Thu, 5 Dec 2013 03:16:08 GMT", "version": "v1" }, { "created": "Fri, 17 Mar 2017 05:01:37 GMT", "version": "v2" } ]
2017-03-20
[ [ "Khoshnevisan", "Mostafa", "" ], [ "Laneman", "J Nicholas", "" ] ]
We formulate a model for intermittent communication that can capture bursty transmissions or a sporadically available channel, where in either case the receiver does not know a priori when the transmissions will occur. Focusing on the point-to-point case, we develop a decoding structure, decoding from pattern detection, and its achievable rate for such communication scenarios. Decoding from pattern detection first detects the locations of codeword symbols and then uses them to decode. We introduce the concept of partial divergence and study some of its properties in order to obtain stronger achievability results. As the system becomes more intermittent, the achievable rates decrease due to the additional uncertainty about the positions of the codeword symbols at the decoder. Additionally, we provide upper bounds on the capacity of binary noiseless intermittent communication with the help of a genie-aided encoder and decoder. The upper bounds imply a tradeoff between the capacity and the intermittency rate of the communication system, even if the receive window scales linearly with the codeword length.
2108.02707
Harrison Rosenberg
Harrison Rosenberg, Brian Tang, Kassem Fawaz, and Somesh Jha
Fairness Properties of Face Recognition and Obfuscation Systems
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The proliferation of automated face recognition in the commercial and government sectors has caused significant privacy concerns for individuals. One approach to address these privacy concerns is to employ evasion attacks against the metric embedding networks powering face recognition systems: Face obfuscation systems generate imperceptibly perturbed images that cause face recognition systems to misidentify the user. Perturbed faces are generated on metric embedding networks, which are known to be unfair in the context of face recognition. A question of demographic fairness naturally follows: are there demographic disparities in face obfuscation system performance? We answer this question with an analytical and empirical exploration of recent face obfuscation systems. Metric embedding networks are found to be demographically aware: face embeddings are clustered by demographic. We show how this clustering behavior leads to reduced face obfuscation utility for faces in minority groups. An intuitive analytical model yields insight into these phenomena.
[ { "created": "Thu, 5 Aug 2021 16:18:15 GMT", "version": "v1" }, { "created": "Tue, 19 Oct 2021 13:18:21 GMT", "version": "v2" }, { "created": "Fri, 16 Sep 2022 17:46:37 GMT", "version": "v3" } ]
2022-09-19
[ [ "Rosenberg", "Harrison", "" ], [ "Tang", "Brian", "" ], [ "Fawaz", "Kassem", "" ], [ "Jha", "Somesh", "" ] ]
The proliferation of automated face recognition in the commercial and government sectors has caused significant privacy concerns for individuals. One approach to address these privacy concerns is to employ evasion attacks against the metric embedding networks powering face recognition systems: Face obfuscation systems generate imperceptibly perturbed images that cause face recognition systems to misidentify the user. Perturbed faces are generated on metric embedding networks, which are known to be unfair in the context of face recognition. A question of demographic fairness naturally follows: are there demographic disparities in face obfuscation system performance? We answer this question with an analytical and empirical exploration of recent face obfuscation systems. Metric embedding networks are found to be demographically aware: face embeddings are clustered by demographic. We show how this clustering behavior leads to reduced face obfuscation utility for faces in minority groups. An intuitive analytical model yields insight into these phenomena.
2212.08985
Ning Wang
Ning Wang, Jiangrong Xie, Hang Luo, Qinglin Cheng, Jihao Wu, Mingbo Jia, Linlin Li
Efficient Image Captioning for Edge Devices
To appear in AAAI 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent years have witnessed the rapid progress of image captioning. However, the demands for large memory storage and heavy computational burden prevent these captioning models from being deployed on mobile devices. The main obstacles lie in the heavyweight visual feature extractors (i.e., object detectors) and complicated cross-modal fusion networks. To this end, we propose LightCap, a lightweight image captioner for resource-limited devices. The core design is built on the recent CLIP model for efficient image captioning. To be specific, on the one hand, we leverage the CLIP model to extract the compact grid features without relying on the time-consuming object detectors. On the other hand, we transfer the image-text retrieval design of CLIP to image captioning scenarios by devising a novel visual concept extractor and a cross-modal modulator. We further optimize the cross-modal fusion model and parallel prediction heads via sequential and ensemble distillations. With the carefully designed architecture, our model merely contains 40M parameters, saving the model size by more than 75% and the FLOPs by more than 98% in comparison with the current state-of-the-art methods. In spite of the low capacity, our model still exhibits state-of-the-art performance on prevalent datasets, e.g., 136.6 CIDEr on COCO Karpathy test split. Testing on the smartphone with only a single CPU, the proposed LightCap exhibits a fast inference speed of 188ms per image, which is ready for practical applications.
[ { "created": "Sun, 18 Dec 2022 01:56:33 GMT", "version": "v1" } ]
2022-12-20
[ [ "Wang", "Ning", "" ], [ "Xie", "Jiangrong", "" ], [ "Luo", "Hang", "" ], [ "Cheng", "Qinglin", "" ], [ "Wu", "Jihao", "" ], [ "Jia", "Mingbo", "" ], [ "Li", "Linlin", "" ] ]
Recent years have witnessed the rapid progress of image captioning. However, the demands for large memory storage and heavy computational burden prevent these captioning models from being deployed on mobile devices. The main obstacles lie in the heavyweight visual feature extractors (i.e., object detectors) and complicated cross-modal fusion networks. To this end, we propose LightCap, a lightweight image captioner for resource-limited devices. The core design is built on the recent CLIP model for efficient image captioning. To be specific, on the one hand, we leverage the CLIP model to extract the compact grid features without relying on the time-consuming object detectors. On the other hand, we transfer the image-text retrieval design of CLIP to image captioning scenarios by devising a novel visual concept extractor and a cross-modal modulator. We further optimize the cross-modal fusion model and parallel prediction heads via sequential and ensemble distillations. With the carefully designed architecture, our model merely contains 40M parameters, saving the model size by more than 75% and the FLOPs by more than 98% in comparison with the current state-of-the-art methods. In spite of the low capacity, our model still exhibits state-of-the-art performance on prevalent datasets, e.g., 136.6 CIDEr on COCO Karpathy test split. Testing on the smartphone with only a single CPU, the proposed LightCap exhibits a fast inference speed of 188ms per image, which is ready for practical applications.
1906.04586
Ons Khemiri
Ons Khemiri
Proposition d'une nouvelle approche d'extraction des motifs ferm\'es fr\'equents
in French. arXiv admin note: substantial text overlap with arXiv:1810.07116, arXiv:1312.1558 by other authors
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work is done as part of a master's thesis project. The increase in the volume of data has given rise to various issues related to the collection, storage, analysis and exploitation of these data in order to create added value. In this master's thesis, we are interested in the search for frequent closed patterns in transaction databases. One way to process data is to partition the search space into subcontexts, and then explore the subcontexts simultaneously. In this context, we have proposed a new approach for extracting frequent closed itemsets. The main idea is to update frequent closed patterns with their minimal generators by applying a strategy of partitioning the initial extraction context. Our new approach, called UFCIGs-DAC, was designed and implemented to perform searches on the test databases. The main originality of this approach is the simultaneous exploration of the search space through the update of the frequent closed patterns and the minimal generators. Moreover, our approach can be adapted to any algorithm for extracting frequent closed patterns with their minimal generators.
[ { "created": "Sun, 9 Jun 2019 19:07:37 GMT", "version": "v1" } ]
2019-06-12
[ [ "Khemiri", "Ons", "" ] ]
This work is done as part of a master's thesis project. The increase in the volume of data has given rise to various issues related to the collection, storage, analysis and exploitation of these data in order to create added value. In this master's thesis, we are interested in the search for frequent closed patterns in transaction databases. One way to process data is to partition the search space into subcontexts, and then explore the subcontexts simultaneously. In this context, we have proposed a new approach for extracting frequent closed itemsets. The main idea is to update frequent closed patterns with their minimal generators by applying a strategy of partitioning the initial extraction context. Our new approach, called UFCIGs-DAC, was designed and implemented to perform searches on the test databases. The main originality of this approach is the simultaneous exploration of the search space through the update of the frequent closed patterns and the minimal generators. Moreover, our approach can be adapted to any algorithm for extracting frequent closed patterns with their minimal generators.
0710.4727
EDA Publishing Association
Paul Muller, Armin Tajalli, Mojtaba Atarodi, Yusuf Leblebici
Top-Down Design of a Low-Power Multi-Channel 2.5-Gbit/s/Channel Gated Oscillator Clock-Recovery Circuit
Submitted on behalf of EDAA (http://www.edaa.com/)
Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)
null
null
cs.AR
null
We present a complete top-down design of a low-power multi-channel clock recovery circuit based on gated current-controlled oscillators. The flow includes several tools and methods used to specify block constraints, to design and verify the topology down to the transistor level, as well as to achieve a power consumption as low as 5mW/Gbit/s. Statistical simulation is used to estimate the achievable bit error rate in the presence of phase and frequency errors and to prove the feasibility of the concept. VHDL modeling provides extensive verification of the topology. Thermal noise modeling based on well-known concepts delivers design parameters for the device sizing and biasing. We present two practical examples of possible design improvements analyzed and implemented with this methodology.
[ { "created": "Thu, 25 Oct 2007 09:38:14 GMT", "version": "v1" } ]
2011-11-09
[ [ "Muller", "Paul", "" ], [ "Tajalli", "Armin", "" ], [ "Atarodi", "Mojtaba", "" ], [ "Leblebici", "Yusuf", "" ] ]
We present a complete top-down design of a low-power multi-channel clock recovery circuit based on gated current-controlled oscillators. The flow includes several tools and methods used to specify block constraints, to design and verify the topology down to the transistor level, as well as to achieve a power consumption as low as 5mW/Gbit/s. Statistical simulation is used to estimate the achievable bit error rate in the presence of phase and frequency errors and to prove the feasibility of the concept. VHDL modeling provides extensive verification of the topology. Thermal noise modeling based on well-known concepts delivers design parameters for the device sizing and biasing. We present two practical examples of possible design improvements analyzed and implemented with this methodology.
2010.12718
Tanmay Gangwani
Tanmay Gangwani, Yuan Zhou, Jian Peng
Learning Guidance Rewards with Trajectory-space Smoothing
NeurIPS 2020 camera-ready
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long-term temporal credit assignment is an important challenge in deep reinforcement learning (RL). It refers to the ability of the agent to attribute actions to consequences that may occur after a long time interval. Existing policy-gradient and Q-learning algorithms typically rely on dense environmental rewards that provide rich short-term supervision and help with credit assignment. However, they struggle to solve tasks with delays between an action and the corresponding rewarding feedback. To make credit assignment easier, recent works have proposed algorithms to learn dense "guidance" rewards that could be used in place of the sparse or delayed environmental rewards. This paper is in the same vein -- starting with a surrogate RL objective that involves smoothing in the trajectory-space, we arrive at a new algorithm for learning guidance rewards. We show that the guidance rewards have an intuitive interpretation, and can be obtained without training any additional neural networks. Due to the ease of integration, we use the guidance rewards in a few popular algorithms (Q-learning, Actor-Critic, Distributional-RL) and present results in single-agent and multi-agent tasks that elucidate the benefit of our approach when the environmental rewards are sparse or delayed.
[ { "created": "Fri, 23 Oct 2020 23:55:06 GMT", "version": "v1" } ]
2020-10-27
[ [ "Gangwani", "Tanmay", "" ], [ "Zhou", "Yuan", "" ], [ "Peng", "Jian", "" ] ]
Long-term temporal credit assignment is an important challenge in deep reinforcement learning (RL). It refers to the ability of the agent to attribute actions to consequences that may occur after a long time interval. Existing policy-gradient and Q-learning algorithms typically rely on dense environmental rewards that provide rich short-term supervision and help with credit assignment. However, they struggle to solve tasks with delays between an action and the corresponding rewarding feedback. To make credit assignment easier, recent works have proposed algorithms to learn dense "guidance" rewards that could be used in place of the sparse or delayed environmental rewards. This paper is in the same vein -- starting with a surrogate RL objective that involves smoothing in the trajectory-space, we arrive at a new algorithm for learning guidance rewards. We show that the guidance rewards have an intuitive interpretation, and can be obtained without training any additional neural networks. Due to the ease of integration, we use the guidance rewards in a few popular algorithms (Q-learning, Actor-Critic, Distributional-RL) and present results in single-agent and multi-agent tasks that elucidate the benefit of our approach when the environmental rewards are sparse or delayed.
1705.07706
Armand Vilalta
Dario Garcia-Gasulla, Armand Vilalta, Ferran Par\'es, Jonatan Moreno, Eduard Ayguad\'e, Jesus Labarta, Ulises Cort\'es and Toyotaro Suzumura
An Out-of-the-box Full-network Embedding for Convolutional Neural Networks
null
null
null
null
cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transfer learning for feature extraction can be used to exploit deep representations in contexts where there is very little training data, where there are limited computational resources, or when tuning the hyper-parameters needed for training is not an option. While previous contributions to feature extraction propose embeddings based on a single layer of the network, in this paper we propose a full-network embedding which successfully integrates convolutional and fully connected features, coming from all layers of a deep convolutional neural network. To do so, the embedding normalizes features in the context of the problem, and discretizes their values to reduce noise and regularize the embedding space. Significantly, this also reduces the computational cost of processing the resultant representations. The proposed method is shown to outperform single layer embeddings on several image classification tasks, while also being more robust to the choice of the pre-trained model used for obtaining the initial features. The performance gap in classification accuracy between thoroughly tuned solutions and the full-network embedding is also reduced, which makes the proposed approach a competitive solution for a large set of applications.
[ { "created": "Mon, 22 May 2017 13:14:11 GMT", "version": "v1" } ]
2017-05-23
[ [ "Garcia-Gasulla", "Dario", "" ], [ "Vilalta", "Armand", "" ], [ "Parés", "Ferran", "" ], [ "Moreno", "Jonatan", "" ], [ "Ayguadé", "Eduard", "" ], [ "Labarta", "Jesus", "" ], [ "Cortés", "Ulises", "" ], [ "Suzumura", "Toyotaro", "" ] ]
Transfer learning for feature extraction can be used to exploit deep representations in contexts where there is very little training data, where there are limited computational resources, or when tuning the hyper-parameters needed for training is not an option. While previous contributions to feature extraction propose embeddings based on a single layer of the network, in this paper we propose a full-network embedding which successfully integrates convolutional and fully connected features, coming from all layers of a deep convolutional neural network. To do so, the embedding normalizes features in the context of the problem, and discretizes their values to reduce noise and regularize the embedding space. Significantly, this also reduces the computational cost of processing the resultant representations. The proposed method is shown to outperform single layer embeddings on several image classification tasks, while also being more robust to the choice of the pre-trained model used for obtaining the initial features. The performance gap in classification accuracy between thoroughly tuned solutions and the full-network embedding is also reduced, which makes the proposed approach a competitive solution for a large set of applications.
2008.07644
Ohad Ben-Shahar
Peleg Harel and Ohad Ben-Shahar
Pictorial and apictorial polygonal jigsaw puzzles: The lazy caterer model, properties, and solvers
null
null
null
null
cs.CV cs.AI cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Jigsaw puzzle solving, the problem of constructing a coherent whole from a set of non-overlapping unordered visual fragments, is fundamental to numerous applications and yet most of the literature of the last two decades has focused thus far on less realistic puzzles whose pieces are identical squares. Here we formalize a new type of jigsaw puzzle where the pieces are general convex polygons generated by cutting through a global polygonal shape/image with an arbitrary number of straight cuts, a generation model inspired by the celebrated Lazy caterer's sequence. We analyze the theoretical properties of such puzzles, including the inherent challenges in solving them once pieces are contaminated with geometrical noise. To cope with such difficulties and obtain tractable solutions, we abstract the problem as a multi-body spring-mass dynamical system endowed with hierarchical loop constraints and a layered reconstruction process. We define evaluation metrics and present experimental results on both apictorial and pictorial puzzles to show that they are solvable completely automatically.
[ { "created": "Mon, 17 Aug 2020 22:07:40 GMT", "version": "v1" }, { "created": "Thu, 16 Dec 2021 15:32:53 GMT", "version": "v2" } ]
2021-12-17
[ [ "Harel", "Peleg", "" ], [ "Ben-Shahar", "Ohad", "" ] ]
Jigsaw puzzle solving, the problem of constructing a coherent whole from a set of non-overlapping unordered visual fragments, is fundamental to numerous applications and yet most of the literature of the last two decades has focused thus far on less realistic puzzles whose pieces are identical squares. Here we formalize a new type of jigsaw puzzle where the pieces are general convex polygons generated by cutting through a global polygonal shape/image with an arbitrary number of straight cuts, a generation model inspired by the celebrated Lazy caterer's sequence. We analyze the theoretical properties of such puzzles, including the inherent challenges in solving them once pieces are contaminated with geometrical noise. To cope with such difficulties and obtain tractable solutions, we abstract the problem as a multi-body spring-mass dynamical system endowed with hierarchical loop constraints and a layered reconstruction process. We define evaluation metrics and present experimental results on both apictorial and pictorial puzzles to show that they are solvable completely automatically.
1402.6016
Suayb Arslan
Suayb S. Arslan
Incremental Redundancy, Fountain Codes and Advanced Topics
57 pages, 22 figures, Version 0.2
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This document is written in order to establish a common base ground on which the majority of the relevant research about linear fountain codes can be analyzed and compared. To the best of my knowledge, there is no unified approach that outlines and compares most of the published linear fountain codes in a single and self-contained framework. This written document has not only resulted in the review of theoretical fundamentals of efficient coding techniques for incremental redundancy and linear fountain coding, but also helped me have a comprehensive reference document and hopefully for many other graduate students who would like to have some background to pursue a research career regarding fountain codes and their various applications. Some background in information, coding, graph and probability theory is expected. Although various aspects of this topic and many other relevant research are deliberately left out, I still hope that this document shall serve researchers' need well. I have also included several exercises to warm up. The presentation style is usually informal and the presented material is not necessarily rigorous. There are many spots in the text that are the product of my coauthors and myself, some of which have not yet been published.
[ { "created": "Mon, 24 Feb 2014 23:41:50 GMT", "version": "v1" }, { "created": "Mon, 14 Jul 2014 22:40:31 GMT", "version": "v2" } ]
2014-07-16
[ [ "Arslan", "Suayb S.", "" ] ]
This document is written in order to establish a common base ground on which the majority of the relevant research about linear fountain codes can be analyzed and compared. To the best of my knowledge, there is no unified approach that outlines and compares most of the published linear fountain codes in a single and self-contained framework. This written document has not only resulted in the review of theoretical fundamentals of efficient coding techniques for incremental redundancy and linear fountain coding, but also helped me have a comprehensive reference document and hopefully for many other graduate students who would like to have some background to pursue a research career regarding fountain codes and their various applications. Some background in information, coding, graph and probability theory is expected. Although various aspects of this topic and many other relevant research are deliberately left out, I still hope that this document shall serve researchers' need well. I have also included several exercises to warm up. The presentation style is usually informal and the presented material is not necessarily rigorous. There are many spots in the text that are the product of my coauthors and myself, some of which have not yet been published.
2104.10864
Karandeep Singh
Karandeep Singh, Gabriel Lima, Meeyoung Cha, Chiyoung Cha, Juhi Kulshrestha, Yong-Yeol Ahn, Onur Varol
Misinformation, Believability, and Vaccine Acceptance Over 40 Countries: Takeaways From the Initial Phase of The COVID-19 Infodemic
null
null
10.1371/journal.pone.0263381
null
cs.SI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The COVID-19 pandemic has been damaging to the lives of people all around the world. Accompanied by the pandemic is an infodemic, an abundant and uncontrolled spreading of potentially harmful misinformation. The infodemic may severely change the pandemic's course by interfering with public health interventions such as wearing masks, social distancing, and vaccination. In particular, the impact of the infodemic on vaccination is critical because it holds the key to reverting to pre-pandemic normalcy. This paper presents findings from a global survey on the extent of worldwide exposure to the COVID-19 infodemic, assesses different populations' susceptibility to false claims, and analyzes its association with vaccine acceptance. Based on responses gathered from over 18,400 individuals from 40 countries, we find a strong association between perceived believability of misinformation and vaccination hesitancy. Additionally, our study shows that only half of the online users exposed to rumors might have seen the fact-checked information. Moreover, depending on the country, between 6% and 37% of individuals considered these rumors believable. Our survey also shows that poorer regions are more susceptible to encountering and believing COVID-19 misinformation. We discuss implications of our findings on public campaigns that proactively spread accurate information to countries that are more susceptible to the infodemic. We also highlight fact-checking platforms' role in better identifying and prioritizing claims that are perceived to be believable and have wide exposure. Our findings give insights into better handling of risk communication during the initial phase of a future pandemic.
[ { "created": "Thu, 22 Apr 2021 05:09:25 GMT", "version": "v1" } ]
2022-04-06
[ [ "Singh", "Karandeep", "" ], [ "Lima", "Gabriel", "" ], [ "Cha", "Meeyoung", "" ], [ "Cha", "Chiyoung", "" ], [ "Kulshrestha", "Juhi", "" ], [ "Ahn", "Yong-Yeol", "" ], [ "Varol", "Onur", "" ] ]
The COVID-19 pandemic has been damaging to the lives of people all around the world. Accompanied by the pandemic is an infodemic, an abundant and uncontrolled spreading of potentially harmful misinformation. The infodemic may severely change the pandemic's course by interfering with public health interventions such as wearing masks, social distancing, and vaccination. In particular, the impact of the infodemic on vaccination is critical because it holds the key to reverting to pre-pandemic normalcy. This paper presents findings from a global survey on the extent of worldwide exposure to the COVID-19 infodemic, assesses different populations' susceptibility to false claims, and analyzes its association with vaccine acceptance. Based on responses gathered from over 18,400 individuals from 40 countries, we find a strong association between perceived believability of misinformation and vaccination hesitancy. Additionally, our study shows that only half of the online users exposed to rumors might have seen the fact-checked information. Moreover, depending on the country, between 6% and 37% of individuals considered these rumors believable. Our survey also shows that poorer regions are more susceptible to encountering and believing COVID-19 misinformation. We discuss implications of our findings on public campaigns that proactively spread accurate information to countries that are more susceptible to the infodemic. We also highlight fact-checking platforms' role in better identifying and prioritizing claims that are perceived to be believable and have wide exposure. Our findings give insights into better handling of risk communication during the initial phase of a future pandemic.
2301.06323
Rui Sun
Rui Sun, Xiuyu Wu, Yunfang Wu
An Error-Guided Correction Model for Chinese Spelling Error Correction
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Although existing neural network approaches have achieved great success on Chinese spelling correction, there is still room to improve. The model is required to avoid over-correction and to distinguish a correct token from its phonological and visually similar ones. In this paper, we propose an error-guided correction model (EGCM) to improve Chinese spelling correction. By borrowing the powerful ability of BERT, we propose a novel zero-shot error detection method to do a preliminary detection, which guides our model to attend more on the probably wrong tokens in encoding and to avoid modifying the correct tokens in generating. Furthermore, we introduce a new loss function to integrate the error confusion set, which enables our model to distinguish easily misused tokens. Moreover, our model supports highly parallel decoding to meet real application requirements. Experiments are conducted on widely used benchmarks. Our model achieves superior performance against state-of-the-art approaches by a remarkable margin, on both the correction quality and computation speed.
[ { "created": "Mon, 16 Jan 2023 09:27:45 GMT", "version": "v1" }, { "created": "Mon, 20 Mar 2023 08:37:45 GMT", "version": "v2" } ]
2023-03-21
[ [ "Sun", "Rui", "" ], [ "Wu", "Xiuyu", "" ], [ "Wu", "Yunfang", "" ] ]
Although existing neural network approaches have achieved great success on Chinese spelling correction, there is still room to improve. The model is required to avoid over-correction and to distinguish a correct token from its phonological and visually similar ones. In this paper, we propose an error-guided correction model (EGCM) to improve Chinese spelling correction. By borrowing the powerful ability of BERT, we propose a novel zero-shot error detection method to do a preliminary detection, which guides our model to attend more on the probably wrong tokens in encoding and to avoid modifying the correct tokens in generating. Furthermore, we introduce a new loss function to integrate the error confusion set, which enables our model to distinguish easily misused tokens. Moreover, our model supports highly parallel decoding to meet real application requirements. Experiments are conducted on widely used benchmarks. Our model achieves superior performance against state-of-the-art approaches by a remarkable margin, on both the correction quality and computation speed.
1907.09236
Isaac Ronald Ward
Isaac Ronald Ward, Hamid Laga, Mohammed Bennamoun
RGB-D image-based Object Detection: from Traditional Methods to Deep Learning Techniques
Chapter in the book 'RGB-D Image Analysis and Processing' (Paul Rosin)
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object detection from RGB images is a long-standing problem in image processing and computer vision. It has applications in various domains including robotics, surveillance, human-computer interaction, and medical diagnosis. With the availability of low cost 3D scanners, a large number of RGB-D object detection approaches have been proposed in the past years. This chapter provides a comprehensive survey of the recent developments in this field. We structure the chapter into two parts; the focus of the first part is on techniques that are based on hand-crafted features combined with machine learning algorithms. The focus of the second part is on the more recent work, which is based on deep learning. Deep learning techniques, coupled with the availability of large training datasets, have now revolutionized the field of computer vision, including RGB-D object detection, achieving an unprecedented level of performance. We survey the key contributions, summarize the most commonly used pipelines, discuss their benefits and limitations, and highlight some important directions for future research.
[ { "created": "Mon, 22 Jul 2019 11:18:01 GMT", "version": "v1" } ]
2019-07-23
[ [ "Ward", "Isaac Ronald", "" ], [ "Laga", "Hamid", "" ], [ "Bennamoun", "Mohammed", "" ] ]
Object detection from RGB images is a long-standing problem in image processing and computer vision. It has applications in various domains including robotics, surveillance, human-computer interaction, and medical diagnosis. With the availability of low cost 3D scanners, a large number of RGB-D object detection approaches have been proposed in the past years. This chapter provides a comprehensive survey of the recent developments in this field. We structure the chapter into two parts; the focus of the first part is on techniques that are based on hand-crafted features combined with machine learning algorithms. The focus of the second part is on the more recent work, which is based on deep learning. Deep learning techniques, coupled with the availability of large training datasets, have now revolutionized the field of computer vision, including RGB-D object detection, achieving an unprecedented level of performance. We survey the key contributions, summarize the most commonly used pipelines, discuss their benefits and limitations, and highlight some important directions for future research.
2402.18774
Anna Kawakami
Anna Kawakami, Amanda Coston, Haiyi Zhu, Hoda Heidari, Kenneth Holstein
The Situate AI Guidebook: Co-Designing a Toolkit to Support Multi-Stakeholder Early-stage Deliberations Around Public Sector AI Proposals
null
null
10.1145/3613904.3642849
null
cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Public sector agencies are rapidly deploying AI systems to augment or automate critical decisions in real-world contexts like child welfare, criminal justice, and public health. A growing body of work documents how these AI systems often fail to improve services in practice. These failures can often be traced to decisions made during the early stages of AI ideation and design, such as problem formulation. However, today, we lack systematic processes to support effective, early-stage decision-making about whether and under what conditions to move forward with a proposed AI project. To understand how to scaffold such processes in real-world settings, we worked with public sector agency leaders, AI developers, frontline workers, and community advocates across four public sector agencies and three community advocacy groups in the United States. Through an iterative co-design process, we created the Situate AI Guidebook: a structured process centered around a set of deliberation questions to scaffold conversations around (1) goals and intended use of a proposed AI system, (2) societal and legal considerations, (3) data and modeling constraints, and (4) organizational governance factors. We discuss how the guidebook's design is informed by participants' challenges, needs, and desires for improved deliberation processes. We further elaborate on implications for designing responsible AI toolkits in collaboration with public sector agency stakeholders and opportunities for future work to expand upon the guidebook. This design approach can be more broadly adopted to support the co-creation of responsible AI toolkits that scaffold key decision-making processes surrounding the use of AI in the public sector and beyond.
[ { "created": "Thu, 29 Feb 2024 00:31:26 GMT", "version": "v1" }, { "created": "Tue, 5 Mar 2024 14:42:01 GMT", "version": "v2" } ]
2024-03-06
[ [ "Kawakami", "Anna", "" ], [ "Coston", "Amanda", "" ], [ "Zhu", "Haiyi", "" ], [ "Heidari", "Hoda", "" ], [ "Holstein", "Kenneth", "" ] ]
Public sector agencies are rapidly deploying AI systems to augment or automate critical decisions in real-world contexts like child welfare, criminal justice, and public health. A growing body of work documents how these AI systems often fail to improve services in practice. These failures can often be traced to decisions made during the early stages of AI ideation and design, such as problem formulation. However, today, we lack systematic processes to support effective, early-stage decision-making about whether and under what conditions to move forward with a proposed AI project. To understand how to scaffold such processes in real-world settings, we worked with public sector agency leaders, AI developers, frontline workers, and community advocates across four public sector agencies and three community advocacy groups in the United States. Through an iterative co-design process, we created the Situate AI Guidebook: a structured process centered around a set of deliberation questions to scaffold conversations around (1) goals and intended use of a proposed AI system, (2) societal and legal considerations, (3) data and modeling constraints, and (4) organizational governance factors. We discuss how the guidebook's design is informed by participants' challenges, needs, and desires for improved deliberation processes. We further elaborate on implications for designing responsible AI toolkits in collaboration with public sector agency stakeholders and opportunities for future work to expand upon the guidebook. This design approach can be more broadly adopted to support the co-creation of responsible AI toolkits that scaffold key decision-making processes surrounding the use of AI in the public sector and beyond.
1904.02181
Qiao Jin
Qiao Jin, Bhuwan Dhingra, William W. Cohen, Xinghua Lu
Probing Biomedical Embeddings from Language Models
NAACL-HLT 2019 Workshop on Evaluating Vector Space Representations for NLP (RepEval)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contextualized word embeddings derived from pre-trained language models (LMs) show significant improvements on downstream NLP tasks. Pre-training on domain-specific corpora, such as biomedical articles, further improves their performance. In this paper, we conduct probing experiments to determine what additional information is carried intrinsically by the in-domain trained contextualized embeddings. For this we use the pre-trained LMs as fixed feature extractors and restrict the downstream task models to not have additional sequence modeling layers. We compare BERT, ELMo, BioBERT and BioELMo, a biomedical version of ELMo trained on 10M PubMed abstracts. Surprisingly, while fine-tuned BioBERT is better than BioELMo in biomedical NER and NLI tasks, as a fixed feature extractor BioELMo outperforms BioBERT in our probing tasks. We use visualization and nearest neighbor analysis to show that better encoding of entity-type and relational information leads to this superiority.
[ { "created": "Wed, 3 Apr 2019 18:05:02 GMT", "version": "v1" } ]
2019-04-05
[ [ "Jin", "Qiao", "" ], [ "Dhingra", "Bhuwan", "" ], [ "Cohen", "William W.", "" ], [ "Lu", "Xinghua", "" ] ]
Contextualized word embeddings derived from pre-trained language models (LMs) show significant improvements on downstream NLP tasks. Pre-training on domain-specific corpora, such as biomedical articles, further improves their performance. In this paper, we conduct probing experiments to determine what additional information is carried intrinsically by the in-domain trained contextualized embeddings. For this we use the pre-trained LMs as fixed feature extractors and restrict the downstream task models to not have additional sequence modeling layers. We compare BERT, ELMo, BioBERT and BioELMo, a biomedical version of ELMo trained on 10M PubMed abstracts. Surprisingly, while fine-tuned BioBERT is better than BioELMo in biomedical NER and NLI tasks, as a fixed feature extractor BioELMo outperforms BioBERT in our probing tasks. We use visualization and nearest neighbor analysis to show that better encoding of entity-type and relational information leads to this superiority.
1810.08237
Nikola Nikolov
Nikola I. Nikolov, Richard H.R. Hahnloser
Large-scale Hierarchical Alignment for Data-driven Text Rewriting
RANLP 2019
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a simple unsupervised method for extracting pseudo-parallel monolingual sentence pairs from comparable corpora representative of two different text styles, such as news articles and scientific papers. Our approach does not require a seed parallel corpus, but instead relies solely on hierarchical search over pre-trained embeddings of documents and sentences. We demonstrate the effectiveness of our method through automatic and extrinsic evaluation on text simplification from the normal to the Simple Wikipedia. We show that pseudo-parallel sentences extracted with our method not only supplement existing parallel data, but can even lead to competitive performance on their own.
[ { "created": "Thu, 18 Oct 2018 18:51:43 GMT", "version": "v1" }, { "created": "Thu, 25 Jul 2019 07:25:09 GMT", "version": "v2" } ]
2019-07-26
[ [ "Nikolov", "Nikola I.", "" ], [ "Hahnloser", "Richard H. R.", "" ] ]
We propose a simple unsupervised method for extracting pseudo-parallel monolingual sentence pairs from comparable corpora representative of two different text styles, such as news articles and scientific papers. Our approach does not require a seed parallel corpus, but instead relies solely on hierarchical search over pre-trained embeddings of documents and sentences. We demonstrate the effectiveness of our method through automatic and extrinsic evaluation on text simplification from the normal to the Simple Wikipedia. We show that pseudo-parallel sentences extracted with our method not only supplement existing parallel data, but can even lead to competitive performance on their own.
2010.06425
Esther Rodrigo Bonet
Esther Rodrigo Bonet, Duc Minh Nguyen and Nikos Deligiannis
Temporal Collaborative Filtering with Graph Convolutional Neural Networks
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Temporal collaborative filtering (TCF) methods aim at modelling non-static aspects behind recommender systems, such as the dynamics in users' preferences and social trends around items. State-of-the-art TCF methods employ recurrent neural networks (RNNs) to model such aspects. These methods deploy matrix-factorization-based (MF-based) approaches to learn the user and item representations. Recently, graph-neural-network-based (GNN-based) approaches have shown improved performance in providing accurate recommendations over traditional MF-based approaches in non-temporal CF settings. Motivated by this, we propose a novel TCF method that leverages GNNs to learn user and item representations, and RNNs to model their temporal dynamics. A challenge with this method lies in the increased data sparsity, which negatively impacts obtaining meaningful quality representations with GNNs. To overcome this challenge, we train a GNN model at each time step using a set of observed interactions accumulated time-wise. Comprehensive experiments on real-world data show the improved performance obtained by our method over several state-of-the-art temporal and non-temporal CF models.
[ { "created": "Tue, 13 Oct 2020 14:38:40 GMT", "version": "v1" } ]
2020-10-14
[ [ "Bonet", "Esther Rodrigo", "" ], [ "Nguyen", "Duc Minh", "" ], [ "Deligiannis", "Nikos", "" ] ]
Temporal collaborative filtering (TCF) methods aim at modelling non-static aspects behind recommender systems, such as the dynamics in users' preferences and social trends around items. State-of-the-art TCF methods employ recurrent neural networks (RNNs) to model such aspects. These methods deploy matrix-factorization-based (MF-based) approaches to learn the user and item representations. Recently, graph-neural-network-based (GNN-based) approaches have shown improved performance in providing accurate recommendations over traditional MF-based approaches in non-temporal CF settings. Motivated by this, we propose a novel TCF method that leverages GNNs to learn user and item representations, and RNNs to model their temporal dynamics. A challenge with this method lies in the increased data sparsity, which negatively impacts obtaining meaningful quality representations with GNNs. To overcome this challenge, we train a GNN model at each time step using a set of observed interactions accumulated time-wise. Comprehensive experiments on real-world data show the improved performance obtained by our method over several state-of-the-art temporal and non-temporal CF models.
2103.13629
Wanhua Li
Wanhua Li, Xiaoke Huang, Jiwen Lu, Jianjiang Feng, Jie Zhou
Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression
Accepted by CVPR2021. Code is available at https://github.com/Li-Wanhua/POEs
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Uncertainty is the only certainty there is. Modeling data uncertainty is essential for regression, especially in unconstrained settings. Traditionally the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions. On the other hand, classification based regression and ranking based solutions are more popular in practice while the direct regression methods suffer from the limited performance. How to model the uncertainty within the present-day technologies for regression remains an open issue. In this paper, we propose to learn probabilistic ordinal embeddings which represent each data as a multivariate Gaussian distribution rather than a deterministic point in the latent space. An ordinal distribution constraint is proposed to exploit the ordinal nature of regression. Our probabilistic ordinal embeddings can be integrated into popular regression approaches and empower them with the ability of uncertainty estimation. Experimental results show that our approach achieves competitive performance. Code is available at https://github.com/Li-Wanhua/POEs.
[ { "created": "Thu, 25 Mar 2021 06:56:09 GMT", "version": "v1" } ]
2021-03-26
[ [ "Li", "Wanhua", "" ], [ "Huang", "Xiaoke", "" ], [ "Lu", "Jiwen", "" ], [ "Feng", "Jianjiang", "" ], [ "Zhou", "Jie", "" ] ]
Uncertainty is the only certainty there is. Modeling data uncertainty is essential for regression, especially in unconstrained settings. Traditionally the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions. On the other hand, classification based regression and ranking based solutions are more popular in practice while the direct regression methods suffer from the limited performance. How to model the uncertainty within the present-day technologies for regression remains an open issue. In this paper, we propose to learn probabilistic ordinal embeddings which represent each data as a multivariate Gaussian distribution rather than a deterministic point in the latent space. An ordinal distribution constraint is proposed to exploit the ordinal nature of regression. Our probabilistic ordinal embeddings can be integrated into popular regression approaches and empower them with the ability of uncertainty estimation. Experimental results show that our approach achieves competitive performance. Code is available at https://github.com/Li-Wanhua/POEs.
2209.08335
Louis Mahon
Louis Mahon and Thomas Lukasiewicz
Efficient Deep Clustering of Human Activities and How to Improve Evaluation
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
There has been much recent research on human activity recognition (HAR), due to the proliferation of wearable sensors in watches and phones, and the advances of deep learning methods, which avoid the need to manually extract features from raw sensor signals. A significant disadvantage of deep learning applied to HAR is the need for manually labelled training data, which is especially difficult to obtain for HAR datasets. Progress is starting to be made in the unsupervised setting, in the form of deep HAR clustering models, which can assign labels to data without having been given any labels to train on, but there are problems with evaluating deep HAR clustering models, which makes assessing the field and devising new methods difficult. In this paper, we highlight several distinct problems with how deep HAR clustering models are evaluated, describing these problems in detail and conducting careful experiments to explicate the effect that they can have on results. We then discuss solutions to these problems, and suggest standard evaluation settings for future deep HAR clustering models. Additionally, we present a new deep clustering model for HAR. When tested under our proposed settings, our model performs better than (or on par with) existing models, while also being more efficient and better able to scale to more complex datasets by avoiding the need for an autoencoder.
[ { "created": "Sat, 17 Sep 2022 14:12:42 GMT", "version": "v1" } ]
2022-09-20
[ [ "Mahon", "Louis", "" ], [ "Lukasiewicz", "Thomas", "" ] ]
There has been much recent research on human activity recognition (HAR), due to the proliferation of wearable sensors in watches and phones, and the advances of deep learning methods, which avoid the need to manually extract features from raw sensor signals. A significant disadvantage of deep learning applied to HAR is the need for manually labelled training data, which is especially difficult to obtain for HAR datasets. Progress is starting to be made in the unsupervised setting, in the form of deep HAR clustering models, which can assign labels to data without having been given any labels to train on, but there are problems with evaluating deep HAR clustering models, which makes assessing the field and devising new methods difficult. In this paper, we highlight several distinct problems with how deep HAR clustering models are evaluated, describing these problems in detail and conducting careful experiments to explicate the effect that they can have on results. We then discuss solutions to these problems, and suggest standard evaluation settings for future deep HAR clustering models. Additionally, we present a new deep clustering model for HAR. When tested under our proposed settings, our model performs better than (or on par with) existing models, while also being more efficient and better able to scale to more complex datasets by avoiding the need for an autoencoder.