Dataset schema (one row per arXiv paper; numeric ranges are the observed minimum and maximum over the dataset):

- id: string, 9-10 chars
- submitter: string, 1-64 chars
- authors: string, 4-20.7k chars
- title: string, 4-246 chars
- comments: string, 1-523 chars
- journal-ref: string, 4-404 chars
- doi: string, 11-153 chars
- report-no: string, 2-254 chars
- categories: string, 5-98 chars
- license: string class, 9 distinct values
- orig_abstract: string, 14-3.35k chars
- versions: list, 1-60 items
- update_date: string, 10 chars
- authors_parsed: list, 1-1.35k items
- abstract: string, 11-3.34k chars
id: 2305.16269
submitter: Shady Abu-Hussein
authors: Shady Abu-Hussein, and Raja Giryes
title: UDPM: Upsampling Diffusion Probabilistic Models
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.LG eess.IV
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
orig_abstract: Denoising Diffusion Probabilistic Models (DDPM) have recently gained significant attention. DDPMs compose a Markovian process that begins in the data domain and gradually adds noise until reaching pure white noise. DDPMs generate high-quality samples from complex data distributions by defining an inverse process and training a deep neural network to learn this mapping. However, these models are inefficient because they require many diffusion steps to produce aesthetically pleasing samples. Additionally, unlike generative adversarial networks (GANs), the latent space of diffusion models is less interpretable. In this work, we propose to generalize the denoising diffusion process into an Upsampling Diffusion Probabilistic Model (UDPM). In the forward process, we reduce the latent variable dimension through downsampling, followed by the traditional noise perturbation. As a result, the reverse process gradually denoises and upsamples the latent variable to produce a sample from the data distribution. We formalize the Markovian diffusion processes of UDPM and demonstrate its generation capabilities on the popular FFHQ, AFHQv2, and CIFAR10 datasets. UDPM generates images with as few as three network evaluations, whose overall computational cost is less than a single DDPM or EDM step, while achieving an FID score of 6.86. This surpasses current state-of-the-art efficient diffusion models that use a single denoising step for sampling. Additionally, UDPM offers an interpretable and interpolable latent space, which gives it an advantage over traditional DDPMs. Our code is available online: \url{https://github.com/shadyabh/UDPM/}
versions: [ { "created": "Thu, 25 May 2023 17:25:14 GMT", "version": "v1" }, { "created": "Mon, 27 May 2024 18:02:56 GMT", "version": "v2" }, { "created": "Mon, 8 Jul 2024 15:32:52 GMT", "version": "v3" } ]
update_date: 2024-07-09
authors_parsed: [ [ "Abu-Hussein", "Shady", "" ], [ "Giryes", "Raja", "" ] ]
abstract: (identical to orig_abstract)
id: 1403.0012
submitter: Xinchen Zhang
authors: Xinchen Zhang and Martin Haenggi
title: A Stochastic Geometry Analysis of Inter-cell Interference Coordination and Intra-cell Diversity
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Inter-cell interference coordination (ICIC) and intra-cell diversity (ICD) play important roles in improving cellular downlink coverage. Modeling cellular base stations (BSs) as a homogeneous Poisson point process (PPP), this paper provides explicit finite-integral expressions for the coverage probability with ICIC and ICD, taking into account the temporal/spectral correlation of the signal and interference. In addition, we show that in the high-reliability regime, where the user outage probability goes to zero, ICIC and ICD affect the network coverage in drastically different ways: ICD can provide order gain while ICIC only offers linear gain. In the high-spectral efficiency regime where the SIR threshold goes to infinity, the order difference in the coverage probability does not exist; however, the linear difference makes ICIC a better scheme than ICD for realistic path loss exponents. Consequently, depending on the SIR requirements, different combinations of ICIC and ICD optimize the coverage probability.
versions: [ { "created": "Fri, 28 Feb 2014 21:30:24 GMT", "version": "v1" }, { "created": "Mon, 31 Mar 2014 16:22:56 GMT", "version": "v2" }, { "created": "Sat, 10 May 2014 22:08:08 GMT", "version": "v3" }, { "created": "Sun, 15 Mar 2015 21:35:32 GMT", "version": "v4" } ]
update_date: 2015-03-17
authors_parsed: [ [ "Zhang", "Xinchen", "" ], [ "Haenggi", "Martin", "" ] ]
abstract: (identical to orig_abstract)
id: 2006.05964
submitter: David Budden
authors: David Budden, Adam Marblestone, Eren Sezener, Tor Lattimore, Greg Wayne, Joel Veness
title: Gaussian Gated Linear Networks
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We propose the Gaussian Gated Linear Network (G-GLN), an extension to the recently proposed GLN family of deep neural networks. Instead of using backpropagation to learn features, GLNs have a distributed and local credit assignment mechanism based on optimizing a convex objective. This gives rise to many desirable properties including universality, data-efficient online learning, trivial interpretability and robustness to catastrophic forgetting. We extend the GLN framework from classification to multiple regression and density modelling by generalizing geometric mixing to a product of Gaussian densities. The G-GLN achieves competitive or state-of-the-art performance on several univariate and multivariate regression benchmarks, and we demonstrate its applicability to practical tasks including online contextual bandits and density estimation via denoising.
versions: [ { "created": "Wed, 10 Jun 2020 17:25:12 GMT", "version": "v1" }, { "created": "Wed, 21 Oct 2020 16:39:03 GMT", "version": "v2" } ]
update_date: 2020-10-22
authors_parsed: [ [ "Budden", "David", "" ], [ "Marblestone", "Adam", "" ], [ "Sezener", "Eren", "" ], [ "Lattimore", "Tor", "" ], [ "Wayne", "Greg", "" ], [ "Veness", "Joel", "" ] ]
abstract: (identical to orig_abstract)
id: 1703.07994
submitter: Andreas Pieris
authors: Pablo Barcelo, Gerald Berger, Andreas Pieris
title: Containment for Rule-Based Ontology-Mediated Queries
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DB cs.AI cs.LO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Many efforts have been dedicated to identifying restrictions on ontologies expressed as tuple-generating dependencies (tgds), a.k.a. existential rules, that lead to decidability of the problem of answering ontology-mediated queries (OMQs). This has given rise to three families of formalisms: guarded, non-recursive, and sticky sets of tgds. In this work, we study the containment problem for OMQs expressed in such formalisms, which is a key ingredient for solving static analysis tasks associated with them. Our main contribution is the development of specially tailored techniques for OMQ containment under the classes of tgds stated above. This enables us to obtain sharp complexity bounds for the problems at hand, which in turn allow us to delimit their practical applicability. We also apply our techniques to pinpoint the complexity of problems associated with two emerging applications of OMQ containment: distribution over components and UCQ rewritability of OMQs.
versions: [ { "created": "Thu, 23 Mar 2017 10:44:18 GMT", "version": "v1" }, { "created": "Sun, 2 Apr 2017 16:13:16 GMT", "version": "v2" }, { "created": "Wed, 19 Apr 2017 00:26:02 GMT", "version": "v3" } ]
update_date: 2017-04-20
authors_parsed: [ [ "Barcelo", "Pablo", "" ], [ "Berger", "Gerald", "" ], [ "Pieris", "Andreas", "" ] ]
abstract: (identical to orig_abstract)
id: 1011.3019
submitter: Shriprakash Sinha
authors: Shriprakash Sinha and Gert J. ter Horst
title: Bounded Multivariate Surfaces On Monovariate Internal Functions
comments: 23 pages, 15 figures, 1 table
journal-ref: IEEE Intl. Conf. on Image Processing, Brussels, Sept. 11 to 14, 2011
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Combining the properties of monovariate internal functions as proposed in the Kolmogorov superposition theorem, in tandem with the bounds wielded by the multivariate formulation of the Chebyshev inequality, a hybrid model is presented that decomposes images into homogeneous, probabilistically bounded multivariate surfaces. Given an image, the model shows a novel way of working on a reduced image representation while processing and capturing the interaction among the multidimensional information that describes its content. Further, it tackles the practical issues of preventing leakage by bounding the growth of surfaces and reducing the problem sample size. The model also sheds light on how the Chebyshev parameter relates to the number of pixels and the dimensionality of the feature space associated with a pixel. Initial segmentation results on the Berkeley image segmentation benchmark indicate the effectiveness of the proposed decomposition algorithm.
versions: [ { "created": "Fri, 12 Nov 2010 19:48:13 GMT", "version": "v1" } ]
update_date: 2011-06-03
authors_parsed: [ [ "Sinha", "Shriprakash", "" ], [ "ter Horst", "Gert J.", "" ] ]
abstract: (identical to orig_abstract)
id: 1402.2489
submitter: Yingjie Zhou
authors: Yingjie Zhou, Nicholas Maxemchuk, Xiangying Qian, Chen Wang
title: The Fair Distribution of Power to Electric Vehicles: An Alternative to Pricing
comments: accepted in IEEE Smartgridcomm'14
journal-ref: null
doi: 10.1109/SmartGridComm.2014.7007727
report-no: null
categories: cs.NI cs.SY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: As the popularity of electric vehicles increases, the demand for more power can increase more rapidly than our ability to install additional generating capacity. In the long term we expect that the supply and demand will become balanced. However, in the interim the rate at which electric vehicles can be deployed will depend on our ability to charge these vehicles without inconveniencing their owners. In this paper, we investigate using fairness mechanisms to distribute power to electric vehicles on a smart grid. We assume that during peak demand there is insufficient power to charge all the vehicles simultaneously. In each five-minute interval of time we select a subset of the vehicles to charge, based upon information about the vehicles. We evaluate the selection mechanisms using published data on the current demand for electric power as a function of time of day, current driving habits for commuting, and the current rates at which electric vehicles can be charged on home outlets. We found that conventional selection strategies, such as first-come-first-served or round robin, may delay a significant fraction of the vehicles by more than two hours, even when the total available power over the course of a day is two or three times the power required by the vehicles. However, a selection mechanism that minimizes the maximum delay can reduce the delays to a few minutes, even when the capacity available for charging electric vehicles exceeds their requirements by as little as 5%.
versions: [ { "created": "Tue, 11 Feb 2014 14:06:36 GMT", "version": "v1" }, { "created": "Wed, 30 Jul 2014 01:30:44 GMT", "version": "v2" } ]
update_date: 2016-11-15
authors_parsed: [ [ "Zhou", "Yingjie", "" ], [ "Maxemchuk", "Nicholas", "" ], [ "Qian", "Xiangying", "" ], [ "Wang", "Chen", "" ] ]
abstract: (identical to orig_abstract)
id: 1907.10046
submitter: Naftali Cohen
authors: Naftali Cohen, Tucker Balch, and Manuela Veloso
title: Trading via Image Classification
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV q-fin.CP q-fin.TR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The art of systematic financial trading evolved with an array of approaches, ranging from simple strategies to complex algorithms, all relying primarily on aspects of time-series analysis. Recently, after visiting the trading floor of a leading financial institution, we noticed that traders always execute their trade orders while observing images of financial time-series on their screens. In this work, we build upon the success in image recognition and examine the value in transforming the traditional time-series analysis to that of image classification. We create a large sample of financial time-series images encoded as candlestick (Box and Whisker) charts and label the samples following three algebraically-defined binary trade strategies. Using the images, we train over a dozen machine-learning classification models and find that the algorithms are very efficient in recovering the complicated, multiscale label-generating rules when the data is represented visually. We suggest that the transformation of a continuous numeric time-series classification problem into a vision problem is useful for recovering signals typical of technical analysis.
versions: [ { "created": "Tue, 23 Jul 2019 17:58:10 GMT", "version": "v1" }, { "created": "Wed, 2 Oct 2019 14:02:59 GMT", "version": "v2" }, { "created": "Mon, 26 Oct 2020 05:01:18 GMT", "version": "v3" } ]
update_date: 2020-10-27
authors_parsed: [ [ "Cohen", "Naftali", "" ], [ "Balch", "Tucker", "" ], [ "Veloso", "Manuela", "" ] ]
abstract: (identical to orig_abstract)
id: 2402.17653
submitter: David Williams
authors: David S. W. Williams, Daniele De Martini, Matthew Gadd and Paul Newman
title: Mitigating Distributional Shift in Semantic Segmentation via Uncertainty Estimation from Unlabelled Data
comments: Accepted for publication in IEEE Transactions on Robotics (T-RO)
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.RO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Knowing when a trained segmentation model is encountering data that is different to its training data is important. Understanding and mitigating the effects of this play an important part in their application from a performance and assurance perspective - this being a safety concern in applications such as autonomous vehicles (AVs). This work presents a segmentation network that can detect errors caused by challenging test domains without any additional annotation in a single forward pass. As annotation costs limit the diversity of labelled datasets, we use easy-to-obtain, uncurated and unlabelled data to learn to perform uncertainty estimation by selectively enforcing consistency over data augmentation. To this end, a novel segmentation benchmark based on the SAX Dataset is used, which includes labelled test data spanning three autonomous-driving domains, ranging in appearance from dense urban to off-road. The proposed method, named Gamma-SSL, consistently outperforms uncertainty estimation and Out-of-Distribution (OoD) techniques on this difficult benchmark - by up to 10.7% in area under the receiver operating characteristic (ROC) curve and 19.2% in area under the precision-recall (PR) curve in the most challenging of the three scenarios.
versions: [ { "created": "Tue, 27 Feb 2024 16:23:11 GMT", "version": "v1" } ]
update_date: 2024-02-28
authors_parsed: [ [ "Williams", "David S. W.", "" ], [ "De Martini", "Daniele", "" ], [ "Gadd", "Matthew", "" ], [ "Newman", "Paul", "" ] ]
abstract: (identical to orig_abstract)
id: 2407.16975
submitter: Xinshuai Dong
authors: Xinshuai Dong and Ignavier Ng and Biwei Huang and Yuewen Sun and Songyao Jin and Roberto Legaspi and Peter Spirtes and Kun Zhang
title: On the Parameter Identifiability of Partially Observed Linear Causal Models
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG stat.ME
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Linear causal models are important tools for modeling causal dependencies and yet in practice, only a subset of the variables can be observed. In this paper, we examine the parameter identifiability of these models by investigating whether the edge coefficients can be recovered given the causal structure and partially observed data. Our setting is more general than that of prior research - we allow all variables, including both observed and latent ones, to be flexibly related, and we consider the coefficients of all edges, whereas most existing works focus only on the edges between observed variables. Theoretically, we identify three types of indeterminacy for the parameters in partially observed linear causal models. We then provide graphical conditions that are sufficient for all parameters to be identifiable and show that some of them are provably necessary. Methodologically, we propose a novel likelihood-based parameter estimation method that addresses the variance indeterminacy of latent variables in a specific way and can asymptotically recover the underlying parameters up to trivial indeterminacy. Empirical studies on both synthetic and real-world datasets validate our identifiability theory and the effectiveness of the proposed method in the finite-sample regime.
versions: [ { "created": "Wed, 24 Jul 2024 03:43:55 GMT", "version": "v1" } ]
update_date: 2024-07-25
authors_parsed: [ [ "Dong", "Xinshuai", "" ], [ "Ng", "Ignavier", "" ], [ "Huang", "Biwei", "" ], [ "Sun", "Yuewen", "" ], [ "Jin", "Songyao", "" ], [ "Legaspi", "Roberto", "" ], [ "Spirtes", "Peter", "" ], [ "Zhang", "Kun", "" ] ]
abstract: (identical to orig_abstract)
id: 2312.16335
submitter: Mariano Tepper
authors: Mariano Tepper, Ishwar Singh Bhati, Cecilia Aguerrebere, Mark Hildebrand, Ted Willke
title: LeanVec: Searching vectors faster by making them fit
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.DB
license: http://creativecommons.org/licenses/by-sa/4.0/
orig_abstract: Modern deep learning models have the ability to generate high-dimensional vectors whose similarity reflects semantic resemblance. Thus, similarity search, i.e., the operation of retrieving those vectors in a large collection that are similar to a given query, has become a critical component of a wide range of applications that demand highly accurate and timely answers. In this setting, the high vector dimensionality puts similarity search systems under compute and memory pressure, leading to subpar performance. Additionally, cross-modal retrieval tasks have become increasingly common, e.g., where a user inputs a text query to find the most relevant images for that query. However, these queries often have different distributions than the database embeddings, making it challenging to achieve high accuracy. In this work, we present LeanVec, a framework that combines linear dimensionality reduction with vector quantization to accelerate similarity search on high-dimensional vectors while maintaining accuracy. We present LeanVec variants for in-distribution (ID) and out-of-distribution (OOD) queries. LeanVec-ID yields accuracies on par with those from recently introduced deep learning alternatives whose computational overhead precludes their usage in practice. LeanVec-OOD uses two novel techniques for dimensionality reduction that consider the query and database distributions to simultaneously boost the accuracy and the performance of the framework even further (even presenting competitive results when the query and database distributions match). All in all, our extensive and varied experimental results show that LeanVec produces state-of-the-art results, with up to 3.7x improvement in search throughput and up to 4.9x faster index build time over the state of the art.
versions: [ { "created": "Tue, 26 Dec 2023 21:14:59 GMT", "version": "v1" }, { "created": "Wed, 3 Apr 2024 16:18:24 GMT", "version": "v2" } ]
update_date: 2024-04-04
authors_parsed: [ [ "Tepper", "Mariano", "" ], [ "Bhati", "Ishwar Singh", "" ], [ "Aguerrebere", "Cecilia", "" ], [ "Hildebrand", "Mark", "" ], [ "Willke", "Ted", "" ] ]
abstract: (identical to orig_abstract)
id: 1207.6452
submitter: Frank Ruskey
authors: Khalegh Mamakani and Frank Ruskey
title: A New Rose: The First Simple Symmetric 11-Venn Diagram
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CG math.CO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: A symmetric Venn diagram is one that is invariant under rotation, up to a relabeling of curves. A simple Venn diagram is one in which at most two curves intersect at any point. In this paper we introduce a new property of Venn diagrams called crosscut symmetry, which is related to dihedral symmetry. Utilizing a computer search restricted to crosscut symmetry we found many simple symmetric Venn diagrams with 11 curves. This answers an existence question that has been open since the 1960s. The first such diagram that was discovered is shown here.
versions: [ { "created": "Fri, 27 Jul 2012 05:57:49 GMT", "version": "v1" } ]
update_date: 2012-07-30
authors_parsed: [ [ "Mamakani", "Khalegh", "" ], [ "Ruskey", "Frank", "" ] ]
abstract: (identical to orig_abstract)
2302.09582
Dan Zhang
Ming Li, Yusheng Su, Hsiu-Yuan Huang, Jiali Cheng, Xin Hu, Xinmiao Zhang, Huadong Wang, Yujia Qin, Xiaozhi Wang, Kristen A. Lindquist, Zhiyuan Liu, Dan Zhang
Language-Specific Representation of Emotion-Concept Knowledge Causally Supports Emotion Inference
44 pages, 14 figures, 2 tables
null
null
null
cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans no doubt use language to communicate about their emotional experiences, but does language in turn help humans understand emotions, or is language just a vehicle of communication? This study used a form of artificial intelligence (AI) known as large language models (LLMs) to assess whether language-based representations of emotion causally contribute to the AI's ability to generate inferences about the emotional meaning of novel situations. Fourteen attributes of human emotion concept representation were found to be represented by the LLM's distinct artificial neuron populations. By manipulating these attribute-related neurons, we in turn demonstrated the role of emotion concept knowledge in generative emotion inference. The attribute-specific performance deterioration was related to the importance of different attributes in human mental space. Our findings provide a proof-of-concept that even an LLM can learn about emotions in the absence of sensory-motor representations and highlight the contribution of language-derived emotion-concept knowledge for emotion inference.
[ { "created": "Sun, 19 Feb 2023 14:21:33 GMT", "version": "v1" }, { "created": "Tue, 21 Feb 2023 07:28:04 GMT", "version": "v2" }, { "created": "Wed, 12 Jul 2023 09:04:14 GMT", "version": "v3" }, { "created": "Mon, 21 Aug 2023 09:44:19 GMT", "version": "v4" }, { "created": "Tue, 12 Mar 2024 14:55:29 GMT", "version": "v5" } ]
2024-03-13
[ [ "Li", "Ming", "" ], [ "Su", "Yusheng", "" ], [ "Huang", "Hsiu-Yuan", "" ], [ "Cheng", "Jiali", "" ], [ "Hu", "Xin", "" ], [ "Zhang", "Xinmiao", "" ], [ "Wang", "Huadong", "" ], [ "Qin", "Yujia", "" ], [ "Wang", "Xiaozhi", "" ], [ "Lindquist", "Kristen A.", "" ], [ "Liu", "Zhiyuan", "" ], [ "Zhang", "Dan", "" ] ]
Humans no doubt use language to communicate about their emotional experiences, but does language in turn help humans understand emotions, or is language just a vehicle of communication? This study used a form of artificial intelligence (AI) known as large language models (LLMs) to assess whether language-based representations of emotion causally contribute to the AI's ability to generate inferences about the emotional meaning of novel situations. Fourteen attributes of human emotion concept representation were found to be represented by the LLM's distinct artificial neuron populations. By manipulating these attribute-related neurons, we in turn demonstrated the role of emotion concept knowledge in generative emotion inference. The attribute-specific performance deterioration was related to the importance of different attributes in human mental space. Our findings provide a proof-of-concept that even an LLM can learn about emotions in the absence of sensory-motor representations and highlight the contribution of language-derived emotion-concept knowledge for emotion inference.
2102.07053
Aritra Mitra
Aritra Mitra, Rayana Jaafar, George J. Pappas, and Hamed Hassani
Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
Compared to the previous version, this version contains an additional result pertaining to a general stochastic oracle model. It also includes additional comparisons of our algorithm and results with relevant existing works
null
null
null
cs.LG cs.DC cs.SY eess.SY math.OC
http://creativecommons.org/licenses/by/4.0/
We consider a standard federated learning (FL) architecture where a group of clients periodically coordinate with a central server to train a statistical model. We develop a general algorithmic framework called FedLin to tackle some of the key challenges intrinsic to FL, namely objective heterogeneity, systems heterogeneity, and infrequent and imprecise communication. Our framework is motivated by the observation that under these challenges, various existing FL algorithms suffer from a fundamental speed-accuracy conflict: they either guarantee linear convergence but to an incorrect point, or convergence to the global minimum but at a sub-linear rate, i.e., fast convergence comes at the expense of accuracy. In contrast, when the clients' local loss functions are smooth and strongly convex, we show that FedLin guarantees linear convergence to the global minimum, despite arbitrary objective and systems heterogeneity. We then establish matching upper and lower bounds on the convergence rate of FedLin that highlight the effects of intermittent communication. Finally, we show that FedLin preserves linear convergence rates under aggressive gradient sparsification, and quantify the effect of the compression level on the convergence rate. Our work is the first to provide tight linear convergence rate guarantees, and constitutes the first comprehensive analysis of gradient sparsification in FL.
[ { "created": "Sun, 14 Feb 2021 02:47:35 GMT", "version": "v1" }, { "created": "Mon, 30 Aug 2021 18:31:11 GMT", "version": "v2" } ]
2021-09-01
[ [ "Mitra", "Aritra", "" ], [ "Jaafar", "Rayana", "" ], [ "Pappas", "George J.", "" ], [ "Hassani", "Hamed", "" ] ]
We consider a standard federated learning (FL) architecture where a group of clients periodically coordinate with a central server to train a statistical model. We develop a general algorithmic framework called FedLin to tackle some of the key challenges intrinsic to FL, namely objective heterogeneity, systems heterogeneity, and infrequent and imprecise communication. Our framework is motivated by the observation that under these challenges, various existing FL algorithms suffer from a fundamental speed-accuracy conflict: they either guarantee linear convergence but to an incorrect point, or convergence to the global minimum but at a sub-linear rate, i.e., fast convergence comes at the expense of accuracy. In contrast, when the clients' local loss functions are smooth and strongly convex, we show that FedLin guarantees linear convergence to the global minimum, despite arbitrary objective and systems heterogeneity. We then establish matching upper and lower bounds on the convergence rate of FedLin that highlight the effects of intermittent communication. Finally, we show that FedLin preserves linear convergence rates under aggressive gradient sparsification, and quantify the effect of the compression level on the convergence rate. Our work is the first to provide tight linear convergence rate guarantees, and constitutes the first comprehensive analysis of gradient sparsification in FL.
2308.15157
Leander F\'eret
L. F\'eret, A. Gepperth, S. Lambeck
On the improvement of model-predictive controllers
null
null
null
null
cs.LG cs.NE cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
This article investigates synthetic model-predictive control (MPC) problems to demonstrate that an increased precision of the internal prediction model (PM) automatically entails an improvement of the controller as a whole. In contrast to reinforcement learning (RL), MPC uses the PM to predict subsequent states of the controlled system (CS), instead of directly recommending suitable actions. To assess how the precision of the PM translates into the quality of the model-predictive controller, we compare a DNN-based PM to the optimal baseline PM for three well-known control problems of varying complexity. The baseline PM achieves perfect accuracy by accessing the simulation of the CS itself. Based on the obtained results, we argue that an improvement of the PM will always improve the controller as a whole, without considering the impact of other components such as action selection (which, in this article, relies on evolutionary optimization).
[ { "created": "Tue, 29 Aug 2023 09:39:12 GMT", "version": "v1" } ]
2023-08-30
[ [ "Féret", "L.", "" ], [ "Gepperth", "A.", "" ], [ "Lambeck", "S.", "" ] ]
This article investigates synthetic model-predictive control (MPC) problems to demonstrate that an increased precision of the internal prediction model (PM) automatically entails an improvement of the controller as a whole. In contrast to reinforcement learning (RL), MPC uses the PM to predict subsequent states of the controlled system (CS), instead of directly recommending suitable actions. To assess how the precision of the PM translates into the quality of the model-predictive controller, we compare a DNN-based PM to the optimal baseline PM for three well-known control problems of varying complexity. The baseline PM achieves perfect accuracy by accessing the simulation of the CS itself. Based on the obtained results, we argue that an improvement of the PM will always improve the controller as a whole, without considering the impact of other components such as action selection (which, in this article, relies on evolutionary optimization).
2009.00326
Christophe Lecoutre
Christophe Lecoutre and Nicolas Szczepanski
PyCSP3: Modeling Combinatorial Constrained Problems in Python
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
In this document, we introduce PyCSP$3$, a Python library that allows us to write models of combinatorial constrained problems in a declarative manner. Currently, with PyCSP$3$, you can write models of constraint satisfaction and optimization problems. More specifically, you can build CSP (Constraint Satisfaction Problem) and COP (Constraint Optimization Problem) models. Importantly, there is a complete separation between the modeling and solving phases: you write a model, you compile it (while providing some data) in order to generate an XCSP$3$ instance (file), and you solve that problem instance by means of a constraint solver. You can also directly pilot the solving procedure in PyCSP$3$, possibly conducting an incremental solving strategy. In this document, you will find all that you need to know about PyCSP$3$, with more than 50 illustrative models.
[ { "created": "Tue, 1 Sep 2020 10:11:31 GMT", "version": "v1" }, { "created": "Tue, 22 Jun 2021 16:29:31 GMT", "version": "v2" }, { "created": "Sat, 18 Dec 2021 12:48:14 GMT", "version": "v3" }, { "created": "Mon, 7 Nov 2022 10:04:07 GMT", "version": "v4" }, { "created": "Sun, 10 Dec 2023 12:46:50 GMT", "version": "v5" } ]
2023-12-12
[ [ "Lecoutre", "Christophe", "" ], [ "Szczepanski", "Nicolas", "" ] ]
In this document, we introduce PyCSP$3$, a Python library that allows us to write models of combinatorial constrained problems in a declarative manner. Currently, with PyCSP$3$, you can write models of constraint satisfaction and optimization problems. More specifically, you can build CSP (Constraint Satisfaction Problem) and COP (Constraint Optimization Problem) models. Importantly, there is a complete separation between the modeling and solving phases: you write a model, you compile it (while providing some data) in order to generate an XCSP$3$ instance (file), and you solve that problem instance by means of a constraint solver. You can also directly pilot the solving procedure in PyCSP$3$, possibly conducting an incremental solving strategy. In this document, you will find all that you need to know about PyCSP$3$, with more than 50 illustrative models.
1104.0769
Damien Chablat
Anatoly Pashkevich (IRCCyN), Alexandr Klimchik (IRCCyN), Damien Chablat (IRCCyN)
Enhanced stiffness modeling of manipulators with passive joints
null
Mechanism and Machine Theory 46, 5 (2011) 10-18
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper presents a methodology to enhance the stiffness analysis of serial and parallel manipulators with passive joints. It directly takes into account the loading influence on the manipulator configuration and, consequently, on its Jacobians and Hessians. The main contributions of this paper are the introduction of a non-linear stiffness model for manipulators with passive joints, a relevant numerical technique for its linearization, and the computation of the Cartesian stiffness matrix, which allows rank-deficiency. Within the developed technique, the manipulator elements are presented as pseudo-rigid bodies separated by multidimensional virtual springs and perfect passive joints. Simulation examples are presented that deal with parallel manipulators of the Ortholide family and demonstrate the ability of the developed methodology to describe non-linear behavior of the manipulator structure such as a sudden change of the elastic instability properties (buckling).
[ { "created": "Tue, 5 Apr 2011 08:36:04 GMT", "version": "v1" } ]
2011-04-06
[ [ "Pashkevich", "Anatoly", "", "IRCCyN" ], [ "Klimchik", "Alexandr", "", "IRCCyN" ], [ "Chablat", "Damien", "", "IRCCyN" ] ]
The paper presents a methodology to enhance the stiffness analysis of serial and parallel manipulators with passive joints. It directly takes into account the loading influence on the manipulator configuration and, consequently, on its Jacobians and Hessians. The main contributions of this paper are the introduction of a non-linear stiffness model for manipulators with passive joints, a relevant numerical technique for its linearization, and the computation of the Cartesian stiffness matrix, which allows rank-deficiency. Within the developed technique, the manipulator elements are presented as pseudo-rigid bodies separated by multidimensional virtual springs and perfect passive joints. Simulation examples are presented that deal with parallel manipulators of the Ortholide family and demonstrate the ability of the developed methodology to describe non-linear behavior of the manipulator structure such as a sudden change of the elastic instability properties (buckling).
2010.01057
Ikuya Yamada
Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto
LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention
EMNLP 2020
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Entity representations are useful in natural language tasks involving entities. In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed model treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question answering). Our source code and pretrained representations are available at https://github.com/studio-ousia/luke.
[ { "created": "Fri, 2 Oct 2020 15:38:03 GMT", "version": "v1" } ]
2020-10-05
[ [ "Yamada", "Ikuya", "" ], [ "Asai", "Akari", "" ], [ "Shindo", "Hiroyuki", "" ], [ "Takeda", "Hideaki", "" ], [ "Matsumoto", "Yuji", "" ] ]
Entity representations are useful in natural language tasks involving entities. In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed model treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question answering). Our source code and pretrained representations are available at https://github.com/studio-ousia/luke.
2303.11887
Cornelia Ott
Cornelia Ott, Hedongliang Liu, Antonia Wachter-Zeh
Geometrical Properties of Balls in Sum-Rank Metric
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The sum-rank metric arises as an algebraic approach for coding in MIMO block-fading channels and multishot network coding. Codes designed in the sum-rank metric have raised interest in applications such as streaming codes, robust coded distributed storage systems and post-quantum secure cryptosystems. The sum-rank metric can be seen as a generalization of the well-known Hamming metric and the rank metric. As a relatively new metric, there are still many open theoretical problems for codes in the sum-rank metric. In this paper we investigate the geometrical properties of the balls with sum-rank radii motivated by investigating covering properties of codes.
[ { "created": "Tue, 21 Mar 2023 14:28:03 GMT", "version": "v1" } ]
2023-03-22
[ [ "Ott", "Cornelia", "" ], [ "Liu", "Hedongliang", "" ], [ "Wachter-Zeh", "Antonia", "" ] ]
The sum-rank metric arises as an algebraic approach for coding in MIMO block-fading channels and multishot network coding. Codes designed in the sum-rank metric have raised interest in applications such as streaming codes, robust coded distributed storage systems and post-quantum secure cryptosystems. The sum-rank metric can be seen as a generalization of the well-known Hamming metric and the rank metric. As a relatively new metric, there are still many open theoretical problems for codes in the sum-rank metric. In this paper we investigate the geometrical properties of the balls with sum-rank radii motivated by investigating covering properties of codes.
1706.05206
Jakob Verbeek
Nitika Verma, Edmond Boyer, Jakob Verbeek
FeaStNet: Feature-Steered Graph Convolutions for 3D Shape Analysis
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural networks (CNNs) have massively impacted visual recognition in 2D images, and are now ubiquitous in state-of-the-art approaches. CNNs do not easily extend, however, to data that are not represented by regular grids, such as 3D shape meshes or other graph-structured data, to which traditional local convolution operators do not directly apply. To address this problem, we propose a novel graph-convolution operator to establish correspondences between filter weights and graph neighborhoods with arbitrary connectivity. The key novelty of our approach is that these correspondences are dynamically computed from features learned by the network, rather than relying on predefined static coordinates over the graph as in previous work. We obtain excellent experimental results that significantly improve over previous state-of-the-art shape correspondence results. This shows that our approach can learn effective shape representations from raw input coordinates, without relying on shape descriptors.
[ { "created": "Fri, 16 Jun 2017 10:08:53 GMT", "version": "v1" }, { "created": "Wed, 28 Mar 2018 13:27:39 GMT", "version": "v2" } ]
2018-03-29
[ [ "Verma", "Nitika", "" ], [ "Boyer", "Edmond", "" ], [ "Verbeek", "Jakob", "" ] ]
Convolutional neural networks (CNNs) have massively impacted visual recognition in 2D images, and are now ubiquitous in state-of-the-art approaches. CNNs do not easily extend, however, to data that are not represented by regular grids, such as 3D shape meshes or other graph-structured data, to which traditional local convolution operators do not directly apply. To address this problem, we propose a novel graph-convolution operator to establish correspondences between filter weights and graph neighborhoods with arbitrary connectivity. The key novelty of our approach is that these correspondences are dynamically computed from features learned by the network, rather than relying on predefined static coordinates over the graph as in previous work. We obtain excellent experimental results that significantly improve over previous state-of-the-art shape correspondence results. This shows that our approach can learn effective shape representations from raw input coordinates, without relying on shape descriptors.
2310.11346
Hao Lu
Hao Lu, Yunpeng Zhang, Qing Lian, Dalong Du, Yingcong Chen
Towards Generalizable Multi-Camera 3D Object Detection via Perspective Debiasing
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Detecting objects in 3D space using multiple cameras, known as Multi-Camera 3D Object Detection (MC3D-Det), has gained prominence with the advent of bird's-eye view (BEV) approaches. However, these methods often struggle when faced with unfamiliar testing environments due to the lack of diverse training data encompassing various viewpoints and environments. To address this, we propose a novel method that aligns 3D detection with 2D camera plane results, ensuring consistent and accurate detections. Our framework, anchored in perspective debiasing, helps the learning of features resilient to domain shifts. In our approach, we render diverse view maps from BEV features and rectify the perspective bias of these maps, leveraging implicit foreground volumes to bridge the camera and BEV planes. This two-step process promotes the learning of perspective- and context-independent features, crucial for accurate object detection across varying viewpoints, camera parameters, and environmental conditions. Notably, our model-agnostic approach preserves the original network structure without incurring additional inference costs, facilitating seamless integration across various models and simplifying deployment. Furthermore, we also show our approach achieves satisfactory results in real data when trained only with virtual datasets, eliminating the need for real scene annotations. Experimental results on both Domain Generalization (DG) and Unsupervised Domain Adaptation (UDA) clearly demonstrate its effectiveness. The codes are available at https://github.com/EnVision-Research/Generalizable-BEV.
[ { "created": "Tue, 17 Oct 2023 15:31:28 GMT", "version": "v1" }, { "created": "Thu, 30 Nov 2023 07:06:20 GMT", "version": "v2" }, { "created": "Mon, 25 Dec 2023 16:30:00 GMT", "version": "v3" } ]
2023-12-27
[ [ "Lu", "Hao", "" ], [ "Zhang", "Yunpeng", "" ], [ "Lian", "Qing", "" ], [ "Du", "Dalong", "" ], [ "Chen", "Yingcong", "" ] ]
Detecting objects in 3D space using multiple cameras, known as Multi-Camera 3D Object Detection (MC3D-Det), has gained prominence with the advent of bird's-eye view (BEV) approaches. However, these methods often struggle when faced with unfamiliar testing environments due to the lack of diverse training data encompassing various viewpoints and environments. To address this, we propose a novel method that aligns 3D detection with 2D camera plane results, ensuring consistent and accurate detections. Our framework, anchored in perspective debiasing, helps the learning of features resilient to domain shifts. In our approach, we render diverse view maps from BEV features and rectify the perspective bias of these maps, leveraging implicit foreground volumes to bridge the camera and BEV planes. This two-step process promotes the learning of perspective- and context-independent features, crucial for accurate object detection across varying viewpoints, camera parameters, and environmental conditions. Notably, our model-agnostic approach preserves the original network structure without incurring additional inference costs, facilitating seamless integration across various models and simplifying deployment. Furthermore, we also show our approach achieves satisfactory results in real data when trained only with virtual datasets, eliminating the need for real scene annotations. Experimental results on both Domain Generalization (DG) and Unsupervised Domain Adaptation (UDA) clearly demonstrate its effectiveness. The codes are available at https://github.com/EnVision-Research/Generalizable-BEV.
2209.09210
Haoyang Li
Haoyang Li
Use Classifier as Generator
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Image recognition/classification is a widely studied problem, but its reverse problem, image generation, has drawn much less attention until recently. However, the vast majority of current methods for image generation require training/retraining a classifier and/or a generator with certain constraints, which can be hard to achieve. In this paper, we propose a simple approach to directly use a normally trained classifier to generate images. We evaluate our method on MNIST and show experimentally that it produces results of limited quality that are nonetheless recognizable to the human eye.
[ { "created": "Sat, 10 Sep 2022 18:46:01 GMT", "version": "v1" } ]
2022-09-20
[ [ "Li", "Haoyang", "" ] ]
Image recognition/classification is a widely studied problem, but its reverse problem, image generation, has drawn much less attention until recently. However, the vast majority of current methods for image generation require training/retraining a classifier and/or a generator with certain constraints, which can be hard to achieve. In this paper, we propose a simple approach to directly use a normally trained classifier to generate images. We evaluate our method on MNIST and show experimentally that it produces results of limited quality that are nonetheless recognizable to the human eye.
2109.08723
Hayeong Song
Hayeong Song, Yu Fu, Bahador Saket, and John Stasko
Understanding the Effects of Visualizing Missing Values on Visual Data Exploration
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When performing data analysis, people often confront data sets containing missing values. We conducted an empirical study to understand the effects of visualizing those missing values on participants' decision-making processes while performing a visual data exploration task. More specifically, our study participants purchased a hypothetical portfolio of stocks based on a dataset where some stocks had missing values for attributes such as PE ratio, beta, and EPS. The experiment used scatterplots to communicate the stock data. For one group of participants, stocks with missing values simply were not shown, while the second group saw such stocks depicted with estimated values as points with error bars. We measured participants' cognitive load involved in decision-making with data with missing values. Our results indicate that their decision-making workflow was different across two conditions.
[ { "created": "Fri, 17 Sep 2021 19:12:58 GMT", "version": "v1" } ]
2021-09-21
[ [ "Song", "Hayeong", "" ], [ "Fu", "Yu", "" ], [ "Saket", "Bahador", "" ], [ "Stasko", "John", "" ] ]
When performing data analysis, people often confront data sets containing missing values. We conducted an empirical study to understand the effects of visualizing those missing values on participants' decision-making processes while performing a visual data exploration task. More specifically, our study participants purchased a hypothetical portfolio of stocks based on a dataset where some stocks had missing values for attributes such as PE ratio, beta, and EPS. The experiment used scatterplots to communicate the stock data. For one group of participants, stocks with missing values simply were not shown, while the second group saw such stocks depicted with estimated values as points with error bars. We measured participants' cognitive load involved in decision-making with data with missing values. Our results indicate that their decision-making workflow was different across two conditions.
1906.10043
Nestor Nahuel Deniz
Nestor N. Deniz, Guido Sanchez, Marina H. Murillo, Leonardo L. Giovanini
Simultaneous state estimation and control for nonlinear systems subject to bounded disturbances
null
null
null
null
cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we address the output-feedback control problem for nonlinear systems under bounded disturbances using a moving horizon approach. The controller is posed as an optimization-based problem that simultaneously estimates the state trajectory and computes future control inputs. It minimizes a criterion, involving finite forward and backward horizons, with respect to the unknown initial state, measurement noises, and control input variables, and maximizes it with respect to the unknown future disturbances. Although simultaneous state estimation and control approaches are already available in the literature, the novelty of this work lies in linking the lengths of the forward and backward windows with closed-loop stability, assuming detectability and deducing sufficient conditions to assure system stabilizability. Simulation examples are carried out to compare the performance of simultaneous and independent estimation and control approaches, as well as to show the effects of simultaneously solving the control and estimation problems.
[ { "created": "Mon, 24 Jun 2019 16:06:56 GMT", "version": "v1" }, { "created": "Mon, 9 Dec 2019 14:55:09 GMT", "version": "v2" }, { "created": "Wed, 18 Nov 2020 20:17:01 GMT", "version": "v3" } ]
2020-11-20
[ [ "Deniz", "Nestor N.", "" ], [ "Sanchez", "Guido", "" ], [ "Murillo", "Marina H.", "" ], [ "Giovanini", "Leonardo L.", "" ] ]
In this work, we address the output-feedback control problem for nonlinear systems under bounded disturbances using a moving horizon approach. The controller is posed as an optimization-based problem that simultaneously estimates the state trajectory and computes future control inputs. It minimizes a criterion, involving finite forward and backward horizons, with respect to the unknown initial state, measurement noises, and control input variables, and maximizes it with respect to the unknown future disturbances. Although simultaneous state estimation and control approaches are already available in the literature, the novelty of this work lies in linking the lengths of the forward and backward windows with closed-loop stability, assuming detectability and deducing sufficient conditions to assure system stabilizability. Simulation examples are carried out to compare the performance of simultaneous and independent estimation and control approaches, as well as to show the effects of simultaneously solving the control and estimation problems.
2010.08923
Chenyu You
Chenyu You, Nuo Chen, Fenglin Liu, Dongchao Yang, Yuexian Zou
Towards Data Distillation for End-to-end Spoken Conversational Question Answering
null
null
null
null
cs.CL cs.AI eess.AS eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In spoken question answering, QA systems are designed to answer questions from contiguous text spans within the related speech transcripts. However, the most natural way that humans seek or test their knowledge is via human conversations. Therefore, we propose a new Spoken Conversational Question Answering task (SCQA), aiming at enabling QA systems to model complex dialogue flows given the speech utterances and text corpora. In this task, our main objective is to build a QA system to deal with conversational questions both in spoken and text forms, and to explore the plausibility of providing more cues in spoken documents with systems in information gathering. To this end, instead of adopting automatically generated speech transcripts with highly noisy data, we propose a novel unified data distillation approach, DDNet, which directly fuses audio-text features to reduce the misalignment between automatic speech recognition hypotheses and the reference transcriptions. In addition, to evaluate the capacity of QA systems in a dialogue-style interaction, we assemble a Spoken Conversational Question Answering (Spoken-CoQA) dataset with more than 120k question-answer pairs. Experiments demonstrate that our proposed method achieves superior performance in spoken conversational question answering.
[ { "created": "Sun, 18 Oct 2020 05:53:39 GMT", "version": "v1" } ]
2020-10-20
[ [ "You", "Chenyu", "" ], [ "Chen", "Nuo", "" ], [ "Liu", "Fenglin", "" ], [ "Yang", "Dongchao", "" ], [ "Zou", "Yuexian", "" ] ]
In spoken question answering, QA systems are designed to answer questions from contiguous text spans within the related speech transcripts. However, the most natural way that humans seek or test their knowledge is via human conversations. Therefore, we propose a new Spoken Conversational Question Answering task (SCQA), aiming at enabling QA systems to model complex dialogue flows given the speech utterances and text corpora. In this task, our main objective is to build a QA system that can deal with conversational questions in both spoken and text forms, and to explore the plausibility of providing more cues in spoken documents for systems in information gathering. To this end, instead of adopting automatically generated speech transcripts with highly noisy data, we propose a novel unified data distillation approach, DDNet, which directly fuses audio-text features to reduce the misalignment between automatic speech recognition hypotheses and the reference transcriptions. In addition, to evaluate the capacity of QA systems in a dialogue-style interaction, we assemble a Spoken Conversational Question Answering (Spoken-CoQA) dataset with more than 120k question-answer pairs. Experiments demonstrate that our proposed method achieves superior performance in spoken conversational question answering.
2201.12938
Mycal Tucker
Mycal Tucker, William Kuhl, Khizer Shahid, Seth Karten, Katia Sycara, and Julie Shah
Probe-Based Interventions for Modifying Agent Behavior
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Neural nets are powerful function approximators, but the behavior of a given neural net, once trained, cannot be easily modified. We wish, however, for people to be able to influence neural agents' actions despite the agents never training with humans, which we formalize as a human-assisted decision-making problem. Inspired by prior art initially developed for model explainability, we develop a method for updating representations in pre-trained neural nets according to externally-specified properties. In experiments, we show how our method may be used to improve human-agent team performance for a variety of neural networks from image classifiers to agents in multi-agent reinforcement learning settings.
[ { "created": "Wed, 26 Jan 2022 19:14:00 GMT", "version": "v1" } ]
2022-02-01
[ [ "Tucker", "Mycal", "" ], [ "Kuhl", "William", "" ], [ "Shahid", "Khizer", "" ], [ "Karten", "Seth", "" ], [ "Sycara", "Katia", "" ], [ "Shah", "Julie", "" ] ]
Neural nets are powerful function approximators, but the behavior of a given neural net, once trained, cannot be easily modified. We wish, however, for people to be able to influence neural agents' actions despite the agents never training with humans, which we formalize as a human-assisted decision-making problem. Inspired by prior art initially developed for model explainability, we develop a method for updating representations in pre-trained neural nets according to externally-specified properties. In experiments, we show how our method may be used to improve human-agent team performance for a variety of neural networks from image classifiers to agents in multi-agent reinforcement learning settings.
2312.01006
Xuan Feng
Jiayang Li, Xuan Feng, Tianlong Gu, Liang Chang
Dual-Teacher De-biasing Distillation Framework for Multi-domain Fake News Detection
ICDE 2024
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-domain fake news detection aims to identify whether news from different domains is real or fake, and has become an urgent and important task. However, existing methods are dedicated to improving the overall performance of fake news detection, ignoring the fact that unbalanced data leads to disparate treatment of different domains, i.e., the domain bias problem. To solve this problem, we propose the Dual-Teacher De-biasing Distillation framework (DTDBD) to mitigate bias across different domains. Following knowledge distillation methods, DTDBD adopts a teacher-student structure, where pre-trained large teachers instruct a student model. In particular, DTDBD consists of an unbiased teacher and a clean teacher that jointly guide the student model in mitigating domain bias and maintaining performance. For the unbiased teacher, we introduce an adversarial de-biasing distillation loss to instruct the student model in learning unbiased domain knowledge. For the clean teacher, we design a domain knowledge distillation loss, which effectively incentivizes the student model to focus on representing domain features while maintaining performance. Moreover, we present a momentum-based dynamic adjustment algorithm to trade off the effects of the two teachers. Extensive experiments on Chinese and English datasets show that the proposed method substantially outperforms the state-of-the-art baseline methods in terms of bias metrics while guaranteeing competitive performance.
[ { "created": "Sat, 2 Dec 2023 02:53:45 GMT", "version": "v1" } ]
2023-12-05
[ [ "Li", "Jiayang", "" ], [ "Feng", "Xuan", "" ], [ "Gu", "Tianlong", "" ], [ "Chang", "Liang", "" ] ]
Multi-domain fake news detection aims to identify whether news from different domains is real or fake, and has become an urgent and important task. However, existing methods are dedicated to improving the overall performance of fake news detection, ignoring the fact that unbalanced data leads to disparate treatment of different domains, i.e., the domain bias problem. To solve this problem, we propose the Dual-Teacher De-biasing Distillation framework (DTDBD) to mitigate bias across different domains. Following knowledge distillation methods, DTDBD adopts a teacher-student structure, where pre-trained large teachers instruct a student model. In particular, DTDBD consists of an unbiased teacher and a clean teacher that jointly guide the student model in mitigating domain bias and maintaining performance. For the unbiased teacher, we introduce an adversarial de-biasing distillation loss to instruct the student model in learning unbiased domain knowledge. For the clean teacher, we design a domain knowledge distillation loss, which effectively incentivizes the student model to focus on representing domain features while maintaining performance. Moreover, we present a momentum-based dynamic adjustment algorithm to trade off the effects of the two teachers. Extensive experiments on Chinese and English datasets show that the proposed method substantially outperforms the state-of-the-art baseline methods in terms of bias metrics while guaranteeing competitive performance.
2006.10932
Deqing Yang
Junyang Jiang and Deqing Yang and Yanghua Xiao and Chenlu Shen
Convolutional Gaussian Embeddings for Personalized Recommendation with Uncertainty
null
IJCAI 2019
null
null
cs.IR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing embedding-based recommendation models use embeddings (vectors) corresponding to a single fixed point in a low-dimensional space to represent users and items. Such embeddings fail to precisely represent users/items with the uncertainty often observed in recommender systems. To address this problem, we propose a unified deep recommendation framework employing Gaussian embeddings, which are proven adaptive to the uncertain preferences exhibited by some users, resulting in better user representations and recommendation performance. Furthermore, our framework adopts Monte-Carlo sampling and convolutional neural networks to compute the correlation between the target user and the candidate item, based on which precise recommendations are achieved. Our extensive experiments on two benchmark datasets not only justify that our proposed Gaussian embeddings capture the uncertainty of users very well, but also demonstrate its superior performance over state-of-the-art recommendation models.
[ { "created": "Fri, 19 Jun 2020 02:10:38 GMT", "version": "v1" } ]
2020-06-22
[ [ "Jiang", "Junyang", "" ], [ "Yang", "Deqing", "" ], [ "Xiao", "Yanghua", "" ], [ "Shen", "Chenlu", "" ] ]
Most existing embedding-based recommendation models use embeddings (vectors) corresponding to a single fixed point in a low-dimensional space to represent users and items. Such embeddings fail to precisely represent users/items with the uncertainty often observed in recommender systems. To address this problem, we propose a unified deep recommendation framework employing Gaussian embeddings, which are proven adaptive to the uncertain preferences exhibited by some users, resulting in better user representations and recommendation performance. Furthermore, our framework adopts Monte-Carlo sampling and convolutional neural networks to compute the correlation between the target user and the candidate item, based on which precise recommendations are achieved. Our extensive experiments on two benchmark datasets not only justify that our proposed Gaussian embeddings capture the uncertainty of users very well, but also demonstrate its superior performance over state-of-the-art recommendation models.
1310.3902
Shaoquan Jiang
Dajiang Chen, Shaoquan Jiang, Zhiguang Qin
Message Authentication Code over a Wiretap Channel
Formulation of model is changed
ISIT 2015
null
null
cs.IT cs.CR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Message Authentication Code (MAC) is a keyed function $f_K$ such that when Alice, who shares the secret $K$ with Bob, sends $f_K(M)$ to the latter, Bob will be assured of the integrity and authenticity of $M$. Traditionally, it is assumed that the channel is noiseless. However, Maurer showed that in this case an attacker can succeed with probability $2^{-\frac{H(K)}{\ell+1}}$ after authenticating $\ell$ messages. In this paper, we consider the setting where the channel is noisy. Specifically, Alice and Bob are connected by a discrete memoryless channel (DMC) $W_1$ and a noiseless but insecure channel. In addition, an attacker Oscar is connected with Alice through DMC $W_2$ and with Bob through a noiseless channel. In this setting, we study the framework that sends $M$ over the noiseless channel and the traditional MAC $f_K(M)$ over channel $(W_1, W_2)$. We regard the noisy channel as an expensive resource and define the authentication rate $\rho_{auth}$ as the ratio of message length to the number $n$ of channel $W_1$ uses. The security of this framework depends on the channel coding scheme for $f_K(M)$. A natural coding scheme is to use the secrecy capacity achieving code of Csisz\'{a}r and K\"{o}rner. Intuitively, this is also the optimal strategy. However, we propose a coding scheme that achieves a higher $\rho_{auth}.$ Our crucial point for this is that in the secrecy capacity setting, Bob needs to recover $f_K(M)$ while in our coding scheme this is not necessary. How to detect the attack without recovering $f_K(M)$ is the main contribution of this work. We achieve this through random coding techniques.
[ { "created": "Tue, 15 Oct 2013 02:46:58 GMT", "version": "v1" }, { "created": "Fri, 29 May 2015 17:46:37 GMT", "version": "v2" } ]
2015-06-01
[ [ "Chen", "Dajiang", "" ], [ "Jiang", "Shaoquan", "" ], [ "Qin", "Zhiguang", "" ] ]
Message Authentication Code (MAC) is a keyed function $f_K$ such that when Alice, who shares the secret $K$ with Bob, sends $f_K(M)$ to the latter, Bob will be assured of the integrity and authenticity of $M$. Traditionally, it is assumed that the channel is noiseless. However, Maurer showed that in this case an attacker can succeed with probability $2^{-\frac{H(K)}{\ell+1}}$ after authenticating $\ell$ messages. In this paper, we consider the setting where the channel is noisy. Specifically, Alice and Bob are connected by a discrete memoryless channel (DMC) $W_1$ and a noiseless but insecure channel. In addition, an attacker Oscar is connected with Alice through DMC $W_2$ and with Bob through a noiseless channel. In this setting, we study the framework that sends $M$ over the noiseless channel and the traditional MAC $f_K(M)$ over channel $(W_1, W_2)$. We regard the noisy channel as an expensive resource and define the authentication rate $\rho_{auth}$ as the ratio of message length to the number $n$ of channel $W_1$ uses. The security of this framework depends on the channel coding scheme for $f_K(M)$. A natural coding scheme is to use the secrecy capacity achieving code of Csisz\'{a}r and K\"{o}rner. Intuitively, this is also the optimal strategy. However, we propose a coding scheme that achieves a higher $\rho_{auth}.$ Our crucial point for this is that in the secrecy capacity setting, Bob needs to recover $f_K(M)$ while in our coding scheme this is not necessary. How to detect the attack without recovering $f_K(M)$ is the main contribution of this work. We achieve this through random coding techniques.
2103.04047
Xiuyuan Lu
Xiuyuan Lu, Benjamin Van Roy, Vikranth Dwaracherla, Morteza Ibrahimi, Ian Osband, Zheng Wen
Reinforcement Learning, Bit by Bit
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement learning agents have demonstrated remarkable achievements in simulated environments. Data efficiency poses an impediment to carrying this success over to real environments. The design of data-efficient agents calls for a deeper understanding of information acquisition and representation. We discuss concepts and regret analysis that together offer principled guidance. This line of thinking sheds light on questions of what information to seek, how to seek that information, and what information to retain. To illustrate concepts, we design simple agents that build on them and present computational results that highlight data efficiency.
[ { "created": "Sat, 6 Mar 2021 06:37:46 GMT", "version": "v1" }, { "created": "Sun, 14 Mar 2021 05:58:17 GMT", "version": "v2" }, { "created": "Mon, 12 Apr 2021 18:42:28 GMT", "version": "v3" }, { "created": "Tue, 11 May 2021 01:03:05 GMT", "version": "v4" }, { "created": "Mon, 23 Aug 2021 04:56:18 GMT", "version": "v5" }, { "created": "Mon, 7 Feb 2022 22:13:26 GMT", "version": "v6" }, { "created": "Fri, 25 Mar 2022 15:58:11 GMT", "version": "v7" }, { "created": "Thu, 4 May 2023 20:53:30 GMT", "version": "v8" } ]
2023-05-09
[ [ "Lu", "Xiuyuan", "" ], [ "Van Roy", "Benjamin", "" ], [ "Dwaracherla", "Vikranth", "" ], [ "Ibrahimi", "Morteza", "" ], [ "Osband", "Ian", "" ], [ "Wen", "Zheng", "" ] ]
Reinforcement learning agents have demonstrated remarkable achievements in simulated environments. Data efficiency poses an impediment to carrying this success over to real environments. The design of data-efficient agents calls for a deeper understanding of information acquisition and representation. We discuss concepts and regret analysis that together offer principled guidance. This line of thinking sheds light on questions of what information to seek, how to seek that information, and what information to retain. To illustrate concepts, we design simple agents that build on them and present computational results that highlight data efficiency.
2208.03505
Reethika Ramesh
Reethika Ramesh, Anjali Vyas, Roya Ensafi
"All of them claim to be the best": Multi-perspective study of VPN users and VPN providers
Accepted to appear at USENIX Security Symposium 2023 (32nd USENIX Security Symposium, 2023)
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As more users adopt VPNs for a variety of reasons, it is important to develop empirical knowledge of their needs and mental models of what a VPN offers. Moreover, studying VPN users alone is not enough because, by using a VPN, a user essentially transfers trust, say from their network provider, onto the VPN provider. To that end, we are the first to study the VPN ecosystem from both the users' and the providers' perspectives. In this paper, we conduct a quantitative survey of 1,252 VPN users in the U.S. and qualitative interviews of nine providers to answer several research questions regarding the motivations, needs, threat model, and mental model of users, and the key challenges and insights from VPN providers. We create novel insights by augmenting our multi-perspective results, and highlight cases where the user and provider perspectives are misaligned. Alarmingly, we find that users rely on and trust VPN review sites, but VPN providers shed light on how these sites are mostly motivated by money. Worryingly, we find that users have flawed mental models about the protection VPNs provide, and about data collected by VPNs. We present actionable recommendations for technologists and security and privacy advocates by identifying potential areas on which to focus efforts and improve the VPN ecosystem.
[ { "created": "Sat, 6 Aug 2022 12:16:15 GMT", "version": "v1" }, { "created": "Wed, 28 Sep 2022 20:47:04 GMT", "version": "v2" } ]
2022-09-30
[ [ "Ramesh", "Reethika", "" ], [ "Vyas", "Anjali", "" ], [ "Ensafi", "Roya", "" ] ]
As more users adopt VPNs for a variety of reasons, it is important to develop empirical knowledge of their needs and mental models of what a VPN offers. Moreover, studying VPN users alone is not enough because, by using a VPN, a user essentially transfers trust, say from their network provider, onto the VPN provider. To that end, we are the first to study the VPN ecosystem from both the users' and the providers' perspectives. In this paper, we conduct a quantitative survey of 1,252 VPN users in the U.S. and qualitative interviews of nine providers to answer several research questions regarding the motivations, needs, threat model, and mental model of users, and the key challenges and insights from VPN providers. We create novel insights by augmenting our multi-perspective results, and highlight cases where the user and provider perspectives are misaligned. Alarmingly, we find that users rely on and trust VPN review sites, but VPN providers shed light on how these sites are mostly motivated by money. Worryingly, we find that users have flawed mental models about the protection VPNs provide, and about data collected by VPNs. We present actionable recommendations for technologists and security and privacy advocates by identifying potential areas on which to focus efforts and improve the VPN ecosystem.
1208.0645
Zhi-Hua Zhou
Wei Gao and Zhi-Hua Zhou
On the Consistency of AUC Pairwise Optimization
null
IJCAI 2015
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
AUC (area under ROC curve) is an important evaluation criterion, which has been popularly used in many learning tasks such as class-imbalance learning, cost-sensitive learning, learning to rank, etc. Many learning approaches try to optimize AUC, but owing to the non-convexity and discontinuity of AUC, almost all approaches work with surrogate loss functions. Thus, the consistency of AUC is crucial; however, it has been almost untouched before. In this paper, we provide a sufficient condition for the asymptotic consistency of learning approaches based on surrogate loss functions. Based on this result, we prove that exponential loss and logistic loss are consistent with AUC, but hinge loss is inconsistent. Then, we derive the $q$-norm hinge loss and general hinge loss that are consistent with AUC. We also derive the consistency bounds for exponential loss and logistic loss, and obtain the consistency bounds for many surrogate loss functions under the non-noise setting. Further, we disclose an equivalence between the exponential surrogate loss of AUC and the exponential surrogate loss of accuracy, and one straightforward consequence of this finding is that AdaBoost and RankBoost are equivalent.
[ { "created": "Fri, 3 Aug 2012 02:37:44 GMT", "version": "v1" }, { "created": "Thu, 9 Aug 2012 08:35:28 GMT", "version": "v2" }, { "created": "Thu, 13 Sep 2012 07:00:09 GMT", "version": "v3" }, { "created": "Wed, 2 Jul 2014 14:46:59 GMT", "version": "v4" } ]
2020-07-07
[ [ "Gao", "Wei", "" ], [ "Zhou", "Zhi-Hua", "" ] ]
AUC (area under ROC curve) is an important evaluation criterion, which has been popularly used in many learning tasks such as class-imbalance learning, cost-sensitive learning, learning to rank, etc. Many learning approaches try to optimize AUC, but owing to the non-convexity and discontinuity of AUC, almost all approaches work with surrogate loss functions. Thus, the consistency of AUC is crucial; however, it has been almost untouched before. In this paper, we provide a sufficient condition for the asymptotic consistency of learning approaches based on surrogate loss functions. Based on this result, we prove that exponential loss and logistic loss are consistent with AUC, but hinge loss is inconsistent. Then, we derive the $q$-norm hinge loss and general hinge loss that are consistent with AUC. We also derive the consistency bounds for exponential loss and logistic loss, and obtain the consistency bounds for many surrogate loss functions under the non-noise setting. Further, we disclose an equivalence between the exponential surrogate loss of AUC and the exponential surrogate loss of accuracy, and one straightforward consequence of this finding is that AdaBoost and RankBoost are equivalent.
2307.02263
Weihao Huang
Jianxiang Luo, Junyi Hu, Tianji Pang, Weihao Huang, Chuang Liu
Dynamical Isometry based Rigorous Fair Neural Architecture Search
null
null
null
null
cs.LG cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recently, the weight-sharing technique has significantly sped up the training and evaluation procedure of neural architecture search. However, most existing weight-sharing strategies are based solely on experience or observation, which makes the search results lack interpretability and rationality. In addition, due to the neglect of fairness, current methods are prone to misjudgments in module evaluation. To address these problems, we propose a novel neural architecture search algorithm based on dynamical isometry. We use the fixed-point analysis method from mean field theory to analyze the dynamical behavior of steady-state random neural networks, and show how dynamical isometry guarantees the fairness of weight-sharing-based NAS. Meanwhile, we prove that our module selection strategy is rigorously fair by estimating the generalization error of all modules with a well-conditioned Jacobian. Extensive experiments show that, at the same size, the architecture found by the proposed method can achieve state-of-the-art top-1 validation accuracy on ImageNet classification. In addition, we demonstrate that our method is able to achieve better and more stable training performance without loss of generality.
[ { "created": "Wed, 5 Jul 2023 13:01:21 GMT", "version": "v1" }, { "created": "Thu, 6 Jul 2023 06:56:54 GMT", "version": "v2" } ]
2023-07-07
[ [ "Luo", "Jianxiang", "" ], [ "Hu", "Junyi", "" ], [ "Pang", "Tianji", "" ], [ "Huang", "Weihao", "" ], [ "Liu", "Chuang", "" ] ]
Recently, the weight-sharing technique has significantly sped up the training and evaluation procedure of neural architecture search. However, most existing weight-sharing strategies are based solely on experience or observation, which makes the search results lack interpretability and rationality. In addition, due to the neglect of fairness, current methods are prone to misjudgments in module evaluation. To address these problems, we propose a novel neural architecture search algorithm based on dynamical isometry. We use the fixed-point analysis method from mean field theory to analyze the dynamical behavior of steady-state random neural networks, and show how dynamical isometry guarantees the fairness of weight-sharing-based NAS. Meanwhile, we prove that our module selection strategy is rigorously fair by estimating the generalization error of all modules with a well-conditioned Jacobian. Extensive experiments show that, at the same size, the architecture found by the proposed method can achieve state-of-the-art top-1 validation accuracy on ImageNet classification. In addition, we demonstrate that our method is able to achieve better and more stable training performance without loss of generality.
2002.05848
Keisuke Imoto
Keisuke Imoto and Noriyuki Tonami and Yuma Koizumi and Masahiro Yasuda and Ryosuke Yamanishi and Yoichi Yamashita
Sound Event Detection by Multitask Learning of Sound Events and Scenes with Soft Scene Labels
Accepted to ICASSP 2020
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sound event detection (SED) and acoustic scene classification (ASC) are major tasks in environmental sound analysis. Considering that sound events and scenes are closely related to each other, some works have addressed joint analyses of sound events and acoustic scenes based on multitask learning (MTL), in which the knowledge of sound events and scenes can help in estimating them mutually. The conventional MTL-based methods utilize one-hot scene labels to train the relationship between sound events and scenes; thus, the conventional methods cannot model the extent to which sound events and scenes are related. However, in the real environment, common sound events may occur in some acoustic scenes; on the other hand, some sound events occur only in a limited acoustic scene. In this paper, we thus propose a new method for SED based on MTL of SED and ASC using the soft labels of acoustic scenes, which enable us to model the extent to which sound events and scenes are related. Experiments conducted using TUT Sound Events 2016/2017 and TUT Acoustic Scenes 2016 datasets show that the proposed method improves the SED performance by 3.80% in F-score compared with conventional MTL-based SED.
[ { "created": "Fri, 14 Feb 2020 02:24:06 GMT", "version": "v1" } ]
2020-02-17
[ [ "Imoto", "Keisuke", "" ], [ "Tonami", "Noriyuki", "" ], [ "Koizumi", "Yuma", "" ], [ "Yasuda", "Masahiro", "" ], [ "Yamanishi", "Ryosuke", "" ], [ "Yamashita", "Yoichi", "" ] ]
Sound event detection (SED) and acoustic scene classification (ASC) are major tasks in environmental sound analysis. Considering that sound events and scenes are closely related to each other, some works have addressed joint analyses of sound events and acoustic scenes based on multitask learning (MTL), in which the knowledge of sound events and scenes can help in estimating them mutually. The conventional MTL-based methods utilize one-hot scene labels to train the relationship between sound events and scenes; thus, the conventional methods cannot model the extent to which sound events and scenes are related. However, in the real environment, common sound events may occur in some acoustic scenes; on the other hand, some sound events occur only in a limited acoustic scene. In this paper, we thus propose a new method for SED based on MTL of SED and ASC using the soft labels of acoustic scenes, which enable us to model the extent to which sound events and scenes are related. Experiments conducted using TUT Sound Events 2016/2017 and TUT Acoustic Scenes 2016 datasets show that the proposed method improves the SED performance by 3.80% in F-score compared with conventional MTL-based SED.
2208.05681
Baoxin Wang
Honghong Zhao, Baoxin Wang, Dayong Wu, Wanxiang Che, Zhigang Chen, Shijin Wang
Overview of CTC 2021: Chinese Text Correction for Native Speakers
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present an overview of CTC 2021, a Chinese text correction task for native speakers. We give detailed descriptions of the task definition and the data for training as well as evaluation. We also summarize the approaches investigated by the participants of this task. We hope the data sets collected and annotated for this task can facilitate and expedite future development in this research area. The pseudo training data, gold-standard validation data, and the entire leaderboard are publicly available online at https://destwang.github.io/CTC2021-explorer/.
[ { "created": "Thu, 11 Aug 2022 07:58:48 GMT", "version": "v1" } ]
2022-08-12
[ [ "Zhao", "Honghong", "" ], [ "Wang", "Baoxin", "" ], [ "Wu", "Dayong", "" ], [ "Che", "Wanxiang", "" ], [ "Chen", "Zhigang", "" ], [ "Wang", "Shijin", "" ] ]
In this paper, we present an overview of CTC 2021, a Chinese text correction task for native speakers. We give detailed descriptions of the task definition and the data for training as well as evaluation. We also summarize the approaches investigated by the participants of this task. We hope the data sets collected and annotated for this task can facilitate and expedite future development in this research area. The pseudo training data, gold-standard validation data, and the entire leaderboard are publicly available online at https://destwang.github.io/CTC2021-explorer/.
2008.07967
Prafullkumar Tale Mr
Saket Saurabh, U\'everton dos Santos Souza, Prafullkumar Tale
On the Parameterized Complexity Of Grid Contraction
null
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For a family of graphs $\mathcal{G}$, the $\mathcal{G}$-\textsc{Contraction} problem takes as an input a graph $G$ and an integer $k$, and the goal is to decide if there exists $F \subseteq E(G)$ of size at most $k$ such that $G/F$ belongs to $\mathcal{G}$. Here, $G/F$ is the graph obtained from $G$ by contracting all the edges in $F$. In this article, we initiate the study of \textsc{Grid Contraction} from the parameterized complexity point of view. We present a fixed parameter tractable algorithm, running in time $c^k \cdot |V(G)|^{\mathcal{O}(1)}$, for this problem. We complement this result by proving that unless \ETH\ fails, there is no algorithm for \textsc{Grid Contraction} with running time $c^{o(k)} \cdot |V(G)|^{\mathcal{O}(1)}$. We also present a polynomial kernel for this problem.
[ { "created": "Tue, 18 Aug 2020 14:56:53 GMT", "version": "v1" } ]
2020-08-19
[ [ "Saurabh", "Saket", "" ], [ "Souza", "Uéverton dos Santos", "" ], [ "Tale", "Prafullkumar", "" ] ]
For a family of graphs $\mathcal{G}$, the $\mathcal{G}$-\textsc{Contraction} problem takes as an input a graph $G$ and an integer $k$, and the goal is to decide if there exists $F \subseteq E(G)$ of size at most $k$ such that $G/F$ belongs to $\mathcal{G}$. Here, $G/F$ is the graph obtained from $G$ by contracting all the edges in $F$. In this article, we initiate the study of \textsc{Grid Contraction} from the parameterized complexity point of view. We present a fixed parameter tractable algorithm, running in time $c^k \cdot |V(G)|^{\mathcal{O}(1)}$, for this problem. We complement this result by proving that unless \ETH\ fails, there is no algorithm for \textsc{Grid Contraction} with running time $c^{o(k)} \cdot |V(G)|^{\mathcal{O}(1)}$. We also present a polynomial kernel for this problem.
2309.07021
Federico Lincetto
Federico Lincetto, Gianluca Agresti, Mattia Rossi, Pietro Zanuttigh
Exploiting Multiple Priors for Neural 3D Indoor Reconstruction
Accepted at the British Machine Vision Conference (BMVC) 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Neural implicit modeling achieves impressive 3D reconstruction results on small objects, while it exhibits significant limitations in large indoor scenes. In this work, we propose a novel neural implicit modeling method that leverages multiple regularization strategies to achieve better reconstructions of large indoor environments, while relying only on images. A sparse but accurate depth prior is used to anchor the scene to the initial model. A dense but less accurate depth prior is also introduced, flexible enough to still let the model diverge from it to improve the estimated geometry. Then, a novel self-supervised strategy to regularize the estimated surface normals is presented. Finally, a learnable exposure compensation scheme makes it possible to cope with challenging lighting conditions. Experimental results show that our approach produces state-of-the-art 3D reconstructions in challenging indoor scenarios.
[ { "created": "Wed, 13 Sep 2023 15:23:43 GMT", "version": "v1" } ]
2023-09-14
[ [ "Lincetto", "Federico", "" ], [ "Agresti", "Gianluca", "" ], [ "Rossi", "Mattia", "" ], [ "Zanuttigh", "Pietro", "" ] ]
Neural implicit modeling makes it possible to achieve impressive 3D reconstruction results on small objects, while it exhibits significant limitations in large indoor scenes. In this work, we propose a novel neural implicit modeling method that leverages multiple regularization strategies to achieve better reconstructions of large indoor environments, while relying only on images. A sparse but accurate depth prior is used to anchor the scene to the initial model. A dense but less accurate depth prior is also introduced, flexible enough to still let the model diverge from it to improve the estimated geometry. Then, a novel self-supervised strategy to regularize the estimated surface normals is presented. Finally, a learnable exposure compensation scheme makes it possible to cope with challenging lighting conditions. Experimental results show that our approach produces state-of-the-art 3D reconstructions in challenging indoor scenarios.
2405.00017
Louis Leconte
Louis Leconte, Matthieu Jonckheere, Sergey Samsonov, Eric Moulines
Queuing dynamics of asynchronous Federated Learning
null
null
null
null
cs.DC cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
We study asynchronous federated learning mechanisms with nodes having potentially different computational speeds. In such an environment, each node is allowed to work on models with potential delays and contribute updates to the central server at its own pace. Existing analyses of such algorithms typically depend on intractable quantities such as the maximum node delay and do not consider the underlying queuing dynamics of the system. In this paper, we propose a non-uniform sampling scheme for the central server that allows for lower delays with better complexity, taking into account the closed Jackson network structure of the associated computational graph. Our experiments clearly show a significant improvement of our method over current state-of-the-art asynchronous algorithms on an image classification problem.
[ { "created": "Mon, 12 Feb 2024 18:32:35 GMT", "version": "v1" } ]
2024-05-02
[ [ "Leconte", "Louis", "" ], [ "Jonckheere", "Matthieu", "" ], [ "Samsonov", "Sergey", "" ], [ "Moulines", "Eric", "" ] ]
We study asynchronous federated learning mechanisms with nodes having potentially different computational speeds. In such an environment, each node is allowed to work on models with potential delays and contribute updates to the central server at its own pace. Existing analyses of such algorithms typically depend on intractable quantities such as the maximum node delay and do not consider the underlying queuing dynamics of the system. In this paper, we propose a non-uniform sampling scheme for the central server that allows for lower delays with better complexity, taking into account the closed Jackson network structure of the associated computational graph. Our experiments clearly show a significant improvement of our method over current state-of-the-art asynchronous algorithms on an image classification problem.
2305.11993
Andrey Kutuzov
Mario Giulianelli, Iris Luden, Raquel Fernandez, Andrey Kutuzov
Interpretable Word Sense Representations via Definition Generation: The Case of Semantic Change Analysis
ACL 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We propose using automatically generated natural language definitions of contextualised word usages as interpretable word and word sense representations. Given a collection of usage examples for a target word, and the corresponding data-driven usage clusters (i.e., word senses), a definition is generated for each usage with a specialised Flan-T5 language model, and the most prototypical definition in a usage cluster is chosen as the sense label. We demonstrate how the resulting sense labels can make existing approaches to semantic change analysis more interpretable, and how they can allow users -- historical linguists, lexicographers, or social scientists -- to explore and intuitively explain diachronic trajectories of word meaning. Semantic change analysis is only one of many possible applications of the `definitions as representations' paradigm. Beyond being human-readable, contextualised definitions also outperform token or usage sentence embeddings in word-in-context semantic similarity judgements, making them a new promising type of lexical representation for NLP.
[ { "created": "Fri, 19 May 2023 20:36:21 GMT", "version": "v1" }, { "created": "Tue, 25 Jul 2023 11:50:48 GMT", "version": "v2" } ]
2023-07-26
[ [ "Giulianelli", "Mario", "" ], [ "Luden", "Iris", "" ], [ "Fernandez", "Raquel", "" ], [ "Kutuzov", "Andrey", "" ] ]
We propose using automatically generated natural language definitions of contextualised word usages as interpretable word and word sense representations. Given a collection of usage examples for a target word, and the corresponding data-driven usage clusters (i.e., word senses), a definition is generated for each usage with a specialised Flan-T5 language model, and the most prototypical definition in a usage cluster is chosen as the sense label. We demonstrate how the resulting sense labels can make existing approaches to semantic change analysis more interpretable, and how they can allow users -- historical linguists, lexicographers, or social scientists -- to explore and intuitively explain diachronic trajectories of word meaning. Semantic change analysis is only one of many possible applications of the `definitions as representations' paradigm. Beyond being human-readable, contextualised definitions also outperform token or usage sentence embeddings in word-in-context semantic similarity judgements, making them a new promising type of lexical representation for NLP.
2406.06500
Redwan Ahmed Rizvee
Mohidul Haque Mridul, Mohammad Foysal Khan, Redwan Ahmed Rizvee, Md Mosaddek Khan
Adaptive Opponent Policy Detection in Multi-Agent MDPs: Real-Time Strategy Switch Identification Using Running Error Estimation
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
In Multi-agent Reinforcement Learning (MARL), accurately perceiving opponents' strategies is essential for both cooperative and adversarial contexts, particularly within dynamic environments. While Proximal Policy Optimization (PPO) and related algorithms such as Actor-Critic with Experience Replay (ACER), Trust Region Policy Optimization (TRPO), and Deep Deterministic Policy Gradient (DDPG) perform well in single-agent, stationary environments, they suffer from high variance in MARL due to non-stationary and hidden policies of opponents, leading to diminished reward performance. Additionally, existing methods in MARL face significant challenges, including the need for inter-agent communication, reliance on explicit reward information, high computational demands, and sampling inefficiencies. These issues render them less effective in continuous environments where opponents may abruptly change their policies without prior notice. Against this background, we present OPS-DeMo (Online Policy Switch-Detection Model), an online algorithm that employs dynamic error decay to detect changes in opponents' policies. OPS-DeMo continuously updates its beliefs using an Assumed Opponent Policy (AOP) Bank and selects corresponding responses from a pre-trained Response Policy Bank. Each response policy is trained against consistently strategizing opponents, reducing training uncertainty and enabling the effective use of algorithms like PPO in multi-agent environments. Comparative assessments show that our approach outperforms PPO-trained models in dynamic scenarios like the Predator-Prey setting, providing greater robustness to sudden policy shifts and enabling more informed decision-making through precise opponent policy insights.
[ { "created": "Mon, 10 Jun 2024 17:34:44 GMT", "version": "v1" } ]
2024-06-11
[ [ "Mridul", "Mohidul Haque", "" ], [ "Khan", "Mohammad Foysal", "" ], [ "Rizvee", "Redwan Ahmed", "" ], [ "Khan", "Md Mosaddek", "" ] ]
In Multi-agent Reinforcement Learning (MARL), accurately perceiving opponents' strategies is essential for both cooperative and adversarial contexts, particularly within dynamic environments. While Proximal Policy Optimization (PPO) and related algorithms such as Actor-Critic with Experience Replay (ACER), Trust Region Policy Optimization (TRPO), and Deep Deterministic Policy Gradient (DDPG) perform well in single-agent, stationary environments, they suffer from high variance in MARL due to non-stationary and hidden policies of opponents, leading to diminished reward performance. Additionally, existing methods in MARL face significant challenges, including the need for inter-agent communication, reliance on explicit reward information, high computational demands, and sampling inefficiencies. These issues render them less effective in continuous environments where opponents may abruptly change their policies without prior notice. Against this background, we present OPS-DeMo (Online Policy Switch-Detection Model), an online algorithm that employs dynamic error decay to detect changes in opponents' policies. OPS-DeMo continuously updates its beliefs using an Assumed Opponent Policy (AOP) Bank and selects corresponding responses from a pre-trained Response Policy Bank. Each response policy is trained against consistently strategizing opponents, reducing training uncertainty and enabling the effective use of algorithms like PPO in multi-agent environments. Comparative assessments show that our approach outperforms PPO-trained models in dynamic scenarios like the Predator-Prey setting, providing greater robustness to sudden policy shifts and enabling more informed decision-making through precise opponent policy insights.
2210.08274
Xixi Wu
Xixi Wu, Yun Xiong, Yao Zhang, Yizhu Jiao, Caihua Shan, Yiheng Sun, Yangyong Zhu, and Philip S. Yu
CLARE: A Semi-supervised Community Detection Algorithm
Accepted by KDD'2022
null
10.1145/3534678.3539370
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Community detection refers to the task of discovering closely related subgraphs to understand networks. However, traditional community detection algorithms fail to pinpoint a particular kind of community. This limits their applicability in real-world networks, e.g., distinguishing fraud groups from normal ones in transaction networks. Recently, semi-supervised community detection has emerged as a solution. It aims to seek other similar communities in the network with few labeled communities as training data. Existing works can be regarded as seed-based: locate seed nodes and then develop communities around seeds. However, these methods are quite sensitive to the quality of selected seeds since communities generated around a mis-detected seed may be irrelevant. Besides, they have individual issues, e.g., inflexibility and high computational overhead. To address these issues, we propose CLARE, which consists of two key components, Community Locator and Community Rewriter. Our idea is that we can locate potential communities and then refine them. Therefore, the community locator is proposed for quickly locating potential communities by seeking subgraphs that are similar to training ones in the network. To further adjust these located communities, we devise the community rewriter. Enhanced by deep reinforcement learning, it suggests intelligent decisions, such as adding or dropping nodes, to refine community structures flexibly. Extensive experiments verify both the effectiveness and efficiency of our work compared with prior state-of-the-art approaches on multiple real-world datasets.
[ { "created": "Sat, 15 Oct 2022 12:37:46 GMT", "version": "v1" } ]
2022-10-18
[ [ "Wu", "Xixi", "" ], [ "Xiong", "Yun", "" ], [ "Zhang", "Yao", "" ], [ "Jiao", "Yizhu", "" ], [ "Shan", "Caihua", "" ], [ "Sun", "Yiheng", "" ], [ "Zhu", "Yangyong", "" ], [ "Yu", "Philip S.", "" ] ]
Community detection refers to the task of discovering closely related subgraphs to understand networks. However, traditional community detection algorithms fail to pinpoint a particular kind of community. This limits their applicability in real-world networks, e.g., distinguishing fraud groups from normal ones in transaction networks. Recently, semi-supervised community detection has emerged as a solution. It aims to seek other similar communities in the network with few labeled communities as training data. Existing works can be regarded as seed-based: locate seed nodes and then develop communities around seeds. However, these methods are quite sensitive to the quality of selected seeds since communities generated around a mis-detected seed may be irrelevant. Besides, they have individual issues, e.g., inflexibility and high computational overhead. To address these issues, we propose CLARE, which consists of two key components, Community Locator and Community Rewriter. Our idea is that we can locate potential communities and then refine them. Therefore, the community locator is proposed for quickly locating potential communities by seeking subgraphs that are similar to training ones in the network. To further adjust these located communities, we devise the community rewriter. Enhanced by deep reinforcement learning, it suggests intelligent decisions, such as adding or dropping nodes, to refine community structures flexibly. Extensive experiments verify both the effectiveness and efficiency of our work compared with prior state-of-the-art approaches on multiple real-world datasets.
2106.12124
Mohammad Rostami
Serban Stan, Mohammad Rostami
Secure Domain Adaptation with Multiple Sources
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Multi-source unsupervised domain adaptation (MUDA) is a framework to address the challenge of annotated data scarcity in a target domain via transferring knowledge from multiple annotated source domains. When the source domains are distributed, data privacy and security can become significant concerns and protocols may limit data sharing, yet existing MUDA methods overlook these constraints. We develop an algorithm to address MUDA when source domain data cannot be shared with the target or across the source domains. Our method is based on aligning the distributions of source and target domains indirectly via estimating the source feature embeddings and predicting over a confidence-based combination of domain-specific model predictions. We provide theoretical analysis to support our approach and conduct empirical experiments to demonstrate that our algorithm is effective.
[ { "created": "Wed, 23 Jun 2021 02:26:36 GMT", "version": "v1" }, { "created": "Mon, 14 Nov 2022 21:24:53 GMT", "version": "v2" } ]
2022-11-16
[ [ "Stan", "Serban", "" ], [ "Rostami", "Mohammad", "" ] ]
Multi-source unsupervised domain adaptation (MUDA) is a framework to address the challenge of annotated data scarcity in a target domain via transferring knowledge from multiple annotated source domains. When the source domains are distributed, data privacy and security can become significant concerns and protocols may limit data sharing, yet existing MUDA methods overlook these constraints. We develop an algorithm to address MUDA when source domain data cannot be shared with the target or across the source domains. Our method is based on aligning the distributions of source and target domains indirectly via estimating the source feature embeddings and predicting over a confidence-based combination of domain-specific model predictions. We provide theoretical analysis to support our approach and conduct empirical experiments to demonstrate that our algorithm is effective.
2307.15879
Victor A. Melent'ev
V. A. Melent'ev
Shortest paths search method based on the projective description of unweighted mixed graphs
7 pages, in Russian, 1 figure, 2 tables
null
null
null
cs.DC math.CO
http://creativecommons.org/licenses/by/4.0/
The method is based on the preliminary transformation of the traditionally used matrices or adjacency lists in graph theory into refined projections free from redundant information, and their subsequent use in constructing shortest paths. Unlike adjacency matrices and lists based on enumerating binary adjacency relations, the refined projection is based on enumerating more complex relations: simple paths from a given graph vertex that are shortest. The preliminary acquisition of such projections reduces the algorithmic complexity of applications using them and improves their volumetric and real-time characteristics to linear ones for a pair of vertices. The class of graphs considered is extended to mixed graphs.
[ { "created": "Sat, 29 Jul 2023 03:31:20 GMT", "version": "v1" }, { "created": "Fri, 19 Jan 2024 12:04:27 GMT", "version": "v2" } ]
2024-01-22
[ [ "Melent'ev", "V. A.", "" ] ]
The method is based on the preliminary transformation of the traditionally used matrices or adjacency lists in graph theory into refined projections free from redundant information, and their subsequent use in constructing shortest paths. Unlike adjacency matrices and lists based on enumerating binary adjacency relations, the refined projection is based on enumerating more complex relations: simple paths from a given graph vertex that are shortest. The preliminary acquisition of such projections reduces the algorithmic complexity of applications using them and improves their volumetric and real-time characteristics to linear ones for a pair of vertices. The class of graphs considered is extended to mixed graphs.
2003.03699
Depeng Xu
Depeng Xu, Wei Du and Xintao Wu
Removing Disparate Impact of Differentially Private Stochastic Gradient Descent on Model Accuracy
null
null
null
null
cs.LG cs.CR cs.CY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When we enforce differential privacy in machine learning, the utility-privacy trade-off is different w.r.t. each group. Gradient clipping and random noise addition disproportionately affect underrepresented and complex classes and subgroups, which results in inequality in utility loss. In this work, we analyze the inequality in utility loss by differential privacy and propose a modified differentially private stochastic gradient descent (DPSGD), called DPSGD-F, to remove the potential disparate impact of differential privacy on the protected group. DPSGD-F adjusts the contribution of samples in a group depending on the group clipping bias such that differential privacy has no disparate impact on group utility. Our experimental evaluation shows how group sample size and group clipping bias affect the impact of differential privacy in DPSGD, and how adaptive clipping for each group helps to mitigate the disparate impact caused by differential privacy in DPSGD-F.
[ { "created": "Sun, 8 Mar 2020 02:06:15 GMT", "version": "v1" }, { "created": "Sun, 27 Sep 2020 21:04:37 GMT", "version": "v2" } ]
2020-09-29
[ [ "Xu", "Depeng", "" ], [ "Du", "Wei", "" ], [ "Wu", "Xintao", "" ] ]
When we enforce differential privacy in machine learning, the utility-privacy trade-off is different w.r.t. each group. Gradient clipping and random noise addition disproportionately affect underrepresented and complex classes and subgroups, which results in inequality in utility loss. In this work, we analyze the inequality in utility loss by differential privacy and propose a modified differentially private stochastic gradient descent (DPSGD), called DPSGD-F, to remove the potential disparate impact of differential privacy on the protected group. DPSGD-F adjusts the contribution of samples in a group depending on the group clipping bias such that differential privacy has no disparate impact on group utility. Our experimental evaluation shows how group sample size and group clipping bias affect the impact of differential privacy in DPSGD, and how adaptive clipping for each group helps to mitigate the disparate impact caused by differential privacy in DPSGD-F.
2009.11603
John Noll
John Noll and Mohammad Abdur Razzak and Sarah Beecham
Motivation and Autonomy in Global Software Development: An Empirical Study
null
21st International Conference on Evaluation and Assessment in Software Engineering (EASE 2017)
10.1145/3084226.3084277
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed development involving globally distributed teams in different countries and timezones adds additional complexity into an already complex undertaking. This paper focuses on the effect of global software development on motivation. Specifically, we ask, what impact does misalignment between needed and actual autonomy have on global team motivation? We studied members of two distributed software development teams with different degrees of distribution, both following the Scrum approach to software development. One team's members are distributed across Ireland, England and Wales; the other has members in locations across Europe and North America. We observed the teams during their Scrum "ceremonies," and interviewed each team member, during which we asked team members to rate their motivation on a 5-point ordinal scale. Considering both the reported motivation levels, and qualitative analysis of our observations and interviews, our results suggest that autonomy appears to be just one of three job aspects that affect motivation, the others being competence and relatedness. We hypothesize that (1) autonomy is a necessary but not sufficient condition for motivation among experienced team members, and (2) autonomy is not a motivator unless accompanied by sufficient competence.
[ { "created": "Thu, 24 Sep 2020 11:22:03 GMT", "version": "v1" } ]
2020-09-25
[ [ "Noll", "John", "" ], [ "Razzak", "Mohammad Abdur", "" ], [ "Beecham", "Sarah", "" ] ]
Distributed development involving globally distributed teams in different countries and timezones adds additional complexity into an already complex undertaking. This paper focuses on the effect of global software development on motivation. Specifically, we ask, what impact does misalignment between needed and actual autonomy have on global team motivation? We studied members of two distributed software development teams with different degrees of distribution, both following the Scrum approach to software development. One team's members are distributed across Ireland, England and Wales; the other has members in locations across Europe and North America. We observed the teams during their Scrum "ceremonies," and interviewed each team member, during which we asked team members to rate their motivation on a 5-point ordinal scale. Considering both the reported motivation levels, and qualitative analysis of our observations and interviews, our results suggest that autonomy appears to be just one of three job aspects that affect motivation, the others being competence and relatedness. We hypothesize that (1) autonomy is a necessary but not sufficient condition for motivation among experienced team members, and (2) autonomy is not a motivator unless accompanied by sufficient competence.
2106.00455
Tongliang Liu
Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama
Instance Correction for Learning with Open-set Noisy Labels
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of open-set noisy labels denotes that part of the training data has a different label space that does not contain the true class. Lots of approaches, e.g., loss correction and label correction, cannot handle such open-set noisy labels well, since they need training data and test data to share the same label space, which does not hold for learning with open-set noisy labels. The state-of-the-art methods thus employ the sample selection approach to handle open-set noisy labels, which tries to select clean data from noisy data for network parameter updates. The discarded data are seen to be mislabeled and do not participate in training. Such an approach is intuitive and reasonable at first glance. However, a natural question could be raised "can such data only be discarded during training?". In this paper, we show that the answer is no. Specifically, we discuss that the instances of discarded data could consist of some meaningful information for generalization. For this reason, we do not abandon such data, but use instance correction to modify the instances of the discarded data, which makes the predictions for the discarded data consistent with given labels. Instance correction is performed by targeted adversarial attacks. The corrected data are then exploited for training to help generalization. In addition to the analytical results, empirical evidence is provided to justify our claims.
[ { "created": "Tue, 1 Jun 2021 13:05:55 GMT", "version": "v1" } ]
2021-06-02
[ [ "Xia", "Xiaobo", "" ], [ "Liu", "Tongliang", "" ], [ "Han", "Bo", "" ], [ "Gong", "Mingming", "" ], [ "Yu", "Jun", "" ], [ "Niu", "Gang", "" ], [ "Sugiyama", "Masashi", "" ] ]
The problem of open-set noisy labels denotes that part of the training data has a different label space that does not contain the true class. Lots of approaches, e.g., loss correction and label correction, cannot handle such open-set noisy labels well, since they need training data and test data to share the same label space, which does not hold for learning with open-set noisy labels. The state-of-the-art methods thus employ the sample selection approach to handle open-set noisy labels, which tries to select clean data from noisy data for network parameter updates. The discarded data are seen to be mislabeled and do not participate in training. Such an approach is intuitive and reasonable at first glance. However, a natural question could be raised "can such data only be discarded during training?". In this paper, we show that the answer is no. Specifically, we discuss that the instances of discarded data could consist of some meaningful information for generalization. For this reason, we do not abandon such data, but use instance correction to modify the instances of the discarded data, which makes the predictions for the discarded data consistent with given labels. Instance correction is performed by targeted adversarial attacks. The corrected data are then exploited for training to help generalization. In addition to the analytical results, empirical evidence is provided to justify our claims.
2111.05808
Lo\"ic Rakotoson
Lo\"ic Rakotoson, Charles Letaillieur, Sylvain Massip and Fr\'ejus Laleye
BagBERT: BERT-based bagging-stacking for multi-topic classification
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes our submission to the COVID-19 literature annotation task at BioCreative VII. We proposed an approach that exploits the knowledge of the globally non-optimal weights, usually rejected, to build a rich representation of each label. Our proposed approach consists of two stages: (1) a bagging of various initializations of the training data that features weakly trained weights, (2) a stacking of heterogeneous vocabulary models based on BERT and RoBERTa embeddings. The aggregation of these weak insights performs better than a classical globally efficient model. The purpose is the distillation of the richness of knowledge to a simpler and lighter model. Our system obtains an instance-based F1 of 92.96 and a label-based micro-F1 of 91.35.
[ { "created": "Wed, 10 Nov 2021 17:00:36 GMT", "version": "v1" } ]
2021-11-12
[ [ "Rakotoson", "Loïc", "" ], [ "Letaillieur", "Charles", "" ], [ "Massip", "Sylvain", "" ], [ "Laleye", "Fréjus", "" ] ]
This paper describes our submission to the COVID-19 literature annotation task at BioCreative VII. We proposed an approach that exploits the knowledge of the globally non-optimal weights, usually rejected, to build a rich representation of each label. Our proposed approach consists of two stages: (1) a bagging of various initializations of the training data that features weakly trained weights, (2) a stacking of heterogeneous vocabulary models based on BERT and RoBERTa embeddings. The aggregation of these weak insights performs better than a classical globally efficient model. The purpose is the distillation of the richness of knowledge to a simpler and lighter model. Our system obtains an instance-based F1 of 92.96 and a label-based micro-F1 of 91.35.
2012.01662
EPTCS
Gia S. Wulandari (University of York, UK, Telkom University, Bandung, Indonesia), Detlef Plump (University of York, UK)
Verifying Graph Programs with First-Order Logic
In Proceedings GCM 2020, arXiv:2012.01181. arXiv admin note: substantial text overlap with arXiv:2010.14549
EPTCS 330, 2020, pp. 181-200
10.4204/EPTCS.330.11
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider Hoare-style verification for the graph programming language GP 2. In previous work, graph properties were specified by so-called E-conditions, which extend nested graph conditions. However, this type of assertion is not easy to comprehend for programmers who are used to formal specifications in standard first-order logic. In this paper, we present an approach to verify GP 2 programs with a standard first-order logic. We show how to construct a strongest liberal postcondition with respect to a rule schema and a precondition. We then extend this construction to obtain strongest liberal postconditions for arbitrary loop-free programs. Compared with previous work, this allows reasoning about a vastly generalised class of graph programs. In particular, many programs with nested loops can be verified with the new calculus.
[ { "created": "Thu, 3 Dec 2020 02:30:12 GMT", "version": "v1" } ]
2020-12-04
[ [ "Wulandari", "Gia S.", "", "University of York, UK, Telkom University, Bandung,\n Indonesia" ], [ "Plump", "Detlef", "", "University of York, UK" ] ]
We consider Hoare-style verification for the graph programming language GP 2. In previous work, graph properties were specified by so-called E-conditions, which extend nested graph conditions. However, this type of assertion is not easy to comprehend for programmers who are used to formal specifications in standard first-order logic. In this paper, we present an approach to verify GP 2 programs with a standard first-order logic. We show how to construct a strongest liberal postcondition with respect to a rule schema and a precondition. We then extend this construction to obtain strongest liberal postconditions for arbitrary loop-free programs. Compared with previous work, this allows reasoning about a vastly generalised class of graph programs. In particular, many programs with nested loops can be verified with the new calculus.
2102.01936
Liangxi Liu
Liangxi Liu, Xi Jiang, Feng Zheng, Hong Chen, Guo-Jun Qi, Heng Huang and Ling Shao
A Bayesian Federated Learning Framework with Online Laplace Approximation
null
null
10.1109/TPAMI.2023.3322743
null
cs.LG cs.AI cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Federated learning (FL) allows multiple clients to collaboratively learn a globally shared model through cycles of model aggregation and local model training, without the need to share data. Most existing FL methods train local models separately on different clients, and then simply average their parameters to obtain a centralized model on the server side. However, these approaches generally suffer from large aggregation errors and severe local forgetting, which are particularly bad in heterogeneous data settings. To tackle these issues, in this paper, we propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side. On the server side, a multivariate Gaussian product mechanism is employed to construct and maximize a global posterior, largely reducing the aggregation errors induced by large discrepancies between local models. On the client side, a prior loss that uses the global posterior probabilistic parameters delivered from the server is designed to guide the local training. Binding such learning constraints from other clients enables our method to mitigate local forgetting. Finally, we achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
[ { "created": "Wed, 3 Feb 2021 08:36:58 GMT", "version": "v1" }, { "created": "Tue, 20 Jul 2021 16:44:04 GMT", "version": "v2" }, { "created": "Sat, 2 Dec 2023 07:13:00 GMT", "version": "v3" } ]
2023-12-05
[ [ "Liu", "Liangxi", "" ], [ "Jiang", "Xi", "" ], [ "Zheng", "Feng", "" ], [ "Chen", "Hong", "" ], [ "Qi", "Guo-Jun", "" ], [ "Huang", "Heng", "" ], [ "Shao", "Ling", "" ] ]
Federated learning (FL) allows multiple clients to collaboratively learn a globally shared model through cycles of model aggregation and local model training, without the need to share data. Most existing FL methods train local models separately on different clients and then simply average their parameters to obtain a centralized model on the server side. However, these approaches generally suffer from large aggregation errors and severe local forgetting, which are particularly pronounced in heterogeneous data settings. To tackle these issues, in this paper, we propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server sides. On the server side, a multivariate Gaussian product mechanism is employed to construct and maximize a global posterior, largely reducing the aggregation errors induced by large discrepancies between local models. On the client side, a prior loss that uses the global posterior probabilistic parameters delivered from the server is designed to guide local training. Binding such learning constraints from other clients enables our method to mitigate local forgetting. Finally, we achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
2305.06163
Hunter McNichols
Hunter McNichols, Mengxue Zhang, Andrew Lan
Algebra Error Classification with Large Language Models
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Automated feedback as students answer open-ended math questions has significant potential in improving learning outcomes at large scale. A key part of automated feedback systems is an error classification component, which identifies student errors and enables appropriate, predefined feedback to be deployed. Most existing approaches to error classification use a rule-based method, which has limited capacity to generalize. Existing data-driven methods avoid these limitations but specifically require mathematical expressions in student responses to be parsed into syntax trees. This requirement is itself a limitation, since student responses are not always syntactically valid and cannot be converted into trees. In this work, we introduce a flexible method for error classification using pre-trained large language models. We demonstrate that our method can outperform existing methods in algebra error classification, and is able to classify a larger set of student responses. Additionally, we analyze common classification errors made by our method and discuss limitations of automated error classification.
[ { "created": "Mon, 8 May 2023 15:51:38 GMT", "version": "v1" } ]
2023-05-11
[ [ "McNichols", "Hunter", "" ], [ "Zhang", "Mengxue", "" ], [ "Lan", "Andrew", "" ] ]
Automated feedback as students answer open-ended math questions has significant potential in improving learning outcomes at large scale. A key part of automated feedback systems is an error classification component, which identifies student errors and enables appropriate, predefined feedback to be deployed. Most existing approaches to error classification use a rule-based method, which has limited capacity to generalize. Existing data-driven methods avoid these limitations but specifically require mathematical expressions in student responses to be parsed into syntax trees. This requirement is itself a limitation, since student responses are not always syntactically valid and cannot be converted into trees. In this work, we introduce a flexible method for error classification using pre-trained large language models. We demonstrate that our method can outperform existing methods in algebra error classification, and is able to classify a larger set of student responses. Additionally, we analyze common classification errors made by our method and discuss limitations of automated error classification.
1704.03132
Yifei Jin
Yifei Jin, Jian Li, Wei Zhan
Odd Yao-Yao Graphs are Not Spanners
29 pages, 26 figures
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is a long-standing open problem whether Yao-Yao graphs $\mathsf{YY}_{k}$ are all spanners [li2002sparse]. Bauer and Damian [bauer2013infinite] showed that all $\mathsf{YY}_{6k}$ for $k \geq 6$ are spanners. Li and Zhan [li2016almost] generalized their result and proved that all even Yao-Yao graphs $\mathsf{YY}_{2k}$ are spanners (for $k\geq 42$). However, their technique cannot be extended to odd Yao-Yao graphs, and whether they are spanners is still elusive. In this paper, we show that, surprisingly, for any integer $k \geq 1$, there exist odd Yao-Yao graph $\mathsf{YY}_{2k+1}$ instances which are not spanners.
[ { "created": "Tue, 11 Apr 2017 03:50:11 GMT", "version": "v1" }, { "created": "Sun, 18 Mar 2018 22:46:53 GMT", "version": "v2" }, { "created": "Tue, 20 Mar 2018 02:04:12 GMT", "version": "v3" }, { "created": "Sun, 12 Aug 2018 08:07:13 GMT", "version": "v4" } ]
2018-08-14
[ [ "Jin", "Yifei", "" ], [ "Li", "Jian", "" ], [ "Zhan", "Wei", "" ] ]
It is a long-standing open problem whether Yao-Yao graphs $\mathsf{YY}_{k}$ are all spanners [li2002sparse]. Bauer and Damian [bauer2013infinite] showed that all $\mathsf{YY}_{6k}$ for $k \geq 6$ are spanners. Li and Zhan [li2016almost] generalized their result and proved that all even Yao-Yao graphs $\mathsf{YY}_{2k}$ are spanners (for $k\geq 42$). However, their technique cannot be extended to odd Yao-Yao graphs, and whether they are spanners is still elusive. In this paper, we show that, surprisingly, for any integer $k \geq 1$, there exist odd Yao-Yao graph $\mathsf{YY}_{2k+1}$ instances which are not spanners.
0801.1306
Seyed Abolfazl Motahari
Abolfazl S. Motahari, Amir K. Khandani
Capacity Bounds for the Gaussian Interference Channel
35 pages, 14 figures, submitted to IEEE Trans. on Inf. Theory
null
null
null
cs.IT math.IT
null
The capacity region of the two-user Gaussian Interference Channel (IC) is studied. Three classes of channels are considered: weak, one-sided, and mixed Gaussian IC. For the weak Gaussian IC, a new outer bound on the capacity region is obtained that outperforms previously known outer bounds. The sum capacity for a certain range of channel parameters is derived. For this range, it is proved that using Gaussian codebooks and treating interference as noise is optimal. It is shown that when Gaussian codebooks are used, the full Han-Kobayashi achievable rate region can be obtained by using the naive Han-Kobayashi achievable scheme over three frequency bands (equivalently, three subspaces). For the one-sided Gaussian IC, an alternative proof for Sato's outer bound is presented. We derive the full Han-Kobayashi achievable rate region when Gaussian codebooks are utilized. For the mixed Gaussian IC, a new outer bound is obtained that outperforms previously known outer bounds. For this case, the sum capacity for the entire range of channel parameters is derived. It is proved that the full Han-Kobayashi achievable rate region using Gaussian codebooks is equivalent to that of the one-sided Gaussian IC for a particular range of channel parameters.
[ { "created": "Tue, 8 Jan 2008 19:56:00 GMT", "version": "v1" } ]
2008-01-09
[ [ "Motahari", "Abolfazl S.", "" ], [ "Khandani", "Amir K.", "" ] ]
The capacity region of the two-user Gaussian Interference Channel (IC) is studied. Three classes of channels are considered: weak, one-sided, and mixed Gaussian IC. For the weak Gaussian IC, a new outer bound on the capacity region is obtained that outperforms previously known outer bounds. The sum capacity for a certain range of channel parameters is derived. For this range, it is proved that using Gaussian codebooks and treating interference as noise is optimal. It is shown that when Gaussian codebooks are used, the full Han-Kobayashi achievable rate region can be obtained by using the naive Han-Kobayashi achievable scheme over three frequency bands (equivalently, three subspaces). For the one-sided Gaussian IC, an alternative proof for Sato's outer bound is presented. We derive the full Han-Kobayashi achievable rate region when Gaussian codebooks are utilized. For the mixed Gaussian IC, a new outer bound is obtained that outperforms previously known outer bounds. For this case, the sum capacity for the entire range of channel parameters is derived. It is proved that the full Han-Kobayashi achievable rate region using Gaussian codebooks is equivalent to that of the one-sided Gaussian IC for a particular range of channel parameters.
1702.02096
Chao-Yang Chen
Zhi-Hong Guan, Chao-Yang Chen, Gang Feng, Tao Li
Optimal Tracking Performance Limitation of Networked Control Systems with Limited Bandwidth and Additive Colored White Gaussian Noise
10 pages, 4 figures, IEEE Transactions on Circuits and Systems I: Regular Papers
null
10.1109/TCSI.2012.2215717
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies optimal tracking performance issues for multi-input multi-output linear time-invariant systems under networked control with limited bandwidth and an additive colored white Gaussian noise channel. The tracking performance is measured by the control input energy and the energy of the error signal between the output of the system and the reference signal, with respect to a Brownian motion random process. This paper focuses on two kinds of network parameters, the basic network parameter of bandwidth and the additive colored white Gaussian noise, and studies the tracking performance limitation problem. The best attainable tracking performance is obtained, and the impact of the limited bandwidth and the additive colored white Gaussian noise of the communication channel on the attainable tracking performance is revealed. It is shown that the optimal tracking performance depends on the nonminimum phase zeros, the gain at all frequencies and their direction unitary vectors of the given plant, as well as the limited bandwidth and additive colored white Gaussian noise of the communication channel. Simulation results are finally given to illustrate the theoretical results.
[ { "created": "Tue, 7 Feb 2017 16:53:32 GMT", "version": "v1" } ]
2017-02-08
[ [ "Guan", "Zhi-Hong", "" ], [ "Chen", "Chao-Yang", "" ], [ "Feng", "Gang", "" ], [ "Li", "Tao", "" ] ]
This paper studies optimal tracking performance issues for multi-input multi-output linear time-invariant systems under networked control with limited bandwidth and an additive colored white Gaussian noise channel. The tracking performance is measured by the control input energy and the energy of the error signal between the output of the system and the reference signal, with respect to a Brownian motion random process. This paper focuses on two kinds of network parameters, the basic network parameter of bandwidth and the additive colored white Gaussian noise, and studies the tracking performance limitation problem. The best attainable tracking performance is obtained, and the impact of the limited bandwidth and the additive colored white Gaussian noise of the communication channel on the attainable tracking performance is revealed. It is shown that the optimal tracking performance depends on the nonminimum phase zeros, the gain at all frequencies and their direction unitary vectors of the given plant, as well as the limited bandwidth and additive colored white Gaussian noise of the communication channel. Simulation results are finally given to illustrate the theoretical results.
1404.1140
Christopher Amato
Christopher Amato, Frans A. Oliehoek
Scalable Planning and Learning for Multiagent POMDPs: Extended Version
null
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online, sample-based planning algorithms for POMDPs have shown great promise in scaling to problems with large state spaces, but they become intractable for large action and observation spaces. This is particularly problematic in multiagent POMDPs where the action and observation space grows exponentially with the number of agents. To combat this intractability, we propose a novel scalable approach based on sample-based planning and factored value functions that exploits structure present in many multiagent settings. This approach applies not only in the planning case, but also in the Bayesian reinforcement learning setting. Experimental results show that we are able to provide high quality solutions to large multiagent planning and learning problems.
[ { "created": "Fri, 4 Apr 2014 03:02:44 GMT", "version": "v1" }, { "created": "Sat, 20 Dec 2014 03:28:34 GMT", "version": "v2" } ]
2014-12-23
[ [ "Amato", "Christopher", "" ], [ "Oliehoek", "Frans A.", "" ] ]
Online, sample-based planning algorithms for POMDPs have shown great promise in scaling to problems with large state spaces, but they become intractable for large action and observation spaces. This is particularly problematic in multiagent POMDPs where the action and observation space grows exponentially with the number of agents. To combat this intractability, we propose a novel scalable approach based on sample-based planning and factored value functions that exploits structure present in many multiagent settings. This approach applies not only in the planning case, but also in the Bayesian reinforcement learning setting. Experimental results show that we are able to provide high quality solutions to large multiagent planning and learning problems.
2208.08469
Rosina Kharal
Rosina F. Kharal, Trevor Brown
Performance Anomalies in Concurrent Data Structure Microbenchmarks
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent decades have witnessed a surge in the development of concurrent data structures with an increasing interest in data structures implementing concurrent sets (CSets). Microbenchmarking tools are frequently utilized to evaluate and compare the performance differences across concurrent data structures. The underlying structure and design of the microbenchmarks themselves can play a hidden but influential role in performance results. However, the impact of microbenchmark design has not been well investigated. In this work, we illustrate instances where concurrent data structure performance results reported by a microbenchmark can vary 10-100x depending on the microbenchmark implementation details. We investigate factors leading to performance variance across three popular microbenchmarks and outline cases in which flawed microbenchmark design can lead to an inversion of performance results between two concurrent data structure implementations. We further derive a set of recommendations for best practices in the design and usage of concurrent data structure microbenchmarks and explore advanced features in the Setbench microbenchmark.
[ { "created": "Wed, 17 Aug 2022 18:15:27 GMT", "version": "v1" }, { "created": "Tue, 20 Sep 2022 00:50:03 GMT", "version": "v2" }, { "created": "Thu, 8 Dec 2022 07:01:23 GMT", "version": "v3" } ]
2022-12-09
[ [ "Kharal", "Rosina F.", "" ], [ "Brown", "Trevor", "" ] ]
Recent decades have witnessed a surge in the development of concurrent data structures with an increasing interest in data structures implementing concurrent sets (CSets). Microbenchmarking tools are frequently utilized to evaluate and compare the performance differences across concurrent data structures. The underlying structure and design of the microbenchmarks themselves can play a hidden but influential role in performance results. However, the impact of microbenchmark design has not been well investigated. In this work, we illustrate instances where concurrent data structure performance results reported by a microbenchmark can vary 10-100x depending on the microbenchmark implementation details. We investigate factors leading to performance variance across three popular microbenchmarks and outline cases in which flawed microbenchmark design can lead to an inversion of performance results between two concurrent data structure implementations. We further derive a set of recommendations for best practices in the design and usage of concurrent data structure microbenchmarks and explore advanced features in the Setbench microbenchmark.
2112.02144
Ran Gilad-Bachrach
Christopher Pyles, Francois van Schalkwyk, Gerard J. Gorman, Marijan Beg, Lee Stott, Nir Levy, and Ran Gilad-Bachrach
PyBryt: auto-assessment and auto-grading for computational thinking
null
null
null
null
cs.HC cs.CY
http://creativecommons.org/licenses/by/4.0/
We continuously interact with computerized systems to achieve goals and perform tasks in our personal and professional lives. Therefore, the ability to program such systems is a skill needed by everyone. Consequently, computational thinking skills are essential for everyone, which creates a challenge for the educational system to teach these skills at scale and allow students to practice these skills. To address this challenge, we present a novel approach to providing formative feedback to students on programming assignments. Our approach uses dynamic evaluation to trace intermediate results generated by student's code and compares them to the reference implementation provided by their teachers. We have implemented this method as a Python library and demonstrate its use to give students relevant feedback on their work while allowing teachers to challenge their students' computational thinking skills.
[ { "created": "Fri, 3 Dec 2021 20:01:06 GMT", "version": "v1" } ]
2021-12-07
[ [ "Pyles", "Christopher", "" ], [ "van Schalkwyk", "Francois", "" ], [ "Gorman", "Gerard J.", "" ], [ "Beg", "Marijan", "" ], [ "Stott", "Lee", "" ], [ "Levy", "Nir", "" ], [ "Gilad-Bachrach", "Ran", "" ] ]
We continuously interact with computerized systems to achieve goals and perform tasks in our personal and professional lives. Therefore, the ability to program such systems is a skill needed by everyone. Consequently, computational thinking skills are essential for everyone, which creates a challenge for the educational system to teach these skills at scale and allow students to practice these skills. To address this challenge, we present a novel approach to providing formative feedback to students on programming assignments. Our approach uses dynamic evaluation to trace intermediate results generated by student's code and compares them to the reference implementation provided by their teachers. We have implemented this method as a Python library and demonstrate its use to give students relevant feedback on their work while allowing teachers to challenge their students' computational thinking skills.
2209.03596
Danil Provodin
Danil Provodin, Pratik Gajane, Mykola Pechenizkiy, Maurits Kaptein
An Empirical Evaluation of Posterior Sampling for Constrained Reinforcement Learning
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
We study a posterior sampling approach to efficient exploration in constrained reinforcement learning. In contrast to existing algorithms, we propose two simple algorithms that are statistically more efficient, simpler to implement, and computationally cheaper. The first algorithm is based on a linear formulation of the CMDP, and the second algorithm leverages the saddle-point formulation of the CMDP. Our empirical results demonstrate that, despite its simplicity, posterior sampling achieves state-of-the-art performance and, in some cases, significantly outperforms optimistic algorithms.
[ { "created": "Thu, 8 Sep 2022 06:52:49 GMT", "version": "v1" } ]
2022-09-09
[ [ "Provodin", "Danil", "" ], [ "Gajane", "Pratik", "" ], [ "Pechenizkiy", "Mykola", "" ], [ "Kaptein", "Maurits", "" ] ]
We study a posterior sampling approach to efficient exploration in constrained reinforcement learning. In contrast to existing algorithms, we propose two simple algorithms that are statistically more efficient, simpler to implement, and computationally cheaper. The first algorithm is based on a linear formulation of the CMDP, and the second algorithm leverages the saddle-point formulation of the CMDP. Our empirical results demonstrate that, despite its simplicity, posterior sampling achieves state-of-the-art performance and, in some cases, significantly outperforms optimistic algorithms.
2004.05654
Abhishek Dubey
Charles Hartsell and Nagabhushan Mahadevan and Harmon Nine and Ted Bapty and Abhishek Dubey and Gabor Karsai
Workflow Automation for Cyber Physical System Development Processes
Accepted for Publication at DESTION 2020
null
null
null
cs.SE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Development of Cyber Physical Systems (CPSs) requires close interaction between developers with expertise in many domains to achieve ever-increasing demands for improved performance, reduced cost, and more system autonomy. Each engineering discipline commonly relies on domain-specific modeling languages, and analysis and execution of these models is often automated with appropriate tooling. However, integration between these heterogeneous models and tools is often lacking, and most of the burden for inter-operation of these tools is placed on system developers. To address this problem, we introduce a workflow modeling language for the automation of complex CPS development processes and implement a platform for execution of these models in the Assurance-based Learning-enabled CPS (ALC) Toolchain. Several illustrative examples are provided which show how these workflow models are able to automate many time-consuming integration tasks previously performed manually by system developers.
[ { "created": "Sun, 12 Apr 2020 17:32:05 GMT", "version": "v1" } ]
2020-04-14
[ [ "Hartsell", "Charles", "" ], [ "Mahadevan", "Nagabhushan", "" ], [ "Nine", "Harmon", "" ], [ "Bapty", "Ted", "" ], [ "Dubey", "Abhishek", "" ], [ "Karsai", "Gabor", "" ] ]
Development of Cyber Physical Systems (CPSs) requires close interaction between developers with expertise in many domains to achieve ever-increasing demands for improved performance, reduced cost, and more system autonomy. Each engineering discipline commonly relies on domain-specific modeling languages, and analysis and execution of these models is often automated with appropriate tooling. However, integration between these heterogeneous models and tools is often lacking, and most of the burden for inter-operation of these tools is placed on system developers. To address this problem, we introduce a workflow modeling language for the automation of complex CPS development processes and implement a platform for execution of these models in the Assurance-based Learning-enabled CPS (ALC) Toolchain. Several illustrative examples are provided which show how these workflow models are able to automate many time-consuming integration tasks previously performed manually by system developers.
1710.11097
Nikhil Chavan Dafle
Nikhil Chavan-Dafle and Alberto Rodriguez
Stable Prehensile Pushing: In-Hand Manipulation with Alternating Sticking Contacts
IEEE International Conference on Robotics and Automation 2018
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an approach to in-hand manipulation planning that exploits the mechanics of alternating sticking contact. Particularly, we consider the problem of manipulating a grasped object using external pushes for which the pusher sticks to the object. Given the physical properties of the object, frictional coefficients at contacts and a desired regrasp on the object, we propose a sampling-based planning framework that builds a pushing strategy concatenating different feasible stable pushes to achieve the desired regrasp. An efficient dynamics formulation allows us to plan in-hand manipulations 100-1000 times faster than our previous work which builds upon a complementarity formulation. Experimental observations for the generated plans show that the object precisely moves in the grasp as expected by the planner. Video Summary -- youtu.be/qOTKRJMx6Ho
[ { "created": "Mon, 30 Oct 2017 17:42:48 GMT", "version": "v1" }, { "created": "Tue, 31 Oct 2017 10:29:54 GMT", "version": "v2" }, { "created": "Sun, 4 Mar 2018 13:25:17 GMT", "version": "v3" } ]
2018-03-06
[ [ "Chavan-Dafle", "Nikhil", "" ], [ "Rodriguez", "Alberto", "" ] ]
This paper presents an approach to in-hand manipulation planning that exploits the mechanics of alternating sticking contact. Particularly, we consider the problem of manipulating a grasped object using external pushes for which the pusher sticks to the object. Given the physical properties of the object, frictional coefficients at contacts and a desired regrasp on the object, we propose a sampling-based planning framework that builds a pushing strategy concatenating different feasible stable pushes to achieve the desired regrasp. An efficient dynamics formulation allows us to plan in-hand manipulations 100-1000 times faster than our previous work which builds upon a complementarity formulation. Experimental observations for the generated plans show that the object precisely moves in the grasp as expected by the planner. Video Summary -- youtu.be/qOTKRJMx6Ho
1906.07968
Bernie Liu
Xinyu Wei, Mengjia Zhou, Bernie Liu
Camouflage Design of Analysis Based on HSV Color Statistics and K-means Clustering
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since ancient times, adopting camouflage on the battlefield has been essential, whether at the front, in depth, or at the rear. The traditional evaluation method relies on human opinion: observers watch the target or examine pictures to judge the camouflage effect, so the result is easily influenced by subjective factors. To reflect the camouflage effect objectively, we set up a model that uses image similarity to evaluate camouflage. Image similarity comparison covers two main image features: color features and texture features. We design camouflage by computer, dividing camouflage pattern design into two aspects: color design and plaque design. For color design, we rely on the HSV color model; for plaque design, the key step is background color edge extraction, for which we adopt a method based on k-means clustering analysis.
[ { "created": "Wed, 19 Jun 2019 08:30:53 GMT", "version": "v1" } ]
2019-06-20
[ [ "Wei", "Xinyu", "" ], [ "Zhou", "Mengjia", "" ], [ "Liu", "Bernie", "" ] ]
Since ancient times, adopting camouflage on the battlefield has been essential, whether at the front, in depth, or at the rear. The traditional evaluation method relies on human opinion: observers watch the target or examine pictures to judge the camouflage effect, so the result is easily influenced by subjective factors. To reflect the camouflage effect objectively, we set up a model that uses image similarity to evaluate camouflage. Image similarity comparison covers two main image features: color features and texture features. We design camouflage by computer, dividing camouflage pattern design into two aspects: color design and plaque design. For color design, we rely on the HSV color model; for plaque design, the key step is background color edge extraction, for which we adopt a method based on k-means clustering analysis.
2203.01726
Marco Baity-Jesi
S. Kyathanahally, T. Hardeman, M. Reyes, E. Merz, T. Bulas, P. Brun, F. Pomati, and M. Baity-Jesi
Ensembles of Vision Transformers as a New Paradigm for Automated Classification in Ecology
To appear in Scientific Reports
Scientific Reports 12, 18590 (2022)
10.1038/s41598-022-21910-0
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Monitoring biodiversity is paramount to manage and protect natural resources. Collecting images of organisms over large temporal or spatial scales is a promising practice to monitor the biodiversity of natural ecosystems, providing large amounts of data with minimal interference with the environment. Deep learning models are currently used to automate classification of organisms into taxonomic units. However, imprecision in these classifiers introduces a measurement noise that is difficult to control and can significantly hinder the analysis and interpretation of data. We overcome this limitation through ensembles of Data-efficient image Transformers (DeiTs), which not only are easy to train and implement, but also significantly outperform the previous state of the art (SOTA). We validate our results on ten ecological imaging datasets of diverse origin, ranging from plankton to birds. On all the datasets, we achieve a new SOTA, with a reduction of the error with respect to the previous SOTA ranging from 29.35% to 100.00%, and often achieving performances very close to perfect classification. Ensembles of DeiTs perform better not because of superior single-model performances but rather due to smaller overlaps in the predictions by independent models and lower top-1 probabilities. This increases the benefit of ensembling, especially when using geometric averages to combine individual learners. While we only test our approach on biodiversity image datasets, our approach is generic and can be applied to any kind of images.
[ { "created": "Thu, 3 Mar 2022 14:16:22 GMT", "version": "v1" }, { "created": "Thu, 22 Sep 2022 16:22:48 GMT", "version": "v2" }, { "created": "Thu, 29 Sep 2022 12:15:31 GMT", "version": "v3" } ]
2023-02-07
[ [ "Kyathanahally", "S.", "" ], [ "Hardeman", "T.", "" ], [ "Reyes", "M.", "" ], [ "Merz", "E.", "" ], [ "Bulas", "T.", "" ], [ "Brun", "P.", "" ], [ "Pomati", "F.", "" ], [ "Baity-Jesi", "M.", "" ] ]
Monitoring biodiversity is paramount to manage and protect natural resources. Collecting images of organisms over large temporal or spatial scales is a promising practice to monitor the biodiversity of natural ecosystems, providing large amounts of data with minimal interference with the environment. Deep learning models are currently used to automate classification of organisms into taxonomic units. However, imprecision in these classifiers introduces a measurement noise that is difficult to control and can significantly hinder the analysis and interpretation of data. We overcome this limitation through ensembles of Data-efficient image Transformers (DeiTs), which not only are easy to train and implement, but also significantly outperform the previous state of the art (SOTA). We validate our results on ten ecological imaging datasets of diverse origin, ranging from plankton to birds. On all the datasets, we achieve a new SOTA, with a reduction of the error with respect to the previous SOTA ranging from 29.35% to 100.00%, and often achieving performances very close to perfect classification. Ensembles of DeiTs perform better not because of superior single-model performances but rather due to smaller overlaps in the predictions by independent models and lower top-1 probabilities. This increases the benefit of ensembling, especially when using geometric averages to combine individual learners. While we only test our approach on biodiversity image datasets, our approach is generic and can be applied to any kind of images.
1509.03677
Maziar Izadi
Sasi Prabhakaran Viswanathan, Amit Kumar Sanyal, Maziar Izadi
Mechatronics Architecture of Smartphone-Based Spacecraft ADCS using VSCMG Actuators
null
null
null
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The hardware and software architecture of a novel spacecraft Attitude Determination and Control System (ADCS) based on smartphones, using a Variable Speed Control Moment Gyroscope (VSCMG) as the actuator, is proposed here. A spacecraft ground simulator testbed for Hardware-in-the-loop (HIL) attitude estimation and control with the VSCMG is also described. The sensor breakouts with independent micro-controller units used in conventional ADCS units are replaced by a single integrated off-the-shelf smartphone. On-board sensing, data acquisition, data uplink/downlink, state estimation, and real-time feedback control objectives can be performed using this novel spacecraft ADCS. The attitude control and attitude determination (estimation) schemes have appeared in prior publications, but are presented in brief here. Experimental results from running the attitude estimation (filtering) scheme with the "onboard" sensors of the smartphone in the HIL simulator are given. These results, obtained in the Spacecraft Guidance, Navigation and Control Laboratory at NMSU, demonstrate the excellent performance of this estimation scheme with the noisy raw data from the smartphone sensors.
[ { "created": "Fri, 11 Sep 2015 21:58:31 GMT", "version": "v1" } ]
2015-09-15
[ [ "Viswanathan", "Sasi Prabhakaran", "" ], [ "Sanyal", "Amit Kumar", "" ], [ "Izadi", "Maziar", "" ] ]
The hardware and software architecture of a novel spacecraft Attitude Determination and Control System (ADCS) based on smartphones, using a Variable Speed Control Moment Gyroscope (VSCMG) as the actuator, is proposed here. A spacecraft ground simulator testbed for Hardware-in-the-loop (HIL) attitude estimation and control with the VSCMG is also described. The sensor breakouts with independent micro-controller units used in conventional ADCS units are replaced by a single integrated off-the-shelf smartphone. On-board sensing, data acquisition, data uplink/downlink, state estimation, and real-time feedback control objectives can be performed using this novel spacecraft ADCS. The attitude control and attitude determination (estimation) schemes have appeared in prior publications, but are presented in brief here. Experimental results from running the attitude estimation (filtering) scheme with the "onboard" sensors of the smartphone in the HIL simulator are given. These results, obtained in the Spacecraft Guidance, Navigation and Control Laboratory at NMSU, demonstrate the excellent performance of this estimation scheme with the noisy raw data from the smartphone sensors.
1910.08639
Ilya Kuzovkin
Ashish Kumar, Toby Buckley, John B. Lanier, Qiaozhi Wang, Alicia Kavelaars, Ilya Kuzovkin
OffWorld Gym: open-access physical robotics environment for real-world reinforcement learning benchmark and research
null
null
null
null
cs.LG cs.AI cs.RO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Success stories of applied machine learning can be traced back to the datasets and environments that were put forward as challenges for the community. The challenge that the community sets as a benchmark is usually the challenge that the community eventually solves. The ultimate challenge of reinforcement learning research is to train real agents to operate in the real environment, but until now there has not been a common real-world RL benchmark. In this work, we present a prototype real-world environment from OffWorld Gym -- a collection of real-world environments for reinforcement learning in robotics with free public remote access. Close integration into the existing ecosystem allows the community to start using OffWorld Gym without any prior experience in robotics and takes away the burden of managing a physical robotics system, abstracting it under a familiar API. We introduce a navigation task, where a robot has to reach a visual beacon on uneven terrain using only camera input, and provide baseline results in both the real environment and the simulated replica. To start training, visit https://gym.offworld.ai
[ { "created": "Fri, 18 Oct 2019 21:58:24 GMT", "version": "v1" }, { "created": "Fri, 3 Apr 2020 08:51:54 GMT", "version": "v2" }, { "created": "Thu, 22 Oct 2020 05:19:37 GMT", "version": "v3" }, { "created": "Tue, 15 Dec 2020 02:59:34 GMT", "version": "v4" } ]
2020-12-16
[ [ "Kumar", "Ashish", "" ], [ "Buckley", "Toby", "" ], [ "Lanier", "John B.", "" ], [ "Wang", "Qiaozhi", "" ], [ "Kavelaars", "Alicia", "" ], [ "Kuzovkin", "Ilya", "" ] ]
Success stories of applied machine learning can be traced back to the datasets and environments that were put forward as challenges for the community. The challenge that the community sets as a benchmark is usually the challenge that the community eventually solves. The ultimate challenge of reinforcement learning research is to train real agents to operate in the real environment, but until now there has not been a common real-world RL benchmark. In this work, we present a prototype real-world environment from OffWorld Gym -- a collection of real-world environments for reinforcement learning in robotics with free public remote access. Close integration into the existing ecosystem allows the community to start using OffWorld Gym without any prior experience in robotics and takes away the burden of managing a physical robotics system, abstracting it under a familiar API. We introduce a navigation task, where a robot has to reach a visual beacon on uneven terrain using only camera input, and provide baseline results in both the real environment and the simulated replica. To start training, visit https://gym.offworld.ai
2104.10819
Kun Li
Kun Li, Liang Yuan, Yunquan Zhang, Gongwei Chen
An Accurate and Efficient Large-scale Regression Method through Best Friend Clustering
null
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by/4.0/
As the data size in Machine Learning fields grows exponentially, it is inevitable to accelerate the computation by utilizing the ever-growing large number of available cores provided by high-performance computing hardware. However, existing parallel methods for clustering or regression often suffer from problems of low accuracy, slow convergence, and complex hyperparameter tuning. Furthermore, the parallel efficiency is usually difficult to improve while striking a balance between preserving model properties and partitioning computing workloads on distributed systems. In this paper, we propose a novel and simple data structure capturing the most important information among data samples. It has several advantageous properties supporting a hierarchical clustering strategy that is independent of the hardware parallelism, well-defined metrics for determining optimal clustering, balanced partition for maintaining the compactness property, and efficient parallelization for accelerating computation phases. Then we combine the clustering with regression techniques as a parallel library and utilize a hybrid structure of data and model parallelism to make predictions. Experiments illustrate that our library obtains remarkable performance on convergence, accuracy, and scalability.
[ { "created": "Thu, 22 Apr 2021 01:34:29 GMT", "version": "v1" } ]
2021-04-23
[ [ "Li", "Kun", "" ], [ "Yuan", "Liang", "" ], [ "Zhang", "Yunquan", "" ], [ "Chen", "Gongwei", "" ] ]
As the data size in Machine Learning fields grows exponentially, it is inevitable to accelerate the computation by utilizing the ever-growing large number of available cores provided by high-performance computing hardware. However, existing parallel methods for clustering or regression often suffer from problems of low accuracy, slow convergence, and complex hyperparameter tuning. Furthermore, the parallel efficiency is usually difficult to improve while striking a balance between preserving model properties and partitioning computing workloads on distributed systems. In this paper, we propose a novel and simple data structure capturing the most important information among data samples. It has several advantageous properties supporting a hierarchical clustering strategy that is independent of the hardware parallelism, well-defined metrics for determining optimal clustering, balanced partition for maintaining the compactness property, and efficient parallelization for accelerating computation phases. Then we combine the clustering with regression techniques as a parallel library and utilize a hybrid structure of data and model parallelism to make predictions. Experiments illustrate that our library obtains remarkable performance on convergence, accuracy, and scalability.
2306.05561
Oleksandr Yermilov
Oleksandr Yermilov, Vipul Raheja, Artem Chernodub
Privacy- and Utility-Preserving NLP with Anonymized Data: A case study of Pseudonymization
10 pages. Accepted for TrustNLP workshop at ACL2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This work investigates the effectiveness of different pseudonymization techniques, ranging from rule-based substitutions to using pre-trained Large Language Models (LLMs), on a variety of datasets and models used for two widely used NLP tasks: text classification and summarization. Our work provides crucial insights into the gaps between original and anonymized data (focusing on the pseudonymization technique) and model quality and fosters future research into higher-quality anonymization techniques to better balance the trade-offs between data protection and utility preservation. We make our code, pseudonymized datasets, and downstream models publicly available.
[ { "created": "Thu, 8 Jun 2023 21:06:19 GMT", "version": "v1" } ]
2023-06-12
[ [ "Yermilov", "Oleksandr", "" ], [ "Raheja", "Vipul", "" ], [ "Chernodub", "Artem", "" ] ]
This work investigates the effectiveness of different pseudonymization techniques, ranging from rule-based substitutions to using pre-trained Large Language Models (LLMs), on a variety of datasets and models used for two widely used NLP tasks: text classification and summarization. Our work provides crucial insights into the gaps between original and anonymized data (focusing on the pseudonymization technique) and model quality and fosters future research into higher-quality anonymization techniques to better balance the trade-offs between data protection and utility preservation. We make our code, pseudonymized datasets, and downstream models publicly available.
2403.09092
Dacheng Wen
Yupeng Li, Haorui He, Jin Bai, and Dacheng Wen
MCFEND: A Multi-source Benchmark Dataset for Chinese Fake News Detection
Accepted by the ACM Web Conference 2024 (WWW 2024) oral, dataset available: https://github.com/TrustworthyComp
null
10.1145/3589334.3645385
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
The prevalence of fake news across various online sources has had a significant influence on the public. Existing Chinese fake news detection datasets are limited to news sourced solely from Weibo. However, fake news originating from multiple sources exhibits diversity in various aspects, including its content and social context. Methods trained on a single news source can hardly be applicable to real-world scenarios. Our pilot experiment demonstrates that the F1 score of the state-of-the-art method that learns from a large Chinese fake news detection dataset, Weibo-21, drops significantly from 0.943 to 0.470 when the test data is changed to multi-source news data, failing to identify more than one-third of the multi-source fake news. To address this limitation, we constructed the first multi-source benchmark dataset for Chinese fake news detection, termed MCFEND, which is composed of news we collected from diverse sources such as social platforms, messaging apps, and traditional online news outlets. Notably, such news has been fact-checked by 14 authoritative fact-checking agencies worldwide. In addition, various existing Chinese fake news detection methods are thoroughly evaluated on our proposed dataset in cross-source, multi-source, and unseen source ways. MCFEND, as a benchmark dataset, aims to advance Chinese fake news detection approaches in real-world scenarios.
[ { "created": "Thu, 14 Mar 2024 04:32:13 GMT", "version": "v1" }, { "created": "Wed, 24 Jul 2024 05:57:01 GMT", "version": "v2" } ]
2024-07-25
[ [ "Li", "Yupeng", "" ], [ "He", "Haorui", "" ], [ "Bai", "Jin", "" ], [ "Wen", "Dacheng", "" ] ]
The prevalence of fake news across various online sources has had a significant influence on the public. Existing Chinese fake news detection datasets are limited to news sourced solely from Weibo. However, fake news originating from multiple sources exhibits diversity in various aspects, including its content and social context. Methods trained on a single news source can hardly be applicable to real-world scenarios. Our pilot experiment demonstrates that the F1 score of the state-of-the-art method that learns from a large Chinese fake news detection dataset, Weibo-21, drops significantly from 0.943 to 0.470 when the test data is changed to multi-source news data, failing to identify more than one-third of the multi-source fake news. To address this limitation, we constructed the first multi-source benchmark dataset for Chinese fake news detection, termed MCFEND, which is composed of news we collected from diverse sources such as social platforms, messaging apps, and traditional online news outlets. Notably, such news has been fact-checked by 14 authoritative fact-checking agencies worldwide. In addition, various existing Chinese fake news detection methods are thoroughly evaluated on our proposed dataset in cross-source, multi-source, and unseen source ways. MCFEND, as a benchmark dataset, aims to advance Chinese fake news detection approaches in real-world scenarios.
2303.02604
Hanwen Cao
Hanwen Cao, Jianshu Zhou, Junda Huang, Yichuan Li, Ng Cheng Meng, Rui Cao, Qi Dou, Yunhui Liu
Two-Stage Grasping: A New Bin Picking Framework for Small Objects
ICRA 2023
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a novel bin picking framework, two-stage grasping, aiming at precise grasping of cluttered small objects. Object density estimation and rough grasping are conducted in the first stage. Fine segmentation, detection, grasping, and pushing are performed in the second stage. A small object bin picking system has been realized to exhibit the concept of two-stage grasping. Experiments have shown the effectiveness of the proposed framework. Unlike traditional bin picking methods focusing on vision-based grasping planning using classic frameworks, the challenges of picking cluttered small objects can be solved by the proposed new framework with simple vision detection and planning.
[ { "created": "Sun, 5 Mar 2023 08:05:00 GMT", "version": "v1" }, { "created": "Tue, 7 Mar 2023 06:44:29 GMT", "version": "v2" }, { "created": "Sat, 6 May 2023 06:37:12 GMT", "version": "v3" } ]
2023-05-09
[ [ "Cao", "Hanwen", "" ], [ "Zhou", "Jianshu", "" ], [ "Huang", "Junda", "" ], [ "Li", "Yichuan", "" ], [ "Meng", "Ng Cheng", "" ], [ "Cao", "Rui", "" ], [ "Dou", "Qi", "" ], [ "Liu", "Yunhui", "" ] ]
This paper proposes a novel bin picking framework, two-stage grasping, aiming at precise grasping of cluttered small objects. Object density estimation and rough grasping are conducted in the first stage. Fine segmentation, detection, grasping, and pushing are performed in the second stage. A small object bin picking system has been realized to exhibit the concept of two-stage grasping. Experiments have shown the effectiveness of the proposed framework. Unlike traditional bin picking methods focusing on vision-based grasping planning using classic frameworks, the challenges of picking cluttered small objects can be solved by the proposed new framework with simple vision detection and planning.
2307.09776
Shaun Azzopardi
Shaun Azzopardi, Nir Piterman, Gerardo Schneider, Luca di Stefano
LTL Synthesis on Infinite-State Arenas defined by Programs
null
null
null
null
cs.LO cs.FL cs.PL cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper deals with the problem of automatically and correctly controlling infinite-state reactive programs to achieve LTL goals. Applications include adapting a program to new requirements, or to repair bugs discovered in the original specification or program code. Existing approaches are able to solve this problem for safety and some reachability properties, but require an a priori template of the solution for more general properties. Fully automated approaches for full LTL exist, reducing the problem into successive finite LTL reactive synthesis problems in an abstraction-refinement loop. However, they do not terminate when the number of steps to be completed depends on unbounded variables. Our main insight is that safety abstractions of the program are not enough -- fairness properties are also essential to be able to decide many interesting problems, something missed by existing automated approaches. We thus go beyond the state-of-the-art to allow for automated reactive program control for full LTL, with automated discovery of the knowledge, including fairness, of the program needed to determine realisability. We further implement the approach in a tool, with an associated DSL for reactive programs, and illustrate the approach through several case studies.
[ { "created": "Wed, 19 Jul 2023 06:33:51 GMT", "version": "v1" } ]
2023-07-20
[ [ "Azzopardi", "Shaun", "" ], [ "Piterman", "Nir", "" ], [ "Schneider", "Gerardo", "" ], [ "di Stefano", "Luca", "" ] ]
This paper deals with the problem of automatically and correctly controlling infinite-state reactive programs to achieve LTL goals. Applications include adapting a program to new requirements, or to repair bugs discovered in the original specification or program code. Existing approaches are able to solve this problem for safety and some reachability properties, but require an a priori template of the solution for more general properties. Fully automated approaches for full LTL exist, reducing the problem into successive finite LTL reactive synthesis problems in an abstraction-refinement loop. However, they do not terminate when the number of steps to be completed depends on unbounded variables. Our main insight is that safety abstractions of the program are not enough -- fairness properties are also essential to be able to decide many interesting problems, something missed by existing automated approaches. We thus go beyond the state-of-the-art to allow for automated reactive program control for full LTL, with automated discovery of the knowledge, including fairness, of the program needed to determine realisability. We further implement the approach in a tool, with an associated DSL for reactive programs, and illustrate the approach through several case studies.
2104.03133
Bo Zhang
Bo Zhang and Li Niu and Liqing Zhang
Image Composition Assessment with Saliency-augmented Multi-pattern Pooling
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image composition assessment is crucial in aesthetic assessment, which aims to assess the overall composition quality of a given image. However, to the best of our knowledge, there is neither dataset nor method specifically proposed for this task. In this paper, we contribute the first composition assessment dataset CADB with composition scores for each image provided by multiple professional raters. Besides, we propose a composition assessment network SAMP-Net with a novel Saliency-Augmented Multi-pattern Pooling (SAMP) module, which analyses visual layout from the perspectives of multiple composition patterns. We also leverage composition-relevant attributes to further boost the performance, and extend Earth Mover's Distance (EMD) loss to weighted EMD loss to eliminate the content bias. The experimental results show that our SAMP-Net can perform more favorably than previous aesthetic assessment approaches.
[ { "created": "Wed, 7 Apr 2021 14:07:17 GMT", "version": "v1" }, { "created": "Mon, 18 Oct 2021 02:09:40 GMT", "version": "v2" } ]
2021-10-19
[ [ "Zhang", "Bo", "" ], [ "Niu", "Li", "" ], [ "Zhang", "Liqing", "" ] ]
Image composition assessment is crucial in aesthetic assessment, which aims to assess the overall composition quality of a given image. However, to the best of our knowledge, there is neither dataset nor method specifically proposed for this task. In this paper, we contribute the first composition assessment dataset CADB with composition scores for each image provided by multiple professional raters. Besides, we propose a composition assessment network SAMP-Net with a novel Saliency-Augmented Multi-pattern Pooling (SAMP) module, which analyses visual layout from the perspectives of multiple composition patterns. We also leverage composition-relevant attributes to further boost the performance, and extend Earth Mover's Distance (EMD) loss to weighted EMD loss to eliminate the content bias. The experimental results show that our SAMP-Net can perform more favorably than previous aesthetic assessment approaches.
1505.02348
Ali Atiia
Ali Atiia, Fran\c{c}ois Major, J\'er\^ome Waldisp\"uhl
The Topology of Biological Networks from a Complexity Perspective
11 pages, 2 figures, 3 tables, 1 theorem
null
null
null
cs.SI physics.soc-ph q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A complexity-theoretic approach to studying biological networks is proposed. A simple graph representation is used where molecules (DNA, RNA, proteins and chemicals) are vertices and relations between them are directed and signed (promotional (+) or inhibitory (-)) edges. Based on this model, the problem of network evolution (NE) is defined formally as an optimization problem and subsequently proven to be fundamentally hard (NP-hard) by means of reduction from the Knapsack problem (KP). Second, for empirical validation, various biological networks of experimentally-validated interactions are compared against randomly generated networks with varying degree distributions. An NE instance is created using a given real or synthetic (random) network. After being reverse-reduced to a KP instance, each NE instance is fed to a KP solver and the average achieved knapsack value-to-weight ratio is recorded from multiple rounds of simulated evolutionary pressure. The results show that biological networks (and synthetic networks of similar degree distribution) achieve the highest ratios at maximal evolutionary pressure and minimal error tolerance conditions. The more distant (in degree distribution) a synthetic network is from biological networks the lower its achieved ratio. The results shed light on how computational intractability has shaped the evolution of biological networks into their current topology.
[ { "created": "Sun, 10 May 2015 07:06:20 GMT", "version": "v1" }, { "created": "Tue, 12 May 2015 00:42:59 GMT", "version": "v2" }, { "created": "Mon, 18 May 2015 17:22:55 GMT", "version": "v3" }, { "created": "Tue, 24 Apr 2018 15:32:07 GMT", "version": "v4" } ]
2018-04-25
[ [ "Atiia", "Ali", "" ], [ "Major", "François", "" ], [ "Waldispühl", "Jérôme", "" ] ]
A complexity-theoretic approach to studying biological networks is proposed. A simple graph representation is used where molecules (DNA, RNA, proteins and chemicals) are vertices and relations between them are directed and signed (promotional (+) or inhibitory (-)) edges. Based on this model, the problem of network evolution (NE) is defined formally as an optimization problem and subsequently proven to be fundamentally hard (NP-hard) by means of reduction from the Knapsack problem (KP). Second, for empirical validation, various biological networks of experimentally-validated interactions are compared against randomly generated networks with varying degree distributions. An NE instance is created using a given real or synthetic (random) network. After being reverse-reduced to a KP instance, each NE instance is fed to a KP solver and the average achieved knapsack value-to-weight ratio is recorded from multiple rounds of simulated evolutionary pressure. The results show that biological networks (and synthetic networks of similar degree distribution) achieve the highest ratios at maximal evolutionary pressure and minimal error tolerance conditions. The more distant (in degree distribution) a synthetic network is from biological networks the lower its achieved ratio. The results shed light on how computational intractability has shaped the evolution of biological networks into their current topology.
2205.13970
Andrea Passarella
Marco Conti, Andrea Passarella
The Internet of People: A human and data-centric paradigm for the Next Generation Internet
null
Computer Communications, Volume 131, 2018, Pages 51-65, ISSN 0140-3664
10.1016/j.comcom.2018.07.034
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cyber-physical convergence, the fast expansion of the Internet at its edge, and tighter interactions between human users and their personal mobile devices push towards a data-centric Internet where the human user becomes more central than ever. We argue that this will profoundly impact primarily on the way data should be handled in the Next Generation Internet. It will require a radical change of the Internet data-management paradigm, from the current platform-centric to a human-centric model. In this paper we present a new paradigm for Internet data management that we name Internet of People (IoP) because it embeds human behavior models in its algorithms. To this end, IoP algorithms exploit quantitative models of humans' individual and social behavior, from sociology, anthropology, psychology, economics, and physics. IoP is not a replacement of the current Internet networking infrastructure, but it exploits legacy Internet services as (reliable) primitives to achieve end-to-end connectivity on a global scale. In this opinion paper, we first discuss the key features of the IoP paradigm along with the underlying research issues and challenges. Then, we present emerging data-management paradigms that are anticipating IoP.
[ { "created": "Fri, 27 May 2022 13:33:36 GMT", "version": "v1" } ]
2022-05-30
[ [ "Conti", "Marco", "" ], [ "Passarella", "Andrea", "" ] ]
The cyber-physical convergence, the fast expansion of the Internet at its edge, and tighter interactions between human users and their personal mobile devices push towards a data-centric Internet where the human user becomes more central than ever. We argue that this will profoundly impact primarily on the way data should be handled in the Next Generation Internet. It will require a radical change of the Internet data-management paradigm, from the current platform-centric to a human-centric model. In this paper we present a new paradigm for Internet data management that we name Internet of People (IoP) because it embeds human behavior models in its algorithms. To this end, IoP algorithms exploit quantitative models of humans' individual and social behavior, from sociology, anthropology, psychology, economics, and physics. IoP is not a replacement of the current Internet networking infrastructure, but it exploits legacy Internet services as (reliable) primitives to achieve end-to-end connectivity on a global scale. In this opinion paper, we first discuss the key features of the IoP paradigm along with the underlying research issues and challenges. Then, we present emerging data-management paradigms that are anticipating IoP.
2305.18262
Mert Yuksekgonul
Mert Yuksekgonul, Linjun Zhang, James Zou, Carlos Guestrin
Beyond Confidence: Reliable Models Should Also Consider Atypicality
Published at NeurIPS 2023
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
While most machine learning models can provide confidence in their predictions, confidence is insufficient to understand a prediction's reliability. For instance, the model may have a low confidence prediction if the input is not well-represented in the training dataset or if the input is inherently ambiguous. In this work, we investigate the relationship between how atypical (rare) a sample or a class is and the reliability of a model's predictions. We first demonstrate that atypicality is strongly related to miscalibration and accuracy. In particular, we empirically show that predictions for atypical inputs or atypical classes are more overconfident and have lower accuracy. Using these insights, we show incorporating atypicality improves uncertainty quantification and model performance for discriminative neural networks and large language models. In a case study, we show that using atypicality improves the performance of a skin lesion classifier across different skin tone groups without having access to the group attributes. Overall, we propose that models should use not only confidence but also atypicality to improve uncertainty quantification and performance. Our results demonstrate that simple post-hoc atypicality estimators can provide significant value.
[ { "created": "Mon, 29 May 2023 17:37:09 GMT", "version": "v1" }, { "created": "Mon, 30 Oct 2023 05:24:15 GMT", "version": "v2" } ]
2023-10-31
[ [ "Yuksekgonul", "Mert", "" ], [ "Zhang", "Linjun", "" ], [ "Zou", "James", "" ], [ "Guestrin", "Carlos", "" ] ]
While most machine learning models can provide confidence in their predictions, confidence is insufficient to understand a prediction's reliability. For instance, the model may have a low confidence prediction if the input is not well-represented in the training dataset or if the input is inherently ambiguous. In this work, we investigate the relationship between how atypical (rare) a sample or a class is and the reliability of a model's predictions. We first demonstrate that atypicality is strongly related to miscalibration and accuracy. In particular, we empirically show that predictions for atypical inputs or atypical classes are more overconfident and have lower accuracy. Using these insights, we show incorporating atypicality improves uncertainty quantification and model performance for discriminative neural networks and large language models. In a case study, we show that using atypicality improves the performance of a skin lesion classifier across different skin tone groups without having access to the group attributes. Overall, we propose that models should use not only confidence but also atypicality to improve uncertainty quantification and performance. Our results demonstrate that simple post-hoc atypicality estimators can provide significant value.
1610.05892
Fuad Aleskerov
F. Aleskerov, N. Meshcheryakova, S. Shvydun
Centrality measures in networks based on nodes attributes, long-range interactions and group influence
44 pages
null
null
null
cs.SI physics.soc-ph q-fin.EC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new method for assessing agents' influence in network structures, which takes into consideration node attributes, individual and group influences of nodes, and the intensity of interactions. This approach helps us to identify both explicit and hidden central elements which cannot be detected by classical centrality measures or other indices.
[ { "created": "Wed, 19 Oct 2016 07:27:02 GMT", "version": "v1" } ]
2016-10-20
[ [ "Aleskerov", "F.", "" ], [ "Meshcheryakova", "N.", "" ], [ "Shvydun", "S.", "" ] ]
We propose a new method for assessing agents' influence in network structures, which takes into consideration node attributes, individual and group influences of nodes, and the intensity of interactions. This approach helps us to identify both explicit and hidden central elements which cannot be detected by classical centrality measures or other indices.
1809.10283
Christopher Iliffe Sprague
Christopher Iliffe Sprague, Petter \"Ogren
Adding Neural Network Controllers to Behavior Trees without Destroying Performance Guarantees
Accepted as Regular Paper to The 61th IEEE Conference on Decision and Control (CDC 2022)
null
null
null
cs.RO cs.AI cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we show how Behavior Trees that have performance guarantees, in terms of safety and goal convergence, can be extended with components that were designed using machine learning, without destroying those performance guarantees. Machine learning approaches such as reinforcement learning or learning from demonstration can be very appealing to AI designers that want efficient and realistic behaviors in their agents. However, those algorithms seldom provide guarantees for solving the given task in all different situations while keeping the agent safe. Instead, such guarantees are often easier to find for manually designed model-based approaches. In this paper we exploit the modularity of behavior trees to extend a given design with an efficient, but possibly unreliable, machine learning component in a way that preserves the guarantees. The approach is illustrated with an inverted pendulum example.
[ { "created": "Wed, 26 Sep 2018 12:23:19 GMT", "version": "v1" }, { "created": "Sun, 30 Jun 2019 11:36:39 GMT", "version": "v2" }, { "created": "Mon, 25 Jul 2022 09:14:25 GMT", "version": "v3" } ]
2022-07-26
[ [ "Sprague", "Christopher Iliffe", "" ], [ "Ögren", "Petter", "" ] ]
In this paper, we show how Behavior Trees that have performance guarantees, in terms of safety and goal convergence, can be extended with components that were designed using machine learning, without destroying those performance guarantees. Machine learning approaches such as reinforcement learning or learning from demonstration can be very appealing to AI designers that want efficient and realistic behaviors in their agents. However, those algorithms seldom provide guarantees for solving the given task in all different situations while keeping the agent safe. Instead, such guarantees are often easier to find for manually designed model-based approaches. In this paper we exploit the modularity of behavior trees to extend a given design with an efficient, but possibly unreliable, machine learning component in a way that preserves the guarantees. The approach is illustrated with an inverted pendulum example.
2306.06508
Wenxuan Bao
Wenxuan Bao, Haohan Wang, Jun Wu, Jingrui He
Optimizing the Collaboration Structure in Cross-Silo Federated Learning
Accepted by ICML 2023
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In federated learning (FL), multiple clients collaborate to train machine learning models together while keeping their data decentralized. Though utilizing more training data, FL suffers from the potential negative transfer problem: the global FL model may even perform worse than the models trained with local data only. In this paper, we propose FedCollab, a novel FL framework that alleviates negative transfer by clustering clients into non-overlapping coalitions based on their distribution distances and data quantities. As a result, each client only collaborates with the clients having similar data distributions, and tends to collaborate with more clients when it has less data. We evaluate our framework with a variety of datasets, models, and types of non-IIDness. Our results demonstrate that FedCollab effectively mitigates negative transfer across a wide range of FL algorithms and consistently outperforms other clustered FL algorithms.
[ { "created": "Sat, 10 Jun 2023 18:59:50 GMT", "version": "v1" } ]
2023-06-13
[ [ "Bao", "Wenxuan", "" ], [ "Wang", "Haohan", "" ], [ "Wu", "Jun", "" ], [ "He", "Jingrui", "" ] ]
In federated learning (FL), multiple clients collaborate to train machine learning models together while keeping their data decentralized. Though utilizing more training data, FL suffers from the potential negative transfer problem: the global FL model may even perform worse than the models trained with local data only. In this paper, we propose FedCollab, a novel FL framework that alleviates negative transfer by clustering clients into non-overlapping coalitions based on their distribution distances and data quantities. As a result, each client only collaborates with the clients having similar data distributions, and tends to collaborate with more clients when it has less data. We evaluate our framework with a variety of datasets, models, and types of non-IIDness. Our results demonstrate that FedCollab effectively mitigates negative transfer across a wide range of FL algorithms and consistently outperforms other clustered FL algorithms.
1802.04358
Chulaka Gunasekara
Song Feng, R. Chulaka Gunasekara, Sunil Shashidhara, Kshitij P. Fadnis and Lazaros C. Polymenakos
A Unified Implicit Dialog Framework for Conversational Search
Appeared as a demo in AAAI-2018
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
We propose a unified Implicit Dialog framework for goal-oriented, information seeking tasks of Conversational Search applications. It aims to enable dialog interactions with domain data without relying on explicitly encoded rules, but utilizing the underlying data representation to build the components required for dialog interaction, which we refer to as Implicit Dialog in this work. The proposed framework consists of a pipeline of End-to-End trainable modules. A centralized knowledge representation is used to semantically ground multiple dialog modules. An associated set of tools are integrated with the framework to gather end users' input for continuous improvement of the system. The goal is to facilitate development of conversational systems by identifying the components and the data that can be adapted and reused across many end-user applications. We demonstrate our approach by creating conversational agents for several independent domains.
[ { "created": "Mon, 12 Feb 2018 20:53:50 GMT", "version": "v1" } ]
2018-02-14
[ [ "Feng", "Song", "" ], [ "Gunasekara", "R. Chulaka", "" ], [ "Shashidhara", "Sunil", "" ], [ "Fadnis", "Kshitij P.", "" ], [ "Polymenakos", "Lazaros C.", "" ] ]
We propose a unified Implicit Dialog framework for goal-oriented, information seeking tasks of Conversational Search applications. It aims to enable dialog interactions with domain data without relying on explicitly encoded rules, but utilizing the underlying data representation to build the components required for dialog interaction, which we refer to as Implicit Dialog in this work. The proposed framework consists of a pipeline of End-to-End trainable modules. A centralized knowledge representation is used to semantically ground multiple dialog modules. An associated set of tools are integrated with the framework to gather end users' input for continuous improvement of the system. The goal is to facilitate development of conversational systems by identifying the components and the data that can be adapted and reused across many end-user applications. We demonstrate our approach by creating conversational agents for several independent domains.
1809.00999
Abdallah Moussawi
Abdallah Moussawi
Towards Large Scale Training Of Autoencoders For Collaborative Filtering
2 pages, ACM RecSys 2018 Late-breaking Results Track (Posters)
null
null
null
cs.IR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we apply a mini-batch based negative sampling method to efficiently train a latent factor autoencoder model on large scale and sparse data for implicit feedback collaborative filtering. We compare our work against a state-of-the-art baseline model on different experimental datasets and show that this method can lead to a good and fast approximation of the baseline model performance. The source code is available at https://github.com/amoussawi/recoder .
[ { "created": "Thu, 30 Aug 2018 22:34:29 GMT", "version": "v1" }, { "created": "Sat, 20 Oct 2018 00:13:30 GMT", "version": "v2" }, { "created": "Tue, 23 Oct 2018 07:44:07 GMT", "version": "v3" } ]
2018-10-24
[ [ "Moussawi", "Abdallah", "" ] ]
In this paper, we apply a mini-batch based negative sampling method to efficiently train a latent factor autoencoder model on large scale and sparse data for implicit feedback collaborative filtering. We compare our work against a state-of-the-art baseline model on different experimental datasets and show that this method can lead to a good and fast approximation of the baseline model performance. The source code is available at https://github.com/amoussawi/recoder .
2109.12719
Julian D. Cortes
Julian D. Cortes, Daniel A. Andrade
The Colombian Scientific Elite -- Science Mapping and Bibliometric Outlook
null
null
null
null
cs.DL cs.SI
http://creativecommons.org/licenses/by-nc-nd/4.0/
A well-established agenda on the research output, impact, and structure of global scientific elites such as Nobel Prize laureates has generated interest in the scientific elites from developing countries. This study deploys science mapping techniques to provide a comprehensive analysis of the output, impact, and structure of the Colombian scientific elite, i.e., researchers awarded with the Alejandro Angel Escobar Foundation National Prize 1990-2020, known locally as the Colombian Nobel. Findings showed that the Colombian scientific elite has a broader agenda than indexing titles in internationally renowned bibliographic databases. The Colombian scientific elite also showed positive growth, which is an inverse trend compared with Nobel laureate productivity. There were no noticeable changes in productivity and impact before and after receiving the prize. Institutional collaboration within the Colombian scientific elite displayed the highest betweenness (brokerage) role of world and local top-tier universities. However, only two Colombian scientific elite members published an article with two Nobel Prize laureates. Most of the research profiles reflected the national output priorities, but were found to diverge from the national focus in respect of strategic research capacities. This study also conducted a productivity and impact comparison with Nobel Prize laureates in science and economics by means of a stratified random sample 1990-2020 via the composite indicator proposed by Ioannidis et al. The interleaving of the Colombian scientific elite and Nobel Prize laureates, particularly between the 3rd and 2nd quartiles, enabled a more nuanced analysis of the local impact in the global scientific landscape.
[ { "created": "Sun, 26 Sep 2021 23:12:04 GMT", "version": "v1" } ]
2021-09-28
[ [ "Cortes", "Julian D.", "" ], [ "Andrade", "Daniel A.", "" ] ]
A well-established agenda on the research output, impact, and structure of global scientific elites such as Nobel Prize laureates has generated interest in the scientific elites from developing countries. This study deploys science mapping techniques to provide a comprehensive analysis of the output, impact, and structure of the Colombian scientific elite, i.e., researchers awarded with the Alejandro Angel Escobar Foundation National Prize 1990-2020, known locally as the Colombian Nobel. Findings showed that the Colombian scientific elite has a broader agenda than indexing titles in internationally renowned bibliographic databases. The Colombian scientific elite also showed positive growth, which is an inverse trend compared with Nobel laureate productivity. There were no noticeable changes in productivity and impact before and after receiving the prize. Institutional collaboration within the Colombian scientific elite displayed the highest betweenness (brokerage) role of world and local top-tier universities. However, only two Colombian scientific elite members published an article with two Nobel Prize laureates. Most of the research profiles reflected the national output priorities, but were found to diverge from the national focus in respect of strategic research capacities. This study also conducted a productivity and impact comparison with Nobel Prize laureates in science and economics by means of a stratified random sample 1990-2020 via the composite indicator proposed by Ioannidis et al. The interleaving of the Colombian scientific elite and Nobel Prize laureates, particularly between the 3rd and 2nd quartiles, enabled a more nuanced analysis of the local impact in the global scientific landscape.
1804.04976
Daniele De Martini
Mirto Musci, Daniele De Martini, Nicola Blago, Tullio Facchinetti and Marco Piastra
Online Fall Detection using Recurrent Neural Networks
6 pages, ICRA 2018
null
10.1109/TETC.2020.3027454
null
cs.CY cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unintentional falls can cause severe injuries and even death, especially if no immediate assistance is given. The aim of Fall Detection Systems (FDSs) is to detect an occurring fall. This information can be used to trigger the necessary assistance in case of injury. This can be done by using either ambient-based sensors, e.g. cameras, or wearable devices. The aim of this work is to study the technical aspects of FDSs based on wearable devices and artificial intelligence techniques, in particular Deep Learning (DL), to implement an effective algorithm for on-line fall detection. The proposed classifier is based on a Recurrent Neural Network (RNN) model with underlying Long Short-Term Memory (LSTM) blocks. The method is tested on the publicly available SisFall dataset, with extended annotation, and compared with the results obtained by the SisFall authors.
[ { "created": "Fri, 13 Apr 2018 14:58:51 GMT", "version": "v1" } ]
2020-10-05
[ [ "Musci", "Mirto", "" ], [ "De Martini", "Daniele", "" ], [ "Blago", "Nicola", "" ], [ "Facchinetti", "Tullio", "" ], [ "Piastra", "Marco", "" ] ]
Unintentional falls can cause severe injuries and even death, especially if no immediate assistance is given. The aim of Fall Detection Systems (FDSs) is to detect an occurring fall. This information can be used to trigger the necessary assistance in case of injury. This can be done by using either ambient-based sensors, e.g. cameras, or wearable devices. The aim of this work is to study the technical aspects of FDSs based on wearable devices and artificial intelligence techniques, in particular Deep Learning (DL), to implement an effective algorithm for on-line fall detection. The proposed classifier is based on a Recurrent Neural Network (RNN) model with underlying Long Short-Term Memory (LSTM) blocks. The method is tested on the publicly available SisFall dataset, with extended annotation, and compared with the results obtained by the SisFall authors.
1701.08435
Joost van Amersfoort
Joost van Amersfoort, Anitha Kannan, Marc'Aurelio Ranzato, Arthur Szlam, Du Tran and Soumith Chintala
Transformation-Based Models of Video Sequences
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we propose a simple unsupervised approach for next frame prediction in video. Instead of directly predicting the pixels in a frame given past frames, we predict the transformations needed for generating the next frame in a sequence, given the transformations of the past frames. This leads to sharper results, while using a smaller prediction model. In order to enable a fair comparison between different video frame prediction models, we also propose a new evaluation protocol. We use generated frames as input to a classifier trained with ground truth sequences. This criterion guarantees that models scoring high are those producing sequences which preserve discriminative features, as opposed to merely penalizing any deviation, plausible or not, from the ground truth. Our proposed approach compares favourably against more sophisticated ones on the UCF-101 data set, while also being more efficient in terms of the number of parameters and computational cost.
[ { "created": "Sun, 29 Jan 2017 21:39:05 GMT", "version": "v1" }, { "created": "Mon, 24 Apr 2017 20:20:40 GMT", "version": "v2" }, { "created": "Mon, 6 Feb 2023 14:49:05 GMT", "version": "v3" } ]
2023-02-07
[ [ "van Amersfoort", "Joost", "" ], [ "Kannan", "Anitha", "" ], [ "Ranzato", "Marc'Aurelio", "" ], [ "Szlam", "Arthur", "" ], [ "Tran", "Du", "" ], [ "Chintala", "Soumith", "" ] ]
In this work we propose a simple unsupervised approach for next frame prediction in video. Instead of directly predicting the pixels in a frame given past frames, we predict the transformations needed for generating the next frame in a sequence, given the transformations of the past frames. This leads to sharper results, while using a smaller prediction model. In order to enable a fair comparison between different video frame prediction models, we also propose a new evaluation protocol. We use generated frames as input to a classifier trained with ground truth sequences. This criterion guarantees that models scoring high are those producing sequences which preserve discriminative features, as opposed to merely penalizing any deviation, plausible or not, from the ground truth. Our proposed approach compares favourably against more sophisticated ones on the UCF-101 data set, while also being more efficient in terms of the number of parameters and computational cost.
2407.20775
Harry J Davies
Harry J. Davies, James Monsen, Danilo P. Mandic
Interpretable Pre-Trained Transformers for Heart Time-Series Data
14 pages, 5 figures
null
null
null
cs.LG cs.AI eess.SP
http://creativecommons.org/licenses/by/4.0/
Decoder-only transformers are the backbone of the popular generative pre-trained transformer (GPT) series of large language models. In this work, we apply this framework to the analysis of clinical heart time-series data, to create two pre-trained general purpose cardiac models, termed PPG-PT and ECG-PT. We place a special emphasis on making both such pre-trained models fully interpretable. This is achieved firstly through aggregate attention maps which show that, in order to make predictions, the model focuses on similar points in previous cardiac cycles and gradually broadens its attention in deeper layers. Next, we show that tokens with the same value, which occur at different distinct points in the electrocardiography (ECG) and photoplethysmography (PPG) cycle, form separate clusters in high dimensional space. The clusters form according to phase, as the tokens propagate through the transformer blocks. Finally, we highlight that individual attention heads respond to specific physiologically relevant features, such as the dicrotic notch in PPG and the P-wave in ECG. It is also demonstrated that these pre-trained models are straightforward to fine-tune for tasks such as classification of atrial fibrillation (AF), and beat detection in photoplethysmography. For the example of AF, the fine-tuning took 11 minutes of computer time, and achieved the respective leave-one-subject-out AUCs of 0.99 and 0.93 for ECG and PPG within the MIMIC Perform AF dataset. In addition, the fine-tuned beat detector achieved a state-of-the-art F1 score of 98%, as well as uniquely providing a beat confidence level which acts as a signal quality estimator. Importantly, the fine-tuned models for AF screening are also fully explainable, with attention shifting to regions in the context that are strongly indicative of atrial fibrillation.
[ { "created": "Tue, 30 Jul 2024 12:22:03 GMT", "version": "v1" }, { "created": "Tue, 13 Aug 2024 10:18:45 GMT", "version": "v2" } ]
2024-08-14
[ [ "Davies", "Harry J.", "" ], [ "Monsen", "James", "" ], [ "Mandic", "Danilo P.", "" ] ]
Decoder-only transformers are the backbone of the popular generative pre-trained transformer (GPT) series of large language models. In this work, we apply this framework to the analysis of clinical heart time-series data, to create two pre-trained general purpose cardiac models, termed PPG-PT and ECG-PT. We place a special emphasis on making both such pre-trained models fully interpretable. This is achieved firstly through aggregate attention maps which show that, in order to make predictions, the model focuses on similar points in previous cardiac cycles and gradually broadens its attention in deeper layers. Next, we show that tokens with the same value, which occur at different distinct points in the electrocardiography (ECG) and photoplethysmography (PPG) cycle, form separate clusters in high dimensional space. The clusters form according to phase, as the tokens propagate through the transformer blocks. Finally, we highlight that individual attention heads respond to specific physiologically relevant features, such as the dicrotic notch in PPG and the P-wave in ECG. It is also demonstrated that these pre-trained models are straightforward to fine-tune for tasks such as classification of atrial fibrillation (AF), and beat detection in photoplethysmography. For the example of AF, the fine-tuning took 11 minutes of computer time, and achieved the respective leave-one-subject-out AUCs of 0.99 and 0.93 for ECG and PPG within the MIMIC Perform AF dataset. In addition, the fine-tuned beat detector achieved a state-of-the-art F1 score of 98%, as well as uniquely providing a beat confidence level which acts as a signal quality estimator. Importantly, the fine-tuned models for AF screening are also fully explainable, with attention shifting to regions in the context that are strongly indicative of atrial fibrillation.
1808.00048
Christos Rodosthenous
Christos Rodosthenous and Loizos Michael
Web-STAR: A Visual Web-Based IDE for a Story Comprehension System
Under consideration in Theory and Practice of Logic Programming (TPLP)
null
null
null
cs.CY cs.AI cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present Web-STAR, an online platform for story understanding built on top of the STAR reasoning engine for STory comprehension through ARgumentation. The platform includes a web-based IDE, integration with the STAR system, and a web service infrastructure to support integration with other systems that rely on story understanding functionality to complete their tasks. The platform also delivers a number of "social" features, including a community repository for public story sharing with a built-in commenting system, and tools for collaborative story editing that can be used for team development projects and for educational purposes.
[ { "created": "Sat, 28 Jul 2018 05:09:27 GMT", "version": "v1" } ]
2018-08-02
[ [ "Rodosthenous", "Christos", "" ], [ "Michael", "Loizos", "" ] ]
We present Web-STAR, an online platform for story understanding built on top of the STAR reasoning engine for STory comprehension through ARgumentation. The platform includes a web-based IDE, integration with the STAR system, and a web service infrastructure to support integration with other systems that rely on story understanding functionality to complete their tasks. The platform also delivers a number of "social" features, including a community repository for public story sharing with a built-in commenting system, and tools for collaborative story editing that can be used for team development projects and for educational purposes.
2002.11107
Okyu Kwon
Okyu Kwon
Very simple statistical evidence that AlphaGo has exceeded human limits in playing GO game
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning technology is making great progress in solving the challenging problems of artificial intelligence, hence machine learning based on artificial neural networks is in the spotlight again. In some areas, artificial intelligence based on deep learning is beyond human capabilities. It seemed extremely difficult for a machine to beat a human in a Go game, but AlphaGo has shown to beat a professional player in the game. By looking at the statistical distribution of the distance in which the Go stones are laid in succession, we find a clear trace that AlphaGo has surpassed human abilities. Just as professional players lay successive stones at larger distances more frequently than ordinary players, AlphaGo does so more frequently than professional players. Moreover, the difference between AlphaGo and professional players is much more pronounced than that between professional players and ordinary players.
[ { "created": "Tue, 25 Feb 2020 01:46:12 GMT", "version": "v1" } ]
2020-02-27
[ [ "Kwon", "Okyu", "" ] ]
Deep learning technology is making great progress in solving the challenging problems of artificial intelligence, hence machine learning based on artificial neural networks is in the spotlight again. In some areas, artificial intelligence based on deep learning is beyond human capabilities. It seemed extremely difficult for a machine to beat a human in a Go game, but AlphaGo has shown to beat a professional player in the game. By looking at the statistical distribution of the distance in which the Go stones are laid in succession, we find a clear trace that AlphaGo has surpassed human abilities. Just as professional players lay successive stones at larger distances more frequently than ordinary players, AlphaGo does so more frequently than professional players. Moreover, the difference between AlphaGo and professional players is much more pronounced than that between professional players and ordinary players.
2004.03630
Harish Doraiswamy
Harish Doraiswamy and Juliana Freire
A GPU-friendly Geometric Data Model and Algebra for Spatial Queries: Extended Version
This is the extended version of the paper published in SIGMOD 2020
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The availability of low-cost sensors has led to an unprecedented growth in the volume of spatial data. However, the time required to evaluate even simple spatial queries over large data sets greatly hampers our ability to interactively explore these data sets and extract actionable insights. Graphics Processing Units~(GPUs) are increasingly being used to speed up spatial queries. However, existing GPU-based solutions have two important drawbacks: they are often tightly coupled to the specific query types they target, making it hard to adapt them for other queries; and since their design is based on CPU-based approaches, it can be difficult to effectively utilize all the benefits provided by the GPU. As a first step towards making GPU spatial query processing mainstream, we propose a new model that represents spatial data as geometric objects and define an algebra consisting of GPU-friendly composable operators that operate over these objects. We demonstrate the expressiveness of the proposed algebra by formulating standard spatial queries as algebraic expressions. We also present a proof-of-concept prototype that supports a subset of the operators and show that it is at least two orders of magnitude faster than a CPU-based implementation. This performance gain is obtained both using a discrete Nvidia mobile GPU and the less powerful integrated GPUs common in commodity laptops.
[ { "created": "Tue, 7 Apr 2020 18:10:53 GMT", "version": "v1" } ]
2020-04-09
[ [ "Doraiswamy", "Harish", "" ], [ "Freire", "Juliana", "" ] ]
The availability of low-cost sensors has led to an unprecedented growth in the volume of spatial data. However, the time required to evaluate even simple spatial queries over large data sets greatly hampers our ability to interactively explore these data sets and extract actionable insights. Graphics Processing Units~(GPUs) are increasingly being used to speed up spatial queries. However, existing GPU-based solutions have two important drawbacks: they are often tightly coupled to the specific query types they target, making it hard to adapt them for other queries; and since their design is based on CPU-based approaches, it can be difficult to effectively utilize all the benefits provided by the GPU. As a first step towards making GPU spatial query processing mainstream, we propose a new model that represents spatial data as geometric objects and define an algebra consisting of GPU-friendly composable operators that operate over these objects. We demonstrate the expressiveness of the proposed algebra by formulating standard spatial queries as algebraic expressions. We also present a proof-of-concept prototype that supports a subset of the operators and show that it is at least two orders of magnitude faster than a CPU-based implementation. This performance gain is obtained both using a discrete Nvidia mobile GPU and the less powerful integrated GPUs common in commodity laptops.
2308.01923
Zhenpeng Chen
Zhenpeng Chen and Jie M. Zhang and Federica Sarro and Mark Harman
Fairness Improvement with Multiple Protected Attributes: How Far Are We?
Accepted by the 46th International Conference on Software Engineering (ICSE 2024). Please include ICSE in any citations
null
null
null
cs.LG cs.AI cs.CY cs.SE
http://creativecommons.org/licenses/by/4.0/
Existing research mostly improves the fairness of Machine Learning (ML) software regarding a single protected attribute at a time, but this is unrealistic given that many users have multiple protected attributes. This paper conducts an extensive study of fairness improvement regarding multiple protected attributes, covering 11 state-of-the-art fairness improvement methods. We analyze the effectiveness of these methods with different datasets, metrics, and ML models when considering multiple protected attributes. The results reveal that improving fairness for a single protected attribute can largely decrease fairness regarding unconsidered protected attributes. This decrease is observed in up to 88.3% of scenarios (57.5% on average). More surprisingly, we find little difference in accuracy loss when considering single and multiple protected attributes, indicating that accuracy can be maintained in the multiple-attribute paradigm. However, the effect on F1-score when handling two protected attributes is about twice that of a single attribute. This has important implications for future fairness research: reporting only accuracy as the ML performance metric, which is currently common in the literature, is inadequate.
[ { "created": "Tue, 25 Jul 2023 14:01:23 GMT", "version": "v1" }, { "created": "Fri, 3 Nov 2023 16:16:35 GMT", "version": "v2" }, { "created": "Thu, 4 Apr 2024 16:54:25 GMT", "version": "v3" } ]
2024-04-05
[ [ "Chen", "Zhenpeng", "" ], [ "Zhang", "Jie M.", "" ], [ "Sarro", "Federica", "" ], [ "Harman", "Mark", "" ] ]
Existing research mostly improves the fairness of Machine Learning (ML) software regarding a single protected attribute at a time, but this is unrealistic given that many users have multiple protected attributes. This paper conducts an extensive study of fairness improvement regarding multiple protected attributes, covering 11 state-of-the-art fairness improvement methods. We analyze the effectiveness of these methods with different datasets, metrics, and ML models when considering multiple protected attributes. The results reveal that improving fairness for a single protected attribute can largely decrease fairness regarding unconsidered protected attributes. This decrease is observed in up to 88.3% of scenarios (57.5% on average). More surprisingly, we find little difference in accuracy loss when considering single and multiple protected attributes, indicating that accuracy can be maintained in the multiple-attribute paradigm. However, the effect on F1-score when handling two protected attributes is about twice that of a single attribute. This has important implications for future fairness research: reporting only accuracy as the ML performance metric, which is currently common in the literature, is inadequate.
2110.04525
Xutan Peng
Jinghui Si, Xutan Peng, Chen Li, Haotian Xu, Jianxin Li
Generating Disentangled Arguments with Prompts: A Simple Event Extraction Framework that Works
Accepted at ICASSP 2022. Without the strict length constraint, this version (slightly) extends the conference camera-ready version
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Event Extraction bridges the gap between text and event signals. Based on the assumption of trigger-argument dependency, existing approaches have achieved state-of-the-art performance with expert-designed templates or complicated decoding constraints. In this paper, for the first time we introduce the prompt-based learning strategy to the domain of Event Extraction, which empowers the automatic exploitation of label semantics on both input and output sides. To validate the effectiveness of the proposed generative method, we conduct extensive experiments with 11 diverse baselines. Empirical results show that, in terms of F1 score on Argument Extraction, our simple architecture is stronger than any other generative counterpart and even competitive with algorithms that require template engineering. Regarding the measure of recall, it sets new overall records for both Argument and Trigger Extractions. We hereby recommend this framework to the community, with the code publicly available at https://git.io/GDAP.
[ { "created": "Sat, 9 Oct 2021 09:36:08 GMT", "version": "v1" }, { "created": "Tue, 15 Feb 2022 12:07:29 GMT", "version": "v2" } ]
2022-02-16
[ [ "Si", "Jinghui", "" ], [ "Peng", "Xutan", "" ], [ "Li", "Chen", "" ], [ "Xu", "Haotian", "" ], [ "Li", "Jianxin", "" ] ]
Event Extraction bridges the gap between text and event signals. Based on the assumption of trigger-argument dependency, existing approaches have achieved state-of-the-art performance with expert-designed templates or complicated decoding constraints. In this paper, for the first time we introduce the prompt-based learning strategy to the domain of Event Extraction, which empowers the automatic exploitation of label semantics on both input and output sides. To validate the effectiveness of the proposed generative method, we conduct extensive experiments with 11 diverse baselines. Empirical results show that, in terms of F1 score on Argument Extraction, our simple architecture is stronger than any other generative counterpart and even competitive with algorithms that require template engineering. Regarding the measure of recall, it sets new overall records for both Argument and Trigger Extractions. We hereby recommend this framework to the community, with the code publicly available at https://git.io/GDAP.
1606.04199
Jie Zhou
Jie Zhou and Ying Cao and Xuguang Wang and Peng Li and Wei Xu
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
TACL 2016
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
[ { "created": "Tue, 14 Jun 2016 03:53:00 GMT", "version": "v1" }, { "created": "Wed, 15 Jun 2016 04:21:03 GMT", "version": "v2" }, { "created": "Sat, 23 Jul 2016 13:14:17 GMT", "version": "v3" } ]
2016-07-26
[ [ "Zhou", "Jie", "" ], [ "Cao", "Ying", "" ], [ "Wang", "Xuguang", "" ], [ "Li", "Peng", "" ], [ "Xu", "Wei", "" ] ]
Neural machine translation (NMT) aims at solving machine translation (MT) problems using neural networks and has exhibited promising results in recent years. However, most of the existing NMT models are shallow and there is still a performance gap between a single NMT model and the best conventional MT system. In this work, we introduce a new type of linear connections, named fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks, and an interleaved bi-directional architecture for stacking the LSTM layers. Fast-forward connections play an essential role in propagating the gradients and building a deep topology of depth 16. On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points. This is the first time that a single NMT model achieves state-of-the-art performance and outperforms the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3 even without using an attention mechanism. After special handling of unknown words and model ensembling, we obtain the best score reported to date on this task with BLEU=40.4. Our models are also validated on the more difficult WMT'14 English-to-German task.
2205.12472
Manoj Mathews
Manoj Mathews
Mathematical Modelling of TEAM and VTEAM Memristor Model Using VerilogA
null
null
null
null
cs.ET cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Anyone who looks into the circuitry world will be familiar with the three fundamental circuit elements: the capacitor, the resistor, and the inductor. These elements are defined by relations between two of the four fundamental circuit variables: current, voltage, charge, and flux. However, in 1971, Prof. Leon Chua proposed on the grounds of symmetry that there should be a fourth fundamental circuit element relating flux and charge. He named it the memristor, short for memory resistor. This theory was practically realized in May 2008, when researchers at HP Labs published a paper announcing a model for a physical realization of a memristor. This report focuses on models of the memristor and its applications. The memristor offers the advantages of variable resistance, flexibility, no leakage current, and compatibility with CMOS. Because the memristor exhibits different characteristics in different applications, several distinct models of the device have been developed, and this paper reviews them. Memristor devices can be used in many applications such as memory, logic, and neuromorphic systems. A computer model of the memristor is a useful tool for analyzing circuit behavior via simulation and for developing applications of this passive circuit element. In this paper, various Verilog-A models of memristor devices are simulated with sinusoidal inputs and the outputs are verified; various window functions are used, and circuit analysis of the memristor models is carried out.
[ { "created": "Wed, 25 May 2022 03:51:56 GMT", "version": "v1" } ]
2022-05-26
[ [ "Mathews", "Manoj", "" ] ]
Anyone who looks into the circuitry world will be familiar with the three fundamental circuit elements: the capacitor, the resistor, and the inductor. These elements are defined by relations between two of the four fundamental circuit variables: current, voltage, charge, and flux. However, in 1971, Prof. Leon Chua proposed on the grounds of symmetry that there should be a fourth fundamental circuit element relating flux and charge. He named it the memristor, short for memory resistor. This theory was practically realized in May 2008, when researchers at HP Labs published a paper announcing a model for a physical realization of a memristor. This report focuses on models of the memristor and its applications. The memristor offers the advantages of variable resistance, flexibility, no leakage current, and compatibility with CMOS. Because the memristor exhibits different characteristics in different applications, several distinct models of the device have been developed, and this paper reviews them. Memristor devices can be used in many applications such as memory, logic, and neuromorphic systems. A computer model of the memristor is a useful tool for analyzing circuit behavior via simulation and for developing applications of this passive circuit element. In this paper, various Verilog-A models of memristor devices are simulated with sinusoidal inputs and the outputs are verified; various window functions are used, and circuit analysis of the memristor models is carried out.
2406.05162
Russell Brown
Russell A. Brown
Optimized Deletion From an AVL Tree
5 pages, 1 figure, 1 table
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An AVL tree is a binary search tree that guarantees $ O\left( \log n \right ) $ search. The guarantee is obtained at the cost of rebalancing the AVL tree, potentially after each insertion or deletion. This article proposes a deletion algorithm that requires less rebalancing after deletion than a ubiquitously taught deletion algorithm.
[ { "created": "Fri, 7 Jun 2024 00:48:47 GMT", "version": "v1" }, { "created": "Tue, 11 Jun 2024 17:52:30 GMT", "version": "v2" }, { "created": "Tue, 18 Jun 2024 13:17:07 GMT", "version": "v3" }, { "created": "Tue, 25 Jun 2024 16:40:11 GMT", "version": "v4" }, { "created": "Mon, 1 Jul 2024 19:14:17 GMT", "version": "v5" }, { "created": "Thu, 18 Jul 2024 18:43:59 GMT", "version": "v6" }, { "created": "Mon, 5 Aug 2024 23:15:45 GMT", "version": "v7" }, { "created": "Mon, 12 Aug 2024 15:13:34 GMT", "version": "v8" } ]
2024-08-13
[ [ "Brown", "Russell A.", "" ] ]
An AVL tree is a binary search tree that guarantees $ O\left( \log n \right ) $ search. The guarantee is obtained at the cost of rebalancing the AVL tree, potentially after each insertion or deletion. This article proposes a deletion algorithm that requires less rebalancing after deletion than a ubiquitously taught deletion algorithm.
2103.02521
Alec Diaz-Arias
Alec Diaz-Arias, Mitchell Messmore, Dmitriy Shin, and Stephen Baek
On the role of depth predictions for 3D human pose estimation
13 pages, 6 figures, and 8 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Following the successful application of deep convolutional neural networks to 2d human pose estimation, the next logical problem to solve is 3d human pose estimation from monocular images. While previous solutions have shown some success, they do not fully utilize the depth information from the 2d inputs. With the goal of addressing this depth ambiguity, we build a system that takes 2d joint locations as input along with their estimated depth value and predicts their 3d positions in camera coordinates. Given the inherent noise and inaccuracy from estimating depth maps from monocular images, we perform an extensive statistical analysis showing that, despite this noise, there is still a statistically significant correlation between the predicted depth values and the third coordinate of camera coordinates. We further explain how the state-of-the-art results we achieve on the H3.6M validation set are due to the additional input of depth. Notably, our results are produced by a neural network that accepts a low-dimensional input and can be integrated into a real-time system. Furthermore, our system can be combined with an off-the-shelf 2d pose detector and a depth map predictor to perform 3d pose estimation in the wild.
[ { "created": "Wed, 3 Mar 2021 16:51:38 GMT", "version": "v1" } ]
2021-03-04
[ [ "Diaz-Arias", "Alec", "" ], [ "Messmore", "Mitchell", "" ], [ "Shin", "Dmitriy", "" ], [ "Baek", "Stephen", "" ] ]
Following the successful application of deep convolutional neural networks to 2d human pose estimation, the next logical problem to solve is 3d human pose estimation from monocular images. While previous solutions have shown some success, they do not fully utilize the depth information from the 2d inputs. With the goal of addressing this depth ambiguity, we build a system that takes 2d joint locations as input along with their estimated depth value and predicts their 3d positions in camera coordinates. Given the inherent noise and inaccuracy from estimating depth maps from monocular images, we perform an extensive statistical analysis showing that, despite this noise, there is still a statistically significant correlation between the predicted depth values and the third coordinate of camera coordinates. We further explain how the state-of-the-art results we achieve on the H3.6M validation set are due to the additional input of depth. Notably, our results are produced by a neural network that accepts a low-dimensional input and can be integrated into a real-time system. Furthermore, our system can be combined with an off-the-shelf 2d pose detector and a depth map predictor to perform 3d pose estimation in the wild.
2210.09880
Yudong Xu
Yudong Xu, Elias B. Khalil, Scott Sanner
Graphs, Constraints, and Search for the Abstraction and Reasoning Corpus
9 pages, 5 figures, to be published in AAAI-23
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The Abstraction and Reasoning Corpus (ARC) aims at benchmarking the performance of general artificial intelligence algorithms. The ARC's focus on broad generalization and few-shot learning has made it difficult to solve using pure machine learning. A more promising approach has been to perform program synthesis within an appropriately designed Domain Specific Language (DSL). However, these too have seen limited success. We propose Abstract Reasoning with Graph Abstractions (ARGA), a new object-centric framework that first represents images using graphs and then performs a search for a correct program in a DSL that is based on the abstracted graph space. The complexity of this combinatorial search is tamed through the use of constraint acquisition, state hashing, and Tabu search. An extensive set of experiments demonstrates the promise of ARGA in tackling some of the complicated object-centric tasks of the ARC rather efficiently, producing programs that are correct and easy to understand.
[ { "created": "Tue, 18 Oct 2022 14:13:43 GMT", "version": "v1" }, { "created": "Fri, 2 Dec 2022 00:54:33 GMT", "version": "v2" } ]
2022-12-05
[ [ "Xu", "Yudong", "" ], [ "Khalil", "Elias B.", "" ], [ "Sanner", "Scott", "" ] ]
The Abstraction and Reasoning Corpus (ARC) aims at benchmarking the performance of general artificial intelligence algorithms. The ARC's focus on broad generalization and few-shot learning has made it difficult to solve using pure machine learning. A more promising approach has been to perform program synthesis within an appropriately designed Domain Specific Language (DSL). However, these too have seen limited success. We propose Abstract Reasoning with Graph Abstractions (ARGA), a new object-centric framework that first represents images using graphs and then performs a search for a correct program in a DSL that is based on the abstracted graph space. The complexity of this combinatorial search is tamed through the use of constraint acquisition, state hashing, and Tabu search. An extensive set of experiments demonstrates the promise of ARGA in tackling some of the complicated object-centric tasks of the ARC rather efficiently, producing programs that are correct and easy to understand.
2311.02055
Matthew Guthaus
M. Guthaus, C. Batten, E. Brunvand, P.E. Gaillardon, D. Harris, R. Manohar, P. Mazumder, L. Pileggi, J. Stine
NSF Integrated Circuit Research, Education and Workforce Development Workshop Final Report
This material is based upon work supported by the NSF under Grant No. 2137629
null
null
null
cs.AR
http://creativecommons.org/licenses/by-sa/4.0/
As the pace of progress that has followed Moore's law continues to diminish, it is critical that the US support Integrated Circuit (IC or chip) education and research to maintain technological innovation. Furthermore, US economic independence, security, and future international standing rely on having on-shore IC design capabilities. New devices with disparate technologies, improved design software toolchains and methodologies, and technologies to integrate heterogeneous systems will be needed to advance IC design capabilities. This will require rethinking both how we teach design to address the new complexity and how we inspire student interest in a hardware systems career path. The main recommendation of this workshop is that accessibility is the key issue. To this end, a National Chip Design Center (NCDC) should be established to further research and education by partnering academia and industry to train our future workforce. This should not be limited to R1 universities, but should also include R2 universities, community colleges, minority-serving institutions (MSIs), and K-12 institutions to have the broadest effect. The NCDC should support the access, development, and maintenance of open design tools, tool flows, design kits, design components, and educational materials. Open-source options should be emphasized wherever possible to maximize accessibility. The NCDC should also provide access and support for chip fabrication, packaging, and testing for both research and educational purposes.
[ { "created": "Fri, 3 Nov 2023 17:33:59 GMT", "version": "v1" } ]
2023-11-06
[ [ "Guthaus", "M.", "" ], [ "Batten", "C.", "" ], [ "Brunvand", "E.", "" ], [ "Gaillardon", "P. E.", "" ], [ "Harris", "D.", "" ], [ "Manohar", "R.", "" ], [ "Mazumder", "P.", "" ], [ "Pileggi", "L.", "" ], [ "Stine", "J.", "" ] ]
As the pace of progress that has followed Moore's law continues to diminish, it is critical that the US support Integrated Circuit (IC or chip) education and research to maintain technological innovation. Furthermore, US economic independence, security, and future international standing rely on having on-shore IC design capabilities. New devices with disparate technologies, improved design software toolchains and methodologies, and technologies to integrate heterogeneous systems will be needed to advance IC design capabilities. This will require rethinking both how we teach design to address the new complexity and how we inspire student interest in a hardware systems career path. The main recommendation of this workshop is that accessibility is the key issue. To this end, a National Chip Design Center (NCDC) should be established to further research and education by partnering academia and industry to train our future workforce. This should not be limited to R1 universities, but should also include R2 universities, community colleges, minority-serving institutions (MSIs), and K-12 institutions to have the broadest effect. The NCDC should support the access, development, and maintenance of open design tools, tool flows, design kits, design components, and educational materials. Open-source options should be emphasized wherever possible to maximize accessibility. The NCDC should also provide access and support for chip fabrication, packaging, and testing for both research and educational purposes.
2309.07984
Johnathan Alsop
Johnathan Alsop, Shaizeen Aga, Mohamed Ibrahim, Mahzabeen Islam, Andrew Mccrabb, Nuwan Jayasena
Inclusive-PIM: Hardware-Software Co-design for Broad Acceleration on Commercial PIM Architectures
null
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continual demand for memory bandwidth has made it worthwhile for memory vendors to reassess processing in memory (PIM), which enables higher bandwidth by placing compute units in/near memory. As such, memory vendors have recently proposed commercially viable PIM designs. However, these proposals are largely driven by the needs of (a narrow set of) machine learning (ML) primitives. While such proposals are reasonable given the growing importance of ML, memory is a pervasive component, and there is a case for a more inclusive PIM design that can accelerate primitives across domains. In this work, we ascertain the capabilities of commercial PIM proposals to accelerate various primitives across domains. We begin by outlining a set of characteristics, termed the PIM-amenability test, which aid in assessing whether a given primitive is likely to be accelerated by PIM. Next, we apply this test to the primitives under study and ascertain efficient data placement and orchestration to map them to the underlying PIM architecture. We observe that, even though the primitives under study are largely PIM-amenable, existing commercial PIM proposals do not realize their performance potential on these primitives. To address this, we identify bottlenecks that arise in PIM execution and propose hardware and software optimizations that stand to broaden the acceleration reach of commercial PIM designs (improving average PIM speedups from 1.12x to 2.49x relative to a GPU baseline). Overall, while we believe emerging commercial PIM proposals add a necessary and complementary design point in the application acceleration space, hardware-software co-design is necessary to deliver their benefits broadly.
[ { "created": "Thu, 14 Sep 2023 18:42:29 GMT", "version": "v1" }, { "created": "Mon, 18 Sep 2023 17:55:24 GMT", "version": "v2" }, { "created": "Wed, 17 Jan 2024 16:00:27 GMT", "version": "v3" } ]
2024-01-18
[ [ "Alsop", "Johnathan", "" ], [ "Aga", "Shaizeen", "" ], [ "Ibrahim", "Mohamed", "" ], [ "Islam", "Mahzabeen", "" ], [ "Mccrabb", "Andrew", "" ], [ "Jayasena", "Nuwan", "" ] ]
Continual demand for memory bandwidth has made it worthwhile for memory vendors to reassess processing in memory (PIM), which enables higher bandwidth by placing compute units in/near memory. As such, memory vendors have recently proposed commercially viable PIM designs. However, these proposals are largely driven by the needs of (a narrow set of) machine learning (ML) primitives. While such proposals are reasonable given the growing importance of ML, memory is a pervasive component, and there is a case for a more inclusive PIM design that can accelerate primitives across domains. In this work, we ascertain the capabilities of commercial PIM proposals to accelerate various primitives across domains. We begin by outlining a set of characteristics, termed the PIM-amenability test, which aid in assessing whether a given primitive is likely to be accelerated by PIM. Next, we apply this test to the primitives under study and ascertain efficient data placement and orchestration to map them to the underlying PIM architecture. We observe that, even though the primitives under study are largely PIM-amenable, existing commercial PIM proposals do not realize their performance potential on these primitives. To address this, we identify bottlenecks that arise in PIM execution and propose hardware and software optimizations that stand to broaden the acceleration reach of commercial PIM designs (improving average PIM speedups from 1.12x to 2.49x relative to a GPU baseline). Overall, while we believe emerging commercial PIM proposals add a necessary and complementary design point in the application acceleration space, hardware-software co-design is necessary to deliver their benefits broadly.
2311.05589
Zhuang Liu
Yida Yin, Zhiqiu Xu, Zhiyuan Li, Trevor Darrell, Zhuang Liu
A Coefficient Makes SVRG Effective
Preprint
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic Variance Reduced Gradient (SVRG), introduced by Johnson & Zhang (2013), is a theoretically compelling optimization method. However, as Defazio & Bottou (2019) highlights, its effectiveness in deep learning is yet to be proven. In this work, we demonstrate the potential of SVRG in optimizing real-world neural networks. Our analysis finds that, for deeper networks, the strength of the variance reduction term in SVRG should be smaller and decrease as training progresses. Inspired by this, we introduce a multiplicative coefficient $\alpha$ to control the strength and adjust it through a linear decay schedule. We name our method $\alpha$-SVRG. Our results show $\alpha$-SVRG better optimizes neural networks, consistently reducing training loss compared to both baseline and the standard SVRG across various architectures and image classification datasets. We hope our findings encourage further exploration into variance reduction techniques in deep learning. Code is available at https://github.com/davidyyd/alpha-SVRG.
[ { "created": "Thu, 9 Nov 2023 18:47:44 GMT", "version": "v1" } ]
2023-11-10
[ [ "Yin", "Yida", "" ], [ "Xu", "Zhiqiu", "" ], [ "Li", "Zhiyuan", "" ], [ "Darrell", "Trevor", "" ], [ "Liu", "Zhuang", "" ] ]
Stochastic Variance Reduced Gradient (SVRG), introduced by Johnson & Zhang (2013), is a theoretically compelling optimization method. However, as Defazio & Bottou (2019) highlights, its effectiveness in deep learning is yet to be proven. In this work, we demonstrate the potential of SVRG in optimizing real-world neural networks. Our analysis finds that, for deeper networks, the strength of the variance reduction term in SVRG should be smaller and decrease as training progresses. Inspired by this, we introduce a multiplicative coefficient $\alpha$ to control the strength and adjust it through a linear decay schedule. We name our method $\alpha$-SVRG. Our results show $\alpha$-SVRG better optimizes neural networks, consistently reducing training loss compared to both baseline and the standard SVRG across various architectures and image classification datasets. We hope our findings encourage further exploration into variance reduction techniques in deep learning. Code is available at https://github.com/davidyyd/alpha-SVRG.
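The $\alpha$-SVRG update described in the abstract (a standard SVRG step whose control variate is scaled by a coefficient $\alpha$ that decays linearly during training) can be sketched on a toy scalar least-squares problem. The function names, the toy objective, and the exact decay schedule below are illustrative assumptions for this sketch, not the paper's implementation, which targets deep networks:

```python
import numpy as np

def alpha_svrg_epoch(w, snapshot, full_grad, grad_fn, data, lr=0.1, alpha=1.0):
    """One inner loop over `data` of SVRG with variance-reduction coefficient alpha.

    alpha = 1 recovers standard SVRG; alpha = 0 recovers plain SGD.
    grad_fn(w, x) returns the stochastic gradient at parameters w for sample x.
    """
    for x in data:
        # Control variate scaled by alpha: the key change relative to standard SVRG.
        g = grad_fn(w, x) - alpha * (grad_fn(snapshot, x) - full_grad)
        w = w - lr * g
    return w

# Toy objective: the average of 0.5 * (w - x)^2 over the data, minimized at mean(data).
data = np.array([1.0, 2.0, 3.0])
grad_fn = lambda w, x: w - x

w, n_epochs = 0.0, 20
for epoch in range(n_epochs):
    snapshot = w                                     # snapshot for the control variate
    full_grad = float(np.mean([grad_fn(snapshot, x) for x in data]))
    alpha = max(0.0, 1.0 - epoch / n_epochs)         # assumed linear decay of alpha
    w = alpha_svrg_epoch(w, snapshot, full_grad, grad_fn, data, alpha=alpha)
```

On this toy problem `w` ends near the optimum 2.0; the point of the sketch is only how the coefficient enters the update and shrinks over training.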
1402.2635
Bei Yu
Bei Yu and Xiaoqing Xu and Jhih-Rong Gao and David Z. Pan
Methodology for standard cell compliance and detailed placement for triple patterning lithography
null
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the feature size of semiconductor processes scales further to the sub-16nm technology node, triple patterning lithography (TPL) has been regarded as one of the most promising lithography candidates. The M1 and contact layers, which are usually deployed within standard cells, are among the most critical and complex parts of modern digital designs. A traditional design flow that ignores TPL in early stages may limit the potential to resolve all TPL conflicts. In this paper, we propose a coherent framework, including standard cell compliance and detailed placement, to enable TPL-friendly design. Considering TPL constraints during early design stages, such as standard cell compliance, improves the layout decomposability. With the pre-coloring solutions of standard cells, we present a TPL-aware detailed placement, where layout decomposition and placement can be resolved simultaneously. Our experimental results show that, with negligible impact on critical path delay, our framework can resolve the conflicts much more easily than the traditional physical design flow followed by layout decomposition.
[ { "created": "Tue, 11 Feb 2014 20:29:09 GMT", "version": "v1" } ]
2014-02-12
[ [ "Yu", "Bei", "" ], [ "Xu", "Xiaoqing", "" ], [ "Gao", "Jhih-Rong", "" ], [ "Pan", "David Z.", "" ] ]
As the feature size of semiconductor processes scales further to the sub-16nm technology node, triple patterning lithography (TPL) has been regarded as one of the most promising lithography candidates. The M1 and contact layers, which are usually deployed within standard cells, are among the most critical and complex parts of modern digital designs. A traditional design flow that ignores TPL in early stages may limit the potential to resolve all TPL conflicts. In this paper, we propose a coherent framework, including standard cell compliance and detailed placement, to enable TPL-friendly design. Considering TPL constraints during early design stages, such as standard cell compliance, improves the layout decomposability. With the pre-coloring solutions of standard cells, we present a TPL-aware detailed placement, where layout decomposition and placement can be resolved simultaneously. Our experimental results show that, with negligible impact on critical path delay, our framework can resolve the conflicts much more easily than the traditional physical design flow followed by layout decomposition.
1411.0225
Zubair Nabi Zubair Nabi
Zubair Nabi
Censorship is Futile
First Monday, Volume 19, Number 11 - 3 November 2014
null
10.5210/fm.v19i11.
null
cs.CY
http://creativecommons.org/licenses/publicdomain/
The Internet has become the new battle ground between authoritarian regimes and ordinary individuals who want unimpeded access to information. The immense popularity of online activism and citizen journalism enabled by social media has instigated state level players to partially or completely block access to the Internet. In return, individuals and organizations have been employing various anti-censorship tools to circumvent these restrictions. In this paper, we claim that censorship is futile as not only has it been ineffective in restricting access, it has also had the side-effect of popularising blocked content. Using data from Alexa Web Rankings, Google Trends, and YouTube Statistics, we quantify the ineffectiveness of state level censorship in Pakistan and Turkey and highlight the emergence of the Streisand Effect. We hope that our findings will, a) prove to governments and other players the futility of their actions, and b) aid citizens around the world in using legal measures to counteract censorship by showing its ineffectiveness.
[ { "created": "Sun, 2 Nov 2014 08:45:48 GMT", "version": "v1" } ]
2014-11-04
[ [ "Nabi", "Zubair", "" ] ]
The Internet has become the new battle ground between authoritarian regimes and ordinary individuals who want unimpeded access to information. The immense popularity of online activism and citizen journalism enabled by social media has instigated state level players to partially or completely block access to the Internet. In return, individuals and organizations have been employing various anti-censorship tools to circumvent these restrictions. In this paper, we claim that censorship is futile as not only has it been ineffective in restricting access, it has also had the side-effect of popularising blocked content. Using data from Alexa Web Rankings, Google Trends, and YouTube Statistics, we quantify the ineffectiveness of state level censorship in Pakistan and Turkey and highlight the emergence of the Streisand Effect. We hope that our findings will, a) prove to governments and other players the futility of their actions, and b) aid citizens around the world in using legal measures to counteract censorship by showing its ineffectiveness.
1904.03885
Peratham Wiriyathammabhum Mr.
Peratham Wiriyathammabhum, Abhinav Shrivastava, Vlad I. Morariu, Larry S. Davis
Referring to Objects in Videos using Spatio-Temporal Identifying Descriptions
null
null
null
null
cs.CV cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new task, the grounding of spatio-temporal identifying descriptions in videos. Previous work suggests potential bias in existing datasets and emphasizes the need for a new data creation schema to better model linguistic structure. We introduce a new data collection scheme based on grammatical constraints for surface realization to enable us to investigate the problem of grounding spatio-temporal identifying descriptions in videos. We then propose a two-stream modular attention network that learns and grounds spatio-temporal identifying descriptions based on appearance and motion. We show that motion modules help to ground motion-related words and also help to learn in appearance modules because modular neural networks resolve task interference between modules. Finally, we propose a future challenge and a need for a robust system arising from replacing ground truth visual annotations with automatic video object detector and temporal event localization.
[ { "created": "Mon, 8 Apr 2019 08:28:54 GMT", "version": "v1" } ]
2019-04-09
[ [ "Wiriyathammabhum", "Peratham", "" ], [ "Shrivastava", "Abhinav", "" ], [ "Morariu", "Vlad I.", "" ], [ "Davis", "Larry S.", "" ] ]
This paper presents a new task, the grounding of spatio-temporal identifying descriptions in videos. Previous work suggests potential bias in existing datasets and emphasizes the need for a new data creation schema to better model linguistic structure. We introduce a new data collection scheme based on grammatical constraints for surface realization to enable us to investigate the problem of grounding spatio-temporal identifying descriptions in videos. We then propose a two-stream modular attention network that learns and grounds spatio-temporal identifying descriptions based on appearance and motion. We show that motion modules help to ground motion-related words and also help to learn in appearance modules because modular neural networks resolve task interference between modules. Finally, we propose a future challenge and a need for a robust system arising from replacing ground truth visual annotations with automatic video object detector and temporal event localization.
1901.07010
Reazul Hasan Russel
Reazul Hasan Russel
A Short Survey on Probabilistic Reinforcement Learning
7 pages, originally written as a literature survey for PhD candidacy exam
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A reinforcement learning agent tries to maximize its cumulative payoff by interacting in an unknown environment. It is important for the agent to explore suboptimal actions as well as to pick actions with the highest known rewards. Yet, in sensitive domains, collecting more data with exploration is not always possible, but it is important to find a policy with a certain performance guarantee. In this paper, we present a brief survey of methods available in the literature for balancing the exploration-exploitation trade-off and computing robust solutions from fixed samples in reinforcement learning.
[ { "created": "Mon, 21 Jan 2019 17:52:06 GMT", "version": "v1" } ]
2019-01-23
[ [ "Russel", "Reazul Hasan", "" ] ]
A reinforcement learning agent tries to maximize its cumulative payoff by interacting in an unknown environment. It is important for the agent to explore suboptimal actions as well as to pick actions with the highest known rewards. Yet, in sensitive domains, collecting more data with exploration is not always possible, but it is important to find a policy with a certain performance guarantee. In this paper, we present a brief survey of methods available in the literature for balancing the exploration-exploitation trade-off and computing robust solutions from fixed samples in reinforcement learning.
2310.17533
Anaelia Ovalle
Anaelia Ovalle
Decoding The Digital Fuku: Deciphering Colonial Legacies to Critically Assess ChatGPT in Dominican Education
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Educational disparities within the Dominican Republic (DR) have long-standing origins rooted in economic, political, and social inequity. Addressing these challenges has necessarily called for capacity building with respect to educational materials, high-quality instruction, and structural resourcing. Generative AI tools like ChatGPT have begun to pique the interest of Dominican educators due to their perceived potential to bridge these educational gaps. However, a substantial body of AI fairness literature has documented ways AI disproportionately reinforces power dynamics reflective of jurisdictions driving AI development and deployment policies, collectively termed the AI Global North. As such, indiscriminate adoption of this technology for DR education, even in part, risks perpetuating forms of digital coloniality. Therefore, this paper centers embracing AI-facilitated educational reform by critically examining how AI-driven tools like ChatGPT in DR education may replicate facets of digital colonialism. We provide a concise overview of 20th-century Dominican education reforms following the 1916 US occupation. Then, we employ identified neocolonial aspects historically shaping Dominican education to interrogate the perceived advantages of ChatGPT for contemporary Dominican education, as outlined by a Dominican scholar. This work invites AI Global North & South developers, stakeholders, and Dominican leaders alike to exercise a relational contextualization of data-centric epistemologies like ChatGPT to reap its transformative benefits while remaining vigilant of safeguarding Dominican digital sovereignty.
[ { "created": "Thu, 26 Oct 2023 16:20:35 GMT", "version": "v1" }, { "created": "Tue, 31 Oct 2023 13:41:49 GMT", "version": "v2" } ]
2023-11-01
[ [ "Ovalle", "Anaelia", "" ] ]
Educational disparities within the Dominican Republic (DR) have long-standing origins rooted in economic, political, and social inequity. Addressing these challenges has necessarily called for capacity building with respect to educational materials, high-quality instruction, and structural resourcing. Generative AI tools like ChatGPT have begun to pique the interest of Dominican educators due to their perceived potential to bridge these educational gaps. However, a substantial body of AI fairness literature has documented ways AI disproportionately reinforces power dynamics reflective of jurisdictions driving AI development and deployment policies, collectively termed the AI Global North. As such, indiscriminate adoption of this technology for DR education, even in part, risks perpetuating forms of digital coloniality. Therefore, this paper centers embracing AI-facilitated educational reform by critically examining how AI-driven tools like ChatGPT in DR education may replicate facets of digital colonialism. We provide a concise overview of 20th-century Dominican education reforms following the 1916 US occupation. Then, we employ identified neocolonial aspects historically shaping Dominican education to interrogate the perceived advantages of ChatGPT for contemporary Dominican education, as outlined by a Dominican scholar. This work invites AI Global North & South developers, stakeholders, and Dominican leaders alike to exercise a relational contextualization of data-centric epistemologies like ChatGPT to reap its transformative benefits while remaining vigilant of safeguarding Dominican digital sovereignty.
2201.02113
Jose Bonet
Jos\'e Bonet and Jos\'e Bonet
ConTrip: Consensus Sentiment review Analysis and Platform ratings in a single score
4 pages, 1 figure
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
People unequivocally rely on reviews to decide on purchasing an item or an experience on the internet. In that regard, the growing significance and number of opinions have led to the development of methods to assess their sentiment content automatically. However, it is not straightforward for these models to create a consensus value that embodies the agreement among different reviews and differentiates across equal ratings for an item. Based on the approach proposed by Nguyen et al. in 2020, we derive a novel consensus value named ConTrip that merges their consensus score and the overall platform rating of an item. ConTrip lies within the rating range, which makes it more interpretable while maintaining the ability to differentiate across equally rated experiences. ConTrip is implemented and freely available under the MIT license at https://github.com/pepebonet/contripscore
[ { "created": "Thu, 6 Jan 2022 15:50:34 GMT", "version": "v1" } ]
2022-01-07
[ [ "Bonet", "José", "" ], [ "Bonet", "José", "" ] ]
People unequivocally rely on reviews to decide on purchasing an item or an experience on the internet. In that regard, the growing significance and number of opinions have led to the development of methods to assess their sentiment content automatically. However, it is not straightforward for these models to create a consensus value that embodies the agreement among different reviews and differentiates across equal ratings for an item. Based on the approach proposed by Nguyen et al. in 2020, we derive a novel consensus value named ConTrip that merges their consensus score and the overall platform rating of an item. ConTrip lies within the rating range, which makes it more interpretable while maintaining the ability to differentiate across equally rated experiences. ConTrip is implemented and freely available under the MIT license at https://github.com/pepebonet/contripscore
2302.12611
Nils Dycke
Dennis Zyska, Nils Dycke, Jan Buchmann, Ilia Kuznetsov, Iryna Gurevych
CARE: Collaborative AI-Assisted Reading Environment
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent years have seen impressive progress in AI-assisted writing, yet developments in AI-assisted reading are lacking. We propose inline commentary as a natural vehicle for AI-based reading assistance, and present CARE: the first open integrated platform for the study of inline commentary and reading. CARE facilitates data collection for inline commentaries in a commonplace collaborative reading environment, and provides a framework for enhancing reading with NLP-based assistance, such as text classification, generation, or question answering. The extensible behavioral logging allows unique insights into reading and commenting behavior, and flexible configuration makes the platform easy to deploy in new scenarios. To evaluate CARE in action, we apply the platform in a user study dedicated to scholarly peer review. CARE facilitates the data collection and study of inline commentary in NLP, extrinsic evaluation of NLP assistance, and application prototyping. We invite the community to explore and build upon the open-source implementation of CARE.
[ { "created": "Fri, 24 Feb 2023 12:55:31 GMT", "version": "v1" } ]
2023-02-27
[ [ "Zyska", "Dennis", "" ], [ "Dycke", "Nils", "" ], [ "Buchmann", "Jan", "" ], [ "Kuznetsov", "Ilia", "" ], [ "Gurevych", "Iryna", "" ] ]
Recent years have seen impressive progress in AI-assisted writing, yet developments in AI-assisted reading are lacking. We propose inline commentary as a natural vehicle for AI-based reading assistance, and present CARE: the first open integrated platform for the study of inline commentary and reading. CARE facilitates data collection for inline commentaries in a commonplace collaborative reading environment, and provides a framework for enhancing reading with NLP-based assistance, such as text classification, generation, or question answering. The extensible behavioral logging allows unique insights into reading and commenting behavior, and flexible configuration makes the platform easy to deploy in new scenarios. To evaluate CARE in action, we apply the platform in a user study dedicated to scholarly peer review. CARE facilitates the data collection and study of inline commentary in NLP, extrinsic evaluation of NLP assistance, and application prototyping. We invite the community to explore and build upon the open-source implementation of CARE.