Dataset schema (column: type, length/size range):
id: string, 9-10
submitter: string, 1-64
authors: string, 4-20.7k
title: string, 4-246
comments: string, 1-523
journal-ref: string, 4-404
doi: string, 11-153
report-no: string, 2-254
categories: string, 5-98
license: string, 9 classes
orig_abstract: string, 14-3.35k
versions: list, 1-60 entries
update_date: string, 10-10
authors_parsed: list, 1-1.35k entries
abstract: string, 11-3.34k
2011.03186
Chong Liu
Chong Liu, Yuqing Zhu, Kamalika Chaudhuri, and Yu-Xiang Wang
Revisiting Model-Agnostic Private Learning: Faster Rates and Active Learning
null
Journal of Machine Learning Research 22(262) (2021) 1-44
null
null
cs.LG cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Private Aggregation of Teacher Ensembles (PATE) framework is one of the most promising recent approaches in differentially private learning. Existing theoretical analysis shows that PATE consistently learns any VC-classes in the realizable setting, but falls short in explaining its success in more general cases where the error rate of the optimal classifier is bounded away from zero. We fill in this gap by introducing the Tsybakov Noise Condition (TNC) and establish stronger and more interpretable learning bounds. These bounds provide new insights into when PATE works and improve over existing results even in the narrower realizable setting. We also investigate the compelling idea of using active learning for saving privacy budget, and empirical studies show the effectiveness of this new idea. The novel components in the proofs include a more refined analysis of the majority voting classifier - which could be of independent interest - and an observation that the synthetic "student" learning problem is nearly realizable by construction under the Tsybakov noise condition.
[ { "created": "Fri, 6 Nov 2020 04:35:32 GMT", "version": "v1" }, { "created": "Fri, 13 Nov 2020 08:19:15 GMT", "version": "v2" }, { "created": "Tue, 21 Sep 2021 18:02:38 GMT", "version": "v3" }, { "created": "Fri, 11 Mar 2022 22:44:07 GMT", "version": "v4" } ]
2022-03-15
[ [ "Liu", "Chong", "" ], [ "Zhu", "Yuqing", "" ], [ "Chaudhuri", "Kamalika", "" ], [ "Wang", "Yu-Xiang", "" ] ]
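The PATE aggregation this abstract analyzes is, at its core, a noisy majority vote over teacher labels. A minimal sketch for a single query follows; the Laplace mechanism and noise scale used here are the textbook choice, not necessarily the paper's exact variant, and the function name is ours:

```python
import numpy as np

def pate_aggregate(teacher_preds, num_classes, epsilon, rng=None):
    """Noisy majority vote over teacher labels for one query.

    teacher_preds: per-teacher predicted labels for a single input.
    epsilon: per-query privacy parameter; Laplace noise of scale
             2/epsilon on the vote counts is the standard choice
             for differentially private aggregation (assumed here).
    """
    rng = np.random.default_rng(rng)
    # Tally votes per class, then perturb each count with Laplace noise.
    votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    noisy = votes + rng.laplace(scale=2.0 / epsilon, size=num_classes)
    return int(np.argmax(noisy))
```

With a large teacher margin (here, unanimous votes) and a loose privacy budget, the noisy argmax almost surely returns the true majority class, which is the regime in which the "student" problem becomes nearly realizable.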
2004.00472
Dileep Kalathil
Archana Bura, Desik Rengarajan, Dileep Kalathil, Srinivas Shakkottai, and Jean-Francois Chamberland-Tremblay
Learning to Cache and Caching to Learn: Regret Analysis of Caching Algorithms
null
null
null
null
cs.NI cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Crucial performance metrics of a caching algorithm include its ability to quickly and accurately learn a popularity distribution of requests. However, a majority of work on analytical performance analysis focuses on hit probability after an asymptotically large time has elapsed. We consider an online learning viewpoint, and characterize the "regret" in terms of the finite time difference between the hits achieved by a candidate caching algorithm with respect to a genie-aided scheme that places the most popular items in the cache. We first consider the Full Observation regime wherein all requests are seen by the cache. We show that the Least Frequently Used (LFU) algorithm is able to achieve order optimal regret, which is matched by an efficient counting algorithm design that we call LFU-Lite. We then consider the Partial Observation regime wherein only requests for items currently cached are seen by the cache, making it similar to an online learning problem related to the multi-armed bandit problem. We show how approaching this "caching bandit" using traditional approaches yields either high complexity or regret, but a simple algorithm design that exploits the structure of the distribution can ensure order optimal regret. We conclude by illustrating our insights using numerical simulations.
[ { "created": "Wed, 1 Apr 2020 14:38:53 GMT", "version": "v1" } ]
2020-04-02
[ [ "Bura", "Archana", "" ], [ "Rengarajan", "Desik", "" ], [ "Kalathil", "Dileep", "" ], [ "Shakkottai", "Srinivas", "" ], [ "Chamberland-Tremblay", "Jean-Francois", "" ] ]
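The full-observation LFU rule this abstract analyzes can be sketched in a few lines: count every request and keep the most frequent items cached. This is an illustrative replay over a trace (the efficient LFU-Lite counting design from the paper is not reproduced, and the function name is ours):

```python
from collections import Counter

def lfu_cache_trace(requests, cache_size):
    """Replay a request trace under LFU with full observation.

    Every request updates a global frequency count; the cache always
    holds the cache_size most frequently requested items seen so far.
    Returns the total number of cache hits, the quantity whose gap to
    a genie-aided cache defines the regret.
    """
    counts = Counter()
    cache = set()
    hits = 0
    for item in requests:
        if item in cache:
            hits += 1
        counts[item] += 1
        # Recompute the top-k set; a real implementation would update it
        # incrementally instead of rebuilding on every request.
        cache = {it for it, _ in counts.most_common(cache_size)}
    return hits
```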
2306.04911
Jungwuk Park
Jungwuk Park, Dong-Jun Han, Soyeong Kim, Jaekyun Moon
Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization
ICML 2023 camera-ready version
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In domain generalization (DG), the target domain is unknown when the model is being trained, and the trained model should successfully work on an arbitrary (and possibly unseen) target domain during inference. This is a difficult problem, and despite active studies in recent years, it remains a great challenge. In this paper, we take a simple yet effective approach to tackle this issue. We propose test-time style shifting, which shifts the style of the test sample (that has a large style gap with the source domains) to the nearest source domain that the model is already familiar with, before making the prediction. This strategy enables the model to handle any target domains with arbitrary style statistics, without additional model update at test-time. Additionally, we propose style balancing, which provides a great platform for maximizing the advantage of test-time style shifting by handling the DG-specific imbalance issues. The proposed ideas are easy to implement and successfully work in conjunction with various other DG schemes. Experimental results on different datasets show the effectiveness of our methods.
[ { "created": "Thu, 8 Jun 2023 03:26:16 GMT", "version": "v1" }, { "created": "Tue, 13 Jun 2023 00:37:33 GMT", "version": "v2" } ]
2023-06-14
[ [ "Park", "Jungwuk", "" ], [ "Han", "Dong-Jun", "" ], [ "Kim", "Soyeong", "" ], [ "Moon", "Jaekyun", "" ] ]
1905.07065
Li Chen
Li Chen
Privacy Preserving Adjacency Spectral Embedding on Stochastic Blockmodels
Accepted at Learning and Reasoning with Graph-Structured Representations at ICML 2019
null
null
null
cs.LG cs.CR stat.ME stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For graphs generated from stochastic blockmodels, adjacency spectral embedding is asymptotically consistent. Further, adjacency spectral embedding composed with a universally consistent classifier is universally consistent, achieving the Bayes error. However, when the graph contains private or sensitive information, treating the data as non-private can leak privacy and incur disclosure risks. In this paper, we propose a differentially private adjacency spectral embedding algorithm for stochastic blockmodels. We demonstrate that our proposed methodology estimates latent positions close, in Frobenius norm, to those produced by non-private adjacency spectral embedding, and achieves comparable accuracy at the desired privacy parameters on simulated and real-world networks.
[ { "created": "Thu, 16 May 2019 23:43:45 GMT", "version": "v1" } ]
2019-05-20
[ [ "Chen", "Li", "" ] ]
2406.07115
Yibo Wang
Sijia Chen, Yibo Wang, Yi-Feng Wu, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, Lijun Zhang
Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Tool-augmented large language models (LLMs) leverage tools, often in the form of APIs, to enhance their reasoning capabilities on complex tasks, thus taking on the role of intelligent agents interacting with the real world. The recently introduced ToolLLaMA model by Qin et al. [2024] utilizes the depth-first search-based decision tree (DFSDT) method for reasoning with $16000+$ real-world APIs, which effectively improves the planning and inferencing performance of tool-augmented LLMs compared to traditional chain reasoning approaches. However, their approach only employs successful paths from decision trees (also called inference trees) for supervised fine-tuning (SFT) during training, which does not fully exploit the advantages of the tree of thought. In this study, we propose an inference trajectory optimization framework based on the preference data extracted from decision trees to address this limitation. We first introduce a novel method for constructing preference data from the tree of thought, capitalizing on the failed explorations previously overlooked in the trees. Specifically, we generate an effective step-wise preference dataset, named ToolPreference, for tool use based on the ToolBench dataset. In the subsequent training phase, we first fine-tune the LLM with tool-usage expert trajectories and then use these step-wise preference pairs for direct preference optimization (DPO) to update the policy of the LLM, resulting in our ToolPrefer-LLaMA (TP-LLaMA) model. Our experiments demonstrate that by obtaining insights from errors in inference trees, TP-LLaMA significantly outperforms the baselines across almost all test scenarios by a large margin and exhibits better generalization capabilities with unseen APIs. At the same time, TP-LLaMA has also demonstrated superior reasoning efficiency compared to the baselines, making it more suitable for complex tool-usage reasoning tasks.
[ { "created": "Tue, 11 Jun 2024 10:00:18 GMT", "version": "v1" } ]
2024-06-12
[ [ "Chen", "Sijia", "" ], [ "Wang", "Yibo", "" ], [ "Wu", "Yi-Feng", "" ], [ "Chen", "Qing-Guo", "" ], [ "Xu", "Zhao", "" ], [ "Luo", "Weihua", "" ], [ "Zhang", "Kaifu", "" ], [ "Zhang", "Lijun", "" ] ]
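The DPO step this abstract builds on has a compact closed form. The sketch below is the standard DPO objective for one preference pair (Rafailov et al.'s formulation, not the paper's step-wise tree-node application; the function name is ours):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Direct preference optimization loss for one preference pair.

    logp_w / logp_l: summed log-probabilities of the preferred and
    dispreferred trajectories under the policy being trained;
    ref_logp_w / ref_logp_l: the same under the frozen reference
    (SFT) model. Minimizing the loss widens the policy's log-ratio
    margin between the winner and the loser relative to the reference.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)) == log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))
```

When the policy matches the reference model exactly, the margin is zero and the loss is log 2, its value before any preference signal has been absorbed.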
1503.03270
Vandna Bhalla Ms
Vandna Bhalla, Santanu Chaudhury, Arihant Jain
A Novel Hybrid CNN-AIS Visual Pattern Recognition Engine
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/3.0/
Machine learning methods are used today for most recognition problems. Convolutional Neural Networks (CNNs) have repeatedly proved successful for many image processing tasks, primarily because of their architecture. In this paper we propose to apply CNNs to small datasets, such as personal albums or other similar settings where the size of the training dataset is a limitation, within the framework of a proposed hybrid CNN-AIS model. We use Artificial Immune System principles to augment the small training dataset. A Clonal Selection layer is added to the local filtering and max pooling of the CNN architecture. The proposed architecture is evaluated on the standard MNIST dataset with the data size limited, and also on a small personal data sample belonging to two different classes. Experimental results show that the proposed hybrid CNN-AIS recognition engine works well when the training data is limited in size.
[ { "created": "Wed, 11 Mar 2015 10:58:25 GMT", "version": "v1" } ]
2015-03-12
[ [ "Bhalla", "Vandna", "" ], [ "Chaudhury", "Santanu", "" ], [ "Jain", "Arihant", "" ] ]
1906.03764
Adam Harley
Adam W. Harley and Shrinidhi K. Lakshmikanth and Fangyu Li and Xian Zhou and Hsiao-Yu Fish Tung and Katerina Fragkiadaki
Learning from Unlabelled Videos Using Contrastive Predictive Neural 3D Mapping
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction. One of the most basic prediction tasks is view prediction: how would a given scene look from an alternative viewpoint? Humans excel at this task. Our ability to imagine and fill in missing information is tightly coupled with perception: we feel as if we see the world in 3 dimensions, while in fact, information from only the front surface of the world hits our retinas. This paper explores the role of view prediction in the development of 3D visual recognition. We propose neural 3D mapping networks, which take as input 2.5D (color and depth) video streams captured by a moving camera, and lift them to stable 3D feature maps of the scene, by disentangling the scene content from the motion of the camera. The model also projects its 3D feature maps to novel viewpoints, to predict and match against target views. We propose contrastive prediction losses to replace the standard color regression loss, and show that this leads to better performance on complex photorealistic data. We show that the proposed model learns visual representations useful for (1) semi-supervised learning of 3D object detectors, and (2) unsupervised learning of 3D moving object detectors, by estimating the motion of the inferred 3D feature maps in videos of dynamic scenes. To the best of our knowledge, this is the first work that empirically shows view prediction to be a scalable self-supervised task beneficial to 3D object detection.
[ { "created": "Mon, 10 Jun 2019 01:53:42 GMT", "version": "v1" }, { "created": "Mon, 24 Jun 2019 02:02:58 GMT", "version": "v2" }, { "created": "Wed, 10 Jul 2019 23:02:29 GMT", "version": "v3" }, { "created": "Mon, 30 Sep 2019 18:52:19 GMT", "version": "v4" }, { "created": "Mon, 17 Feb 2020 17:09:42 GMT", "version": "v5" }, { "created": "Sun, 17 May 2020 02:16:28 GMT", "version": "v6" } ]
2020-05-19
[ [ "Harley", "Adam W.", "" ], [ "Lakshmikanth", "Shrinidhi K.", "" ], [ "Li", "Fangyu", "" ], [ "Zhou", "Xian", "" ], [ "Tung", "Hsiao-Yu Fish", "" ], [ "Fragkiadaki", "Katerina", "" ] ]
2202.06600
JunJie Li
Junjie Li and Hui Cao
Research on Dual Channel News Headline Classification Based on ERNIE Pre-training Model
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The classification of news headlines is an important direction in the field of NLP, and its data is characterized by compactness, uniqueness, and varied forms. To address the problem that traditional neural network models cannot adequately capture the underlying feature information of the data and cannot jointly extract key global features and deep local features, a dual-channel network model, DC-EBAD, based on the ERNIE pre-training model is proposed. ERNIE is used to extract the lexical, semantic, and contextual feature information at the bottom of the text and to generate dynamic word vector representations fused with context. The BiLSTM-AT network channel then further extracts the global features of the data, using the attention mechanism to give key parts a higher weight, while the DPCNN channel is used to overcome the long-distance text dependence problem and obtain deep local features. The local and global feature vectors are concatenated and passed to the fully connected layer, and the final classification result is output through Softmax. The experimental results show that the proposed model improves the accuracy, precision, and F1-score of news headline classification compared with traditional neural network models and single-channel models under the same conditions, indicating that it performs well in multi-class news headline classification at large data volumes.
[ { "created": "Mon, 14 Feb 2022 10:44:12 GMT", "version": "v1" } ]
2022-02-15
[ [ "Li", "Junjie", "" ], [ "Cao", "Hui", "" ] ]
2309.00242
Sepideh Aghamolaei
Sepideh Aghamolaei and Mohammad Ghodsi
A Massively Parallel Dynamic Programming for Approximate Rectangle Escape Problem
null
null
null
null
cs.CG cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The massively parallel computation (MPC) model requires sublinear time complexity, so we break dynamic programs into a set of sparse dynamic programs that can be divided, solved, and merged in sublinear time. The rectangle escape problem (REP) is defined as follows: for $n$ axis-aligned rectangles inside an axis-aligned bounding box $B$, extend each rectangle in only one of the four directions (up, down, left, or right) until it reaches $B$, minimizing the density $k$, where $k$ is the maximum number of rectangle extensions that pass through any point inside $B$. REP is NP-hard for $k>1$. If the rectangles are points of a grid (or unit squares of a grid), the problem is called the square escape problem (SEP) and is still NP-hard. We give a $2$-approximation algorithm for SEP with $k\geq2$ with time complexity $O(n^{3/2}k^2)$, improving on existing algorithms, which are at least quadratic. Moreover, the approximation ratio of our algorithm for $k\geq 3$ is $3/2$, which is tight. We also give an $8$-approximation algorithm for REP with time complexity $O(n\log n+nk)$ and an MPC version of this algorithm for $k=O(1)$, which is the first parallel algorithm for this problem.
[ { "created": "Fri, 1 Sep 2023 04:23:15 GMT", "version": "v1" } ]
2023-09-04
[ [ "Aghamolaei", "Sepideh", "" ], [ "Ghodsi", "Mohammad", "" ] ]
2205.02393
Aili Shen
Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann
Optimising Equal Opportunity Fairness in Model Training
Accepted to NAACL 2022 main conference
null
null
null
cs.LG cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-world datasets often encode stereotypes and societal biases. Such biases can be implicitly captured by trained models, leading to biased predictions and exacerbating existing societal preconceptions. Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias. However, a disconnect between fairness criteria and training objectives makes it difficult to reason theoretically about the effectiveness of different techniques. In this work, we propose two novel training objectives which directly optimise for the widely-used criterion of {\it equal opportunity}, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.
[ { "created": "Thu, 5 May 2022 01:57:58 GMT", "version": "v1" } ]
2022-05-06
[ [ "Shen", "Aili", "" ], [ "Han", "Xudong", "" ], [ "Cohn", "Trevor", "" ], [ "Baldwin", "Timothy", "" ], [ "Frermann", "Lea", "" ] ]
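The equal-opportunity criterion this abstract optimizes asks that the true positive rate be equal across protected groups. A minimal sketch of one soft surrogate penalty follows; this is our illustration of the criterion, not the authors' exact training objective, and the function name is ours:

```python
import numpy as np

def equal_opportunity_gap(probs, labels, groups):
    """Soft surrogate for the equal-opportunity gap.

    Restricted to positive-label examples, compare the mean predicted
    positive probability per protected group (a differentiable proxy
    for the true positive rate) and return the largest absolute
    pairwise gap. Adding this penalty to the task loss is one simple
    way to trade accuracy against fairness.
    """
    probs, labels, groups = map(np.asarray, (probs, labels, groups))
    rates = [probs[(labels == 1) & (groups == g)].mean()
             for g in np.unique(groups)]
    return max(abs(a - b) for a in rates for b in rates)
```

A gap of zero means the model is (softly) equally likely to score true positives highly in every group; larger values flag a disparity.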
2408.08002
Srinivas Vivek
Deep Inder Mohan and Srinivas Vivek
Practical Privacy-Preserving Identity Verification using Third-Party Cloud Services and FHE (Role of Data Encoding in Circuit Depth Management)
This work was presented (without proceedings) at the Turing Trustworthy Digital Identity International Conference 2022 at The Alan Turing Institute, London, UK, on Sep. 16, 2022
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
National digital identity verification systems have played a critical role in the effective distribution of goods and services, particularly in developing countries. Due to the cost involved in deploying and maintaining such systems, combined with a lack of in-house technical expertise, governments seek to outsource this service to third-party cloud service providers to the extent possible. This leads to increased concerns regarding the privacy of users' personal data. In this work, we propose a practical privacy-preserving digital identity (ID) verification protocol in which third-party cloud services process identity data encrypted under a (single-key) Fully Homomorphic Encryption (FHE) scheme such as BFV. Though the role of a trusted entity such as the government is not completely eliminated, our protocol does significantly reduce the computation load on such parties. A challenge in implementing a privacy-preserving ID verification protocol using FHE is to support various types of queries, such as exact and/or fuzzy demographic and biometric matches, including secure age comparisons. From a cryptographic engineering perspective, our main technical contribution is a user data encoding scheme that encodes demographic and biometric user data in only two BFV ciphertexts and yet allows us to outsource various types of ID verification queries to a third-party cloud. Our encoding scheme also ensures that the only computation done by the trusted entity is a query-agnostic "extended" decryption. This is in stark contrast with recent works that outsource all the non-arithmetic operations to a trusted server. We implement our protocol using the Microsoft SEAL FHE library and demonstrate its practicality.
[ { "created": "Thu, 15 Aug 2024 08:12:07 GMT", "version": "v1" } ]
2024-08-16
[ [ "Mohan", "Deep Inder", "" ], [ "Vivek", "Srinivas", "" ] ]
National digital identity verification systems have played a critical role in the effective distribution of goods and services, particularly in developing countries. Due to the cost involved in deploying and maintaining such systems, combined with a lack of in-house technical expertise, governments seek to outsource this service to third-party cloud service providers to the extent possible. This leads to increased concerns regarding the privacy of users' personal data. In this work, we propose a practical privacy-preserving digital identity (ID) verification protocol where the third-party cloud services process the identity data encrypted using a (single-key) Fully Homomorphic Encryption (FHE) scheme such as BFV. Though the role of a trusted entity such as a government is not completely eliminated, our protocol significantly reduces the computation load on such parties. A challenge in implementing a privacy-preserving ID verification protocol using FHE is to support various types of queries such as exact and/or fuzzy demographic and biometric matches including secure age comparisons. From a cryptographic engineering perspective, our main technical contribution is a user data encoding scheme that encodes demographic and biometric user data in only two BFV ciphertexts and yet facilitates us to outsource various types of ID verification queries to a third-party cloud. Our encoding scheme also ensures that the only computation done by the trusted entity is a query-agnostic "extended" decryption. This is in stark contrast with recent works that outsource all the non-arithmetic operations to a trusted server. We implement our protocol using the Microsoft SEAL FHE library and demonstrate its practicality.
2403.07199
Fabian Weigend
Fabian C Weigend, Xiao Liu, Shubham Sonawani, Neelesh Kumar, Venugopal Vasudevan, Heni Ben Amor
iRoCo: Intuitive Robot Control From Anywhere Using a Smartwatch
7 pages, 7 Figures, 4 Tables, Conference: ICRA
null
10.1109/ICRA57147.2024.10610805
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
This paper introduces iRoCo (intuitive Robot Control) - a framework for ubiquitous human-robot collaboration using a single smartwatch and smartphone. By integrating probabilistic differentiable filters, iRoCo optimizes a combination of precise robot control and unrestricted user movement from ubiquitous devices. We demonstrate and evaluate the effectiveness of iRoCo in practical teleoperation and drone piloting applications. Comparative analysis shows no significant difference between task performance with iRoCo and gold-standard control systems in teleoperation tasks. Additionally, iRoCo users complete drone piloting tasks 32\% faster than with a traditional remote control and report less frustration in a subjective load index questionnaire. Our findings strongly suggest that iRoCo is a promising new approach for intuitive robot control through smartwatches and smartphones from anywhere, at any time. The code is available at www.github.com/wearable-motion-capture
[ { "created": "Mon, 11 Mar 2024 22:47:07 GMT", "version": "v1" } ]
2024-08-15
[ [ "Weigend", "Fabian C", "" ], [ "Liu", "Xiao", "" ], [ "Sonawani", "Shubham", "" ], [ "Kumar", "Neelesh", "" ], [ "Vasudevan", "Venugopal", "" ], [ "Amor", "Heni Ben", "" ] ]
This paper introduces iRoCo (intuitive Robot Control) - a framework for ubiquitous human-robot collaboration using a single smartwatch and smartphone. By integrating probabilistic differentiable filters, iRoCo optimizes a combination of precise robot control and unrestricted user movement from ubiquitous devices. We demonstrate and evaluate the effectiveness of iRoCo in practical teleoperation and drone piloting applications. Comparative analysis shows no significant difference between task performance with iRoCo and gold-standard control systems in teleoperation tasks. Additionally, iRoCo users complete drone piloting tasks 32\% faster than with a traditional remote control and report less frustration in a subjective load index questionnaire. Our findings strongly suggest that iRoCo is a promising new approach for intuitive robot control through smartwatches and smartphones from anywhere, at any time. The code is available at www.github.com/wearable-motion-capture
2109.02353
Hang Liu
Hang Liu, Zehong Lin, Xiaojun Yuan, and Ying-Jun Angela Zhang
Reconfigurable Intelligent Surface Empowered Over-the-Air Federated Edge Learning
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.IT cs.LG cs.NI eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated edge learning (FEEL) has emerged as a revolutionary paradigm to develop AI services at the edge of 6G wireless networks as it supports collaborative model training at a massive number of mobile devices. However, model communication over wireless channels, especially in uplink model uploading of FEEL, has been widely recognized as a bottleneck that critically limits the efficiency of FEEL. Although over-the-air computation can alleviate the excessive cost of radio resources in FEEL model uploading, practical implementations of over-the-air FEEL still suffer from several challenges, including strong straggler issues, large communication overheads, and potential privacy leakage. In this article, we study these challenges in over-the-air FEEL and leverage reconfigurable intelligent surface (RIS), a key enabler of future wireless systems, to address these challenges. We study the state-of-the-art solutions on RIS-empowered FEEL and explore the promising research opportunities for adopting RIS to enhance FEEL performance.
[ { "created": "Mon, 6 Sep 2021 10:44:54 GMT", "version": "v1" }, { "created": "Wed, 20 Jul 2022 03:33:40 GMT", "version": "v2" } ]
2022-07-21
[ [ "Liu", "Hang", "" ], [ "Lin", "Zehong", "" ], [ "Yuan", "Xiaojun", "" ], [ "Zhang", "Ying-Jun Angela", "" ] ]
Federated edge learning (FEEL) has emerged as a revolutionary paradigm to develop AI services at the edge of 6G wireless networks as it supports collaborative model training at a massive number of mobile devices. However, model communication over wireless channels, especially in uplink model uploading of FEEL, has been widely recognized as a bottleneck that critically limits the efficiency of FEEL. Although over-the-air computation can alleviate the excessive cost of radio resources in FEEL model uploading, practical implementations of over-the-air FEEL still suffer from several challenges, including strong straggler issues, large communication overheads, and potential privacy leakage. In this article, we study these challenges in over-the-air FEEL and leverage reconfigurable intelligent surface (RIS), a key enabler of future wireless systems, to address these challenges. We study the state-of-the-art solutions on RIS-empowered FEEL and explore the promising research opportunities for adopting RIS to enhance FEEL performance.
2310.07397
Jian Wang
Jian Wang, Yi Cheng, Dongding Lin, Chak Tou Leong, Wenjie Li
Target-oriented Proactive Dialogue Systems with Personalization: Problem Formulation and Dataset Curation
Accepted to EMNLP-2023 main conference
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Target-oriented dialogue systems, designed to proactively steer conversations toward predefined targets or accomplish specific system-side goals, are an exciting area in conversational AI. In this work, by formulating a <dialogue act, topic> pair as the conversation target, we explore a novel problem of personalized target-oriented dialogue by considering personalization during the target accomplishment process. However, there remains an emergent need for high-quality datasets, and building one from scratch requires tremendous human effort. To address this, we propose an automatic dataset curation framework using a role-playing approach. Based on this framework, we construct a large-scale personalized target-oriented dialogue dataset, TopDial, which comprises about 18K multi-turn dialogues. The experimental results show that this dataset is of high quality and could contribute to exploring personalized target-oriented dialogue.
[ { "created": "Wed, 11 Oct 2023 11:32:57 GMT", "version": "v1" }, { "created": "Fri, 13 Oct 2023 11:16:58 GMT", "version": "v2" } ]
2023-10-16
[ [ "Wang", "Jian", "" ], [ "Cheng", "Yi", "" ], [ "Lin", "Dongding", "" ], [ "Leong", "Chak Tou", "" ], [ "Li", "Wenjie", "" ] ]
Target-oriented dialogue systems, designed to proactively steer conversations toward predefined targets or accomplish specific system-side goals, are an exciting area in conversational AI. In this work, by formulating a <dialogue act, topic> pair as the conversation target, we explore a novel problem of personalized target-oriented dialogue by considering personalization during the target accomplishment process. However, there remains an emergent need for high-quality datasets, and building one from scratch requires tremendous human effort. To address this, we propose an automatic dataset curation framework using a role-playing approach. Based on this framework, we construct a large-scale personalized target-oriented dialogue dataset, TopDial, which comprises about 18K multi-turn dialogues. The experimental results show that this dataset is of high quality and could contribute to exploring personalized target-oriented dialogue.
1801.09522
Sharath Adavanne
Sharath Adavanne, Archontis Politis, Tuomas Virtanen
Multichannel Sound Event Detection Using 3D Convolutional Neural Networks for Learning Inter-channel Features
null
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we propose a stacked convolutional and recurrent neural network (CRNN) with a 3D convolutional neural network (CNN) in the first layer for the multichannel sound event detection (SED) task. The 3D CNN enables the network to simultaneously learn the inter- and intra-channel features from the input multichannel audio. In order to evaluate the proposed method, multichannel audio datasets with different numbers of overlapping sound sources are synthesized. Each of these datasets has four-channel first-order Ambisonic, binaural, and single-channel versions, on which the performance of SED using the proposed method is compared to study the potential of SED using multichannel audio. A similar study is also done with the binaural and single-channel versions of the real-life recording TUT-SED 2017 development dataset. The proposed method learns to recognize overlapping sound events from multichannel features faster and performs better SED with fewer training epochs. The results show that on using multichannel Ambisonic audio in place of single-channel audio we improve the overall F-score by 7.5% and the overall error rate by 10%, and recognize 15.6% more sound events in time frames with four overlapping sound sources.
[ { "created": "Mon, 29 Jan 2018 14:24:39 GMT", "version": "v1" } ]
2018-01-30
[ [ "Adavanne", "Sharath", "" ], [ "Politis", "Archontis", "" ], [ "Virtanen", "Tuomas", "" ] ]
In this paper, we propose a stacked convolutional and recurrent neural network (CRNN) with a 3D convolutional neural network (CNN) in the first layer for the multichannel sound event detection (SED) task. The 3D CNN enables the network to simultaneously learn the inter- and intra-channel features from the input multichannel audio. In order to evaluate the proposed method, multichannel audio datasets with different numbers of overlapping sound sources are synthesized. Each of these datasets has four-channel first-order Ambisonic, binaural, and single-channel versions, on which the performance of SED using the proposed method is compared to study the potential of SED using multichannel audio. A similar study is also done with the binaural and single-channel versions of the real-life recording TUT-SED 2017 development dataset. The proposed method learns to recognize overlapping sound events from multichannel features faster and performs better SED with fewer training epochs. The results show that on using multichannel Ambisonic audio in place of single-channel audio we improve the overall F-score by 7.5% and the overall error rate by 10%, and recognize 15.6% more sound events in time frames with four overlapping sound sources.
1809.04662
Marco Baldi
Massimo Battaglioni, Alireza Tasdighi, Marco Baldi, Mohammad H. Tadayon, Franco Chiaraluce
Compact QC-LDPC Block and SC-LDPC Convolutional Codes for Low-Latency Communications
5 pages, 1 figure, presented at IEEE PIMRC 2018
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Low decoding latency and complexity are two important requirements of channel codes used in many applications, such as machine-to-machine communications. In this paper, we show how these requirements can be fulfilled by using some special quasi-cyclic low-density parity-check block codes and spatially coupled low-density parity-check convolutional codes that we denote as compact. They are defined by parity-check matrices designed according to a recent approach based on sequentially multiplied columns. This method allows obtaining codes with girth up to 12. Many numerical examples of practical codes are provided.
[ { "created": "Wed, 12 Sep 2018 20:26:31 GMT", "version": "v1" } ]
2018-09-14
[ [ "Battaglioni", "Massimo", "" ], [ "Tasdighi", "Alireza", "" ], [ "Baldi", "Marco", "" ], [ "Tadayon", "Mohammad H.", "" ], [ "Chiaraluce", "Franco", "" ] ]
Low decoding latency and complexity are two important requirements of channel codes used in many applications, such as machine-to-machine communications. In this paper, we show how these requirements can be fulfilled by using some special quasi-cyclic low-density parity-check block codes and spatially coupled low-density parity-check convolutional codes that we denote as compact. They are defined by parity-check matrices designed according to a recent approach based on sequentially multiplied columns. This method allows obtaining codes with girth up to 12. Many numerical examples of practical codes are provided.
1909.04770
Oscar Luis Vera P\'erez
Oscar Luis Vera-P\'erez, Benjamin Danglot, Martin Monperrus, Benoit Baudry
Suggestions on Test Suite Improvements with Automatic Infection and Propagation Analysis
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An extreme transformation removes the body of a method that is reached by at least one test case. If the test suite passes on the original program and still passes after the extreme transformation, the transformation is said to be undetected, and the test suite needs to be improved. In this work we propose a technique to automatically determine which of the following three reasons prevents the detection of the extreme transformation: the test inputs are not sufficient to infect the state of the program; the infection does not propagate to the test cases; the test cases have a weak oracle that does not observe the infection. We have developed Reneri, a tool that observes the program under test and the test suite in order to determine runtime differences between test runs on the original and the transformed method. The observations gathered during the analysis are processed by Reneri to suggest possible improvements to the developers. We evaluate Reneri on 15 projects and a total of 312 undetected extreme transformations. The tool is able to generate a suggestion for each undetected transformation. For 63% of the cases, the existing test cases can infect the program state, meaning that undetected transformations are mostly due to observability and weak oracle issues. Interviews with developers confirm the relevance of the suggested improvements, and experiments with state-of-the-art automatic test generation tools indicate that no tool can improve the existing test suites to fix all undetected transformations.
[ { "created": "Tue, 10 Sep 2019 21:46:01 GMT", "version": "v1" } ]
2019-09-12
[ [ "Vera-Pérez", "Oscar Luis", "" ], [ "Danglot", "Benjamin", "" ], [ "Monperrus", "Martin", "" ], [ "Baudry", "Benoit", "" ] ]
An extreme transformation removes the body of a method that is reached by at least one test case. If the test suite passes on the original program and still passes after the extreme transformation, the transformation is said to be undetected, and the test suite needs to be improved. In this work we propose a technique to automatically determine which of the following three reasons prevents the detection of the extreme transformation: the test inputs are not sufficient to infect the state of the program; the infection does not propagate to the test cases; the test cases have a weak oracle that does not observe the infection. We have developed Reneri, a tool that observes the program under test and the test suite in order to determine runtime differences between test runs on the original and the transformed method. The observations gathered during the analysis are processed by Reneri to suggest possible improvements to the developers. We evaluate Reneri on 15 projects and a total of 312 undetected extreme transformations. The tool is able to generate a suggestion for each undetected transformation. For 63% of the cases, the existing test cases can infect the program state, meaning that undetected transformations are mostly due to observability and weak oracle issues. Interviews with developers confirm the relevance of the suggested improvements, and experiments with state-of-the-art automatic test generation tools indicate that no tool can improve the existing test suites to fix all undetected transformations.
2304.02812
Katherine Stasaski
Katherine Stasaski and Marti A. Hearst
Pragmatically Appropriate Diversity for Dialogue Evaluation
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Linguistic pragmatics states that a conversation's underlying speech acts can constrain the type of response which is appropriate at each turn in the conversation. When generating dialogue responses, neural dialogue agents struggle to produce diverse responses. Currently, dialogue diversity is assessed using automatic metrics, but the underlying speech acts do not inform these metrics. To remedy this, we propose the notion of Pragmatically Appropriate Diversity, defined as the extent to which a conversation creates and constrains the creation of multiple diverse responses. Using a human-created multi-response dataset, we find significant support for the hypothesis that speech acts provide a signal for the diversity of the set of next responses. Building on this result, we propose a new human evaluation task where creative writers predict the extent to which conversations inspire the creation of multiple diverse responses. Our studies find that writers' judgments align with the Pragmatically Appropriate Diversity of conversations. Our work suggests that expectations for diversity metric scores should vary depending on the speech act.
[ { "created": "Thu, 6 Apr 2023 01:24:18 GMT", "version": "v1" } ]
2023-04-07
[ [ "Stasaski", "Katherine", "" ], [ "Hearst", "Marti A.", "" ] ]
Linguistic pragmatics states that a conversation's underlying speech acts can constrain the type of response which is appropriate at each turn in the conversation. When generating dialogue responses, neural dialogue agents struggle to produce diverse responses. Currently, dialogue diversity is assessed using automatic metrics, but the underlying speech acts do not inform these metrics. To remedy this, we propose the notion of Pragmatically Appropriate Diversity, defined as the extent to which a conversation creates and constrains the creation of multiple diverse responses. Using a human-created multi-response dataset, we find significant support for the hypothesis that speech acts provide a signal for the diversity of the set of next responses. Building on this result, we propose a new human evaluation task where creative writers predict the extent to which conversations inspire the creation of multiple diverse responses. Our studies find that writers' judgments align with the Pragmatically Appropriate Diversity of conversations. Our work suggests that expectations for diversity metric scores should vary depending on the speech act.
1909.08248
EPTCS
Felicidad Aguado (IRLab, CITIC Research Center, University of A Coru\~na, Spain), Pedro Cabalar (IRLab, CITIC Research Center, University of A Coru\~na, Spain), Jorge Fandinno (University of Potsdam, Germany), Brais Mu\~niz (IRLab, CITIC Research Center, University of A Coru\~na, Spain), Gilberto P\'erez (IRLab, CITIC Research Center, University of A Coru\~na, Spain), Francisco Su\'arez (Digestive Service, Complexo Hospitalario Universitario de A Coru\~na (CHUAC), Instituto de Investigaci\'on Biom\'edica de A Coru\~na (INIBIC), Coru\~na, Spain)
A Rule-Based System for Explainable Donor-Patient Matching in Liver Transplantation
In Proceedings ICLP 2019, arXiv:1909.07646
EPTCS 306, 2019, pp. 266-272
10.4204/EPTCS.306.31
null
cs.LO cs.AI cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present web-liver, a rule-based system for decision support in the medical domain, focusing on its application in a liver transplantation unit for implementing policies for donor-patient matching. The rule-based system is built on top of an interpreter for logic programs with partial functions, called lppf, that extends the paradigm of Answer Set Programming (ASP) adding two main features: (1) the inclusion of partial functions and (2) the computation of causal explanations for the obtained solutions. The final goal of web-liver is assisting the medical experts in the design of new donor-patient matching policies that take into account not only the patient severity but also the transplantation utility. As an example, we illustrate the tool behaviour with a set of rules that implement the utility index called SOFT.
[ { "created": "Wed, 18 Sep 2019 07:08:25 GMT", "version": "v1" } ]
2019-09-19
[ [ "Aguado", "Felicidad", "", "IRLab, CITIC Research Center, University of A\n Coruña, Spain" ], [ "Cabalar", "Pedro", "", "IRLab, CITIC Research Center, University of\n A Coruña, Spain" ], [ "Fandinno", "Jorge", "", "University of Potsdam, Germany" ], [ "Muñiz", "Brais", "", "IRLab, CITIC Research Center, University of A Coruña, Spain" ], [ "Pérez", "Gilberto", "", "IRLab, CITIC Research Center, University of A Coruña,\n Spain" ], [ "Suárez", "Francisco", "", "Digestive Service, Complexo Hospitalario\n Universitario de A Coruña" ] ]
In this paper we present web-liver, a rule-based system for decision support in the medical domain, focusing on its application in a liver transplantation unit for implementing policies for donor-patient matching. The rule-based system is built on top of an interpreter for logic programs with partial functions, called lppf, that extends the paradigm of Answer Set Programming (ASP) adding two main features: (1) the inclusion of partial functions and (2) the computation of causal explanations for the obtained solutions. The final goal of web-liver is assisting the medical experts in the design of new donor-patient matching policies that take into account not only the patient severity but also the transplantation utility. As an example, we illustrate the tool behaviour with a set of rules that implement the utility index called SOFT.
2102.12459
Tao Lei
Tao Lei
When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute
null
EMNLP 2021
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models have become increasingly difficult to train because of the growing computation time and cost. In this work, we present SRU++, a highly-efficient architecture that combines fast recurrence and attention for sequence modeling. SRU++ exhibits strong modeling capacity and training efficiency. On standard language modeling tasks such as Enwik8, Wiki-103 and Billion Word datasets, our model obtains better bits-per-character and perplexity while using 3x-10x less training cost compared to top-performing Transformer models. For instance, our model achieves a state-of-the-art result on the Enwik8 dataset using 1.6 days of training on an 8-GPU machine. We further demonstrate that SRU++ requires minimal attention for near state-of-the-art performance. Our results suggest jointly leveraging fast recurrence with little attention as a promising direction for accelerating model training and inference.
[ { "created": "Wed, 24 Feb 2021 18:39:56 GMT", "version": "v1" }, { "created": "Tue, 30 Mar 2021 16:32:25 GMT", "version": "v2" }, { "created": "Wed, 15 Sep 2021 03:59:10 GMT", "version": "v3" } ]
2021-09-16
[ [ "Lei", "Tao", "" ] ]
Large language models have become increasingly difficult to train because of the growing computation time and cost. In this work, we present SRU++, a highly-efficient architecture that combines fast recurrence and attention for sequence modeling. SRU++ exhibits strong modeling capacity and training efficiency. On standard language modeling tasks such as Enwik8, Wiki-103 and Billion Word datasets, our model obtains better bits-per-character and perplexity while using 3x-10x less training cost compared to top-performing Transformer models. For instance, our model achieves a state-of-the-art result on the Enwik8 dataset using 1.6 days of training on an 8-GPU machine. We further demonstrate that SRU++ requires minimal attention for near state-of-the-art performance. Our results suggest jointly leveraging fast recurrence with little attention as a promising direction for accelerating model training and inference.
2112.14243
Adrian Haret
Adrian Haret, Johannes P. Wallner
An AGM Approach to Revising Preferences
Presented at the NMR 2021 workshop
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
We look at preference change arising out of an interaction between two elements: the first is an initial preference ranking encoding a pre-existing attitude; the second element is new preference information signaling input from an authoritative source, which may come into conflict with the initial preference. The aim is to adjust the initial preference and bring it in line with the new preference, without having to give up more information than necessary. We model this process using the formal machinery of belief change, along the lines of the well-known AGM approach. We propose a set of fundamental rationality postulates, and derive the main results of the paper: a set of representation theorems showing that preference change according to these postulates can be rationalized as a choice function guided by a ranking on the comparisons in the initial preference order. We conclude by presenting operators satisfying our proposed postulates. Our approach thus allows us to situate preference revision within the larger family of belief change operators.
[ { "created": "Tue, 28 Dec 2021 18:12:57 GMT", "version": "v1" } ]
2021-12-30
[ [ "Haret", "Adrian", "" ], [ "Wallner", "Johannes P.", "" ] ]
We look at preference change arising out of an interaction between two elements: the first is an initial preference ranking encoding a pre-existing attitude; the second element is new preference information signaling input from an authoritative source, which may come into conflict with the initial preference. The aim is to adjust the initial preference and bring it in line with the new preference, without having to give up more information than necessary. We model this process using the formal machinery of belief change, along the lines of the well-known AGM approach. We propose a set of fundamental rationality postulates, and derive the main results of the paper: a set of representation theorems showing that preference change according to these postulates can be rationalized as a choice function guided by a ranking on the comparisons in the initial preference order. We conclude by presenting operators satisfying our proposed postulates. Our approach thus allows us to situate preference revision within the larger family of belief change operators.
2103.13425
Xiaohan Ding
Xiaohan Ding, Xiangyu Zhang, Jungong Han, Guiguang Ding
Diverse Branch Block: Building a Convolution as an Inception-like Unit
CVPR 2021
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a universal building block of Convolutional Neural Network (ConvNet) to improve the performance without any inference-time costs. The block is named Diverse Branch Block (DBB), which enhances the representational capacity of a single convolution by combining diverse branches of different scales and complexities to enrich the feature space, including sequences of convolutions, multi-scale convolutions, and average pooling. After training, a DBB can be equivalently converted into a single conv layer for deployment. Unlike the advancements of novel ConvNet architectures, DBB complicates the training-time microstructure while maintaining the macro architecture, so that it can be used as a drop-in replacement for regular conv layers of any architecture. In this way, the model can be trained to reach a higher level of performance and then transformed into the original inference-time structure for inference. DBB improves ConvNets on image classification (up to 1.9% higher top-1 accuracy on ImageNet), object detection and semantic segmentation. The PyTorch code and models are released at https://github.com/DingXiaoH/DiverseBranchBlock.
[ { "created": "Wed, 24 Mar 2021 18:12:00 GMT", "version": "v1" }, { "created": "Mon, 29 Mar 2021 13:00:50 GMT", "version": "v2" } ]
2021-03-30
[ [ "Ding", "Xiaohan", "" ], [ "Zhang", "Xiangyu", "" ], [ "Han", "Jungong", "" ], [ "Ding", "Guiguang", "" ] ]
We propose a universal building block of Convolutional Neural Network (ConvNet) to improve the performance without any inference-time costs. The block is named Diverse Branch Block (DBB), which enhances the representational capacity of a single convolution by combining diverse branches of different scales and complexities to enrich the feature space, including sequences of convolutions, multi-scale convolutions, and average pooling. After training, a DBB can be equivalently converted into a single conv layer for deployment. Unlike the advancements of novel ConvNet architectures, DBB complicates the training-time microstructure while maintaining the macro architecture, so that it can be used as a drop-in replacement for regular conv layers of any architecture. In this way, the model can be trained to reach a higher level of performance and then transformed into the original inference-time structure for inference. DBB improves ConvNets on image classification (up to 1.9% higher top-1 accuracy on ImageNet), object detection and semantic segmentation. The PyTorch code and models are released at https://github.com/DingXiaoH/DiverseBranchBlock.
1611.08725
F. Richard Yu
Meng Li, F. Richard Yu, Pengbo Si, Enchang Sun, Yanhua Zhang, and Haipeng Yao
Machine-to-Machine (M2M) Communications in Software-defined and Virtualized Cellular Networks
arXiv admin note: text overlap with arXiv:1611.05087
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine-to-machine (M2M) communications have attracted great attention from both academia and industry. In this paper, with recent advances in wireless network virtualization and software-defined networking (SDN), we propose a novel framework for M2M communications in software-defined cellular networks with wireless network virtualization. In the proposed framework, according to different functions and quality of service (QoS) requirements of machine-type communication devices (MTCDs), a hypervisor enables the virtualization of the physical M2M network, which is abstracted and sliced into multiple virtual M2M networks. In addition, we develop a decision-theoretic approach to optimize the random access process of M2M communications. Furthermore, we develop a feedback and control loop to dynamically adjust the number of resource blocks (RBs) that are used in the random access phase in a virtual M2M network by the SDN controller. Extensive simulation results with different system parameters are presented to show the performance of the proposed scheme.
[ { "created": "Sat, 26 Nov 2016 17:57:31 GMT", "version": "v1" } ]
2016-11-29
[ [ "Li", "Meng", "" ], [ "Yu", "F. Richard", "" ], [ "Si", "Pengbo", "" ], [ "Sun", "Enchang", "" ], [ "Zhang", "Yanhua", "" ], [ "Yao", "Haipeng", "" ] ]
Machine-to-machine (M2M) communications have attracted great attention from both academia and industry. In this paper, with recent advances in wireless network virtualization and software-defined networking (SDN), we propose a novel framework for M2M communications in software-defined cellular networks with wireless network virtualization. In the proposed framework, according to different functions and quality of service (QoS) requirements of machine-type communication devices (MTCDs), a hypervisor enables the virtualization of the physical M2M network, which is abstracted and sliced into multiple virtual M2M networks. In addition, we develop a decision-theoretic approach to optimize the random access process of M2M communications. Furthermore, we develop a feedback and control loop to dynamically adjust the number of resource blocks (RBs) that are used in the random access phase in a virtual M2M network by the SDN controller. Extensive simulation results with different system parameters are presented to show the performance of the proposed scheme.
1503.00756
Marco Stronati
Konstantinos Chatzikokolakis, Catuscia Palamidessi, Marco Stronati
Constructing elastic distinguishability metrics for location privacy
null
null
10.1515/popets-2015-0023
null
cs.CR
http://creativecommons.org/licenses/by/3.0/
With the increasing popularity of hand-held devices, location-based applications and services have access to accurate and real-time location information, raising serious privacy concerns for their users. The recently introduced notion of geo-indistinguishability tries to address this problem by adapting the well-known concept of differential privacy to the area of location-based systems. Although geo-indistinguishability presents various appealing aspects, it has the problem of treating space in a uniform way, imposing the addition of the same amount of noise everywhere on the map. In this paper we propose a novel elastic distinguishability metric that warps the geometrical distance, capturing the different degrees of density of each area. As a consequence, the obtained mechanism adapts the level of noise while achieving the same degree of privacy everywhere. We also show how such an elastic metric can easily incorporate the concept of a "geographic fence" that is commonly employed to protect the highly recurrent locations of a user, such as his home or work. We perform an extensive evaluation of our technique by building an elastic metric for Paris' wide metropolitan area, using semantic information from the OpenStreetMap database. We compare the resulting mechanism against the Planar Laplace mechanism satisfying standard geo-indistinguishability, using two real-world datasets from the Gowalla and Brightkite location-based social networks. The results show that the elastic mechanism adapts well to the semantics of each area, adjusting the noise as we move outside the city center, hence offering better overall privacy.
[ { "created": "Mon, 2 Mar 2015 21:32:11 GMT", "version": "v1" }, { "created": "Thu, 21 May 2015 09:39:47 GMT", "version": "v2" } ]
2015-05-22
[ [ "Chatzikokolakis", "Konstantinos", "" ], [ "Palamidessi", "Catuscia", "" ], [ "Stronati", "Marco", "" ] ]
With the increasing popularity of hand-held devices, location-based applications and services have access to accurate and real-time location information, raising serious privacy concerns for their users. The recently introduced notion of geo-indistinguishability tries to address this problem by adapting the well-known concept of differential privacy to the area of location-based systems. Although geo-indistinguishability presents various appealing aspects, it has the problem of treating space in a uniform way, imposing the addition of the same amount of noise everywhere on the map. In this paper we propose a novel elastic distinguishability metric that warps the geometrical distance, capturing the different degrees of density of each area. As a consequence, the obtained mechanism adapts the level of noise while achieving the same degree of privacy everywhere. We also show how such an elastic metric can easily incorporate the concept of a "geographic fence" that is commonly employed to protect the highly recurrent locations of a user, such as his home or work. We perform an extensive evaluation of our technique by building an elastic metric for Paris' wide metropolitan area, using semantic information from the OpenStreetMap database. We compare the resulting mechanism against the Planar Laplace mechanism satisfying standard geo-indistinguishability, using two real-world datasets from the Gowalla and Brightkite location-based social networks. The results show that the elastic mechanism adapts well to the semantics of each area, adjusting the noise as we move outside the city center, hence offering better overall privacy.
2306.04280
Jeremy Straub
Matthew Tassava, Cameron Kolodjski, Jeremy Straub
Development of a System Vulnerability Analysis Tool for Assessment of Complex Mission Critical Systems
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
A system vulnerability analysis technique (SVAT) for complex mission critical systems (CMCS) was developed in response to the need to be able to conduct penetration testing on large industrial systems which cannot be taken offline or risk disablement or impairment for conventional penetration testing. SVAT-CMCS facilitates the use of known vulnerability and exploit information, incremental testing of system components and data analysis techniques to identify attack pathways in CMCSs. This data can be utilized for corrective activities or to target controlled manual follow-up testing. This paper presents the SVAT-CMCS paradigm and describes its implementation in a software tool, which was built using the Blackboard Architecture, that can be utilized for attack pathway identification. The performance of this tool is characterized using three example models. In particular, it explores the path generation speed and the impact of link cap restrictions on system operations, under different levels of network size and complexity. Accurate fact-rule processing is also tested using these models. The results show significant decreases in path generation efficiency as the link cap and network complexity increase; however, rule processing accuracy is not impacted.
[ { "created": "Wed, 7 Jun 2023 09:35:47 GMT", "version": "v1" } ]
2023-06-08
[ [ "Tassava", "Matthew", "" ], [ "Kolodjski", "Cameron", "" ], [ "Straub", "Jeremy", "" ] ]
A system vulnerability analysis technique (SVAT) for complex mission critical systems (CMCS) was developed in response to the need to be able to conduct penetration testing on large industrial systems which cannot be taken offline or risk disablement or impairment for conventional penetration testing. SVAT-CMCS facilitates the use of known vulnerability and exploit information, incremental testing of system components and data analysis techniques to identify attack pathways in CMCSs. This data can be utilized for corrective activities or to target controlled manual follow-up testing. This paper presents the SVAT-CMCS paradigm and describes its implementation in a software tool, which was built using the Blackboard Architecture, that can be utilized for attack pathway identification. The performance of this tool is characterized using three example models. In particular, it explores the path generation speed and the impact of link cap restrictions on system operations, under different levels of network size and complexity. Accurate fact-rule processing is also tested using these models. The results show significant decreases in path generation efficiency as the link cap and network complexity increase; however, rule processing accuracy is not impacted.
2204.05454
Mengmeng Ma
Mengmeng Ma, Jian Ren, Long Zhao, Davide Testuggine, Xi Peng
Are Multimodal Transformers Robust to Missing Modality?
In CVPR 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal data collected from the real world are often imperfect due to missing modalities. Therefore multimodal models that are robust against modal-incomplete data are highly preferred. Recently, Transformer models have shown great success in processing multimodal data. However, existing work has been limited to either architecture designs or pre-training strategies; whether Transformer models are naturally robust against missing-modal data has rarely been investigated. In this paper, we present the first-of-its-kind work to comprehensively investigate the behavior of Transformers in the presence of modal-incomplete data. Unsurprisingly, we find Transformer models are sensitive to missing modalities, while different modal fusion strategies will significantly affect the robustness. What surprised us is that the optimal fusion strategy is dataset-dependent even for the same Transformer model; there does not exist a universal strategy that works in general cases. Based on these findings, we propose a principled method to improve the robustness of Transformer models by automatically searching for an optimal fusion strategy regarding input data. Experimental validations on three benchmarks support the superior performance of the proposed method.
[ { "created": "Tue, 12 Apr 2022 00:21:31 GMT", "version": "v1" } ]
2022-04-13
[ [ "Ma", "Mengmeng", "" ], [ "Ren", "Jian", "" ], [ "Zhao", "Long", "" ], [ "Testuggine", "Davide", "" ], [ "Peng", "Xi", "" ] ]
Multimodal data collected from the real world are often imperfect due to missing modalities. Therefore multimodal models that are robust against modal-incomplete data are highly preferred. Recently, Transformer models have shown great success in processing multimodal data. However, existing work has been limited to either architecture designs or pre-training strategies; whether Transformer models are naturally robust against missing-modal data has rarely been investigated. In this paper, we present the first-of-its-kind work to comprehensively investigate the behavior of Transformers in the presence of modal-incomplete data. Unsurprisingly, we find Transformer models are sensitive to missing modalities, while different modal fusion strategies will significantly affect the robustness. What surprised us is that the optimal fusion strategy is dataset-dependent even for the same Transformer model; there does not exist a universal strategy that works in general cases. Based on these findings, we propose a principled method to improve the robustness of Transformer models by automatically searching for an optimal fusion strategy regarding input data. Experimental validations on three benchmarks support the superior performance of the proposed method.
2004.09007
Ahmed Abdelkader
Ahmed Abdelkader, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, Chen Zhu
Headless Horseman: Adversarial Attacks on Transfer Learning Models
5 pages, 2 figures. Accepted in ICASSP 2020. Code available on https://github.com/zhuchen03/headless-attack.git
null
10.1109/ICASSP40776.2020.9053181
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transfer learning facilitates the training of task-specific classifiers using pre-trained models as feature extractors. We present a family of transferable adversarial attacks against such classifiers, generated without access to the classification head; we call these \emph{headless attacks}. We first demonstrate successful transfer attacks against a victim network using \textit{only} its feature extractor. This motivates the introduction of a label-blind adversarial attack. This transfer attack method does not require any information about the class-label space of the victim. Our attack lowers the accuracy of a ResNet18 trained on CIFAR10 by over 40\%.
[ { "created": "Mon, 20 Apr 2020 01:07:45 GMT", "version": "v1" } ]
2020-04-21
[ [ "Abdelkader", "Ahmed", "" ], [ "Curry", "Michael J.", "" ], [ "Fowl", "Liam", "" ], [ "Goldstein", "Tom", "" ], [ "Schwarzschild", "Avi", "" ], [ "Shu", "Manli", "" ], [ "Studer", "Christoph", "" ], [ "Zhu", "Chen", "" ] ]
Transfer learning facilitates the training of task-specific classifiers using pre-trained models as feature extractors. We present a family of transferable adversarial attacks against such classifiers, generated without access to the classification head; we call these \emph{headless attacks}. We first demonstrate successful transfer attacks against a victim network using \textit{only} its feature extractor. This motivates the introduction of a label-blind adversarial attack. This transfer attack method does not require any information about the class-label space of the victim. Our attack lowers the accuracy of a ResNet18 trained on CIFAR10 by over 40\%.
2309.04259
Paul Bilokon
Paul Bilokon and Burak Gunduz
C++ Design Patterns for Low-latency Applications Including High-frequency Trading
null
null
null
null
cs.PF q-fin.TR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work aims to bridge the existing knowledge gap in the optimisation of latency-critical code, specifically focusing on high-frequency trading (HFT) systems. The research culminates in three main contributions: the creation of a Low-Latency Programming Repository, the optimisation of a market-neutral statistical arbitrage pairs trading strategy, and the implementation of the Disruptor pattern in C++. The repository serves as a practical guide and is enriched with rigorous statistical benchmarking, while the trading strategy optimisation led to substantial improvements in speed and profitability. The Disruptor pattern showcased significant performance enhancement over traditional queuing methods. Evaluation metrics include speed, cache utilisation, and statistical significance, among others. Techniques like Cache Warming and Constexpr showed the most significant gains in latency reduction. Future directions involve expanding the repository, testing the optimised trading algorithm in a live trading environment, and integrating the Disruptor pattern with the trading algorithm for comprehensive system benchmarking. The work is oriented towards academics and industry practitioners seeking to improve performance in latency-sensitive applications.
[ { "created": "Fri, 8 Sep 2023 11:01:05 GMT", "version": "v1" } ]
2023-09-11
[ [ "Bilokon", "Paul", "" ], [ "Gunduz", "Burak", "" ] ]
This work aims to bridge the existing knowledge gap in the optimisation of latency-critical code, specifically focusing on high-frequency trading (HFT) systems. The research culminates in three main contributions: the creation of a Low-Latency Programming Repository, the optimisation of a market-neutral statistical arbitrage pairs trading strategy, and the implementation of the Disruptor pattern in C++. The repository serves as a practical guide and is enriched with rigorous statistical benchmarking, while the trading strategy optimisation led to substantial improvements in speed and profitability. The Disruptor pattern showcased significant performance enhancement over traditional queuing methods. Evaluation metrics include speed, cache utilisation, and statistical significance, among others. Techniques like Cache Warming and Constexpr showed the most significant gains in latency reduction. Future directions involve expanding the repository, testing the optimised trading algorithm in a live trading environment, and integrating the Disruptor pattern with the trading algorithm for comprehensive system benchmarking. The work is oriented towards academics and industry practitioners seeking to improve performance in latency-sensitive applications.
1908.09031
Jiachen Li
Jiachen Li and Wei Zhan and Yeping Hu and Masayoshi Tomizuka
Generic Tracking and Probabilistic Prediction Framework and Its Application in Autonomous Driving
IEEE Transactions on Intelligent Transportation Systems
null
10.1109/TITS.2019.2930310
null
cs.RO cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurately tracking and predicting behaviors of surrounding objects are key prerequisites for intelligent systems such as autonomous vehicles to achieve safe and high-quality decision making and motion planning. However, there still remain challenges for multi-target tracking due to object number fluctuation and occlusion. To overcome these challenges, we propose a constrained mixture sequential Monte Carlo (CMSMC) method in which a mixture representation is incorporated in the estimated posterior distribution to maintain multi-modality. Multiple targets can be tracked simultaneously within a unified framework without explicit data association between observations and tracking targets. The framework can incorporate an arbitrary prediction model as the implicit proposal distribution of the CMSMC method. An example in this paper is a learning-based model for hierarchical time-series prediction, which consists of a behavior recognition module and a state evolution module. Both modules in the proposed model are generic and flexible so as to be applied to a class of time-series prediction problems where behaviors can be separated into different levels. Finally, the proposed framework is applied to a numerical case study as well as a task of on-road vehicle tracking, behavior recognition, and prediction in highway scenarios. Instead of only focusing on forecasting trajectory of a single entity, we jointly predict continuous motions for interactive entities simultaneously. The proposed approaches are evaluated from multiple aspects, which demonstrate great potential for intelligent vehicular systems and traffic surveillance systems.
[ { "created": "Fri, 23 Aug 2019 20:34:53 GMT", "version": "v1" } ]
2020-03-31
[ [ "Li", "Jiachen", "" ], [ "Zhan", "Wei", "" ], [ "Hu", "Yeping", "" ], [ "Tomizuka", "Masayoshi", "" ] ]
Accurately tracking and predicting behaviors of surrounding objects are key prerequisites for intelligent systems such as autonomous vehicles to achieve safe and high-quality decision making and motion planning. However, there still remain challenges for multi-target tracking due to object number fluctuation and occlusion. To overcome these challenges, we propose a constrained mixture sequential Monte Carlo (CMSMC) method in which a mixture representation is incorporated in the estimated posterior distribution to maintain multi-modality. Multiple targets can be tracked simultaneously within a unified framework without explicit data association between observations and tracking targets. The framework can incorporate an arbitrary prediction model as the implicit proposal distribution of the CMSMC method. An example in this paper is a learning-based model for hierarchical time-series prediction, which consists of a behavior recognition module and a state evolution module. Both modules in the proposed model are generic and flexible so as to be applied to a class of time-series prediction problems where behaviors can be separated into different levels. Finally, the proposed framework is applied to a numerical case study as well as a task of on-road vehicle tracking, behavior recognition, and prediction in highway scenarios. Instead of only focusing on forecasting trajectory of a single entity, we jointly predict continuous motions for interactive entities simultaneously. The proposed approaches are evaluated from multiple aspects, which demonstrate great potential for intelligent vehicular systems and traffic surveillance systems.
2405.20310
Jianghao Shen
Jianghao Shen, Nan Xue, Tianfu Wu
A Pixel Is Worth More Than One 3D Gaussians in Single-View 3D Reconstruction
preprint, under review
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Learning 3D scene representation from a single-view image is a long-standing fundamental problem in computer vision, with the inherent ambiguity in predicting contents unseen from the input view. Built on the recently proposed 3D Gaussian Splatting (3DGS), the Splatter Image method has made promising progress on fast single-image novel view synthesis via learning a single 3D Gaussian for each pixel based on the U-Net feature map of an input image. However, it has limited expressive power to represent occluded components that are not observable in the input view. To address this problem, this paper presents a Hierarchical Splatter Image method in which a pixel is worth more than one 3D Gaussians. Specifically, each pixel is represented by a parent 3D Gaussian and a small number of child 3D Gaussians. Parent 3D Gaussians are learned as done in the vanilla Splatter Image. Child 3D Gaussians are learned via a lightweight Multi-Layer Perceptron (MLP) which takes as input the projected image features of a parent 3D Gaussian and the embedding of a target camera view. Both parent and child 3D Gaussians are learned end-to-end in a stage-wise way. The joint condition of input image features from eyes of the parent Gaussians and the target camera position facilitates learning to allocate child Gaussians to ``see the unseen'', recovering the occluded details that are often missed by parent Gaussians. In experiments, the proposed method is tested on the ShapeNet-SRN and CO3D datasets with state-of-the-art performance obtained, especially showing promising capabilities of reconstructing occluded contents in the input view.
[ { "created": "Thu, 30 May 2024 17:52:52 GMT", "version": "v1" }, { "created": "Fri, 31 May 2024 15:27:52 GMT", "version": "v2" }, { "created": "Mon, 3 Jun 2024 15:13:55 GMT", "version": "v3" } ]
2024-06-04
[ [ "Shen", "Jianghao", "" ], [ "Xue", "Nan", "" ], [ "Wu", "Tianfu", "" ] ]
Learning 3D scene representation from a single-view image is a long-standing fundamental problem in computer vision, with the inherent ambiguity in predicting contents unseen from the input view. Built on the recently proposed 3D Gaussian Splatting (3DGS), the Splatter Image method has made promising progress on fast single-image novel view synthesis via learning a single 3D Gaussian for each pixel based on the U-Net feature map of an input image. However, it has limited expressive power to represent occluded components that are not observable in the input view. To address this problem, this paper presents a Hierarchical Splatter Image method in which a pixel is worth more than one 3D Gaussians. Specifically, each pixel is represented by a parent 3D Gaussian and a small number of child 3D Gaussians. Parent 3D Gaussians are learned as done in the vanilla Splatter Image. Child 3D Gaussians are learned via a lightweight Multi-Layer Perceptron (MLP) which takes as input the projected image features of a parent 3D Gaussian and the embedding of a target camera view. Both parent and child 3D Gaussians are learned end-to-end in a stage-wise way. The joint condition of input image features from eyes of the parent Gaussians and the target camera position facilitates learning to allocate child Gaussians to ``see the unseen'', recovering the occluded details that are often missed by parent Gaussians. In experiments, the proposed method is tested on the ShapeNet-SRN and CO3D datasets with state-of-the-art performance obtained, especially showing promising capabilities of reconstructing occluded contents in the input view.
1809.04585
Yichen Jiang
Yichen Jiang, Mohit Bansal
Closed-Book Training to Improve Summarization Encoder Memory
EMNLP 2018 (16 pages)
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A good neural sequence-to-sequence summarization model should have a strong encoder that can distill and memorize the important information from long input texts so that the decoder can generate salient summaries based on the encoder's memory. In this paper, we aim to improve the memorization capabilities of the encoder of a pointer-generator model by adding an additional 'closed-book' decoder without attention and pointer mechanisms. Such a decoder forces the encoder to be more selective in the information encoded in its memory state because the decoder can't rely on the extra information provided by the attention and possibly copy modules, and hence improves the entire model. On the CNN/Daily Mail dataset, our 2-decoder model outperforms the baseline significantly in terms of ROUGE and METEOR metrics, for both cross-entropy and reinforced setups (and on human evaluation). Moreover, our model also achieves higher scores in a test-only DUC-2002 generalizability setup. We further present a memory ability test, two saliency metrics, as well as several sanity-check ablations (based on fixed-encoder, gradient-flow cut, and model capacity) to prove that the encoder of our 2-decoder model does in fact learn stronger memory representations than the baseline encoder.
[ { "created": "Wed, 12 Sep 2018 17:50:07 GMT", "version": "v1" } ]
2018-09-13
[ [ "Jiang", "Yichen", "" ], [ "Bansal", "Mohit", "" ] ]
A good neural sequence-to-sequence summarization model should have a strong encoder that can distill and memorize the important information from long input texts so that the decoder can generate salient summaries based on the encoder's memory. In this paper, we aim to improve the memorization capabilities of the encoder of a pointer-generator model by adding an additional 'closed-book' decoder without attention and pointer mechanisms. Such a decoder forces the encoder to be more selective in the information encoded in its memory state because the decoder can't rely on the extra information provided by the attention and possibly copy modules, and hence improves the entire model. On the CNN/Daily Mail dataset, our 2-decoder model outperforms the baseline significantly in terms of ROUGE and METEOR metrics, for both cross-entropy and reinforced setups (and on human evaluation). Moreover, our model also achieves higher scores in a test-only DUC-2002 generalizability setup. We further present a memory ability test, two saliency metrics, as well as several sanity-check ablations (based on fixed-encoder, gradient-flow cut, and model capacity) to prove that the encoder of our 2-decoder model does in fact learn stronger memory representations than the baseline encoder.
2306.03937
Gwendolyne Legate
Gwen Legate, Nicolas Bernier, Lucas Caccia, Edouard Oyallon, Eugene Belilovsky
Guiding The Last Layer in Federated Learning with Pre-Trained Models
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Federated Learning (FL) is an emerging paradigm that allows a model to be trained across a number of participants without sharing data. Recent works have begun to consider the effects of using pre-trained models as an initialization point for existing FL algorithms; however, these approaches ignore the vast body of efficient transfer learning literature from the centralized learning setting. Here we revisit the problem of FL from a pre-trained model considered in prior work and expand it to a set of computer vision transfer learning problems. We first observe that simply fitting a linear classification head can be efficient and effective in many cases. We then show that in the FL setting, fitting a classifier using the Nearest Class Means (NCM) can be done exactly and orders of magnitude more efficiently than existing proposals, while obtaining strong performance. Finally, we demonstrate that using a two-phase approach of obtaining the classifier and then fine-tuning the model can yield rapid convergence and improved generalization in the federated setting. We demonstrate the potential our method has to reduce communication and compute costs while achieving better model performance.
[ { "created": "Tue, 6 Jun 2023 18:02:02 GMT", "version": "v1" }, { "created": "Mon, 6 Nov 2023 18:19:49 GMT", "version": "v2" } ]
2023-11-07
[ [ "Legate", "Gwen", "" ], [ "Bernier", "Nicolas", "" ], [ "Caccia", "Lucas", "" ], [ "Oyallon", "Edouard", "" ], [ "Belilovsky", "Eugene", "" ] ]
Federated Learning (FL) is an emerging paradigm that allows a model to be trained across a number of participants without sharing data. Recent works have begun to consider the effects of using pre-trained models as an initialization point for existing FL algorithms; however, these approaches ignore the vast body of efficient transfer learning literature from the centralized learning setting. Here we revisit the problem of FL from a pre-trained model considered in prior work and expand it to a set of computer vision transfer learning problems. We first observe that simply fitting a linear classification head can be efficient and effective in many cases. We then show that in the FL setting, fitting a classifier using the Nearest Class Means (NCM) can be done exactly and orders of magnitude more efficiently than existing proposals, while obtaining strong performance. Finally, we demonstrate that using a two-phase approach of obtaining the classifier and then fine-tuning the model can yield rapid convergence and improved generalization in the federated setting. We demonstrate the potential our method has to reduce communication and compute costs while achieving better model performance.
2402.04075
Reza Khanmohammadi
Reza Khanmohammadi, Ahmed I Ghanem, Kyle Verdecchia, Ryan Hall, Mohamed Elshaikh, Benjamin Movsas, Hassan Bagher-Ebadian, Indrin Chetty, Mohammad M. Ghassemi, Kundan Thind
Iterative Prompt Refinement for Radiation Oncology Symptom Extraction Using Teacher-Student Large Language Models
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This study introduces a novel teacher-student architecture utilizing Large Language Models (LLMs) to improve prostate cancer radiotherapy symptom extraction from clinical notes. Mixtral, the student model, initially extracts symptoms, followed by GPT-4, the teacher model, which refines prompts based on Mixtral's performance. This iterative process involved 294 single-symptom clinical notes across 12 symptoms, with up to 16 rounds of refinement per epoch. Results showed significant improvements in extracting symptoms from both single- and multi-symptom notes. For 59 single-symptom notes, accuracy increased from 0.51 to 0.71, precision from 0.52 to 0.82, recall from 0.52 to 0.72, and F1 score from 0.49 to 0.73. In 375 multi-symptom notes, accuracy rose from 0.24 to 0.43, precision from 0.60 to 0.76, recall from 0.24 to 0.43, and F1 score from 0.20 to 0.44. These results demonstrate the effectiveness of advanced prompt engineering in LLMs for radiation oncology use.
[ { "created": "Tue, 6 Feb 2024 15:25:09 GMT", "version": "v1" } ]
2024-02-07
[ [ "Khanmohammadi", "Reza", "" ], [ "Ghanem", "Ahmed I", "" ], [ "Verdecchia", "Kyle", "" ], [ "Hall", "Ryan", "" ], [ "Elshaikh", "Mohamed", "" ], [ "Movsas", "Benjamin", "" ], [ "Bagher-Ebadian", "Hassan", "" ], [ "Chetty", "Indrin", "" ], [ "Ghassemi", "Mohammad M.", "" ], [ "Thind", "Kundan", "" ] ]
This study introduces a novel teacher-student architecture utilizing Large Language Models (LLMs) to improve prostate cancer radiotherapy symptom extraction from clinical notes. Mixtral, the student model, initially extracts symptoms, followed by GPT-4, the teacher model, which refines prompts based on Mixtral's performance. This iterative process involved 294 single-symptom clinical notes across 12 symptoms, with up to 16 rounds of refinement per epoch. Results showed significant improvements in extracting symptoms from both single- and multi-symptom notes. For 59 single-symptom notes, accuracy increased from 0.51 to 0.71, precision from 0.52 to 0.82, recall from 0.52 to 0.72, and F1 score from 0.49 to 0.73. In 375 multi-symptom notes, accuracy rose from 0.24 to 0.43, precision from 0.60 to 0.76, recall from 0.24 to 0.43, and F1 score from 0.20 to 0.44. These results demonstrate the effectiveness of advanced prompt engineering in LLMs for radiation oncology use.
1912.05393
Martijn Wezel Van
M.J.A. van Wezel, L.J. Hamburger, Y. Napolean
Fine-grained Classification of Rowing teams
7 pages, NCCV 2019, 6 figures, deep learning, attention learning, CNN, rowing boat, team detector, club detector, data set, dataset
null
null
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
Fine-grained classification tasks, such as identifying different breeds of dog, are quite challenging, as the visual differences between categories are small and can easily be overwhelmed by external factors such as object pose, lighting, etc. This work focuses on the specific case of classifying rowing teams from various associations. Currently, the photos are taken at rowing competitions and are manually classified by a small set of members, in what is a painstaking process. To alleviate this, deep learning models can be utilised as a faster method to classify the images. Recent studies show that localising manually defined parts, and modelling based on these parts, improves on vanilla convolution models, so this work also investigates the detection of clothing attributes. The networks were trained and tested on a partially labelled data set mainly consisting of rowers from multiple associations. This paper achieved the classification of up to ten rowing associations using deep learning networks: the smaller VGG network achieved 90.1\% accuracy, whereas ResNet was limited to 87.20\%. Adding attention to the ResNet resulted in a drop in performance, as only 78.10\% was achieved.
[ { "created": "Wed, 11 Dec 2019 15:36:25 GMT", "version": "v1" } ]
2019-12-12
[ [ "van Wezel", "M. J. A.", "" ], [ "Hamburger", "L. J.", "" ], [ "Napolean", "Y.", "" ] ]
Fine-grained classification tasks, such as identifying different breeds of dog, are quite challenging, as the visual differences between categories are small and can easily be overwhelmed by external factors such as object pose, lighting, etc. This work focuses on the specific case of classifying rowing teams from various associations. Currently, the photos are taken at rowing competitions and are manually classified by a small set of members, in what is a painstaking process. To alleviate this, deep learning models can be utilised as a faster method to classify the images. Recent studies show that localising manually defined parts, and modelling based on these parts, improves on vanilla convolution models, so this work also investigates the detection of clothing attributes. The networks were trained and tested on a partially labelled data set mainly consisting of rowers from multiple associations. This paper achieved the classification of up to ten rowing associations using deep learning networks: the smaller VGG network achieved 90.1\% accuracy, whereas ResNet was limited to 87.20\%. Adding attention to the ResNet resulted in a drop in performance, as only 78.10\% was achieved.
2103.17202
Abhinav Kumar
Abhinav Kumar, Garrick Brazil and Xiaoming Liu
GrooMeD-NMS: Grouped Mathematically Differentiable NMS for Monocular 3D Object Detection
Accepted to CVPR 2021
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern 3D object detectors have immensely benefited from the end-to-end learning idea. However, most of them use a post-processing algorithm called Non-Maximal Suppression (NMS) only during inference. While there were attempts to include NMS in the training pipeline for tasks such as 2D object detection, they have been less widely adopted due to a non-mathematical expression of the NMS. In this paper, we present and integrate GrooMeD-NMS -- a novel Grouped Mathematically Differentiable NMS for monocular 3D object detection, such that the network is trained end-to-end with a loss on the boxes after NMS. We first formulate NMS as a matrix operation and then group and mask the boxes in an unsupervised manner to obtain a simple closed-form expression of the NMS. GrooMeD-NMS addresses the mismatch between training and inference pipelines and, therefore, forces the network to select the best 3D box in a differentiable manner. As a result, GrooMeD-NMS achieves state-of-the-art monocular 3D object detection results on the KITTI benchmark dataset performing comparably to monocular video-based methods. Code and models at https://github.com/abhi1kumar/groomed_nms
[ { "created": "Wed, 31 Mar 2021 16:29:50 GMT", "version": "v1" } ]
2021-04-01
[ [ "Kumar", "Abhinav", "" ], [ "Brazil", "Garrick", "" ], [ "Liu", "Xiaoming", "" ] ]
Modern 3D object detectors have immensely benefited from the end-to-end learning idea. However, most of them use a post-processing algorithm called Non-Maximal Suppression (NMS) only during inference. While there were attempts to include NMS in the training pipeline for tasks such as 2D object detection, they have been less widely adopted due to a non-mathematical expression of the NMS. In this paper, we present and integrate GrooMeD-NMS -- a novel Grouped Mathematically Differentiable NMS for monocular 3D object detection, such that the network is trained end-to-end with a loss on the boxes after NMS. We first formulate NMS as a matrix operation and then group and mask the boxes in an unsupervised manner to obtain a simple closed-form expression of the NMS. GrooMeD-NMS addresses the mismatch between training and inference pipelines and, therefore, forces the network to select the best 3D box in a differentiable manner. As a result, GrooMeD-NMS achieves state-of-the-art monocular 3D object detection results on the KITTI benchmark dataset performing comparably to monocular video-based methods. Code and models at https://github.com/abhi1kumar/groomed_nms
1411.6593
David Tolpin
David Tolpin, Oded Betzalel, Ariel Felner, Solomon Eyal Shimony
Rational Deployment of Multiple Heuristics in IDA*
7 pages, 6 tables, 20 references
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in metareasoning for search have shown its usefulness in improving numerous search algorithms. This paper applies rational metareasoning to IDA* when several admissible heuristics are available. The obvious basic approach of taking the maximum of the heuristics is improved upon by lazy evaluation of the heuristics, resulting in a variant known as Lazy IDA*. We introduce a rational version of Lazy IDA* that decides whether to compute the more expensive heuristic or to bypass it, based on a myopic expected regret estimate. Empirical evaluation in several domains supports the theoretical results, and shows that rational lazy IDA* is a state-of-the-art heuristic combination method.
[ { "created": "Mon, 24 Nov 2014 20:04:20 GMT", "version": "v1" } ]
2014-11-25
[ [ "Tolpin", "David", "" ], [ "Betzalel", "Oded", "" ], [ "Felner", "Ariel", "" ], [ "Shimony", "Solomon Eyal", "" ] ]
Recent advances in metareasoning for search have shown its usefulness in improving numerous search algorithms. This paper applies rational metareasoning to IDA* when several admissible heuristics are available. The obvious basic approach of taking the maximum of the heuristics is improved upon by lazy evaluation of the heuristics, resulting in a variant known as Lazy IDA*. We introduce a rational version of Lazy IDA* that decides whether to compute the more expensive heuristic or to bypass it, based on a myopic expected regret estimate. Empirical evaluation in several domains supports the theoretical results, and shows that rational lazy IDA* is a state-of-the-art heuristic combination method.
2403.10378
Haonan Li
Rocktim Jyoti Das and Simeon Emilov Hristov and Haonan Li and Dimitar Iliyanov Dimitrov and Ivan Koychev and Preslav Nakov
EXAMS-V: A Multi-Discipline Multilingual Multimodal Exam Benchmark for Evaluating Vision Language Models
null
null
null
null
cs.CL cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We introduce EXAMS-V, a new challenging multi-discipline multimodal multilingual exam benchmark for evaluating vision language models. It consists of 20,932 multiple-choice questions across 20 school disciplines covering natural science, social science, and other miscellaneous studies, e.g., religion, fine arts, business, etc. EXAMS-V includes a variety of multimodal features such as text, images, tables, figures, diagrams, maps, scientific symbols, and equations. The questions come in 11 languages from 7 language families. Unlike existing benchmarks, EXAMS-V is uniquely curated by gathering school exam questions from various countries, with a variety of education systems. This distinctive approach calls for intricate reasoning across diverse languages and relies on region-specific knowledge. Solving the problems in the dataset requires advanced perception and joint reasoning over the text and the visual content of the image. Our evaluation results demonstrate that this is a challenging dataset, which is difficult even for advanced vision-text models such as GPT-4V and Gemini; this underscores the inherent complexity of the dataset and its significance as a future benchmark.
[ { "created": "Fri, 15 Mar 2024 15:08:39 GMT", "version": "v1" } ]
2024-03-18
[ [ "Das", "Rocktim Jyoti", "" ], [ "Hristov", "Simeon Emilov", "" ], [ "Li", "Haonan", "" ], [ "Dimitrov", "Dimitar Iliyanov", "" ], [ "Koychev", "Ivan", "" ], [ "Nakov", "Preslav", "" ] ]
We introduce EXAMS-V, a new challenging multi-discipline multimodal multilingual exam benchmark for evaluating vision language models. It consists of 20,932 multiple-choice questions across 20 school disciplines covering natural science, social science, and other miscellaneous studies, e.g., religion, fine arts, business, etc. EXAMS-V includes a variety of multimodal features such as text, images, tables, figures, diagrams, maps, scientific symbols, and equations. The questions come in 11 languages from 7 language families. Unlike existing benchmarks, EXAMS-V is uniquely curated by gathering school exam questions from various countries, with a variety of education systems. This distinctive approach calls for intricate reasoning across diverse languages and relies on region-specific knowledge. Solving the problems in the dataset requires advanced perception and joint reasoning over the text and the visual content of the image. Our evaluation results demonstrate that this is a challenging dataset, which is difficult even for advanced vision-text models such as GPT-4V and Gemini; this underscores the inherent complexity of the dataset and its significance as a future benchmark.
0805.2438
Russell O'Connor
Russell O'Connor
Certified Exact Transcendental Real Number Computation in Coq
This paper is to be part of the proceedings of the 21st International Conference on Theorem Proving in Higher Order Logics (TPHOLs 2008)
Ait Mohamed, C. Munoz, and S. Tahar (Eds.): TPHOLs 2008, LNCS 5170, pp. 246-261, 2008
10.1007/978-3-540-71067-7_21
null
cs.LO cs.MS cs.NA
http://creativecommons.org/licenses/publicdomain/
Reasoning about real number expressions in a proof assistant is challenging. Several problems in theorem proving can be solved by using exact real number computation. I have implemented a library for reasoning and computing with complete metric spaces in the Coq proof assistant and used this library to build a constructive real number implementation including elementary real number functions and proofs of correctness. Using this library, I have created a tactic that automatically proves strict inequalities over closed elementary real number expressions by computation.
[ { "created": "Fri, 16 May 2008 18:02:24 GMT", "version": "v1" } ]
2010-08-04
[ [ "O'Connor", "Russell", "" ] ]
Reasoning about real number expressions in a proof assistant is challenging. Several problems in theorem proving can be solved by using exact real number computation. I have implemented a library for reasoning and computing with complete metric spaces in the Coq proof assistant and used this library to build a constructive real number implementation including elementary real number functions and proofs of correctness. Using this library, I have created a tactic that automatically proves strict inequalities over closed elementary real number expressions by computation.
2405.06948
Shengyuan Liu
Shengyuan Liu, Bo Wang, Ye Ma, Te Yang, Xipeng Cao, Quan Chen, Han Li, Di Dong, Peng Jiang
Training-free Subject-Enhanced Attention Guidance for Compositional Text-to-image Generation
26 pages, 13 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing subject-driven text-to-image generation models suffer from tedious fine-tuning steps and struggle to maintain both text-image alignment and subject fidelity. When generating compositional subjects, they often encounter problems such as object missing and attribute mixing, where some subjects in the input prompt are not generated or their attributes are incorrectly combined. To address these limitations, we propose a subject-driven generation framework and introduce training-free guidance to intervene in the generative process during inference time. This approach strengthens the attention map, allowing for precise attribute binding and feature injection for each subject. Notably, our method exhibits exceptional zero-shot generation ability, especially in the challenging task of compositional generation. Furthermore, we propose a novel metric, GroundingScore, to evaluate subject alignment thoroughly. The obtained quantitative results serve as compelling evidence showcasing the effectiveness of our proposed method. The code will be released soon.
[ { "created": "Sat, 11 May 2024 08:11:25 GMT", "version": "v1" } ]
2024-05-14
[ [ "Liu", "Shengyuan", "" ], [ "Wang", "Bo", "" ], [ "Ma", "Ye", "" ], [ "Yang", "Te", "" ], [ "Cao", "Xipeng", "" ], [ "Chen", "Quan", "" ], [ "Li", "Han", "" ], [ "Dong", "Di", "" ], [ "Jiang", "Peng", "" ] ]
Existing subject-driven text-to-image generation models suffer from tedious fine-tuning steps and struggle to maintain both text-image alignment and subject fidelity. When generating compositional subjects, they often encounter problems such as object missing and attribute mixing, where some subjects in the input prompt are not generated or their attributes are incorrectly combined. To address these limitations, we propose a subject-driven generation framework and introduce training-free guidance to intervene in the generative process during inference time. This approach strengthens the attention map, allowing for precise attribute binding and feature injection for each subject. Notably, our method exhibits exceptional zero-shot generation ability, especially in the challenging task of compositional generation. Furthermore, we propose a novel metric, GroundingScore, to evaluate subject alignment thoroughly. The obtained quantitative results serve as compelling evidence showcasing the effectiveness of our proposed method. The code will be released soon.
1602.08139
Jean-Marc Valin
Jean-Marc Valin, Fran\c{c}ois Michaud, Jean Rouat
Robust Localization and Tracking of Simultaneous Moving Sound Sources Using Beamforming and Particle Filtering
26 pages
Robotics and Autonomous Systems Journal (Elsevier), Vol. 55, No. 3, pp. 216-228, 2007
10.1016/j.robot.2006.08.004
null
cs.RO cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile robots in real-life settings would benefit from being able to localize and track sound sources. Such a capability can help localize a person or an interesting event in the environment, and also provides enhanced processing for other capabilities such as speech recognition. To give this capability to a robot, the challenge is not only to localize simultaneous sound sources, but to track them over time. In this paper we propose a robust sound source localization and tracking method using an array of eight microphones. The method is based on a frequency-domain implementation of a steered beamformer along with a particle filter-based tracking algorithm. Results show that a mobile robot can localize and track multiple moving sources of different types in real time over a range of 7 meters. These new capabilities allow a mobile robot to interact using more natural means with people in real-life settings.
[ { "created": "Thu, 25 Feb 2016 22:40:00 GMT", "version": "v1" } ]
2016-02-29
[ [ "Valin", "Jean-Marc", "" ], [ "Michaud", "François", "" ], [ "Rouat", "Jean", "" ] ]
Mobile robots in real-life settings would benefit from being able to localize and track sound sources. Such a capability can help localize a person or an interesting event in the environment, and also provides enhanced processing for other capabilities such as speech recognition. To give this capability to a robot, the challenge is not only to localize simultaneous sound sources, but to track them over time. In this paper we propose a robust sound source localization and tracking method using an array of eight microphones. The method is based on a frequency-domain implementation of a steered beamformer along with a particle filter-based tracking algorithm. Results show that a mobile robot can localize and track multiple moving sources of different types in real time over a range of 7 meters. These new capabilities allow a mobile robot to interact using more natural means with people in real-life settings.
2306.06930
Wenying Duan
Wenying Duan, Xiaoxi He, Zimu Zhou, Lothar Thiele, Hong Rao
Localised Adaptive Spatial-Temporal Graph Neural Network
This paper was accepted by KDD 2023
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spatial-temporal graph models are prevailing for abstracting and modelling spatial and temporal dependencies. In this work, we ask the following question: whether and to what extent can we localise spatial-temporal graph models? We limit our scope to adaptive spatial-temporal graph neural networks (ASTGNNs), the state-of-the-art model architecture. Our approach to localisation involves sparsifying the spatial graph adjacency matrices. To this end, we propose Adaptive Graph Sparsification (AGS), a graph sparsification algorithm which successfully enables the localisation of ASTGNNs to an extreme extent (full localisation). We apply AGS to two distinct ASTGNN architectures and nine spatial-temporal datasets. Intriguingly, we observe that spatial graphs in ASTGNNs can be sparsified by over 99.5\% without any decline in test accuracy. Furthermore, even when ASTGNNs are fully localised, becoming graph-less and purely temporal, we record no drop in accuracy for the majority of tested datasets, with only minor accuracy deterioration observed in the remaining datasets. However, when the partially or fully localised ASTGNNs are reinitialised and retrained on the same data, there is a considerable and consistent drop in accuracy. Based on these observations, we reckon that \textit{(i)} in the tested data, the information provided by the spatial dependencies is primarily included in the information provided by the temporal dependencies and, thus, can be essentially ignored for inference; and \textit{(ii)} although the spatial dependencies provide redundant information, it is vital for the effective training of ASTGNNs and thus cannot be ignored during training. Furthermore, the localisation of ASTGNNs holds the potential to reduce the heavy computation overhead required on large-scale spatial-temporal data and further enable the distributed deployment of ASTGNNs.
[ { "created": "Mon, 12 Jun 2023 08:08:53 GMT", "version": "v1" }, { "created": "Thu, 15 Jun 2023 13:54:24 GMT", "version": "v2" } ]
2023-06-16
[ [ "Duan", "Wenying", "" ], [ "He", "Xiaoxi", "" ], [ "Zhou", "Zimu", "" ], [ "Thiele", "Lothar", "" ], [ "Rao", "Hong", "" ] ]
Spatial-temporal graph models are prevailing for abstracting and modelling spatial and temporal dependencies. In this work, we ask the following question: whether and to what extent can we localise spatial-temporal graph models? We limit our scope to adaptive spatial-temporal graph neural networks (ASTGNNs), the state-of-the-art model architecture. Our approach to localisation involves sparsifying the spatial graph adjacency matrices. To this end, we propose Adaptive Graph Sparsification (AGS), a graph sparsification algorithm which successfully enables the localisation of ASTGNNs to an extreme extent (full localisation). We apply AGS to two distinct ASTGNN architectures and nine spatial-temporal datasets. Intriguingly, we observe that spatial graphs in ASTGNNs can be sparsified by over 99.5\% without any decline in test accuracy. Furthermore, even when ASTGNNs are fully localised, becoming graph-less and purely temporal, we record no drop in accuracy for the majority of tested datasets, with only minor accuracy deterioration observed in the remaining datasets. However, when the partially or fully localised ASTGNNs are reinitialised and retrained on the same data, there is a considerable and consistent drop in accuracy. Based on these observations, we reckon that \textit{(i)} in the tested data, the information provided by the spatial dependencies is primarily included in the information provided by the temporal dependencies and, thus, can be essentially ignored for inference; and \textit{(ii)} although the spatial dependencies provide redundant information, it is vital for the effective training of ASTGNNs and thus cannot be ignored during training. Furthermore, the localisation of ASTGNNs holds the potential to reduce the heavy computation overhead required on large-scale spatial-temporal data and further enable the distributed deployment of ASTGNNs.
2405.02732
Sneha Singhania
Sneha Singhania, Simon Razniewski, Gerhard Weikum
Recall Them All: Retrieval-Augmented Language Models for Long Object List Extraction from Long Documents
null
null
null
null
cs.CL cs.IR
http://creativecommons.org/licenses/by/4.0/
Methods for relation extraction from text mostly focus on high precision, at the cost of limited recall. High recall is crucial, though, to populate long lists of object entities that stand in a specific relation with a given subject. Cues for relevant objects can be spread across many passages in long texts. This poses the challenge of extracting long lists from long texts. We present the L3X method which tackles the problem in two stages: (1) recall-oriented generation using a large language model (LLM) with judicious techniques for retrieval augmentation, and (2) precision-oriented scrutinization to validate or prune candidates. Our L3X method outperforms LLM-only generations by a substantial margin.
[ { "created": "Sat, 4 May 2024 18:32:08 GMT", "version": "v1" } ]
2024-05-07
[ [ "Singhania", "Sneha", "" ], [ "Razniewski", "Simon", "" ], [ "Weikum", "Gerhard", "" ] ]
Methods for relation extraction from text mostly focus on high precision, at the cost of limited recall. High recall is crucial, though, to populate long lists of object entities that stand in a specific relation with a given subject. Cues for relevant objects can be spread across many passages in long texts. This poses the challenge of extracting long lists from long texts. We present the L3X method which tackles the problem in two stages: (1) recall-oriented generation using a large language model (LLM) with judicious techniques for retrieval augmentation, and (2) precision-oriented scrutinization to validate or prune candidates. Our L3X method outperforms LLM-only generations by a substantial margin.
1501.06813
Frank Staals
Maarten L\"offler, Martin N\"ollenburg, Frank Staals
Mixed Map Labeling
Full version for the paper accepted at CIAC 2015
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point feature map labeling is a geometric problem, in which a set of input points must be labeled with a set of disjoint rectangles (the bounding boxes of the label texts). Typically, labeling models either use internal labels, which must touch their feature point, or external (boundary) labels, which are placed on one of the four sides of the input points' bounding box and which are connected to their feature points by crossing-free leader lines. In this paper we study polynomial-time algorithms for maximizing the number of internal labels in a mixed labeling model that combines internal and external labels. The model requires that all leaders are parallel to a given orientation $\theta \in [0,2\pi)$, whose value influences the geometric properties and hence the running times of our algorithms.
[ { "created": "Tue, 27 Jan 2015 16:40:19 GMT", "version": "v1" } ]
2015-01-28
[ [ "Löffler", "Maarten", "" ], [ "Nöllenburg", "Martin", "" ], [ "Staals", "Frank", "" ] ]
Point feature map labeling is a geometric problem, in which a set of input points must be labeled with a set of disjoint rectangles (the bounding boxes of the label texts). Typically, labeling models either use internal labels, which must touch their feature point, or external (boundary) labels, which are placed on one of the four sides of the input points' bounding box and which are connected to their feature points by crossing-free leader lines. In this paper we study polynomial-time algorithms for maximizing the number of internal labels in a mixed labeling model that combines internal and external labels. The model requires that all leaders are parallel to a given orientation $\theta \in [0,2\pi)$, whose value influences the geometric properties and hence the running times of our algorithms.
2006.04663
Benjamin Doerr
Benjamin Doerr
Runtime Analysis of Evolutionary Algorithms via Symmetry Arguments
Minor changes compared to the previous version
Inf. Process. Lett. 166: 106064 (2021)
10.1016/j.ipl.2020.106064
null
cs.NE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We use an elementary argument building on group actions to prove that the selection-free steady state genetic algorithm analyzed by Sutton and Witt (GECCO 2019) takes an expected number of $\Omega(2^n / \sqrt n)$ iterations to find any particular target search point. This bound is valid for all population sizes $\mu$. Our result improves over the previous lower bound of $\Omega(\exp(n^{\delta/2}))$ valid for population sizes $\mu = O(n^{1/2 - \delta})$, $0 < \delta < 1/2$.
[ { "created": "Mon, 8 Jun 2020 15:04:51 GMT", "version": "v1" }, { "created": "Thu, 20 Aug 2020 11:32:39 GMT", "version": "v2" }, { "created": "Sat, 31 Oct 2020 10:42:28 GMT", "version": "v3" } ]
2021-09-21
[ [ "Doerr", "Benjamin", "" ] ]
We use an elementary argument building on group actions to prove that the selection-free steady state genetic algorithm analyzed by Sutton and Witt (GECCO 2019) takes an expected number of $\Omega(2^n / \sqrt n)$ iterations to find any particular target search point. This bound is valid for all population sizes $\mu$. Our result improves over the previous lower bound of $\Omega(\exp(n^{\delta/2}))$ valid for population sizes $\mu = O(n^{1/2 - \delta})$, $0 < \delta < 1/2$.
2107.09786
Jingtao Li
Xing Chen, Jingtao Li and Chaitali Chakrabarti
Communication and Computation Reduction for Split Learning using Asynchronous Training
Accepted by SIPS '21
null
10.1109/SiPS52927.2021.00022
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Split learning is a promising privacy-preserving distributed learning scheme that has a low computation requirement at the edge device but has the disadvantage of high communication overhead between the edge device and the server. To reduce the communication overhead, this paper proposes a loss-based asynchronous training scheme that updates the client-side model less frequently and only sends/receives activations/gradients in selected epochs. To further reduce the communication overhead, the activations/gradients are quantized using 8-bit floating point prior to transmission. An added benefit of the proposed communication reduction method is that the computations at the client side are reduced due to the reduction in the number of client model updates. Furthermore, the privacy of the proposed communication reduction based split learning method is almost the same as that of traditional split learning. Simulation results on VGG11, VGG13 and ResNet18 models on CIFAR-10 show that the communication cost is reduced by 1.64x-106.7x and the computations in the client are reduced by 2.86x-32.1x when the accuracy degradation is less than 0.5% for the single-client case. For the 5 and 10-client cases, the communication cost reduction is 11.9x and 11.3x on VGG11 for 0.5% loss in accuracy.
[ { "created": "Tue, 20 Jul 2021 22:08:13 GMT", "version": "v1" } ]
2022-03-10
[ [ "Chen", "Xing", "" ], [ "Li", "Jingtao", "" ], [ "Chakrabarti", "Chaitali", "" ] ]
Split learning is a promising privacy-preserving distributed learning scheme that has a low computation requirement at the edge device but has the disadvantage of high communication overhead between the edge device and the server. To reduce the communication overhead, this paper proposes a loss-based asynchronous training scheme that updates the client-side model less frequently and only sends/receives activations/gradients in selected epochs. To further reduce the communication overhead, the activations/gradients are quantized using 8-bit floating point prior to transmission. An added benefit of the proposed communication reduction method is that the computations at the client side are reduced due to the reduction in the number of client model updates. Furthermore, the privacy of the proposed communication reduction based split learning method is almost the same as that of traditional split learning. Simulation results on VGG11, VGG13 and ResNet18 models on CIFAR-10 show that the communication cost is reduced by 1.64x-106.7x and the computations in the client are reduced by 2.86x-32.1x when the accuracy degradation is less than 0.5% for the single-client case. For the 5 and 10-client cases, the communication cost reduction is 11.9x and 11.3x on VGG11 for 0.5% loss in accuracy.
2211.07065
Eunchan Kim
Byeongmin Choi, YongHyun Lee, Yeunwoong Kyung and Eunchan Kim
ALBERT with Knowledge Graph Encoder Utilizing Semantic Similarity for Commonsense Question Answering
12 pages, 9 figures
Intelligent Automation & Soft Computing, vol. 36, no.1, pp. 71-82, 2023
10.32604/iasc.2023.032783
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recently, pre-trained language representation models such as bidirectional encoder representations from transformers (BERT) have been performing well in commonsense question answering (CSQA). However, there is a problem that these models do not directly use explicit information from knowledge sources existing outside. To augment this, additional methods such as knowledge-aware graph network (KagNet) and multi-hop graph relation network (MHGRN) have been proposed. In this study, we propose to use the latest pre-trained language model, a lite bidirectional encoder representations from transformers (ALBERT), with a knowledge graph information extraction technique. We also propose applying a novel method, schema graph expansion, to recent language models. Then, we analyze the effect of applying knowledge graph-based knowledge extraction techniques to recent pre-trained language models and confirm that schema graph expansion is effective to some extent. Furthermore, we show that our proposed model can achieve better performance than the existing KagNet and MHGRN models on the CommonsenseQA dataset.
[ { "created": "Mon, 14 Nov 2022 01:39:26 GMT", "version": "v1" } ]
2022-11-15
[ [ "Choi", "Byeongmin", "" ], [ "Lee", "YongHyun", "" ], [ "Kyung", "Yeunwoong", "" ], [ "Kim", "Eunchan", "" ] ]
Recently, pre-trained language representation models such as bidirectional encoder representations from transformers (BERT) have been performing well in commonsense question answering (CSQA). However, there is a problem that these models do not directly use explicit information from knowledge sources existing outside. To augment this, additional methods such as knowledge-aware graph network (KagNet) and multi-hop graph relation network (MHGRN) have been proposed. In this study, we propose to use the latest pre-trained language model, a lite bidirectional encoder representations from transformers (ALBERT), with a knowledge graph information extraction technique. We also propose applying a novel method, schema graph expansion, to recent language models. Then, we analyze the effect of applying knowledge graph-based knowledge extraction techniques to recent pre-trained language models and confirm that schema graph expansion is effective to some extent. Furthermore, we show that our proposed model can achieve better performance than the existing KagNet and MHGRN models on the CommonsenseQA dataset.
2211.02044
Sven J\"ager
Sven J\"ager, Guillaume Sagnol, Daniel Schmidt genannt Waldschmidt, Philipp Warode
Competitive Kill-and-Restart and Preemptive Strategies for Non-Clairvoyant Scheduling
An extended abstract occurred in the Proceedings of the 24th International Conference on Integer Programming and Combinatorial Optimization
null
10.1007/s10107-024-02118-8
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study kill-and-restart and preemptive strategies for the fundamental scheduling problem of minimizing the sum of weighted completion times on a single machine in the non-clairvoyant setting. First, we show a lower bound of~$3$ for any deterministic non-clairvoyant kill-and-restart strategy. Then, we give for any $b > 1$ a tight analysis for the natural $b$-scaling kill-and-restart strategy as well as for a randomized variant of it. In particular, we show a competitive ratio of $(1+3\sqrt{3})\approx 6.197$ for the deterministic and of $\approx 3.032$ for the randomized strategy, by making use of the largest eigenvalue of a Toeplitz matrix. In addition, we show that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is $2$-competitive when jobs are released online, matching the lower bound for the unit weight case with trivial release dates for any non-clairvoyant algorithm. Using this result as well as the competitiveness of round-robin for multiple machines, we prove performance guarantees smaller than $10$ for adaptations of the $b$-scaling strategy to online release dates and unweighted jobs on identical parallel machines.
[ { "created": "Thu, 3 Nov 2022 17:57:28 GMT", "version": "v1" }, { "created": "Mon, 14 Nov 2022 11:09:16 GMT", "version": "v2" }, { "created": "Thu, 1 Jun 2023 16:21:30 GMT", "version": "v3" } ]
2024-07-24
[ [ "Jäger", "Sven", "" ], [ "Sagnol", "Guillaume", "" ], [ "Waldschmidt", "Daniel Schmidt genannt", "" ], [ "Warode", "Philipp", "" ] ]
We study kill-and-restart and preemptive strategies for the fundamental scheduling problem of minimizing the sum of weighted completion times on a single machine in the non-clairvoyant setting. First, we show a lower bound of~$3$ for any deterministic non-clairvoyant kill-and-restart strategy. Then, we give for any $b > 1$ a tight analysis for the natural $b$-scaling kill-and-restart strategy as well as for a randomized variant of it. In particular, we show a competitive ratio of $(1+3\sqrt{3})\approx 6.197$ for the deterministic and of $\approx 3.032$ for the randomized strategy, by making use of the largest eigenvalue of a Toeplitz matrix. In addition, we show that the preemptive Weighted Shortest Elapsed Time First (WSETF) rule is $2$-competitive when jobs are released online, matching the lower bound for the unit weight case with trivial release dates for any non-clairvoyant algorithm. Using this result as well as the competitiveness of round-robin for multiple machines, we prove performance guarantees smaller than $10$ for adaptations of the $b$-scaling strategy to online release dates and unweighted jobs on identical parallel machines.
2403.14003
Alessandro Favero
Alessandro Favero, Luca Zancato, Matthew Trager, Siddharth Choudhary, Pramuditha Perera, Alessandro Achille, Ashwin Swaminathan, Stefano Soatto
Multi-Modal Hallucination Control by Visual Information Grounding
null
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024
null
null
cs.CV cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative Vision-Language Models (VLMs) are prone to generating plausible-sounding textual answers that, however, are not always grounded in the input image. We investigate this phenomenon, usually referred to as "hallucination", and show that it stems from an excessive reliance on the language prior. In particular, we show that as more tokens are generated, the reliance on the visual prompt decreases, and this behavior strongly correlates with the emergence of hallucinations. To reduce hallucinations, we introduce Multi-Modal Mutual-Information Decoding (M3ID), a new sampling method for prompt amplification. M3ID amplifies the influence of the reference image over the language prior, hence favoring the generation of tokens with higher mutual information with the visual prompt. M3ID can be applied to any pre-trained autoregressive VLM at inference time without necessitating further training and with minimal computational overhead. If training is an option, we show that M3ID can be paired with Direct Preference Optimization (DPO) to improve the model's reliance on the prompt image without requiring any labels. Our empirical findings show that our algorithms maintain the fluency and linguistic capabilities of pre-trained VLMs while reducing hallucinations by mitigating visually ungrounded answers. Specifically, for the LLaVA 13B model, M3ID and M3ID+DPO reduce the percentage of hallucinated objects in captioning tasks by 25% and 28%, respectively, and improve the accuracy on VQA benchmarks such as POPE by 21% and 24%.
[ { "created": "Wed, 20 Mar 2024 22:05:18 GMT", "version": "v1" } ]
2024-03-22
[ [ "Favero", "Alessandro", "" ], [ "Zancato", "Luca", "" ], [ "Trager", "Matthew", "" ], [ "Choudhary", "Siddharth", "" ], [ "Perera", "Pramuditha", "" ], [ "Achille", "Alessandro", "" ], [ "Swaminathan", "Ashwin", "" ], [ "Soatto", "Stefano", "" ] ]
Generative Vision-Language Models (VLMs) are prone to generating plausible-sounding textual answers that, however, are not always grounded in the input image. We investigate this phenomenon, usually referred to as "hallucination", and show that it stems from an excessive reliance on the language prior. In particular, we show that as more tokens are generated, the reliance on the visual prompt decreases, and this behavior strongly correlates with the emergence of hallucinations. To reduce hallucinations, we introduce Multi-Modal Mutual-Information Decoding (M3ID), a new sampling method for prompt amplification. M3ID amplifies the influence of the reference image over the language prior, hence favoring the generation of tokens with higher mutual information with the visual prompt. M3ID can be applied to any pre-trained autoregressive VLM at inference time without necessitating further training and with minimal computational overhead. If training is an option, we show that M3ID can be paired with Direct Preference Optimization (DPO) to improve the model's reliance on the prompt image without requiring any labels. Our empirical findings show that our algorithms maintain the fluency and linguistic capabilities of pre-trained VLMs while reducing hallucinations by mitigating visually ungrounded answers. Specifically, for the LLaVA 13B model, M3ID and M3ID+DPO reduce the percentage of hallucinated objects in captioning tasks by 25% and 28%, respectively, and improve the accuracy on VQA benchmarks such as POPE by 21% and 24%.
2005.08946
Fatemah Husain
Fatemah Husain
Arabic Offensive Language Detection Using Machine Learning and Ensemble Machine Learning Approaches
5 pages, 3 figures. arXiv admin note: text overlap with arXiv:2005.07297
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This study investigates the effect of applying a single-learner machine learning approach and an ensemble machine learning approach to offensive language detection in Arabic. Classifying Arabic social media text is a very challenging task due to the ambiguity and informality of the written format of the text. The Arabic language has multiple dialects with diverse vocabularies and structures, which increases the complexity of obtaining high classification performance. Our study shows a significant advantage of the ensemble machine learning approach over the single-learner machine learning approach. Among the trained ensemble machine learning classifiers, bagging performs best in offensive language detection, with an F1 score of 88%, which exceeds the score obtained by the best single-learner classifier by 6%. Our findings highlight the great opportunity of investing more effort in promoting ensemble machine learning solutions for offensive language detection models.
[ { "created": "Sat, 16 May 2020 06:40:36 GMT", "version": "v1" } ]
2020-05-20
[ [ "Husain", "Fatemah", "" ] ]
This study investigates the effect of applying a single-learner machine learning approach and an ensemble machine learning approach to offensive language detection in Arabic. Classifying Arabic social media text is a very challenging task due to the ambiguity and informality of the written format of the text. The Arabic language has multiple dialects with diverse vocabularies and structures, which increases the complexity of obtaining high classification performance. Our study shows a significant advantage of the ensemble machine learning approach over the single-learner machine learning approach. Among the trained ensemble machine learning classifiers, bagging performs best in offensive language detection, with an F1 score of 88%, which exceeds the score obtained by the best single-learner classifier by 6%. Our findings highlight the great opportunity of investing more effort in promoting ensemble machine learning solutions for offensive language detection models.
2302.13687
Albert Li
Albert H. Li, Preston Culbertson, Joel W. Burdick, Aaron D. Ames
FRoGGeR: Fast Robust Grasp Generation via the Min-Weight Metric
Accepted at IROS 2023. The arXiv version contains the appendix, which does not appear in the conference version
null
null
null
cs.RO math.OC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Many approaches to grasp synthesis optimize analytic quality metrics that measure grasp robustness based on finger placements and local surface geometry. However, generating feasible dexterous grasps by optimizing these metrics is slow, often taking minutes. To address this issue, this paper presents FRoGGeR: a method that quickly generates robust precision grasps using the min-weight metric, a novel, almost-everywhere differentiable approximation of the classical epsilon grasp metric. The min-weight metric is simple and interpretable, provides a reasonable measure of grasp robustness, and admits numerically efficient gradients for smooth optimization. We leverage these properties to rapidly synthesize collision-free robust grasps - typically in less than a second. FRoGGeR can refine the candidate grasps generated by other methods (heuristic, data-driven, etc.) and is compatible with many object representations (SDFs, meshes, etc.). We study FRoGGeR's performance on over 40 objects drawn from the YCB dataset, outperforming a competitive baseline in computation time, feasibility rate of grasp synthesis, and picking success in simulation. We conclude that FRoGGeR is fast: it has a median synthesis time of 0.834s over hundreds of experiments.
[ { "created": "Mon, 27 Feb 2023 11:46:13 GMT", "version": "v1" }, { "created": "Mon, 24 Jul 2023 07:23:45 GMT", "version": "v2" } ]
2023-07-25
[ [ "Li", "Albert H.", "" ], [ "Culbertson", "Preston", "" ], [ "Burdick", "Joel W.", "" ], [ "Ames", "Aaron D.", "" ] ]
Many approaches to grasp synthesis optimize analytic quality metrics that measure grasp robustness based on finger placements and local surface geometry. However, generating feasible dexterous grasps by optimizing these metrics is slow, often taking minutes. To address this issue, this paper presents FRoGGeR: a method that quickly generates robust precision grasps using the min-weight metric, a novel, almost-everywhere differentiable approximation of the classical epsilon grasp metric. The min-weight metric is simple and interpretable, provides a reasonable measure of grasp robustness, and admits numerically efficient gradients for smooth optimization. We leverage these properties to rapidly synthesize collision-free robust grasps - typically in less than a second. FRoGGeR can refine the candidate grasps generated by other methods (heuristic, data-driven, etc.) and is compatible with many object representations (SDFs, meshes, etc.). We study FRoGGeR's performance on over 40 objects drawn from the YCB dataset, outperforming a competitive baseline in computation time, feasibility rate of grasp synthesis, and picking success in simulation. We conclude that FRoGGeR is fast: it has a median synthesis time of 0.834s over hundreds of experiments.
1011.5168
Emilio Ferrara
Salvatore Catanese, Pasquale De Meo, Emilio Ferrara, Giacomo Fiumara
Analyzing the Facebook Friendship Graph
6 pages, 1 figure; MIFI '10: Proceedings of the 1st International Workshop on Mining the Future Internet
Proceedings of the 1st International Workshop on Mining the Future Internet (MIFI '10), 2010
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online Social Networks (OSNs) have acquired huge and increasing popularity in recent years as one of the most important emerging Web phenomena, deeply modifying the behavior of users and contributing to building a solid substrate of connections and relationships among people using the Web. In this preliminary work, our purpose is to analyze Facebook, considering a significant sample of data reflecting relationships among subscribed users. Our goal is to extract from this platform relevant information about the distribution of these relations and to exploit tools and algorithms provided by Social Network Analysis (SNA) to discover and, possibly, understand underlying similarities between the development of OSNs and real-life social networks.
[ { "created": "Tue, 23 Nov 2010 17:07:32 GMT", "version": "v1" }, { "created": "Thu, 2 Jun 2011 15:10:49 GMT", "version": "v2" } ]
2011-06-03
[ [ "Catanese", "Salvatore", "" ], [ "De Meo", "Pasquale", "" ], [ "Ferrara", "Emilio", "" ], [ "Fiumara", "Giacomo", "" ] ]
Online Social Networks (OSNs) have acquired huge and increasing popularity in recent years as one of the most important emerging Web phenomena, deeply modifying the behavior of users and contributing to building a solid substrate of connections and relationships among people using the Web. In this preliminary work, our purpose is to analyze Facebook, considering a significant sample of data reflecting relationships among subscribed users. Our goal is to extract from this platform relevant information about the distribution of these relations and to exploit tools and algorithms provided by Social Network Analysis (SNA) to discover and, possibly, understand underlying similarities between the development of OSNs and real-life social networks.
2402.02037
Huang Dong
Dong Huang, Yuhao Qing, Weiyi Shang, Heming Cui, Jie M.Zhang
EffiBench: Benchmarking the Efficiency of Automatically Generated Code
30 pages, 7 figures
null
null
null
cs.SE cs.CL
http://creativecommons.org/licenses/by/4.0/
Code generation models have increasingly become integral to aiding software development. Although current research has thoroughly examined the correctness of the code produced by code generation models, a vital aspect that plays a pivotal role in green computing and sustainability efforts has often been neglected. This paper presents EffiBench, a benchmark with 1,000 efficiency-critical coding problems to assess the efficiency of code generated by code generation models. EffiBench contains a diverse set of LeetCode coding problems. Each problem is paired with an executable human-written canonical solution, which obtains the SOTA efficiency on the LeetCode solution leaderboard. With EffiBench, we empirically examine the ability of 42 large language models (35 open-source and 7 closed-source) to generate efficient code. Our evaluation results demonstrate that the efficiency of the code generated by LLMs is generally worse than the efficiency of human-written canonical solutions. For example, GPT-4 generated code has an average execution time \textbf{3.12} times that of the human-written canonical solutions. In the most extreme cases, the execution time and total memory usage of GPT-4 generated code are \textbf{13.89} and \textbf{43.92} times those of the canonical solutions. The source code of EffiBench is released on https://github.com/huangd1999/EffiBench. We also provide the LeaderBoard at https://huggingface.co/spaces/EffiBench/effibench-leaderboard.
[ { "created": "Sat, 3 Feb 2024 05:24:39 GMT", "version": "v1" }, { "created": "Thu, 15 Feb 2024 15:57:06 GMT", "version": "v2" }, { "created": "Fri, 7 Jun 2024 09:21:21 GMT", "version": "v3" }, { "created": "Thu, 4 Jul 2024 02:55:05 GMT", "version": "v4" } ]
2024-07-08
[ [ "Huang", "Dong", "" ], [ "Qing", "Yuhao", "" ], [ "Shang", "Weiyi", "" ], [ "Cui", "Heming", "" ], [ "Zhang", "Jie M.", "" ] ]
Code generation models have increasingly become integral to aiding software development. Although current research has thoroughly examined the correctness of the code produced by code generation models, a vital aspect that plays a pivotal role in green computing and sustainability efforts has often been neglected. This paper presents EffiBench, a benchmark with 1,000 efficiency-critical coding problems to assess the efficiency of code generated by code generation models. EffiBench contains a diverse set of LeetCode coding problems. Each problem is paired with an executable human-written canonical solution, which obtains the SOTA efficiency on the LeetCode solution leaderboard. With EffiBench, we empirically examine the ability of 42 large language models (35 open-source and 7 closed-source) to generate efficient code. Our evaluation results demonstrate that the efficiency of the code generated by LLMs is generally worse than the efficiency of human-written canonical solutions. For example, GPT-4 generated code has an average execution time \textbf{3.12} times that of the human-written canonical solutions. In the most extreme cases, the execution time and total memory usage of GPT-4 generated code are \textbf{13.89} and \textbf{43.92} times those of the canonical solutions. The source code of EffiBench is released on https://github.com/huangd1999/EffiBench. We also provide the LeaderBoard at https://huggingface.co/spaces/EffiBench/effibench-leaderboard.
2102.04588
Debasish Mohapatra
Debasish Ray Mohapatra, Victor Zappi, Sidney Fels
A comparative study of two-dimensional vocal tract acoustic modeling based on Finite-Difference Time-Domain methods
4 pages, 3 figures
null
null
null
cs.SD cs.CL eess.AS
http://creativecommons.org/licenses/by/4.0/
The two-dimensional (2D) numerical approaches for vocal tract (VT) modelling can afford a better balance between low computational cost and accurate rendering of acoustic wave propagation. However, they require a high spatio-temporal resolution in the numerical scheme for a precise estimation of acoustic formants, at the expense of simulation run time. We have recently proposed a new VT acoustic modelling technique, known as the 2.5D Finite-Difference Time-Domain (2.5D FDTD), which extends the existing 2D FDTD approach by adding tube depth to its acoustic wave solver. In this work, first, the simulated acoustic outputs of our new model are shown to be comparable with those of the 2D FDTD and a realistic 3D FEM VT model at a low spatio-temporal resolution. Next, a radiation model is developed by including a circular baffle around the VT as head geometry. The transfer functions of the radiation model are analyzed using five different vocal tract shapes for the vowel sounds /a/, /e/, /i/, /o/ and /u/.
[ { "created": "Tue, 9 Feb 2021 00:40:52 GMT", "version": "v1" } ]
2021-02-10
[ [ "Mohapatra", "Debasish Ray", "" ], [ "Zappi", "Victor", "" ], [ "Fels", "Sidney", "" ] ]
The two-dimensional (2D) numerical approaches for vocal tract (VT) modelling can afford a better balance between low computational cost and accurate rendering of acoustic wave propagation. However, they require a high spatio-temporal resolution in the numerical scheme for a precise estimation of acoustic formants, at the expense of simulation run time. We have recently proposed a new VT acoustic modelling technique, known as the 2.5D Finite-Difference Time-Domain (2.5D FDTD), which extends the existing 2D FDTD approach by adding tube depth to its acoustic wave solver. In this work, first, the simulated acoustic outputs of our new model are shown to be comparable with those of the 2D FDTD and a realistic 3D FEM VT model at a low spatio-temporal resolution. Next, a radiation model is developed by including a circular baffle around the VT as head geometry. The transfer functions of the radiation model are analyzed using five different vocal tract shapes for the vowel sounds /a/, /e/, /i/, /o/ and /u/.
1911.05281
Chen Xu
Chen Xu, Jian Wang, Tianhang Yu, Chuili Kong, Yourui Huangfu, Rong Li, Yiqun Ge, Jun Wang
Buffer-aware Wireless Scheduling based on Deep Reinforcement Learning
submitted to WCNC2020
null
null
null
cs.IT cs.LG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the downlink packet scheduling problem for cellular networks is modeled, jointly optimizing throughput, fairness and packet drop rate. Two genie-aided heuristic search methods are employed to explore the solution space. A deep reinforcement learning (DRL) framework with the A2C algorithm is proposed for the optimization problem. Several methods are utilized in the framework to improve the sampling and training efficiency and to adapt the algorithm to a specific scheduling problem. Numerical results show that DRL outperforms the baseline algorithm and achieves performance similar to the genie-aided methods without using future information.
[ { "created": "Wed, 13 Nov 2019 04:15:02 GMT", "version": "v1" } ]
2019-11-14
[ [ "Xu", "Chen", "" ], [ "Wang", "Jian", "" ], [ "Yu", "Tianhang", "" ], [ "Kong", "Chuili", "" ], [ "Huangfu", "Yourui", "" ], [ "Li", "Rong", "" ], [ "Ge", "Yiqun", "" ], [ "Wang", "Jun", "" ] ]
In this paper, the downlink packet scheduling problem for cellular networks is modeled, jointly optimizing throughput, fairness and packet drop rate. Two genie-aided heuristic search methods are employed to explore the solution space. A deep reinforcement learning (DRL) framework with the A2C algorithm is proposed for the optimization problem. Several methods are utilized in the framework to improve the sampling and training efficiency and to adapt the algorithm to a specific scheduling problem. Numerical results show that DRL outperforms the baseline algorithm and achieves performance similar to the genie-aided methods without using future information.
1607.03516
Muhammad Ghifary
Muhammad Ghifary and W. Bastiaan Kleijn and Mengjie Zhang and David Balduzzi and Wen Li
Deep Reconstruction-Classification Networks for Unsupervised Domain Adaptation
to appear in European Conference on Computer Vision (ECCV) 2016
null
null
null
cs.CV cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel unsupervised domain adaptation algorithm based on deep learning for visual object recognition. Specifically, we design a new model called Deep Reconstruction-Classification Network (DRCN), which jointly learns a shared encoding representation for two tasks: i) supervised classification of labeled source data, and ii) unsupervised reconstruction of unlabeled target data. In this way, the learnt representation not only preserves discriminability, but also encodes useful information from the target domain. Our new DRCN model can be optimized using backpropagation, similarly to standard neural networks. We evaluate the performance of DRCN on a series of cross-domain object recognition tasks, where DRCN provides a considerable improvement (up to ~8% in accuracy) over the prior state-of-the-art algorithms. Interestingly, we also observe that the reconstruction pipeline of DRCN transforms images from the source domain into images whose appearance resembles the target dataset. This suggests that DRCN's performance is due to constructing a single composite representation that encodes information about both the structure of target images and the classification of source images. Finally, we provide a formal analysis to justify the algorithm's objective in the domain adaptation context.
[ { "created": "Tue, 12 Jul 2016 20:48:58 GMT", "version": "v1" }, { "created": "Mon, 1 Aug 2016 09:58:13 GMT", "version": "v2" } ]
2016-08-03
[ [ "Ghifary", "Muhammad", "" ], [ "Kleijn", "W. Bastiaan", "" ], [ "Zhang", "Mengjie", "" ], [ "Balduzzi", "David", "" ], [ "Li", "Wen", "" ] ]
In this paper, we propose a novel unsupervised domain adaptation algorithm based on deep learning for visual object recognition. Specifically, we design a new model called Deep Reconstruction-Classification Network (DRCN), which jointly learns a shared encoding representation for two tasks: i) supervised classification of labeled source data, and ii) unsupervised reconstruction of unlabeled target data. In this way, the learnt representation not only preserves discriminability, but also encodes useful information from the target domain. Our new DRCN model can be optimized using backpropagation, similarly to standard neural networks. We evaluate the performance of DRCN on a series of cross-domain object recognition tasks, where DRCN provides a considerable improvement (up to ~8% in accuracy) over the prior state-of-the-art algorithms. Interestingly, we also observe that the reconstruction pipeline of DRCN transforms images from the source domain into images whose appearance resembles the target dataset. This suggests that DRCN's performance is due to constructing a single composite representation that encodes information about both the structure of target images and the classification of source images. Finally, we provide a formal analysis to justify the algorithm's objective in the domain adaptation context.
2006.10645
Xiaohang Zhan
Xiaohang Zhan, Jiahao Xie, Ziwei Liu, Yew Soon Ong, Chen Change Loy
Online Deep Clustering for Unsupervised Representation Learning
Accepted by CVPR 2020. Code: https://github.com/open-mmlab/OpenSelfSup
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Joint clustering and feature learning methods have shown remarkable performance in unsupervised representation learning. However, the training schedule alternating between feature clustering and network parameter updates leads to unstable learning of visual representations. To overcome this challenge, we propose Online Deep Clustering (ODC), which performs clustering and network updates simultaneously rather than alternately. Our key insight is that the cluster centroids should evolve steadily to keep the classifier stably updated. Specifically, we design and maintain two dynamic memory modules, i.e., a samples memory to store samples' labels and features, and a centroids memory for centroid evolution. We break down the abrupt global clustering into steady memory updates and batch-wise label re-assignment. The process is integrated into network update iterations. In this way, labels and the network evolve shoulder-to-shoulder rather than alternately. Extensive experiments demonstrate that ODC stabilizes the training process and boosts the performance effectively. Code: https://github.com/open-mmlab/OpenSelfSup.
[ { "created": "Thu, 18 Jun 2020 16:15:46 GMT", "version": "v1" } ]
2020-06-19
[ [ "Zhan", "Xiaohang", "" ], [ "Xie", "Jiahao", "" ], [ "Liu", "Ziwei", "" ], [ "Ong", "Yew Soon", "" ], [ "Loy", "Chen Change", "" ] ]
Joint clustering and feature learning methods have shown remarkable performance in unsupervised representation learning. However, the training schedule alternating between feature clustering and network parameter updates leads to unstable learning of visual representations. To overcome this challenge, we propose Online Deep Clustering (ODC), which performs clustering and network updates simultaneously rather than alternately. Our key insight is that the cluster centroids should evolve steadily to keep the classifier stably updated. Specifically, we design and maintain two dynamic memory modules, i.e., a samples memory to store samples' labels and features, and a centroids memory for centroid evolution. We break down the abrupt global clustering into steady memory updates and batch-wise label re-assignment. The process is integrated into network update iterations. In this way, labels and the network evolve shoulder-to-shoulder rather than alternately. Extensive experiments demonstrate that ODC stabilizes the training process and boosts the performance effectively. Code: https://github.com/open-mmlab/OpenSelfSup.
1705.03250
George Grispos
George Grispos and Jesus Garcia-Galan and Liliana Pasquale and Bashar Nuseibeh
Are You Ready? Towards the Engineering of Forensic-Ready Systems
Presented at IEEE 11th International Conference on Research Challenges in Information Science, Brighton, United Kindgom
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As security incidents continue to impact organisations, there is a growing demand for systems to be 'forensic ready' - to maximise the potential use of evidence whilst minimising the costs of an investigation. Researchers have supported organisational forensic readiness efforts by proposing the use of policies and processes, aligning systems with forensics objectives and training employees. However, recent work has also proposed an alternative strategy for implementing forensic readiness called forensic-by-design. This is an approach that involves integrating requirements for forensics into relevant phases of the systems development lifecycle with the aim of engineering forensic-ready systems. While this alternative forensic readiness strategy has been discussed in the literature, no previous research has examined the extent to which organisations actually use this approach for implementing forensic readiness. Hence, we investigate the extent to which organisations consider requirements for forensics during systems development. We first assessed existing research to identify the various perspectives of implementing forensic readiness, and then undertook an online survey to investigate the consideration of requirements for forensics during systems development lifecycles. Our findings provide an initial assessment of the extent to which requirements for forensics are considered within organisations. We then use our findings, coupled with the literature, to identify a number of research challenges regarding the engineering of forensic-ready systems.
[ { "created": "Tue, 9 May 2017 09:47:01 GMT", "version": "v1" }, { "created": "Mon, 15 May 2017 12:51:38 GMT", "version": "v2" } ]
2017-05-16
[ [ "Grispos", "George", "" ], [ "Garcia-Galan", "Jesus", "" ], [ "Pasquale", "Liliana", "" ], [ "Nuseibeh", "Bashar", "" ] ]
As security incidents continue to impact organisations, there is a growing demand for systems to be 'forensic ready' - to maximise the potential use of evidence whilst minimising the costs of an investigation. Researchers have supported organisational forensic readiness efforts by proposing the use of policies and processes, aligning systems with forensics objectives and training employees. However, recent work has also proposed an alternative strategy for implementing forensic readiness called forensic-by-design. This is an approach that involves integrating requirements for forensics into relevant phases of the systems development lifecycle with the aim of engineering forensic-ready systems. While this alternative forensic readiness strategy has been discussed in the literature, no previous research has examined the extent to which organisations actually use this approach for implementing forensic readiness. Hence, we investigate the extent to which organisations consider requirements for forensics during systems development. We first assessed existing research to identify the various perspectives of implementing forensic readiness, and then undertook an online survey to investigate the consideration of requirements for forensics during systems development lifecycles. Our findings provide an initial assessment of the extent to which requirements for forensics are considered within organisations. We then use our findings, coupled with the literature, to identify a number of research challenges regarding the engineering of forensic-ready systems.
1711.09368
Siyu Zhou
Siyu Zhou, Weiqiang Zhao, Jiashi Feng, Hanjiang Lai, Yan Pan, Jian Yin, Shuicheng Yan
Personalized and Occupational-aware Age Progression by Generative Adversarial Networks
9 pages, 10 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face age progression, which aims to predict future looks, is important for various applications and has received considerable attention. Existing methods and datasets are limited in exploring the effects of occupations, which may influence personal appearance. In this paper, we first introduce an occupational face aging dataset for studying the influence of occupations on appearance. It includes five occupations, which enables the development of new algorithms for age progression and facilitates future research. Second, we propose a new occupational-aware adversarial face aging network, which learns the human aging process under different occupations. Two factors are taken into consideration in our aging process: personality preservation and visually plausible texture change for different occupations. We propose a personalized network with a personalized loss in a deep autoencoder network for keeping personalized facial characteristics, and an occupational-aware adversarial network with an occupational-aware adversarial loss for obtaining more realistic texture changes. Experimental results demonstrate the advantages of the proposed method in comparison with other state-of-the-art age progression methods.
[ { "created": "Sun, 26 Nov 2017 10:50:56 GMT", "version": "v1" }, { "created": "Fri, 1 Dec 2017 06:58:03 GMT", "version": "v2" } ]
2017-12-04
[ [ "Zhou", "Siyu", "" ], [ "Zhao", "Weiqiang", "" ], [ "Feng", "Jiashi", "" ], [ "Lai", "Hanjiang", "" ], [ "Pan", "Yan", "" ], [ "Yin", "Jian", "" ], [ "Yan", "Shuicheng", "" ] ]
1912.01713
Oleksii Konashevych
Oleksii Konashevych
Cross-Blockchain Databases for Governments: The Technology for Public Registries and Smart Laws
This document needs major revision and is not going to be updated
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is ongoing competition among blockchain technologies, and the existence of one ultimate blockchain is impossible for many reasons. On the other hand, such variety can create difficulties in adoption, especially for governments and corporations. The proposed technology ensures a blockchain-agnostic approach and aims to create a unified ecosystem of multiple networks. The cross-blockchain protocol can be used to develop services in which end users choose their preferred blockchain. The invention addresses the duplication of tokens resulting from hard forks, issues with scalability and digital identity, and even the "problem" of immutability (enforceability). A cross-blockchain DB is a consistent, non-conflicting key-value database across a set of defined blockchains. It is not a new blockchain, but a protocol for developing databases on existing blockchains. The protocol is also the basis for a "smart law", a framework for public registries and their governance.
[ { "created": "Thu, 28 Nov 2019 00:10:09 GMT", "version": "v1" }, { "created": "Fri, 24 Apr 2020 02:32:40 GMT", "version": "v2" } ]
2020-07-24
[ [ "Konashevych", "Oleksii", "" ] ]
2003.07329
Jize Zhang
Jize Zhang and Bhavya Kailkhura and T. Yong-Jin Han
Mix-n-Match: Ensemble and Compositional Methods for Uncertainty Calibration in Deep Learning
ICML 2020
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper studies the problem of post-hoc calibration of machine learning classifiers. We introduce the following desiderata for uncertainty calibration: (a) accuracy-preserving, (b) data-efficient, and (c) high expressive power. We show that none of the existing methods satisfy all three requirements, and demonstrate how Mix-n-Match calibration strategies (i.e., ensemble and composition) can help achieve remarkably better data-efficiency and expressive power while provably maintaining the classification accuracy of the original classifier. Mix-n-Match strategies are generic in the sense that they can be used to improve the performance of any off-the-shelf calibrator. We also reveal potential issues in standard evaluation practices. Popular approaches (e.g., histogram-based expected calibration error (ECE)) may provide misleading results, especially in the small-data regime. Therefore, we propose an alternative data-efficient kernel density-based estimator for a reliable evaluation of the calibration performance and prove its asymptotic unbiasedness and consistency. Our approaches outperform state-of-the-art solutions on both the calibration and evaluation tasks in most of the experimental settings. Our code is available at https://github.com/zhang64-llnl/Mix-n-Match-Calibration.
[ { "created": "Mon, 16 Mar 2020 17:00:35 GMT", "version": "v1" }, { "created": "Tue, 30 Jun 2020 06:44:30 GMT", "version": "v2" } ]
2020-07-01
[ [ "Zhang", "Jize", "" ], [ "Kailkhura", "Bhavya", "" ], [ "Han", "T. Yong-Jin", "" ] ]
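The histogram-based ECE that the abstract flags as potentially misleading can be sketched in a few lines (a minimal illustration with an assumed equal-width, right-closed binning and bin count; not the paper's estimator):

```python
import numpy as np

def ece_histogram(confidences, correct, n_bins=15):
    """Histogram-based expected calibration error with equal-width bins.

    confidences: predicted top-label probabilities in [0, 1]
    correct: 1 if the corresponding prediction was right, else 0
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # right-closed bins so that confidence 1.0 lands in the last bin
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += (mask.sum() / n) * gap  # weight gap by the bin's mass
    return ece
```

With few samples, many bins are empty or sparsely populated, which is exactly the small-data pathology that motivates the kernel density-based alternative.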
2004.02709
Matt Gardner
Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou
Evaluating Models' Local Decision Boundaries via Contrast Sets
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Standard test sets for supervised learning evaluate in-distribution generalization. Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities. We propose a new annotation paradigm for NLP that helps to close systematic gaps in the test data. In particular, after a dataset is constructed, we recommend that the dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets. Contrast sets provide a local view of a model's decision boundary, which can be used to more accurately evaluate a model's true linguistic capabilities. We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets (e.g., DROP reading comprehension, UD parsing, IMDb sentiment analysis). Although our contrast sets are not explicitly adversarial, model performance is significantly lower on them than on the original test sets - up to 25% in some cases. We release our contrast sets as new evaluation benchmarks and encourage future dataset construction efforts to follow similar annotation processes.
[ { "created": "Mon, 6 Apr 2020 14:47:18 GMT", "version": "v1" }, { "created": "Thu, 1 Oct 2020 21:26:57 GMT", "version": "v2" } ]
2020-10-05
[ [ "Gardner", "Matt", "" ], [ "Artzi", "Yoav", "" ], [ "Basmova", "Victoria", "" ], [ "Berant", "Jonathan", "" ], [ "Bogin", "Ben", "" ], [ "Chen", "Sihao", "" ], [ "Dasigi", "Pradeep", "" ], [ "Dua", "Dheeru", "" ], [ "Elazar", "Yanai", "" ], [ "Gottumukkala", "Ananth", "" ], [ "Gupta", "Nitish", "" ], [ "Hajishirzi", "Hanna", "" ], [ "Ilharco", "Gabriel", "" ], [ "Khashabi", "Daniel", "" ], [ "Lin", "Kevin", "" ], [ "Liu", "Jiangming", "" ], [ "Liu", "Nelson F.", "" ], [ "Mulcaire", "Phoebe", "" ], [ "Ning", "Qiang", "" ], [ "Singh", "Sameer", "" ], [ "Smith", "Noah A.", "" ], [ "Subramanian", "Sanjay", "" ], [ "Tsarfaty", "Reut", "" ], [ "Wallace", "Eric", "" ], [ "Zhang", "Ally", "" ], [ "Zhou", "Ben", "" ] ]
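Evaluation on an original example together with its contrast perturbations is commonly summarized as contrast consistency: the fraction of groups a model answers entirely correctly. A minimal sketch (the grouping format is an assumption, not the authors' released tooling):

```python
def contrast_consistency(groups):
    """groups: list of lists of booleans; each inner list records the model's
    correctness on one original example and all of its contrast perturbations.
    Returns the fraction of groups answered entirely correctly.
    """
    if not groups:
        return 0.0
    return sum(all(g) for g in groups) / len(groups)
```

This metric is stricter than plain accuracy: a model that is right on every original example but flips on one perturbation per group scores zero.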
cs/0309041
Konstantin Rybnikov
Konstantin Rybnikov
Fast Verification of Convexity of Piecewise-linear Surfaces
10 pages (abbreviated version). Significantly different from all older versions. Discount the previous version -- it had many omissions and typos, like the following one: everything works starting from dimension n=3, not n=2 as was printed in the old abstract. Hyperbolic and spherical cases have been substantially rewritten and errors fixed. This preprint is close to a similar preprint on the MATH part of arxiv.org
null
null
null
cs.CG cs.CV
null
We show that a realization of a closed connected PL-manifold of dimension n-1 in n-dimensional Euclidean space (n>2) is the boundary of a convex polyhedron (finite or infinite) if and only if the interior of each (n-3)-face has a point which has a neighborhood lying on the boundary of an n-dimensional convex body. No initial assumptions about the topology or orientability of the input surface are made. The theorem is derived from a refinement and generalization of Van Heijenoort's theorem on locally convex manifolds to spherical spaces. Our convexity criterion for PL-manifolds implies an easy polynomial-time algorithm for checking convexity of a given PL-surface in n-dimensional Euclidean or spherical space, n>2. The algorithm is worst-case optimal with respect to both the number of operations and the algebraic degree. The algorithm works under significantly weaker assumptions and is easier to implement than the convexity verification algorithms suggested by Mehlhorn et al. (1996-1999) and Devillers et al. (1998). A paradigm of approximate convexity is suggested, and a simplified algorithm of smaller degree and complexity is proposed for approximate floating-point convexity verification.
[ { "created": "Tue, 23 Sep 2003 06:47:28 GMT", "version": "v1" }, { "created": "Mon, 24 Nov 2003 11:23:31 GMT", "version": "v2" } ]
2007-05-23
[ [ "Rybnikov", "Konstantin", "" ] ]
2312.15993
Ruidong Yan
Yuqi Zheng, Ruidong Yan, Bin Jia, Rui Jiang, Adriana TAPUS, Xiaojing Chen, Shiteng Zheng, Ying Shang
Adaptive Kalman-based hybrid car following strategy using TD3 and CACC
32 pages, 13 figures
null
null
null
cs.AI cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In autonomous driving, a hybrid strategy combining deep reinforcement learning and cooperative adaptive cruise control (CACC) can fully utilize the advantages of both algorithms and significantly improve car-following performance. However, it is challenging for a traditional hybrid strategy based on fixed coefficients to adapt to mixed traffic flow scenarios, which may degrade performance and even lead to accidents. To address these problems, a hybrid car-following strategy based on an adaptive Kalman filter is proposed, combining the CACC and Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithms. Unlike traditional hybrid strategies based on fixed coefficients, the Kalman gain H, used as an adaptive coefficient, is derived from multi-timestep predictions and Monte Carlo Tree Search. Simulation results over 4,157,745 timesteps indicate that, compared with the TD3 and HCFS algorithms, the proposed algorithm substantially enhances the safety of car following in mixed traffic flow without compromising comfort or efficiency.
[ { "created": "Tue, 26 Dec 2023 10:51:46 GMT", "version": "v1" } ]
2023-12-27
[ [ "Zheng", "Yuqi", "" ], [ "Yan", "Ruidong", "" ], [ "Jia", "Bin", "" ], [ "Jiang", "Rui", "" ], [ "TAPUS", "Adriana", "" ], [ "Chen", "Xiaojing", "" ], [ "Zheng", "Shiteng", "" ], [ "Shang", "Ying", "" ] ]
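The fixed-coefficient hybrid that the adaptive scheme replaces can be sketched as a convex blend of the two controllers' accelerations. Here `h` stands in for the adaptively derived Kalman gain; the CACC feedback law, its gains, and all names are illustrative assumptions, not the paper's implementation:

```python
def cacc_accel(gap, ego_v, lead_v, desired_gap, kp=0.45, kd=0.25):
    """Toy CACC-style feedback law on spacing and speed errors."""
    spacing_error = gap - desired_gap
    speed_error = lead_v - ego_v
    return kp * spacing_error + kd * speed_error

def hybrid_accel(a_rl, gap, ego_v, lead_v, desired_gap, h):
    """Blend a learned (e.g. TD3) action with CACC using gain h in [0, 1].

    h plays the role of the adaptive Kalman gain: h = 0 fully trusts CACC,
    h = 1 fully trusts the learned policy; a fixed-coefficient hybrid keeps
    h constant, whereas the adaptive scheme updates it every timestep.
    """
    a_cacc = cacc_accel(gap, ego_v, lead_v, desired_gap)
    return h * a_rl + (1.0 - h) * a_cacc
```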
2405.04370
Junyi Ma
Junyi Ma, Jingyi Xu, Xieyuanli Chen, Hesheng Wang
Diff-IP2D: Diffusion-Based Hand-Object Interaction Prediction on Egocentric Videos
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Understanding how humans would behave during hand-object interaction is vital for applications in service robot manipulation and extended reality. To achieve this, some recent works have been proposed to simultaneously forecast hand trajectories and object affordances on human egocentric videos. The joint prediction serves as a comprehensive representation of future hand-object interactions in 2D space, indicating potential human motion and motivation. However, the existing approaches mostly adopt the autoregressive paradigm for unidirectional prediction, which lacks mutual constraints within the holistic future sequence, and accumulates errors along the time axis. Meanwhile, these works basically overlook the effect of camera egomotion on first-person view predictions. To address these limitations, we propose a novel diffusion-based interaction prediction method, namely Diff-IP2D, to forecast future hand trajectories and object affordances concurrently in an iterative non-autoregressive manner. We transform the sequential 2D images into latent feature space and design a denoising diffusion model to predict future latent interaction features conditioned on past ones. Motion features are further integrated into the conditional denoising process to enable Diff-IP2D aware of the camera wearer's dynamics for more accurate interaction prediction. Extensive experiments demonstrate that our method significantly outperforms the state-of-the-art baselines on both the off-the-shelf metrics and our newly proposed evaluation protocol. This highlights the efficacy of leveraging a generative paradigm for 2D hand-object interaction prediction. The code of Diff-IP2D will be released at https://github.com/IRMVLab/Diff-IP2D.
[ { "created": "Tue, 7 May 2024 14:51:05 GMT", "version": "v1" }, { "created": "Mon, 20 May 2024 02:57:51 GMT", "version": "v2" } ]
2024-05-21
[ [ "Ma", "Junyi", "" ], [ "Xu", "Jingyi", "" ], [ "Chen", "Xieyuanli", "" ], [ "Wang", "Hesheng", "" ] ]
2102.03785
Rami Mochaourab
Rami Mochaourab and Sugandh Sinha and Stanley Greenstein and Panagiotis Papapetrou
Robust Explanations for Private Support Vector Machines
13 pages, 9 figures, 1 table
null
null
null
cs.LG cs.CR math.OC
http://creativecommons.org/licenses/by/4.0/
We consider counterfactual explanations for private support vector machines (SVM), where the privacy mechanism that publicly releases the classifier guarantees differential privacy. While privacy preservation is essential when dealing with sensitive data, there is a consequent degradation in the classification accuracy due to the introduced perturbations in the classifier weights. For such classifiers, counterfactual explanations need to be robust against the uncertainties in the SVM weights in order to ensure, with high confidence, that the classification of the data instance to be explained is different than its explanation. We model the uncertainties in the SVM weights through a random vector, and formulate the explanation problem as an optimization problem with probabilistic constraint. Subsequently, we characterize the problem's deterministic equivalent and study its solution. For linear SVMs, the problem is a convex second-order cone program. For non-linear SVMs, the problem is non-convex. Thus, we propose a sub-optimal solution that is based on the bisection method. The results show that, contrary to non-robust explanations, the quality of explanations from the robust solution degrades with increasing privacy in order to guarantee a prespecified confidence level for correct classifications.
[ { "created": "Sun, 7 Feb 2021 11:55:32 GMT", "version": "v1" }, { "created": "Wed, 9 Jun 2021 19:21:19 GMT", "version": "v2" } ]
2021-06-11
[ [ "Mochaourab", "Rami", "" ], [ "Sinha", "Sugandh", "" ], [ "Greenstein", "Stanley", "" ], [ "Papapetrou", "Panagiotis", "" ] ]
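The bisection step mentioned for the non-convex case can be illustrated on the simplest possible setting: finding the smallest move along a fixed direction that flips a deterministic linear decision function. This omits the probabilistic robustness constraint entirely; all names are illustrative:

```python
import numpy as np

def counterfactual_bisection(x, direction, w, b, t_max=100.0, tol=1e-6):
    """Smallest step t along `direction` such that sign(w.(x + t d) + b) flips.

    Plain bisection on the decision value, which is affine (hence monotone)
    along the ray; assumes the sign actually flips somewhere in [0, t_max].
    """
    f = lambda t: np.dot(w, x + t * direction) + b
    sign0 = np.sign(f(0.0))
    assert np.sign(f(t_max)) != sign0, "no sign flip within t_max"
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.sign(f(mid)) == sign0:
            lo = mid  # still on the original side
        else:
            hi = mid  # flipped: shrink from above
    return hi
```

In the paper's robust setting the scalar condition being bisected is a confidence-level constraint over the randomized weights rather than a single sign test.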
1701.03263
Marten Maack
Klaus Jansen and Marten Maack
An EPTAS for Scheduling on Unrelated Machines of Few Different Types
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the classical problem of scheduling on unrelated parallel machines, a set of jobs has to be assigned to a set of machines. The jobs have a processing time depending on the machine and the goal is to minimize the makespan, that is, the maximum machine load. It is well known that this problem is NP-hard and does not allow polynomial time approximation algorithms with approximation guarantees smaller than $1.5$ unless P$=$NP. We consider the case that there are only a constant number $K$ of machine types. Two machines have the same type if all jobs have the same processing time for them. This variant of the problem is strongly NP-hard already for $K=1$. We present an efficient polynomial time approximation scheme (EPTAS) for the problem, that is, for any $\varepsilon > 0$ an assignment with makespan of length at most $(1+\varepsilon)$ times the optimum can be found in polynomial time in the input length and the exponent is independent of $1/\varepsilon$. In particular we achieve a running time of $2^{\mathcal{O}(K\log(K) \frac{1}{\varepsilon}\log^4 \frac{1}{\varepsilon})}+\mathrm{poly}(|I|)$, where $|I|$ denotes the input length. Furthermore, we study three other problem variants and present an EPTAS for each of them: The Santa Claus problem, where the minimum machine load has to be maximized; the case of scheduling on unrelated parallel machines with a constant number of uniform types, where machines of the same type behave like uniformly related machines; and the multidimensional vector scheduling variant of the problem where both the dimension and the number of machine types are constant. For the Santa Claus problem we achieve the same running time. The results are achieved using mixed integer linear programming and rounding techniques.
[ { "created": "Thu, 12 Jan 2017 08:12:36 GMT", "version": "v1" }, { "created": "Wed, 6 Dec 2017 16:22:53 GMT", "version": "v2" } ]
2017-12-07
[ [ "Jansen", "Klaus", "" ], [ "Maack", "Marten", "" ] ]
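For intuition, the objective can be illustrated with a tiny greedy heuristic for unrelated machines: assign each job to the machine minimising its completion time and report the makespan. This is only a baseline sketch, not the paper's EPTAS, and carries no $(1+\varepsilon)$ guarantee:

```python
def greedy_unrelated(p):
    """p[j][i]: processing time of job j on machine i.

    Greedily assign each job to the machine that minimises its completion
    time; return (makespan, assignment). A heuristic, not the EPTAS.
    """
    m = len(p[0])
    load = [0.0] * m
    assignment = []
    for times in p:
        i = min(range(m), key=lambda k: load[k] + times[k])
        load[i] += times[i]
        assignment.append(i)
    return max(load), assignment
```

Note that "few machine types" means many columns of `p` coincide, which is precisely the structure the EPTAS exploits via mixed integer linear programming.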
2308.06420
Yen Nhi Truong Vu
Yen Nhi Truong Vu, Dan Guo, Ahmed Taha, Jason Su, Thomas Paul Matthews
M&M: Tackling False Positives in Mammography with a Multi-view and Multi-instance Learning Sparse Detector
MICCAI 2023 with supplementary materials
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Deep-learning-based object detection methods show promise for improving screening mammography, but high rates of false positives can hinder their effectiveness in clinical practice. To reduce false positives, we identify three challenges: (1) unlike natural images, a malignant mammogram typically contains only one malignant finding; (2) mammography exams contain two views of each breast, and both views ought to be considered to make a correct assessment; (3) most mammograms are negative and do not contain any findings. In this work, we tackle the three aforementioned challenges by: (1) leveraging Sparse R-CNN and showing that sparse detectors are more appropriate than dense detectors for mammography; (2) including a multi-view cross-attention module to synthesize information from different views; (3) incorporating multi-instance learning (MIL) to train with unannotated images and perform breast-level classification. The resulting model, M&M, is a Multi-view and Multi-instance learning system that can both localize malignant findings and provide breast-level predictions. We validate M&M's detection and classification performance using five mammography datasets. In addition, we demonstrate the effectiveness of each proposed component through comprehensive ablation studies.
[ { "created": "Fri, 11 Aug 2023 23:59:47 GMT", "version": "v1" } ]
2023-08-15
[ [ "Vu", "Yen Nhi Truong", "" ], [ "Guo", "Dan", "" ], [ "Taha", "Ahmed", "" ], [ "Su", "Jason", "" ], [ "Matthews", "Thomas Paul", "" ] ]
1905.07903
Ruslan Nikolaev
Ruslan Nikolaev and Binoy Ravindran
Snapshot-Free, Transparent, and Robust Memory Reclamation for Lock-Free Data Structures
An extended version of the PLDI'21 paper (with Appendix)
42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI 2021)
10.1145/3453483.3454090
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a family of safe memory reclamation schemes, Hyaline, which are fast, scalable, and transparent to the underlying lock-free data structures. Hyaline is based on reference counting - considered impractical for memory reclamation in the past due to high overheads. Hyaline uses reference counters only during reclamation, but not while accessing individual objects, which reduces overheads for object accesses. Since with reference counters an arbitrary thread ends up freeing memory, Hyaline's reclamation workload is (almost) balanced across all threads, unlike most prior reclamation schemes such as epoch-based reclamation (EBR) or hazard pointers (HP). Hyaline often yields (excellent) EBR-grade performance with (good) HP-grade memory efficiency, which is a challenging tradeoff with all existing schemes. Hyaline schemes offer: (i) high performance; (ii) good memory efficiency; (iii) robustness: bounding memory usage even in the presence of stalled threads, a well-known problem with EBR; (iv) transparency: supporting a virtually unbounded number of threads (or concurrent entities) that can be created and deleted dynamically and effortlessly join the existing workload; (v) autonomy: avoiding special OS mechanisms and being non-intrusive to runtime or compiler environments; (vi) simplicity: enabling easy integration into unmanaged C/C++ code; and (vii) generality: supporting many data structures. All existing schemes lack one or more of these properties. We have implemented and tested Hyaline on x86(-64), ARM32/64, PowerPC, and MIPS. The general approach requires LL/SC or double-width CAS, while a specialized version also works with single-width CAS. Our evaluation reveals that Hyaline's throughput is very high - it steadily outperforms EBR by 10% in one test and yields 2x gains in oversubscribed scenarios. Hyaline's superior memory efficiency is especially evident in read-dominated workloads.
[ { "created": "Mon, 20 May 2019 06:34:15 GMT", "version": "v1" }, { "created": "Sat, 1 May 2021 13:38:45 GMT", "version": "v2" } ]
2021-05-04
[ [ "Nikolaev", "Ruslan", "" ], [ "Ravindran", "Binoy", "" ] ]
We present a family of safe memory reclamation schemes, Hyaline, which are fast, scalable, and transparent to the underlying lock-free data structures. Hyaline is based on reference counting - considered impractical for memory reclamation in the past due to high overheads. Hyaline uses reference counters only during reclamation, but not while accessing individual objects, which reduces overheads for object accesses. Since with reference counters an arbitrary thread ends up freeing memory, Hyaline's reclamation workload is (almost) balanced across all threads, unlike most prior reclamation schemes such as epoch-based reclamation (EBR) or hazard pointers (HP). Hyaline often yields (excellent) EBR-grade performance with (good) HP-grade memory efficiency, which is a challenging tradeoff with all existing schemes. Hyaline schemes offer: (i) high performance; (ii) good memory efficiency; (iii) robustness: bounding memory usage even in the presence of stalled threads, a well-known problem with EBR; (iv) transparency: supporting a virtually unbounded number of threads (or concurrent entities) that can be created and deleted dynamically and effortlessly join the existing workload; (v) autonomy: avoiding special OS mechanisms and being non-intrusive to runtime or compiler environments; (vi) simplicity: enabling easy integration into unmanaged C/C++ code; and (vii) generality: supporting many data structures. All existing schemes lack one or more of these properties. We have implemented and tested Hyaline on x86(-64), ARM32/64, PowerPC, and MIPS. The general approach requires LL/SC or double-width CAS, while a specialized version also works with single-width CAS. Our evaluation reveals that Hyaline's throughput is very high - it steadily outperforms EBR by 10% in one test and yields 2x gains in oversubscribed scenarios. Hyaline's superior memory efficiency is especially evident in read-dominated workloads.
2103.16748
Ning Yu
Ning Yu, Guilin Liu, Aysegul Dundar, Andrew Tao, Bryan Catanzaro, Larry Davis, Mario Fritz
Dual Contrastive Loss and Attention for GANs
Accepted to ICCV'21
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative Adversarial Networks (GANs) produce impressive results on unconditional image generation when powered with large-scale image datasets. Yet generated images are still easy to spot, especially on datasets with high variance (e.g. bedroom, church). In this paper, we propose various improvements to further push the boundaries in image generation. Specifically, we propose a novel dual contrastive loss and show that, with this loss, the discriminator learns more generalized and distinguishable representations to incentivize generation. In addition, we revisit attention and extensively experiment with different attention blocks in the generator. We find attention to still be an important module for successful image generation, even though it was not used in the recent state-of-the-art models. Lastly, we study different attention architectures in the discriminator and propose a reference attention mechanism. By combining the strengths of these remedies, we improve the compelling state-of-the-art Fr\'{e}chet Inception Distance (FID) by at least 17.5% on several benchmark datasets. We obtain even more significant improvements on compositional synthetic scenes (up to 47.5% in FID). Code and models are available at https://github.com/ningyu1991/AttentionDualContrastGAN .
[ { "created": "Wed, 31 Mar 2021 01:10:26 GMT", "version": "v1" }, { "created": "Thu, 7 Oct 2021 10:58:28 GMT", "version": "v2" }, { "created": "Thu, 17 Mar 2022 20:59:31 GMT", "version": "v3" } ]
2022-03-21
[ [ "Yu", "Ning", "" ], [ "Liu", "Guilin", "" ], [ "Dundar", "Aysegul", "" ], [ "Tao", "Andrew", "" ], [ "Catanzaro", "Bryan", "" ], [ "Davis", "Larry", "" ], [ "Fritz", "Mario", "" ] ]
Generative Adversarial Networks (GANs) produce impressive results on unconditional image generation when powered with large-scale image datasets. Yet generated images are still easy to spot, especially on datasets with high variance (e.g. bedroom, church). In this paper, we propose various improvements to further push the boundaries in image generation. Specifically, we propose a novel dual contrastive loss and show that, with this loss, the discriminator learns more generalized and distinguishable representations to incentivize generation. In addition, we revisit attention and extensively experiment with different attention blocks in the generator. We find attention to still be an important module for successful image generation, even though it was not used in the recent state-of-the-art models. Lastly, we study different attention architectures in the discriminator and propose a reference attention mechanism. By combining the strengths of these remedies, we improve the compelling state-of-the-art Fr\'{e}chet Inception Distance (FID) by at least 17.5% on several benchmark datasets. We obtain even more significant improvements on compositional synthetic scenes (up to 47.5% in FID). Code and models are available at https://github.com/ningyu1991/AttentionDualContrastGAN .
2103.15042
Yinyin He
Yin-Yin He, Jianxin Wu, Xiu-Shen Wei
Distilling Virtual Examples for Long-tailed Recognition
Accepted to ICCV 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We tackle the long-tailed visual recognition problem from the knowledge distillation perspective by proposing a Distill the Virtual Examples (DiVE) method. Specifically, by treating the predictions of a teacher model as virtual examples, we prove that distilling from these virtual examples is equivalent to label distribution learning under certain constraints. We show that when the virtual example distribution becomes flatter than the original input distribution, the under-represented tail classes will receive significant improvements, which is crucial in long-tailed recognition. The proposed DiVE method can explicitly tune the virtual example distribution to become flat. Extensive experiments on three benchmark datasets, including the large-scale iNaturalist ones, justify that the proposed DiVE method can significantly outperform state-of-the-art methods. Furthermore, additional analyses and experiments verify the virtual example interpretation, and demonstrate the effectiveness of tailored designs in DiVE for long-tailed problems.
[ { "created": "Sun, 28 Mar 2021 04:25:43 GMT", "version": "v1" }, { "created": "Sun, 29 Aug 2021 14:03:22 GMT", "version": "v2" }, { "created": "Sun, 19 Sep 2021 08:14:44 GMT", "version": "v3" } ]
2021-09-21
[ [ "He", "Yin-Yin", "" ], [ "Wu", "Jianxin", "" ], [ "Wei", "Xiu-Shen", "" ] ]
We tackle the long-tailed visual recognition problem from the knowledge distillation perspective by proposing a Distill the Virtual Examples (DiVE) method. Specifically, by treating the predictions of a teacher model as virtual examples, we prove that distilling from these virtual examples is equivalent to label distribution learning under certain constraints. We show that when the virtual example distribution becomes flatter than the original input distribution, the under-represented tail classes will receive significant improvements, which is crucial in long-tailed recognition. The proposed DiVE method can explicitly tune the virtual example distribution to become flat. Extensive experiments on three benchmark datasets, including the large-scale iNaturalist ones, justify that the proposed DiVE method can significantly outperform state-of-the-art methods. Furthermore, additional analyses and experiments verify the virtual example interpretation, and demonstrate the effectiveness of tailored designs in DiVE for long-tailed problems.
2201.11653
Jin Hyun Park Mr
Jin Hyun Park
Representations learnt by SGD and Adaptive learning rules: Conditions that vary sparsity and selectivity in neural network
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
From the point of view of the human brain, continual learning can perform various tasks without mutual interference. An effective way to reduce mutual interference can be found in the sparsity and selectivity of neurons. According to Aljundi et al. and Hadsell et al., imposing sparsity at the representational level is advantageous for continual learning because sparse neuronal activations encourage less overlap between parameters, resulting in less interference. Similarly, highly selective neural networks are likely to induce less interference, since a particular response in neurons reduces the chance of overlap with other parameters. Considering that the human brain performs continual learning over the lifespan, finding conditions where sparsity and selectivity naturally arise may provide insight into how the brain functions. This paper investigates various conditions that naturally increase sparsity and selectivity in a neural network. This paper tested different optimizers with Hoyer's sparsity metric and the CCMAS selectivity metric on the MNIST classification task. It is essential to note that, to date, the natural occurrence of sparsity and selectivity under various conditions has not been investigated in either neuroscience or machine learning. This paper found that particular conditions, such as applying a large learning rate and lowering the batch size, increase sparsity and selectivity. In addition to the relationship between these conditions, sparsity, and selectivity, the following will be discussed based on empirical analysis: 1. the relationship between sparsity and selectivity and 2. the relationship between test accuracy, sparsity, and selectivity.
[ { "created": "Tue, 25 Jan 2022 05:40:24 GMT", "version": "v1" }, { "created": "Mon, 19 Feb 2024 09:08:11 GMT", "version": "v2" } ]
2024-02-21
[ [ "Park", "Jin Hyun", "" ] ]
From the point of view of the human brain, continual learning can perform various tasks without mutual interference. An effective way to reduce mutual interference can be found in the sparsity and selectivity of neurons. According to Aljundi et al. and Hadsell et al., imposing sparsity at the representational level is advantageous for continual learning because sparse neuronal activations encourage less overlap between parameters, resulting in less interference. Similarly, highly selective neural networks are likely to induce less interference, since a particular response in neurons reduces the chance of overlap with other parameters. Considering that the human brain performs continual learning over the lifespan, finding conditions where sparsity and selectivity naturally arise may provide insight into how the brain functions. This paper investigates various conditions that naturally increase sparsity and selectivity in a neural network. This paper tested different optimizers with Hoyer's sparsity metric and the CCMAS selectivity metric on the MNIST classification task. It is essential to note that, to date, the natural occurrence of sparsity and selectivity under various conditions has not been investigated in either neuroscience or machine learning. This paper found that particular conditions, such as applying a large learning rate and lowering the batch size, increase sparsity and selectivity. In addition to the relationship between these conditions, sparsity, and selectivity, the following will be discussed based on empirical analysis: 1. the relationship between sparsity and selectivity and 2. the relationship between test accuracy, sparsity, and selectivity.
2112.07431
Xiaomeng Li
Yi Li, Yiqun Duan, Zhanghui Kuang, Yimin Chen, Wayne Zhang, Xiaomeng Li
Uncertainty Estimation via Response Scaling for Pseudo-mask Noise Mitigation in Weakly-supervised Semantic Segmentation
Accept at AAAI 2022, Code is available at https://github.com/XMed-Lab/URN
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Weakly-Supervised Semantic Segmentation (WSSS) segments objects without the heavy burden of dense annotation. As a price, however, the generated pseudo-masks contain many noisy pixels, which result in sub-optimal segmentation models trained over these pseudo-masks. Few studies notice or work on this problem, even though such noisy pixels are inevitable after existing refinements of the pseudo-masks. We therefore aim to improve WSSS from the perspective of noise mitigation. We observe that many noisy pixels have high confidence, especially when the response range is too wide or too narrow, indicating an uncertain status. Thus, in this paper, we simulate noisy variations of the response by scaling the prediction map multiple times for uncertainty estimation. The uncertainty is then used to weight the segmentation loss to mitigate noisy supervision signals. We call this method URN, abbreviated from Uncertainty estimation via Response scaling for Noise mitigation. Experiments validate the benefits of URN, and our method achieves state-of-the-art results of 71.2% and 41.5% on PASCAL VOC 2012 and MS COCO 2014, respectively, without extra models such as saliency detection. Code is available at https://github.com/XMed-Lab/URN.
[ { "created": "Tue, 14 Dec 2021 14:37:19 GMT", "version": "v1" } ]
2021-12-15
[ [ "Li", "Yi", "" ], [ "Duan", "Yiqun", "" ], [ "Kuang", "Zhanghui", "" ], [ "Chen", "Yimin", "" ], [ "Zhang", "Wayne", "" ], [ "Li", "Xiaomeng", "" ] ]
Weakly-Supervised Semantic Segmentation (WSSS) segments objects without the heavy burden of dense annotation. As a price, however, the generated pseudo-masks contain many noisy pixels, which result in sub-optimal segmentation models trained over these pseudo-masks. Few studies notice or work on this problem, even though such noisy pixels are inevitable after existing refinements of the pseudo-masks. We therefore aim to improve WSSS from the perspective of noise mitigation. We observe that many noisy pixels have high confidence, especially when the response range is too wide or too narrow, indicating an uncertain status. Thus, in this paper, we simulate noisy variations of the response by scaling the prediction map multiple times for uncertainty estimation. The uncertainty is then used to weight the segmentation loss to mitigate noisy supervision signals. We call this method URN, abbreviated from Uncertainty estimation via Response scaling for Noise mitigation. Experiments validate the benefits of URN, and our method achieves state-of-the-art results of 71.2% and 41.5% on PASCAL VOC 2012 and MS COCO 2014, respectively, without extra models such as saliency detection. Code is available at https://github.com/XMed-Lab/URN.
2408.03977
Mengting Li
Mengting Li, Chuang Zhu
Learning from Noisy Labels for Long-tailed Data via Optimal Transport
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Noisy labels, which are common in real-world datasets, can significantly impair the training of deep learning models. However, recent adversarial noise-combating methods overlook the long-tailed distribution of real data, which can significantly harm the effect of denoising strategies. Meanwhile, the mismanagement of noisy labels further compromises the model's ability to handle long-tailed data. To tackle this issue, we propose a novel approach to manage data characterized by both long-tailed distributions and noisy labels. First, we introduce a loss-distance cross-selection module, which integrates class predictions and feature distributions to filter clean samples, effectively addressing uncertainties introduced by noisy labels and long-tailed distributions. Subsequently, we employ optimal transport strategies to generate pseudo-labels for the noise set in a semi-supervised training manner, enhancing pseudo-label quality while mitigating the effects of sample scarcity caused by the long-tailed distribution. We conduct experiments on both synthetic and real-world datasets, and the comprehensive experimental results demonstrate that our method surpasses current state-of-the-art methods. Our code will be available in the future.
[ { "created": "Wed, 7 Aug 2024 14:15:18 GMT", "version": "v1" } ]
2024-08-09
[ [ "Li", "Mengting", "" ], [ "Zhu", "Chuang", "" ] ]
Noisy labels, which are common in real-world datasets, can significantly impair the training of deep learning models. However, recent adversarial noise-combating methods overlook the long-tailed distribution of real data, which can significantly harm the effect of denoising strategies. Meanwhile, the mismanagement of noisy labels further compromises the model's ability to handle long-tailed data. To tackle this issue, we propose a novel approach to manage data characterized by both long-tailed distributions and noisy labels. First, we introduce a loss-distance cross-selection module, which integrates class predictions and feature distributions to filter clean samples, effectively addressing uncertainties introduced by noisy labels and long-tailed distributions. Subsequently, we employ optimal transport strategies to generate pseudo-labels for the noise set in a semi-supervised training manner, enhancing pseudo-label quality while mitigating the effects of sample scarcity caused by the long-tailed distribution. We conduct experiments on both synthetic and real-world datasets, and the comprehensive experimental results demonstrate that our method surpasses current state-of-the-art methods. Our code will be available in the future.
1612.01361
Hoang Dau
Hoang Dau and Iwan Duursma and Han Mao Kiah and Olgica Milenkovic
Repairing Reed-Solomon Codes With Multiple Erasures
15 pages
IEEE Transactions on Information Theory (2018) 64(10) 6567-6582
10.1109/TIT.2018.2827942
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite their exceptional error-correcting properties, Reed-Solomon codes have been overlooked in distributed storage applications due to the common belief that they have poor repair bandwidth: A naive repair approach would require the whole file to be reconstructed in order to recover a single erased codeword symbol. In a recent work, Guruswami and Wootters (STOC'16) proposed a single-erasure repair method for Reed-Solomon codes that achieves the optimal repair bandwidth amongst all linear encoding schemes. Their key idea is to recover the erased symbol by collecting a sufficiently large number of its traces, each of which can be constructed from a number of traces of other symbols. We extend the trace collection technique to cope with two and three erasures.
[ { "created": "Mon, 28 Nov 2016 22:32:07 GMT", "version": "v1" }, { "created": "Sun, 3 May 2020 02:30:48 GMT", "version": "v2" } ]
2020-05-05
[ [ "Dau", "Hoang", "" ], [ "Duursma", "Iwan", "" ], [ "Kiah", "Han Mao", "" ], [ "Milenkovic", "Olgica", "" ] ]
Despite their exceptional error-correcting properties, Reed-Solomon codes have been overlooked in distributed storage applications due to the common belief that they have poor repair bandwidth: A naive repair approach would require the whole file to be reconstructed in order to recover a single erased codeword symbol. In a recent work, Guruswami and Wootters (STOC'16) proposed a single-erasure repair method for Reed-Solomon codes that achieves the optimal repair bandwidth amongst all linear encoding schemes. Their key idea is to recover the erased symbol by collecting a sufficiently large number of its traces, each of which can be constructed from a number of traces of other symbols. We extend the trace collection technique to cope with two and three erasures.
1305.3586
Dilip Bethanabhotla
Dilip Bethanabhotla, Giuseppe Caire and Michael J. Neely
Utility Optimal Scheduling and Admission Control for Adaptive Video Streaming in Small Cell Networks
5 pages, 4 figures. Accepted and will be presented at IEEE International Symposium on Information Theory (ISIT) 2013
null
10.1109/ISIT.2013.6620565
null
cs.IT cs.MM cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the jointly optimal design of a transmission scheduling and admission control policy for adaptive video streaming over small cell networks. We formulate the problem as a dynamic network utility maximization and observe that it naturally decomposes into two subproblems: admission control and transmission scheduling. The resulting algorithms are simple and suitable for distributed implementation. The admission control decisions involve each user choosing the quality of the video chunk asked for download, based on the network congestion in its neighborhood. This form of admission control is compatible with the current video streaming technology based on the DASH protocol over TCP connections. Through simulations, we evaluate the performance of the proposed algorithm under realistic assumptions for a small-cell network.
[ { "created": "Wed, 15 May 2013 18:56:03 GMT", "version": "v1" } ]
2016-11-17
[ [ "Bethanabhotla", "Dilip", "" ], [ "Caire", "Giuseppe", "" ], [ "Neely", "Michael J.", "" ] ]
We consider the jointly optimal design of a transmission scheduling and admission control policy for adaptive video streaming over small cell networks. We formulate the problem as a dynamic network utility maximization and observe that it naturally decomposes into two subproblems: admission control and transmission scheduling. The resulting algorithms are simple and suitable for distributed implementation. The admission control decisions involve each user choosing the quality of the video chunk asked for download, based on the network congestion in its neighborhood. This form of admission control is compatible with the current video streaming technology based on the DASH protocol over TCP connections. Through simulations, we evaluate the performance of the proposed algorithm under realistic assumptions for a small-cell network.
2309.07530
Pierre Gaillard
Pierre Gaillard (Thoth), S\'ebastien Gerchinovitz (IMT), \'Etienne de Montbrun (TSE-R)
Adaptive approximation of monotone functions
null
null
null
null
cs.LG cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the classical problem of approximating a non-decreasing function $f: \mathcal{X} \to \mathcal{Y}$ in $L^p(\mu)$ norm by sequentially querying its values, for known compact real intervals $\mathcal{X}$, $\mathcal{Y}$ and a known probability measure $\mu$ on $\mathcal{X}$. For any function~$f$ we characterize the minimum number of evaluations of $f$ that algorithms need to guarantee an approximation $\hat{f}$ with an $L^p(\mu)$ error below $\epsilon$ after stopping. Unlike worst-case results that hold uniformly over all $f$, our complexity measure is dependent on each specific function $f$. To address this problem, we introduce GreedyBox, a generalization of an algorithm originally proposed by Novak (1992) for numerical integration. We prove that GreedyBox achieves an optimal sample complexity for any function $f$, up to logarithmic factors. Additionally, we uncover results regarding piecewise-smooth functions. Perhaps as expected, the $L^p(\mu)$ error of GreedyBox decreases much faster for piecewise-$C^2$ functions than predicted by the algorithm (without any knowledge of the smoothness of $f$). A simple modification even achieves optimal minimax approximation rates for such functions, which we compute explicitly. In particular, our findings highlight multiple performance gaps between adaptive and non-adaptive algorithms, smooth and piecewise-smooth functions, as well as monotone or non-monotone functions. Finally, we provide numerical experiments to support our theoretical results.
[ { "created": "Thu, 14 Sep 2023 08:56:31 GMT", "version": "v1" } ]
2023-09-15
[ [ "Gaillard", "Pierre", "", "Thoth" ], [ "Gerchinovitz", "Sébastien", "", "IMT" ], [ "de Montbrun", "Étienne", "", "TSE-R" ] ]
We study the classical problem of approximating a non-decreasing function $f: \mathcal{X} \to \mathcal{Y}$ in $L^p(\mu)$ norm by sequentially querying its values, for known compact real intervals $\mathcal{X}$, $\mathcal{Y}$ and a known probability measure $\mu$ on $\mathcal{X}$. For any function~$f$ we characterize the minimum number of evaluations of $f$ that algorithms need to guarantee an approximation $\hat{f}$ with an $L^p(\mu)$ error below $\epsilon$ after stopping. Unlike worst-case results that hold uniformly over all $f$, our complexity measure is dependent on each specific function $f$. To address this problem, we introduce GreedyBox, a generalization of an algorithm originally proposed by Novak (1992) for numerical integration. We prove that GreedyBox achieves an optimal sample complexity for any function $f$, up to logarithmic factors. Additionally, we uncover results regarding piecewise-smooth functions. Perhaps as expected, the $L^p(\mu)$ error of GreedyBox decreases much faster for piecewise-$C^2$ functions than predicted by the algorithm (without any knowledge of the smoothness of $f$). A simple modification even achieves optimal minimax approximation rates for such functions, which we compute explicitly. In particular, our findings highlight multiple performance gaps between adaptive and non-adaptive algorithms, smooth and piecewise-smooth functions, as well as monotone or non-monotone functions. Finally, we provide numerical experiments to support our theoretical results.
1906.04091
Priyanka Bhovad
Priyanka Bhovad, Joshua Kaufmann, and Suyi Li
Peristaltic locomotion without digital controllers: Exploiting the origami multi-stability to coordinate robotic motions
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study proposes and examines a novel approach to generate peristaltic-like locomotion in a segmented origami robot. Specifically, we demonstrate the use of multi-stability embedded in the origami skeleton to eliminate the need for multiple actuators or digital controllers to coordinate the complex robotic movements in peristaltic crawling. The crawling robot in this study consists of two serially connected bistable origami segments, each featuring a generalized Kresling design and a foldable anchoring mechanism. Mechanics analysis and experimental testing of this dual-segment module reveal a deterministic deformation sequence or actuation cycle, which is then used to generate the different phases in a peristaltic-like locomotion gait. Instead of individually controlling the segment deformation as in earthworms and other crawling robots, we only control the total length of this robot. Therefore, this approach can significantly reduce the total number of actuators needed for locomotion and simplify the control requirements. Moreover, the richness of the Kresling origami design offers us substantial freedom to tailor the locomotion performance. Results of this study will contribute to a paradigm shift in how we can use the mechanics of multi-stability for robotic actuation and control.
[ { "created": "Mon, 10 Jun 2019 16:10:04 GMT", "version": "v1" } ]
2019-06-11
[ [ "Bhovad", "Priyanka", "" ], [ "Kaufmann", "Joshua", "" ], [ "Li", "Suyi", "" ] ]
This study proposes and examines a novel approach to generate peristaltic-like locomotion in a segmented origami robot. Specifically, we demonstrate the use of multi-stability embedded in the origami skeleton to eliminate the need for multiple actuators or digital controllers to coordinate the complex robotic movements in peristaltic crawling. The crawling robot in this study consists of two serially connected bistable origami segments, each featuring a generalized Kresling design and a foldable anchoring mechanism. Mechanics analysis and experimental testing of this dual-segment module reveal a deterministic deformation sequence or actuation cycle, which is then used to generate the different phases in a peristaltic-like locomotion gait. Instead of individually controlling the segment deformation as in earthworms and other crawling robots, we only control the total length of this robot. Therefore, this approach can significantly reduce the total number of actuators needed for locomotion and simplify the control requirements. Moreover, the richness of the Kresling origami design offers us substantial freedom to tailor the locomotion performance. Results of this study will contribute to a paradigm shift in how we can use the mechanics of multi-stability for robotic actuation and control.
2407.04573
Hang Gao
Hang Gao and Yongfeng Zhang
VRSD: Rethinking Similarity and Diversity for Retrieval in Large Language Models
null
null
null
null
cs.IR cs.CL
http://creativecommons.org/licenses/by/4.0/
Vector retrieval algorithms are vital for semantic queries in the evolving landscape of Large Language Models (LLMs). Retrieving vectors that simultaneously meet criteria for both similarity and diversity significantly enhances the capabilities of LLM-based agents. Despite the widespread use of the Maximal Marginal Relevance (MMR) in retrieval scenarios with relevance and diversity requirements, fluctuations caused by variations in the parameter $ \lambda $ within the MMR complicate the determination of the optimization trajectory in vector spaces, thus obscuring the direction of enhancement. Moreover, there is a lack of a robust theoretical analysis for the constraints of similarity and diversity in retrieval processes. This paper introduces a novel approach to characterizing both constraints through the relationship between the sum vector and the query vector. The proximity of these vectors addresses the similarity constraint, while necessitating that individual vectors within the sum vector divergently align with the query vector to satisfy the diversity constraint. We also formulate a new combinatorial optimization challenge, selecting $k$ vectors from a set of candidates such that their sum vector maximally aligns with the query vector, a problem we demonstrate to be NP-complete. This establishes the profound difficulty of pursuing similarity and diversity simultaneously in vector retrieval and lays a theoretical groundwork for further research. Additionally, we present the heuristic algorithm Vectors Retrieval with Similarity and Diversity (VRSD), which not only has a definitive optimization goal and eschews the need for preset parameters but also offers a modest reduction in time complexity compared to MMR. Empirical validation further confirms that VRSD significantly surpasses MMR across various datasets.
[ { "created": "Fri, 5 Jul 2024 15:08:44 GMT", "version": "v1" } ]
2024-07-08
[ [ "Gao", "Hang", "" ], [ "Zhang", "Yongfeng", "" ] ]
Vector retrieval algorithms are vital for semantic queries in the evolving landscape of Large Language Models (LLMs). Retrieving vectors that simultaneously meet criteria for both similarity and diversity significantly enhances the capabilities of LLM-based agents. Despite the widespread use of Maximal Marginal Relevance (MMR) in retrieval scenarios with relevance and diversity requirements, fluctuations caused by variations in the parameter $\lambda$ within the MMR complicate the determination of the optimization trajectory in vector spaces, thus obscuring the direction of enhancement. Moreover, a robust theoretical analysis of the similarity and diversity constraints in retrieval processes is still lacking. This paper introduces a novel approach to characterizing both constraints through the relationship between the sum vector and the query vector. The proximity of these vectors addresses the similarity constraint, while requiring that individual vectors within the sum vector divergently align with the query vector to satisfy the diversity constraint. We also formulate a new combinatorial optimization problem: selecting $k$ vectors from a set of candidates such that their sum vector maximally aligns with the query vector, a problem we demonstrate to be NP-complete. This establishes the profound difficulty of pursuing similarity and diversity simultaneously in vector retrieval and lays a theoretical groundwork for further research. Additionally, we present the heuristic algorithm Vectors Retrieval with Similarity and Diversity (VRSD), which not only has a definitive optimization goal and eschews the need for preset parameters but also offers a modest reduction in time complexity compared to MMR. Empirical validation further confirms that VRSD significantly surpasses MMR across various datasets.
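As a point of reference for the $\lambda$ sensitivity discussed in the abstract, here is a minimal sketch of classical greedy MMR over cosine similarities. This is the standard textbook formulation, not the paper's VRSD algorithm; function and variable names are illustrative.

```python
import numpy as np

def mmr(query, candidates, k, lam=0.5):
    """Greedy Maximal Marginal Relevance over unit-normalized vectors.

    Picks k candidates, trading similarity to the query (weight lam)
    against similarity to already-selected vectors (weight 1 - lam).
    """
    query = query / np.linalg.norm(query)
    cands = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sim_q = cands @ query                      # cosine similarity to query
    selected, remaining = [], list(range(len(cands)))
    for _ in range(min(k, len(cands))):
        if selected:
            # redundancy: max similarity to anything already picked
            redundancy = (cands[remaining] @ cands[selected].T).max(axis=1)
        else:
            redundancy = np.zeros(len(remaining))
        scores = lam * sim_q[remaining] - (1 - lam) * redundancy
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected
```

Note how the second pick flips between a near-duplicate and a diverse vector as $\lambda$ moves, which is exactly the parameter sensitivity the abstract criticizes.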
2403.18258
Taro Togo
Taro Togo, Ren Togo, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
Enhancing Generative Class Incremental Learning Performance with Model Forgetting Approach
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study presents a novel approach to Generative Class Incremental Learning (GCIL) by introducing a forgetting mechanism, aimed at dynamically managing class information for better adaptation to streaming data. GCIL, the continual learning of generative models, is one of the hot topics in computer vision and is considered a crucial task for society. The ability to forget is a crucial brain function that facilitates continual learning in humans by selectively discarding less relevant information. However, in machine learning models, the concept of intentional forgetting has not been extensively investigated. In this study, we aim to bridge this gap by incorporating forgetting mechanisms into GCIL, thereby examining their impact on the models' ability to learn continually. Through our experiments, we have found that integrating forgetting mechanisms significantly enhances the models' performance in acquiring new knowledge, underscoring the positive role that strategic forgetting plays in continual learning.
[ { "created": "Wed, 27 Mar 2024 05:10:38 GMT", "version": "v1" } ]
2024-03-28
[ [ "Togo", "Taro", "" ], [ "Togo", "Ren", "" ], [ "Maeda", "Keisuke", "" ], [ "Ogawa", "Takahiro", "" ], [ "Haseyama", "Miki", "" ] ]
This study presents a novel approach to Generative Class Incremental Learning (GCIL) by introducing a forgetting mechanism, aimed at dynamically managing class information for better adaptation to streaming data. GCIL, the continual learning of generative models, is one of the hot topics in computer vision and is considered a crucial task for society. The ability to forget is a crucial brain function that facilitates continual learning in humans by selectively discarding less relevant information. However, in machine learning models, the concept of intentional forgetting has not been extensively investigated. In this study, we aim to bridge this gap by incorporating forgetting mechanisms into GCIL, thereby examining their impact on the models' ability to learn continually. Through our experiments, we have found that integrating forgetting mechanisms significantly enhances the models' performance in acquiring new knowledge, underscoring the positive role that strategic forgetting plays in continual learning.
1712.00171
Wenbo Zhao
Wenbo Zhao, Yang Gao, Rita Singh
Speaker identification from the sound of the human breath
5 pages, 3 figures
null
null
null
cs.SD eess.AS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper examines the speaker identification potential of breath sounds in continuous speech. Speech is largely produced during exhalation. In order to replenish air in the lungs, speakers must periodically inhale. When inhalation occurs in the midst of continuous speech, it is generally through the mouth. Intra-speech breathing behavior has been the subject of much study, including the patterns, cadence, and variations in energy levels. However, an often ignored characteristic is the {\em sound} produced during the inhalation phase of this cycle. Intra-speech inhalation is rapid and energetic, performed with open mouth and glottis, effectively exposing the entire vocal tract to enable maximum intake of air. This results in vocal tract resonances evoked by turbulence that are characteristic of the speaker's speech-producing apparatus. Consequently, the sounds of inhalation are expected to carry information about the speaker's identity. Moreover, unlike other spoken sounds which are subject to active control, inhalation sounds are generally more natural and less affected by voluntary influences. The goal of this paper is to demonstrate that breath sounds are indeed bio-signatures that can be used to identify speakers. We show that these sounds by themselves can yield remarkably accurate speaker recognition with appropriate feature representations and classification frameworks.
[ { "created": "Fri, 1 Dec 2017 03:16:23 GMT", "version": "v1" }, { "created": "Mon, 4 Dec 2017 17:30:42 GMT", "version": "v2" } ]
2017-12-05
[ [ "Zhao", "Wenbo", "" ], [ "Gao", "Yang", "" ], [ "Singh", "Rita", "" ] ]
This paper examines the speaker identification potential of breath sounds in continuous speech. Speech is largely produced during exhalation. In order to replenish air in the lungs, speakers must periodically inhale. When inhalation occurs in the midst of continuous speech, it is generally through the mouth. Intra-speech breathing behavior has been the subject of much study, including the patterns, cadence, and variations in energy levels. However, an often ignored characteristic is the {\em sound} produced during the inhalation phase of this cycle. Intra-speech inhalation is rapid and energetic, performed with open mouth and glottis, effectively exposing the entire vocal tract to enable maximum intake of air. This results in vocal tract resonances evoked by turbulence that are characteristic of the speaker's speech-producing apparatus. Consequently, the sounds of inhalation are expected to carry information about the speaker's identity. Moreover, unlike other spoken sounds which are subject to active control, inhalation sounds are generally more natural and less affected by voluntary influences. The goal of this paper is to demonstrate that breath sounds are indeed bio-signatures that can be used to identify speakers. We show that these sounds by themselves can yield remarkably accurate speaker recognition with appropriate feature representations and classification frameworks.
1206.2058
Ali Shadvar
Ali Shadvar
Dimension Reduction by Mutual Information Discriminant Analysis
13 pages, 3 tables, International Journal of Artificial Intelligence & Applications
null
null
null
cs.CV cs.IT cs.LG math.IT
http://creativecommons.org/licenses/publicdomain/
In the past few decades, researchers have proposed many discriminant analysis (DA) algorithms for the study of high-dimensional data in a variety of problems. Most DA algorithms for feature extraction are based on transformations that simultaneously maximize the between-class scatter and minimize the within-class scatter matrices. This paper presents a novel DA algorithm for feature extraction using mutual information (MI). However, it is not always easy to obtain an accurate estimation for high-dimensional MI. In this paper, we propose an efficient method for feature extraction that is based on one-dimensional MI estimations. We will refer to this algorithm as mutual information discriminant analysis (MIDA). The performance of this proposed method was evaluated using UCI databases. The results indicate that MIDA provides robust performance over different data sets with different characteristics and that MIDA always performs better than, or at least comparably to, the best performing algorithms.
[ { "created": "Sun, 10 Jun 2012 21:22:50 GMT", "version": "v1" } ]
2012-06-12
[ [ "Shadvar", "Ali", "" ] ]
In the past few decades, researchers have proposed many discriminant analysis (DA) algorithms for the study of high-dimensional data in a variety of problems. Most DA algorithms for feature extraction are based on transformations that simultaneously maximize the between-class scatter and minimize the within-class scatter matrices. This paper presents a novel DA algorithm for feature extraction using mutual information (MI). However, it is not always easy to obtain an accurate estimation for high-dimensional MI. In this paper, we propose an efficient method for feature extraction that is based on one-dimensional MI estimations. We will refer to this algorithm as mutual information discriminant analysis (MIDA). The performance of this proposed method was evaluated using UCI databases. The results indicate that MIDA provides robust performance over different data sets with different characteristics and that MIDA always performs better than, or at least comparably to, the best performing algorithms.
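The one-dimensional MI estimation that MIDA builds on can be sketched with a plug-in histogram estimator between a single continuous feature and discrete class labels. This is a common estimator choice for illustration only; the paper's exact estimator and the MIDA transformation itself are not reproduced here.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in MI estimate I(X; Y) between one continuous feature x
    and discrete labels y, via a 2D histogram of the joint distribution."""
    joint, _, _ = np.histogram2d(x, y, bins=(bins, len(np.unique(y))))
    pxy = joint / joint.sum()                 # empirical joint p(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(y)
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A feature strongly associated with the labels scores close to the label entropy, while an irrelevant feature scores near zero, which is what makes such one-dimensional estimates usable for feature ranking.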
1801.06358
Zhiyong Zhou
Zhiyong Zhou and Jun Yu
Sparse recovery based on q-ratio constrained minimal singular values
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study verifiable sufficient conditions and computable performance bounds for sparse recovery algorithms such as the Basis Pursuit, the Dantzig selector and the Lasso estimator, in terms of a newly defined family of quality measures for the measurement matrices. With high probability, the developed measures for subgaussian random matrices are bounded away from zero as long as the number of measurements is reasonably large. Compared to the restricted isometry constant based performance analysis, the arguments in this paper are much more concise and the obtained bounds are tighter. Numerical experiments are presented to illustrate our theoretical results.
[ { "created": "Fri, 19 Jan 2018 10:30:15 GMT", "version": "v1" } ]
2018-01-22
[ [ "Zhou", "Zhiyong", "" ], [ "Yu", "Jun", "" ] ]
We study verifiable sufficient conditions and computable performance bounds for sparse recovery algorithms such as the Basis Pursuit, the Dantzig selector and the Lasso estimator, in terms of a newly defined family of quality measures for the measurement matrices. With high probability, the developed measures for subgaussian random matrices are bounded away from zero as long as the number of measurements is reasonably large. Compared to the restricted isometry constant based performance analysis, the arguments in this paper are much more concise and the obtained bounds are tighter. Numerical experiments are presented to illustrate our theoretical results.
2402.05439
Joongkyu Lee
Joongkyu Lee, Seung Joon Park, Yunhao Tang, Min-hwan Oh
Learning Uncertainty-Aware Temporally-Extended Actions
Accepted in AAAI 2024 (Main Technical Track)
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
In reinforcement learning, temporal abstraction in the action space, exemplified by action repetition, is a technique to facilitate policy learning through extended actions. However, a primary limitation in previous studies of action repetition is its potential to degrade performance, particularly when sub-optimal actions are repeated. This issue often negates the advantages of action repetition. To address this, we propose a novel algorithm named Uncertainty-aware Temporal Extension (UTE). UTE employs ensemble methods to accurately measure uncertainty during action extension. This feature allows policies to strategically choose between emphasizing exploration or adopting an uncertainty-averse approach, tailored to their specific needs. We demonstrate the effectiveness of UTE through experiments in Gridworld and Atari 2600 environments. Our findings show that UTE outperforms existing action repetition algorithms, effectively mitigating their inherent limitations and significantly enhancing policy learning efficiency.
[ { "created": "Thu, 8 Feb 2024 06:32:06 GMT", "version": "v1" } ]
2024-02-09
[ [ "Lee", "Joongkyu", "" ], [ "Park", "Seung Joon", "" ], [ "Tang", "Yunhao", "" ], [ "Oh", "Min-hwan", "" ] ]
In reinforcement learning, temporal abstraction in the action space, exemplified by action repetition, is a technique to facilitate policy learning through extended actions. However, a primary limitation in previous studies of action repetition is its potential to degrade performance, particularly when sub-optimal actions are repeated. This issue often negates the advantages of action repetition. To address this, we propose a novel algorithm named Uncertainty-aware Temporal Extension (UTE). UTE employs ensemble methods to accurately measure uncertainty during action extension. This feature allows policies to strategically choose between emphasizing exploration or adopting an uncertainty-averse approach, tailored to their specific needs. We demonstrate the effectiveness of UTE through experiments in Gridworld and Atari 2600 environments. Our findings show that UTE outperforms existing action repetition algorithms, effectively mitigating their inherent limitations and significantly enhancing policy learning efficiency.
2010.08887
Kibok Lee
Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee
i-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning
ICLR 2021
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contrastive representation learning has been shown to be effective for learning representations from unlabeled data. However, much of this progress has been made in vision domains, relying on data augmentations carefully designed using domain knowledge. In this work, we propose i-Mix, a simple yet effective domain-agnostic regularization strategy for improving contrastive representation learning. We cast contrastive learning as training a non-parametric classifier by assigning a unique virtual class to each data instance in a batch. Then, data instances are mixed in both the input and virtual label spaces, providing more augmented data during training. In experiments, we demonstrate that i-Mix consistently improves the quality of learned representations across domains, including image, speech, and tabular data. Furthermore, we confirm its regularization effect via extensive ablation studies across model and dataset sizes. The code is available at https://github.com/kibok90/imix.
[ { "created": "Sat, 17 Oct 2020 23:32:26 GMT", "version": "v1" }, { "created": "Thu, 18 Mar 2021 07:13:31 GMT", "version": "v2" } ]
2021-03-19
[ [ "Lee", "Kibok", "" ], [ "Zhu", "Yian", "" ], [ "Sohn", "Kihyuk", "" ], [ "Li", "Chun-Liang", "" ], [ "Shin", "Jinwoo", "" ], [ "Lee", "Honglak", "" ] ]
Contrastive representation learning has been shown to be effective for learning representations from unlabeled data. However, much of this progress has been made in vision domains, relying on data augmentations carefully designed using domain knowledge. In this work, we propose i-Mix, a simple yet effective domain-agnostic regularization strategy for improving contrastive representation learning. We cast contrastive learning as training a non-parametric classifier by assigning a unique virtual class to each data instance in a batch. Then, data instances are mixed in both the input and virtual label spaces, providing more augmented data during training. In experiments, we demonstrate that i-Mix consistently improves the quality of learned representations across domains, including image, speech, and tabular data. Furthermore, we confirm its regularization effect via extensive ablation studies across model and dataset sizes. The code is available at https://github.com/kibok90/imix.
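The input/virtual-label mixing described in the abstract can be sketched as follows. This is a simplified illustration of the idea, not the authors' released code; the Beta-distributed mixing coefficient follows standard mixup practice and is an assumption here.

```python
import numpy as np

def i_mix_batch(x, alpha=1.0, rng=None):
    """Sketch of the i-Mix idea: each of the N batch instances gets a
    unique virtual (one-hot) class; inputs and virtual labels are then
    mixed with a shared coefficient lam, yielding soft targets over N
    virtual classes."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = x.shape[0]
    lam = rng.beta(alpha, alpha)              # mixup-style coefficient
    perm = rng.permutation(n)                 # mixing partner per instance
    virtual = np.eye(n)                       # one-hot virtual labels
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * virtual + (1 - lam) * virtual[perm]
    return x_mix, y_mix, lam
```

The mixed batch would then feed a non-parametric classifier over the N virtual classes, as the abstract describes; that training loop is omitted.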
0911.2322
EPTCS
Amin Coja-Oghlan
Random Constraint Satisfaction Problems
null
EPTCS 9, 2009, pp. 32-37
10.4204/EPTCS.9.4
null
cs.DM cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Random instances of constraint satisfaction problems such as k-SAT provide challenging benchmarks. If there are m constraints over n variables there is typically a large range of densities r=m/n where solutions are known to exist with probability close to one due to non-constructive arguments. However, no algorithms are known to find solutions efficiently with a non-vanishing probability at even much lower densities. This fact appears to be related to a phase transition in the set of all solutions. The goal of this extended abstract is to provide a perspective on this phenomenon, and on the computational challenge that it poses.
[ { "created": "Thu, 12 Nov 2009 08:42:42 GMT", "version": "v1" } ]
2009-11-13
[ [ "Coja-Oghlan", "Amin", "" ] ]
Random instances of constraint satisfaction problems such as k-SAT provide challenging benchmarks. If there are m constraints over n variables there is typically a large range of densities r=m/n where solutions are known to exist with probability close to one due to non-constructive arguments. However, no algorithms are known to find solutions efficiently with a non-vanishing probability at even much lower densities. This fact appears to be related to a phase transition in the set of all solutions. The goal of this extended abstract is to provide a perspective on this phenomenon, and on the computational challenge that it poses.
2401.07124
Farhad Kooban
Sara Shomal Zadeh, Sina Aalipour birgani, Meisam Khorshidi, Farhad Kooban
Concrete Surface Crack Detection with Convolutional-based Deep Learning Models
11 pages, 3 figures, Journal paper
International Journal of Novel Research in Civil Structural and Earth Sciences, Vol. 10, Issue 3, (2023) pp: (25-35)
10.5281/zenodo.10061654
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
Effective crack detection is pivotal for the structural health monitoring and inspection of buildings. This task presents a formidable challenge to computer vision techniques due to the inherently subtle nature of cracks, which often exhibit low-level features that can be easily confounded with background textures, foreign objects, or irregularities in construction. Furthermore, the presence of issues like non-uniform lighting and construction irregularities poses significant hurdles for autonomous crack detection during building inspection and monitoring. Convolutional neural networks (CNNs) have emerged as a promising framework for crack detection, offering high levels of accuracy and precision. Additionally, the ability to adapt pre-trained networks through transfer learning provides a valuable tool for users, eliminating the need for an in-depth understanding of algorithm intricacies. Nevertheless, it is imperative to acknowledge the limitations and considerations when deploying CNNs, particularly in contexts where the outcomes carry immense significance, such as crack detection in buildings. In this paper, our approach to surface crack detection involves the utilization of various deep-learning models. Specifically, we employ fine-tuning techniques on pre-trained deep learning architectures: VGG19, ResNet50, Inception V3, and EfficientNetV2. These models are chosen for their established performance and versatility in image analysis tasks. We compare deep learning models using precision, recall, and F1 scores.
[ { "created": "Sat, 13 Jan 2024 17:31:12 GMT", "version": "v1" } ]
2024-01-17
[ [ "Zadeh", "Sara Shomal", "" ], [ "birgani", "Sina Aalipour", "" ], [ "Khorshidi", "Meisam", "" ], [ "Kooban", "Farhad", "" ] ]
Effective crack detection is pivotal for the structural health monitoring and inspection of buildings. This task presents a formidable challenge to computer vision techniques due to the inherently subtle nature of cracks, which often exhibit low-level features that can be easily confounded with background textures, foreign objects, or irregularities in construction. Furthermore, the presence of issues like non-uniform lighting and construction irregularities poses significant hurdles for autonomous crack detection during building inspection and monitoring. Convolutional neural networks (CNNs) have emerged as a promising framework for crack detection, offering high levels of accuracy and precision. Additionally, the ability to adapt pre-trained networks through transfer learning provides a valuable tool for users, eliminating the need for an in-depth understanding of algorithm intricacies. Nevertheless, it is imperative to acknowledge the limitations and considerations when deploying CNNs, particularly in contexts where the outcomes carry immense significance, such as crack detection in buildings. In this paper, our approach to surface crack detection involves the utilization of various deep-learning models. Specifically, we employ fine-tuning techniques on pre-trained deep learning architectures: VGG19, ResNet50, Inception V3, and EfficientNetV2. These models are chosen for their established performance and versatility in image analysis tasks. We compare deep learning models using precision, recall, and F1 scores.
1207.1187
Hardik Shah Mr
Hardik Shah, Andreas Raabe and Alois Knoll
Dynamic Priority Queue: An SDRAM Arbiter With Bounded Access Latencies for Tight WCET Calculation
null
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This report introduces a shared resource arbitration scheme "DPQ - Dynamic Priority Queue" which provides bandwidth guarantees and low worst case latency to each master in an MPSoC. Being a non-trivial candidate for timing analysis, SDRAM has been chosen as a showcase, but the approach is valid for any shared resource arbitration. Due to its significant cost, data rate and physical size advantages, SDRAM is a potential candidate for cost sensitive, safety critical and space conserving systems. The variable access latency is a major drawback of SDRAM that induces largely overestimated Worst Case Execution Time (WCET) bounds of applications. In this report we present the DPQ together with an algorithm to predict the shared SDRAM's worst case latencies. We use the approach to calculate WCET bounds of six hardware tasks executing on an Altera Cyclone III FPGA with shared DDR2 memory. The results show that the DPQ is a fair arbitration scheme and produces low WCET bounds.
[ { "created": "Thu, 5 Jul 2012 08:20:02 GMT", "version": "v1" } ]
2015-03-20
[ [ "Shah", "Hardik", "" ], [ "Raabe", "Andreas", "" ], [ "Knoll", "Alois", "" ] ]
This report introduces a shared resource arbitration scheme "DPQ - Dynamic Priority Queue" which provides bandwidth guarantees and low worst case latency to each master in an MPSoC. Being a non-trivial candidate for timing analysis, SDRAM has been chosen as a showcase, but the approach is valid for any shared resource arbitration. Due to its significant cost, data rate and physical size advantages, SDRAM is a potential candidate for cost sensitive, safety critical and space conserving systems. The variable access latency is a major drawback of SDRAM that induces largely overestimated Worst Case Execution Time (WCET) bounds of applications. In this report we present the DPQ together with an algorithm to predict the shared SDRAM's worst case latencies. We use the approach to calculate WCET bounds of six hardware tasks executing on an Altera Cyclone III FPGA with shared DDR2 memory. The results show that the DPQ is a fair arbitration scheme and produces low WCET bounds.
2309.17170
Luuk van den Bent
Luuk van den Bent, Tom\'as Coleman, Robert Babuska
A Vision-Guided Robotic System for Grasping Harvested Tomato Trusses in Cluttered Environments
7 pages, 7 figures
null
null
null
cs.RO cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Currently, truss tomato weighing and packaging require significant manual work. The main obstacle to automation lies in the difficulty of developing a reliable robotic grasping system for already harvested trusses. We propose a method to grasp trusses that are stacked in a crate with considerable clutter, which is how they are commonly stored and transported after harvest. The method consists of a deep learning-based vision system to first identify the individual trusses in the crate and then determine a suitable grasping location on the stem. To this end, we have introduced a grasp pose ranking algorithm with online learning capabilities. After selecting the most promising grasp pose, the robot executes a pinch grasp without needing touch sensors or geometric models. Lab experiments with a robotic manipulator equipped with an eye-in-hand RGB-D camera showed a 100% clearance rate when tasked to pick all trusses from a pile. 93% of the trusses were successfully grasped on the first try, while the remaining 7% required more attempts.
[ { "created": "Fri, 29 Sep 2023 12:07:08 GMT", "version": "v1" } ]
2023-10-02
[ [ "Bent", "Luuk van den", "" ], [ "Coleman", "Tomás", "" ], [ "Babuska", "Robert", "" ] ]
Currently, truss tomato weighing and packaging require significant manual work. The main obstacle to automation lies in the difficulty of developing a reliable robotic grasping system for already harvested trusses. We propose a method to grasp trusses that are stacked in a crate with considerable clutter, which is how they are commonly stored and transported after harvest. The method consists of a deep learning-based vision system to first identify the individual trusses in the crate and then determine a suitable grasping location on the stem. To this end, we have introduced a grasp pose ranking algorithm with online learning capabilities. After selecting the most promising grasp pose, the robot executes a pinch grasp without needing touch sensors or geometric models. Lab experiments with a robotic manipulator equipped with an eye-in-hand RGB-D camera showed a 100% clearance rate when tasked to pick all trusses from a pile. 93% of the trusses were successfully grasped on the first try, while the remaining 7% required more attempts.
2206.10192
Leonardo Rossi
Leonardo Rossi, Marco Valenti, Sara Elisabetta Legler, Andrea Prati
LDD: A Dataset for Grape Diseases Object Detection and Instance Segmentation
null
International Conference on Image Analysis and Processing. Springer, Cham, 2022
10.1007/978-3-031-06430-2_32
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
The Instance Segmentation task, an extension of the well-known Object Detection task, is of great help in many areas, such as precision agriculture: being able to automatically identify plant organs and the possible diseases associated with them allows crop monitoring and disease control to be scaled and automated effectively. To address the problem of early disease detection and diagnosis on vine plants, a new dataset has been created with the goal of advancing the state of the art in disease recognition via instance segmentation approaches. This was achieved by gathering images of leaves and clusters of grapes affected by diseases in their natural context. The dataset contains photos of 10 object types, which include leaves and grapes with and without symptoms of the eight most common grape diseases, with a total of 17,706 labeled instances in 1,092 images. Multiple statistical measures are proposed in order to offer a complete view of the characteristics of the dataset. Preliminary results for the object detection and instance segmentation tasks reached by the models Mask R-CNN and R^3-CNN are provided as a baseline, demonstrating that the procedure is able to reach promising results for automatic recognition of disease symptoms.
[ { "created": "Tue, 21 Jun 2022 08:50:13 GMT", "version": "v1" } ]
2022-06-22
[ [ "Rossi", "Leonardo", "" ], [ "Valenti", "Marco", "" ], [ "Legler", "Sara Elisabetta", "" ], [ "Prati", "Andrea", "" ] ]
The Instance Segmentation task, an extension of the well-known Object Detection task, is of great help in many areas, such as precision agriculture: being able to automatically identify plant organs and the possible diseases associated with them allows crop monitoring and disease control to be scaled and automated effectively. To address the problem of early disease detection and diagnosis on vine plants, a new dataset has been created with the goal of advancing the state of the art in disease recognition via instance segmentation approaches. This was achieved by gathering images of leaves and clusters of grapes affected by diseases in their natural context. The dataset contains photos of 10 object types, which include leaves and grapes with and without symptoms of the eight most common grape diseases, with a total of 17,706 labeled instances in 1,092 images. Multiple statistical measures are proposed in order to offer a complete view of the characteristics of the dataset. Preliminary results for the object detection and instance segmentation tasks reached by the models Mask R-CNN and R^3-CNN are provided as a baseline, demonstrating that the procedure is able to reach promising results for automatic recognition of disease symptoms.
2012.07935
Alexandros Psomas
Aranyak Mehta, Uri Nadav, Alexandros Psomas, Aviad Rubinstein
Hitting the High Notes: Subset Selection for Maximizing Expected Order Statistics
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the fundamental problem of selecting $k$ out of $n$ random variables in a way that the expected highest or second-highest value is maximized. This question captures several applications where we have uncertainty about the quality of candidates (e.g. auction bids, search results) and have the capacity to explore only a small subset due to an exogenous constraint. For example, consider a second price auction where system constraints (e.g., costly retrieval or model computation) allow the participation of only $k$ out of $n$ bidders, and the goal is to optimize the expected efficiency (highest bid) or expected revenue (second highest bid). We study the case where we are given an explicit description of each random variable. We give a PTAS for the problem of maximizing the expected highest value. For the second-highest value, we prove a hardness result: assuming the Planted Clique Hypothesis, there is no constant factor approximation algorithm that runs in polynomial time. Surprisingly, under the assumption that each random variable has monotone hazard rate (MHR), a simple score-based algorithm, namely picking the $k$ random variables with the largest $1/\sqrt{k}$ top quantile value, is a constant approximation to the expected highest and second highest value, \emph{simultaneously}.
[ { "created": "Mon, 14 Dec 2020 20:53:39 GMT", "version": "v1" } ]
2020-12-16
[ [ "Mehta", "Aranyak", "" ], [ "Nadav", "Uri", "" ], [ "Psomas", "Alexandros", "" ], [ "Rubinstein", "Aviad", "" ] ]
We consider the fundamental problem of selecting $k$ out of $n$ random variables in a way that the expected highest or second-highest value is maximized. This question captures several applications where we have uncertainty about the quality of candidates (e.g. auction bids, search results) and have the capacity to explore only a small subset due to an exogenous constraint. For example, consider a second price auction where system constraints (e.g., costly retrieval or model computation) allow the participation of only $k$ out of $n$ bidders, and the goal is to optimize the expected efficiency (highest bid) or expected revenue (second highest bid). We study the case where we are given an explicit description of each random variable. We give a PTAS for the problem of maximizing the expected highest value. For the second-highest value, we prove a hardness result: assuming the Planted Clique Hypothesis, there is no constant factor approximation algorithm that runs in polynomial time. Surprisingly, under the assumption that each random variable has monotone hazard rate (MHR), a simple score-based algorithm, namely picking the $k$ random variables with the largest $1/\sqrt{k}$ top quantile value, is a constant approximation to the expected highest and second highest value, \emph{simultaneously}.
2211.14077
Miguel P\'erez Msc
Miguel Angel P\'erez-Cuti\~no and Juan Sebasti\'an Valverde and Jos\'e Miguel D\'iaz-B\'a\~nez
Detecting broken Absorber Tubes in CSP plants using intelligent sampling and dual loss
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Concentrated solar power (CSP) is one of the growing technologies that is leading the transition from fossil fuels to renewable energies. The sophistication and size of the systems require an increase in maintenance tasks to ensure reliability, availability, maintainability and safety. Currently, automatic fault detection in CSP plants using Parabolic Trough Collector systems exhibits two main drawbacks: 1) the devices in use need to be manually placed near the receiver tube, 2) the Machine Learning-based solutions are not tested in real plants. We address both gaps by combining the data extracted with the use of an Unmanned Aerial Vehicle and the data provided by sensors placed within 7 real plants. The resulting dataset is the first one of this type and can help to standardize research activities for the problem of fault detection in this type of plant. Our work proposes supervised machine-learning algorithms for detecting broken envelopes of the absorber tubes in CSP plants. The proposed solution takes the class imbalance problem into account, boosting the accuracy of the algorithms for the minority class without harming the overall performance of the models. For a Deep Residual Network, we solve an imbalance and a balance problem at the same time, which increases the Recall of the minority class by 5% with no harm to the F1-score. Additionally, the Random Under Sampling technique boosts the performance of traditional Machine Learning models, with the Histogram Gradient Boosting Classifier showing the highest increase (3%) in the F1-score. To the best of our knowledge, this paper is the first to provide an automated solution to this problem using data from operating plants.
[ { "created": "Fri, 25 Nov 2022 12:53:52 GMT", "version": "v1" } ]
2022-11-28
[ [ "Pérez-Cutiño", "Miguel Angel", "" ], [ "Valverde", "Juan Sebastián", "" ], [ "Díaz-Báñez", "José Miguel", "" ] ]
Concentrated solar power (CSP) is one of the growing technologies that is leading the transition from fossil fuels to renewable energies. The sophistication and size of the systems require an increase in maintenance tasks to ensure reliability, availability, maintainability and safety. Currently, automatic fault detection in CSP plants using Parabolic Trough Collector systems exhibits two main drawbacks: 1) the devices in use need to be manually placed near the receiver tube, 2) the Machine Learning-based solutions are not tested in real plants. We address both gaps by combining the data extracted with the use of an Unmanned Aerial Vehicle and the data provided by sensors placed within 7 real plants. The resulting dataset is the first one of this type and can help to standardize research activities for the problem of fault detection in this type of plant. Our work proposes supervised machine-learning algorithms for detecting broken envelopes of the absorber tubes in CSP plants. The proposed solution takes the class imbalance problem into account, boosting the accuracy of the algorithms for the minority class without harming the overall performance of the models. For a Deep Residual Network, we solve an imbalance and a balance problem at the same time, which increases the Recall of the minority class by 5% with no harm to the F1-score. Additionally, the Random Under Sampling technique boosts the performance of traditional Machine Learning models, with the Histogram Gradient Boosting Classifier showing the highest increase (3%) in the F1-score. To the best of our knowledge, this paper is the first to provide an automated solution to this problem using data from operating plants.
2107.14178
Michael Yang
Xuewen Yang, Yingru Liu, Xin Wang
ReFormer: The Relational Transformer for Image Captioning
null
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Image captioning has been shown to achieve better performance by using scene graphs to represent the relations of objects in the image. Current captioning encoders generally use a Graph Convolutional Net (GCN) to represent the relation information and merge it with the object region features via concatenation or convolution to get the final input for sentence decoding. However, the GCN-based encoders in existing methods are less effective for captioning for two reasons. First, using image captioning as the objective (i.e., Maximum Likelihood Estimation) rather than a relation-centric loss cannot fully exploit the potential of the encoder. Second, using a pre-trained model instead of the encoder itself to extract the relationships is not flexible and cannot contribute to the explainability of the model. To improve the quality of image captioning, we propose a novel architecture ReFormer -- a RElational transFORMER -- to generate features with relation information embedded and to explicitly express the pair-wise relationships between objects in the image. ReFormer incorporates the objective of scene graph generation with that of image captioning using one modified Transformer model. This design allows ReFormer to generate not only better image captions, with the benefit of extracting strong relational image features, but also scene graphs that explicitly describe the pair-wise relationships. Experiments on publicly available datasets show that our model significantly outperforms state-of-the-art methods on image captioning and scene graph generation.
[ { "created": "Thu, 29 Jul 2021 17:03:36 GMT", "version": "v1" }, { "created": "Thu, 14 Jul 2022 20:11:17 GMT", "version": "v2" } ]
2022-07-18
[ [ "Yang", "Xuewen", "" ], [ "Liu", "Yingru", "" ], [ "Wang", "Xin", "" ] ]
Image captioning has been shown to achieve better performance by using scene graphs to represent the relations of objects in the image. Current captioning encoders generally use a Graph Convolutional Net (GCN) to represent the relation information and merge it with the object region features via concatenation or convolution to get the final input for sentence decoding. However, the GCN-based encoders in existing methods are less effective for captioning for two reasons. First, using image captioning as the objective (i.e., Maximum Likelihood Estimation) rather than a relation-centric loss cannot fully exploit the potential of the encoder. Second, using a pre-trained model instead of the encoder itself to extract the relationships is not flexible and cannot contribute to the explainability of the model. To improve the quality of image captioning, we propose a novel architecture ReFormer -- a RElational transFORMER -- to generate features with relation information embedded and to explicitly express the pair-wise relationships between objects in the image. ReFormer incorporates the objective of scene graph generation with that of image captioning using one modified Transformer model. This design allows ReFormer to generate not only better image captions, with the benefit of extracting strong relational image features, but also scene graphs that explicitly describe the pair-wise relationships. Experiments on publicly available datasets show that our model significantly outperforms state-of-the-art methods on image captioning and scene graph generation.
1907.05681
D\'avid Terj\'ek
D\'avid Terj\'ek
Adversarial Lipschitz Regularization
ICLR 2020
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative adversarial networks (GANs) are one of the most popular approaches when it comes to training generative models, among which variants of Wasserstein GANs are considered superior to the standard GAN formulation in terms of learning stability and sample quality. However, Wasserstein GANs require the critic to be 1-Lipschitz, which is often enforced implicitly by penalizing the norm of its gradient, or by globally restricting its Lipschitz constant via weight normalization techniques. Training with a regularization term penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient, was found to be practically infeasible in most situations. Inspired by Virtual Adversarial Training, we propose a method called Adversarial Lipschitz Regularization, and show that using an explicit Lipschitz penalty is indeed viable and leads to competitive performance when applied to Wasserstein GANs, highlighting an important connection between Lipschitz regularization and adversarial training.
[ { "created": "Fri, 12 Jul 2019 11:41:18 GMT", "version": "v1" }, { "created": "Thu, 2 Jan 2020 16:02:17 GMT", "version": "v2" }, { "created": "Fri, 3 Jan 2020 09:11:31 GMT", "version": "v3" } ]
2020-01-06
[ [ "Terjék", "Dávid", "" ] ]
Generative adversarial networks (GANs) are one of the most popular approaches when it comes to training generative models, among which variants of Wasserstein GANs are considered superior to the standard GAN formulation in terms of learning stability and sample quality. However, Wasserstein GANs require the critic to be 1-Lipschitz, which is often enforced implicitly by penalizing the norm of its gradient, or by globally restricting its Lipschitz constant via weight normalization techniques. Training with a regularization term penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient, was found to be practically infeasible in most situations. Inspired by Virtual Adversarial Training, we propose a method called Adversarial Lipschitz Regularization, and show that using an explicit Lipschitz penalty is indeed viable and leads to competitive performance when applied to Wasserstein GANs, highlighting an important connection between Lipschitz regularization and adversarial training.
1803.07097
Ryo Ashida
Ryo Ashida and Kotaro Nakagawa
$\tilde{O}(n^{1/3})$-Space Algorithm for the Grid Graph Reachability Problem
null
null
10.4230/LIPIcs.SoCG.2018.5
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The directed graph reachability problem takes as input an $n$-vertex directed graph $G=(V,E)$ and two distinguished vertices $s$ and $t$. The problem is to determine whether there exists a path from $s$ to $t$ in $G$. This is a canonical complete problem for the class NL. Asano et al. proposed an $\tilde{O}(\sqrt{n})$-space and polynomial-time algorithm for the directed grid and planar graph reachability problem. The main result of this paper is to show that the directed graph reachability problem restricted to grid graphs can be solved in polynomial time using only $\tilde{O}(n^{1/3})$ space.
[ { "created": "Mon, 19 Mar 2018 18:06:52 GMT", "version": "v1" }, { "created": "Thu, 7 Feb 2019 06:17:14 GMT", "version": "v2" }, { "created": "Fri, 20 Sep 2019 07:25:38 GMT", "version": "v3" } ]
2019-09-23
[ [ "Ashida", "Ryo", "" ], [ "Nakagawa", "Kotaro", "" ] ]
The directed graph reachability problem takes as input an $n$-vertex directed graph $G=(V,E)$ and two distinguished vertices $s$ and $t$. The problem is to determine whether there exists a path from $s$ to $t$ in $G$. This is a canonical complete problem for the class NL. Asano et al. proposed an $\tilde{O}(\sqrt{n})$-space and polynomial-time algorithm for the directed grid and planar graph reachability problem. The main result of this paper is to show that the directed graph reachability problem restricted to grid graphs can be solved in polynomial time using only $\tilde{O}(n^{1/3})$ space.
2204.04338
Javier Andreu-Perez Dr
Christian Flores Vega, Jonathan Quevedo, Elmer Escand\'on, Mehrin Kiani, Weiping Ding, Javier Andreu-Perez
Fuzzy temporal convolutional neural networks in P300-based Brain-computer interface for smart home interaction
null
Applied Soft Computing 117 (2022) 108359
10.1016/j.asoc.2021.108359
null
cs.LG cs.AI cs.NE q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
The processing and classification of electroencephalographic signals (EEG) are increasingly performed using deep learning frameworks, such as convolutional neural networks (CNNs), to generate abstract features from brain data, automatically paving the way for remarkable classification prowess. However, EEG patterns exhibit high variability across time and uncertainty due to noise. This is a significant problem to be addressed in P300-based Brain-Computer Interfaces (BCIs) for smart home interaction, which operate in non-optimal natural environments where added noise is often present. In this work, we propose a sequential unification of temporal convolutional networks (TCNs) adapted to EEG signals, LSTM cells, and a fuzzy neural block (FNB), which we call EEG-TCFNet. Fuzzy components may enable a higher tolerance to noisy conditions. We applied three different architectures, comparing the effect of using the FNB block, to classify a P300 wave to build a BCI for smart home interaction with healthy and post-stroke individuals. Our results report a maximum classification accuracy of 98.6% and 74.3% using the proposed EEG-TCFNet method in the subject-dependent and subject-independent strategies, respectively. Overall, FNB usage in all three CNN topologies outperformed those without FNB. In addition, we compared the addition of FNB to other state-of-the-art methods and obtained higher classification accuracies on account of the integration with FNB. The remarkable performance of the proposed model, EEG-TCFNet, and the general integration of fuzzy units into other classifiers would pave the way for enhanced P300-based BCIs for smart home interaction within natural settings.
[ { "created": "Sat, 9 Apr 2022 00:35:35 GMT", "version": "v1" } ]
2022-04-12
[ [ "Vega", "Christian Flores", "" ], [ "Quevedo", "Jonathan", "" ], [ "Escandón", "Elmer", "" ], [ "Kiani", "Mehrin", "" ], [ "Ding", "Weiping", "" ], [ "Andreu-Perez", "Javier", "" ] ]
The processing and classification of electroencephalographic signals (EEG) are increasingly performed using deep learning frameworks, such as convolutional neural networks (CNNs), to generate abstract features from brain data, automatically paving the way for remarkable classification prowess. However, EEG patterns exhibit high variability across time and uncertainty due to noise. This is a significant problem to be addressed in P300-based Brain-Computer Interfaces (BCIs) for smart home interaction, which operate in non-optimal natural environments where added noise is often present. In this work, we propose a sequential unification of temporal convolutional networks (TCNs) adapted to EEG signals, LSTM cells, and a fuzzy neural block (FNB), which we call EEG-TCFNet. Fuzzy components may enable a higher tolerance to noisy conditions. We applied three different architectures, comparing the effect of using the FNB block, to classify a P300 wave to build a BCI for smart home interaction with healthy and post-stroke individuals. Our results report a maximum classification accuracy of 98.6% and 74.3% using the proposed EEG-TCFNet method in the subject-dependent and subject-independent strategies, respectively. Overall, FNB usage in all three CNN topologies outperformed those without FNB. In addition, we compared the addition of FNB to other state-of-the-art methods and obtained higher classification accuracies on account of the integration with FNB. The remarkable performance of the proposed model, EEG-TCFNet, and the general integration of fuzzy units into other classifiers would pave the way for enhanced P300-based BCIs for smart home interaction within natural settings.
2404.15653
Sachin Mehta
Sachin Mehta and Maxwell Horton and Fartash Faghri and Mohammad Hossein Sekhavat and Mahyar Najibi and Mehrdad Farajtabar and Oncel Tuzel and Mohammad Rastegari
CatLIP: CLIP-level Visual Recognition Accuracy with 2.7x Faster Pre-training on Web-scale Image-Text Data
null
null
null
null
cs.CV cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contrastive learning has emerged as a transformative method for learning effective visual representations through the alignment of image and text embeddings. However, pairwise similarity computation in contrastive loss between image and text pairs poses computational challenges. This paper presents a novel weakly supervised pre-training of vision models on web-scale image-text data. The proposed method reframes pre-training on image-text data as a classification task. Consequently, it eliminates the need for pairwise similarity computations in contrastive loss, achieving a remarkable $2.7\times$ acceleration in training speed compared to contrastive learning on web-scale data. Through extensive experiments spanning diverse vision tasks, including detection and segmentation, we demonstrate that the proposed method maintains high representation quality. Our source code along with pre-trained model weights and training recipes is available at \url{https://github.com/apple/corenet}.
[ { "created": "Wed, 24 Apr 2024 05:13:28 GMT", "version": "v1" } ]
2024-04-25
[ [ "Mehta", "Sachin", "" ], [ "Horton", "Maxwell", "" ], [ "Faghri", "Fartash", "" ], [ "Sekhavat", "Mohammad Hossein", "" ], [ "Najibi", "Mahyar", "" ], [ "Farajtabar", "Mehrdad", "" ], [ "Tuzel", "Oncel", "" ], [ "Rastegari", "Mohammad", "" ] ]
Contrastive learning has emerged as a transformative method for learning effective visual representations through the alignment of image and text embeddings. However, pairwise similarity computation in contrastive loss between image and text pairs poses computational challenges. This paper presents a novel weakly supervised pre-training of vision models on web-scale image-text data. The proposed method reframes pre-training on image-text data as a classification task. Consequently, it eliminates the need for pairwise similarity computations in contrastive loss, achieving a remarkable $2.7\times$ acceleration in training speed compared to contrastive learning on web-scale data. Through extensive experiments spanning diverse vision tasks, including detection and segmentation, we demonstrate that the proposed method maintains high representation quality. Our source code along with pre-trained model weights and training recipes is available at \url{https://github.com/apple/corenet}.
1701.07594
Du\v{s}an Nemec
Dusan Nemec, Ales Janota, Marian Hrubos, Vojtech Simak
Intelligent real-time MEMS sensor fusion and calibration
null
IEEE Sensors Journal, vol. 16, no. 19, pp. 7150-7160, Oct.1, 2016
10.1109/JSEN.2016.2597292
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper discusses an innovative adaptive heterogeneous fusion algorithm based on estimation of the mean square error of all variables used in real-time processing. The algorithm is designed for fusion between derivative and absolute sensors and is explained through the fusion of a 3-axial gyroscope, 3-axial accelerometer and 3-axial magnetometer into an attitude and heading estimate. Our algorithm has similar error performance in the steady state but a much faster dynamic response compared to the fixed-gain fusion algorithm. In comparison with the extended Kalman filter, the proposed algorithm converges faster and takes less computational time. On the other hand, the Kalman filter has a smaller mean square output error in the steady state but becomes unstable if the estimated state changes too rapidly. Additionally, the noisy fusion deviation can be used in the process of calibration. The paper proposes and explains a real-time calibration method based on machine learning working in online mode during run-time. This allows compensation of sensor thermal drift directly in the sensor's working environment without the need for re-calibration in the laboratory.
[ { "created": "Thu, 26 Jan 2017 07:09:21 GMT", "version": "v1" } ]
2017-01-27
[ [ "Nemec", "Dusan", "" ], [ "Janota", "Ales", "" ], [ "Hrubos", "Marian", "" ], [ "Simak", "Vojtech", "" ] ]
This paper discusses an innovative adaptive heterogeneous fusion algorithm based on estimation of the mean square error of all variables used in real-time processing. The algorithm is designed for fusion between derivative and absolute sensors and is explained through the fusion of a 3-axial gyroscope, 3-axial accelerometer and 3-axial magnetometer into an attitude and heading estimate. Our algorithm has similar error performance in the steady state but a much faster dynamic response compared to the fixed-gain fusion algorithm. In comparison with the extended Kalman filter, the proposed algorithm converges faster and takes less computational time. On the other hand, the Kalman filter has a smaller mean square output error in the steady state but becomes unstable if the estimated state changes too rapidly. Additionally, the noisy fusion deviation can be used in the process of calibration. The paper proposes and explains a real-time calibration method based on machine learning working in online mode during run-time. This allows compensation of sensor thermal drift directly in the sensor's working environment without the need for re-calibration in the laboratory.
1110.0594
Tugcan Aktas
Tugcan Aktas, Ali Ozgur Yilmaz, Emre Aktas
Practical Wireless Network Coding and Decoding Methods for Multiple Unicast Transmissions
6 pages, 5 figures, Submitted to WCNC 2012, IEEE Wireless Communication and Networking Conference
null
10.1109/WCNC.2012.6214460
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a simple yet effective wireless network coding and decoding technique. It utilizes spatial diversity through cooperation between nodes which carry out distributed encoding operations dictated by generator matrices of linear block codes. For this purpose, we make use of greedy codes over the binary field and show that desired diversity orders can be flexibly assigned to nodes in a multiple unicast network, contrary to the previous findings in the literature. Furthermore, we present the optimal detection rule for the given model that accounts for intermediate node errors and suggest a network decoder using the sum-product algorithm. The proposed sum-product detector exhibits near optimal performance.
[ { "created": "Tue, 4 Oct 2011 07:37:38 GMT", "version": "v1" } ]
2016-11-17
[ [ "Aktas", "Tugcan", "" ], [ "Yilmaz", "Ali Ozgur", "" ], [ "Aktas", "Emre", "" ] ]
We propose a simple yet effective wireless network coding and decoding technique. It utilizes spatial diversity through cooperation between nodes which carry out distributed encoding operations dictated by generator matrices of linear block codes. For this purpose, we make use of greedy codes over the binary field and show that desired diversity orders can be flexibly assigned to nodes in a multiple unicast network, contrary to the previous findings in the literature. Furthermore, we present the optimal detection rule for the given model that accounts for intermediate node errors and suggest a network decoder using the sum-product algorithm. The proposed sum-product detector exhibits near optimal performance.
2209.07148
Gholamali Aminian
Gholamali Aminian, Armin Behnamnia, Roberto Vega, Laura Toni, Chengchun Shi, Hamid R. Rabiee, Omar Rivasplata, Miguel R. D. Rodrigues
Semi-supervised Batch Learning From Logged Data
46 pages
null
null
null
cs.LG cs.AI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Off-policy learning methods are intended to learn a policy from logged data, which includes context, action, and feedback (cost or reward) for each sample point. In this work, we build on the counterfactual risk minimization framework, which also assumes access to propensity scores. We propose learning methods for problems where feedback is missing for some samples, so the logged data contains both samples with feedback and samples with missing feedback. We refer to this type of learning as semi-supervised batch learning from logged data, which arises in a wide range of application domains. We derive a novel upper bound for the true risk under the inverse propensity score estimator to address this kind of learning problem. Using this bound, we propose a regularized semi-supervised batch learning method with logged data where the regularization term is feedback-independent and, as a result, can be evaluated using the logged missing-feedback data. Consequently, even though feedback is only present for some samples, a policy can be learned by leveraging the missing-feedback samples. The results of experiments derived from benchmark datasets indicate that these algorithms achieve policies with better performance in comparison with the logging policies.
[ { "created": "Thu, 15 Sep 2022 08:58:28 GMT", "version": "v1" }, { "created": "Wed, 28 Sep 2022 09:46:14 GMT", "version": "v2" }, { "created": "Sun, 18 Feb 2024 15:26:01 GMT", "version": "v3" } ]
2024-02-20
[ [ "Aminian", "Gholamali", "" ], [ "Behnamnia", "Armin", "" ], [ "Vega", "Roberto", "" ], [ "Toni", "Laura", "" ], [ "Shi", "Chengchun", "" ], [ "Rabiee", "Hamid R.", "" ], [ "Rivasplata", "Omar", "" ], [ "Rodrigues", "Miguel R. D.", "" ] ]
Off-policy learning methods are intended to learn a policy from logged data, which includes context, action, and feedback (cost or reward) for each sample point. In this work, we build on the counterfactual risk minimization framework, which also assumes access to propensity scores. We propose learning methods for problems where feedback is missing for some samples, so the logged data contains both samples with feedback and samples with missing feedback. We refer to this type of learning as semi-supervised batch learning from logged data, which arises in a wide range of application domains. We derive a novel upper bound for the true risk under the inverse propensity score estimator to address this kind of learning problem. Using this bound, we propose a regularized semi-supervised batch learning method with logged data where the regularization term is feedback-independent and, as a result, can be evaluated using the logged missing-feedback data. Consequently, even though feedback is only present for some samples, a policy can be learned by leveraging the missing-feedback samples. The results of experiments derived from benchmark datasets indicate that these algorithms achieve policies with better performance in comparison with the logging policies.
1612.02372
Jia Xue
Jia Xue, Hang Zhang, Kristin Dana, Ko Nishino
Differential Angular Imaging for Material Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Material recognition for real-world outdoor surfaces has become increasingly important for computer vision to support its operation "in the wild." Computational surface modeling that underlies material recognition has transitioned from reflectance modeling using in-lab controlled radiometric measurements to image-based representations based on internet-mined images of materials captured in the scene. We propose to take a middle-ground approach for material recognition that takes advantage of both rich radiometric cues and flexible image capture. We realize this by developing a framework for differential angular imaging, where small angular variations in image capture provide an enhanced appearance representation and significant recognition improvement. We build a large-scale material database, Ground Terrain in Outdoor Scenes (GTOS) database, geared towards real use for autonomous agents. The database consists of over 30,000 images covering 40 classes of outdoor ground terrain under varying weather and lighting conditions. We develop a novel approach for material recognition called a Differential Angular Imaging Network (DAIN) to fully leverage this large dataset. With this novel network architecture, we extract characteristics of materials encoded in the angular and spatial gradients of their appearance. Our results show that DAIN achieves recognition performance that surpasses single view or coarsely quantized multiview images. These results demonstrate the effectiveness of differential angular imaging as a means for flexible, in-place material recognition.
[ { "created": "Wed, 7 Dec 2016 18:59:19 GMT", "version": "v1" }, { "created": "Fri, 14 Jul 2017 00:43:10 GMT", "version": "v2" } ]
2017-07-17
[ [ "Xue", "Jia", "" ], [ "Zhang", "Hang", "" ], [ "Dana", "Kristin", "" ], [ "Nishino", "Ko", "" ] ]
Material recognition for real-world outdoor surfaces has become increasingly important for computer vision to support its operation "in the wild." Computational surface modeling that underlies material recognition has transitioned from reflectance modeling using in-lab controlled radiometric measurements to image-based representations based on internet-mined images of materials captured in the scene. We propose to take a middle-ground approach for material recognition that takes advantage of both rich radiometric cues and flexible image capture. We realize this by developing a framework for differential angular imaging, where small angular variations in image capture provide an enhanced appearance representation and significant recognition improvement. We build a large-scale material database, Ground Terrain in Outdoor Scenes (GTOS) database, geared towards real use for autonomous agents. The database consists of over 30,000 images covering 40 classes of outdoor ground terrain under varying weather and lighting conditions. We develop a novel approach for material recognition called a Differential Angular Imaging Network (DAIN) to fully leverage this large dataset. With this novel network architecture, we extract characteristics of materials encoded in the angular and spatial gradients of their appearance. Our results show that DAIN achieves recognition performance that surpasses single view or coarsely quantized multiview images. These results demonstrate the effectiveness of differential angular imaging as a means for flexible, in-place material recognition.