Dataset schema (field: type, observed sizes):

id: string (9-10 chars)
submitter: string (1-64 chars)
authors: string (4-20.7k chars)
title: string (4-246 chars)
comments: string (1-523 chars)
journal-ref: string (4-404 chars)
doi: string (11-153 chars)
report-no: string (2-254 chars)
categories: string (5-98 chars)
license: string (9 distinct values)
orig_abstract: string (14-3.35k chars)
versions: list (1-60 items)
update_date: string (10 chars)
authors_parsed: list (1-1.35k items)
abstract: string (11-3.34k chars)

Records follow, one field per line, in the order above; "null" marks an empty field.
1807.09163
Anabik Pal Mr.
Anabik Pal, Sounak Ray and Utpal Garain
Skin disease identification from dermoscopy images using deep convolutional neural network
Challenge Participation in ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a deep neural network based ensemble method is investigated for the automatic identification of skin diseases from dermoscopic images. The developed algorithm is applied to Task 3 of the ISIC 2018 challenge dataset (Skin Lesion Analysis Towards Melanoma Detection).
[ { "created": "Tue, 24 Jul 2018 14:48:57 GMT", "version": "v1" } ]
2018-07-25
[ [ "Pal", "Anabik", "" ], [ "Ray", "Sounak", "" ], [ "Garain", "Utpal", "" ] ]
In this paper, a deep neural network based ensemble method is investigated for the automatic identification of skin diseases from dermoscopic images. The developed algorithm is applied to Task 3 of the ISIC 2018 challenge dataset (Skin Lesion Analysis Towards Melanoma Detection).
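The record above centers on ensembling several CNN classifiers. Below is a minimal sketch of the core ensembling step, assuming each trained model already emits per-image softmax probabilities; the model count, class count, and the probability-averaging rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average class-probability matrices from several trained CNNs.

    prob_list: list of (n_images, n_classes) softmax outputs, one per model.
    Returns the predicted class index for each image.
    """
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)
    return avg.argmax(axis=1)

# Toy usage: three stand-in models, 4 images, 7 classes (as in ISIC 2018 Task 3).
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(7), size=4) for _ in range(3)]
print(ensemble_predict(probs))
```

Averaging probabilities (soft voting) preserves each model's confidence; majority voting over per-model argmaxes is the usual alternative.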
2107.01707
Rasheed el-Bouri
Rasheed el-Bouri, Tingting Zhu, David A. Clifton
Towards Scheduling Federated Deep Learning using Meta-Gradients for Inter-Hospital Learning
11 pages, 8 figures
null
null
null
cs.LG cs.CR cs.DC
http://creativecommons.org/licenses/by/4.0/
Given the abundance and ease of access of personal data today, individual privacy has become of paramount importance, particularly in the healthcare domain. In this work, we aim to utilise patient data extracted from multiple hospital data centres to train a machine learning model without sacrificing patient privacy. We develop a scheduling algorithm in conjunction with a student-teacher algorithm that is deployed in a federated manner. This allows a central model to learn from batches of data at each federated node. The teacher acts between data centres to update the main task (student) algorithm using the data that is stored in the various data centres. We show that the scheduler, trained using meta-gradients, can effectively organise training and as a result train a machine learning model on a diverse dataset without needing explicit access to the patient data. We achieve state-of-the-art performance and show how our method overcomes some of the problems faced in federated learning, such as node poisoning. We further show how the scheduler can be used as a mechanism for transfer learning, allowing different teachers to work together in training a student for state-of-the-art performance.
[ { "created": "Sun, 4 Jul 2021 18:45:58 GMT", "version": "v1" } ]
2021-07-06
[ [ "el-Bouri", "Rasheed", "" ], [ "Zhu", "Tingting", "" ], [ "Clifton", "David A.", "" ] ]
Given the abundance and ease of access of personal data today, individual privacy has become of paramount importance, particularly in the healthcare domain. In this work, we aim to utilise patient data extracted from multiple hospital data centres to train a machine learning model without sacrificing patient privacy. We develop a scheduling algorithm in conjunction with a student-teacher algorithm that is deployed in a federated manner. This allows a central model to learn from batches of data at each federated node. The teacher acts between data centres to update the main task (student) algorithm using the data that is stored in the various data centres. We show that the scheduler, trained using meta-gradients, can effectively organise training and as a result train a machine learning model on a diverse dataset without needing explicit access to the patient data. We achieve state-of-the-art performance and show how our method overcomes some of the problems faced in federated learning, such as node poisoning. We further show how the scheduler can be used as a mechanism for transfer learning, allowing different teachers to work together in training a student for state-of-the-art performance.
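As a loose illustration of the scheduling idea only: the sketch below uses bandit-style credit assignment (rewarding a node by the validation improvement its batch produced) as a crude stand-in for the paper's meta-gradient scheduler; it is not the authors' algorithm, and the three simulated "hospitals", the linear student model, and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
true_w = rng.normal(size=5)

def node_batch(node, n=32, label_noise=(0.1, 0.1, 2.0)):
    """Simulated data centre: node 2 returns much noisier labels."""
    X = rng.normal(size=(n, 5))
    return X, X @ true_w + label_noise[node] * rng.normal(size=n)

w = np.zeros(5)                       # the "student" model's parameters
logits = np.zeros(3)                  # scheduler preferences, one per node
X_val, y_val = node_batch(0, n=256)   # validation set held by the coordinator

for _ in range(300):
    p = np.exp(logits) / np.exp(logits).sum()
    node = rng.choice(3, p=p)                    # schedule one node's batch
    X, y = node_batch(node)
    mse_before = np.mean((X_val @ w - y_val) ** 2)
    w -= 0.05 * X.T @ (X @ w - y) / len(y)       # one student SGD step
    mse_after = np.mean((X_val @ w - y_val) ** 2)
    logits[node] += mse_before - mse_after       # credit validation improvement

print(np.round(np.exp(logits) / np.exp(logits).sum(), 2))
```

Nodes whose batches tend to reduce validation error accumulate scheduling probability, which is the qualitative behaviour the paper attributes to its meta-gradient-trained scheduler.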
1702.01389
Mohammad Moltafet
Mohammad. Moltafet, Nader. Mokari, Mohammad R. Javan, Paiez. Azmi
Comparison Study between NOMA and SCMA
5 pages, 2 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the performance and system complexity of the candidate multiple access (MA) techniques for the next generation of cellular systems, namely, non-orthogonal multiple access (NOMA) (in this paper, we consider power domain MA as NOMA) and sparse code multiple access (SCMA), are investigated. To this end, for each MA technique, a resource allocation problem considering heterogeneous cellular networks (HetNet) is formulated. We apply the successive convex approximation (SCA) method to each problem and obtain their solutions. The simulation results show that the SCMA-based system achieves better performance than the NOMA-based one at the cost of higher complexity.
[ { "created": "Sun, 5 Feb 2017 11:42:00 GMT", "version": "v1" } ]
2017-02-07
[ [ "Moltafet", "Mohammad.", "" ], [ "Mokari", "Nader.", "" ], [ "Javan", "Mohammad R.", "" ], [ "Azmi", "Paiez.", "" ] ]
In this paper, the performance and system complexity of the candidate multiple access (MA) techniques for the next generation of cellular systems, namely, non-orthogonal multiple access (NOMA) (in this paper, we consider power domain MA as NOMA) and sparse code multiple access (SCMA), are investigated. To this end, for each MA technique, a resource allocation problem considering heterogeneous cellular networks (HetNet) is formulated. We apply the successive convex approximation (SCA) method to each problem and obtain their solutions. The simulation results show that the SCMA-based system achieves better performance than the NOMA-based one at the cost of higher complexity.
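As a concrete anchor for the power-domain NOMA scheme this abstract compares against, here is a small worked example (all values invented) of two-user downlink rates under superposition coding with successive interference cancellation (SIC), the standard textbook setup rather than the paper's full HetNet formulation:

```python
import numpy as np

# Hypothetical two-user downlink NOMA cell: user 1 is the weak (far) user,
# user 2 the strong (near) user; gains, powers, and noise are made up.
g1, g2 = 0.2, 1.0        # channel gains
p1, p2 = 0.8, 0.2        # power split (more power to the weak user)
n0 = 0.05                # noise power

# User 1 decodes its own signal, treating user 2's signal as interference.
r1 = np.log2(1 + p1 * g1 / (p2 * g1 + n0))
# User 2 first decodes and cancels user 1's signal (SIC), then decodes its own.
r2 = np.log2(1 + p2 * g2 / n0)
print(f"R1 = {r1:.2f} bit/s/Hz, R2 = {r2:.2f} bit/s/Hz")
```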
2005.05471
Lawrence Smolinsky
Lawrence Smolinsky and Aaron J. Lercher
Co-author weighting in bibliometric methodology and subfields of a scientific discipline
11 pages, 1 figure, 4 tables
null
null
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collaborative work and co-authorship are fundamental to the advancement of modern science. However, it is not clear how collaboration should be measured in achievement-based metrics. Co-author weighted credit introduces distortions into the bibliometric description of a discipline. It puts great weight on collaboration, not based on the results of collaboration, but purely because of the existence of collaborations. In terms of publication and citation impact, it artificially favors some subdisciplines. In order to understand how credit is given in a co-author weighted system (like the NRC's method), we introduce credit spaces. We include a study of the discipline of physics to illustrate the method. Indicators are introduced to measure the proportion of a credit space awarded to a subfield or a set of authors.
[ { "created": "Mon, 11 May 2020 22:40:21 GMT", "version": "v1" } ]
2020-05-13
[ [ "Smolinsky", "Lawrence", "" ], [ "Lercher", "Aaron J.", "" ] ]
Collaborative work and co-authorship are fundamental to the advancement of modern science. However, it is not clear how collaboration should be measured in achievement-based metrics. Co-author weighted credit introduces distortions into the bibliometric description of a discipline. It puts great weight on collaboration, not based on the results of collaboration, but purely because of the existence of collaborations. In terms of publication and citation impact, it artificially favors some subdisciplines. In order to understand how credit is given in a co-author weighted system (like the NRC's method), we introduce credit spaces. We include a study of the discipline of physics to illustrate the method. Indicators are introduced to measure the proportion of a credit space awarded to a subfield or a set of authors.
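To make the distortion concrete, a tiny illustrative computation (the papers and the equal-split rule are assumptions for illustration, not the NRC's exact method): full co-author counting hands each of the k authors one unit per paper, while fractional counting splits a single unit among them, so heavily collaborative subfields gain under the former.

```python
from collections import defaultdict

# Hypothetical mini-corpus: (subfield, author list) per paper.
papers = [
    ("theory", ["A"]),
    ("experiment", ["B", "C", "D", "E"]),
]

full = defaultdict(float)        # co-author weighted: 1 credit per author
fractional = defaultdict(float)  # 1 credit per paper, split among authors

for subfield, authors in papers:
    full[subfield] += len(authors)   # k credits enter the system
    fractional[subfield] += 1.0      # each author gets 1/k, summing to 1

print(dict(full))        # {'theory': 1.0, 'experiment': 4.0}
print(dict(fractional))  # {'theory': 1.0, 'experiment': 1.0}
```

One paper per subfield, yet full counting awards the collaborative subfield four times the credit: the kind of inflation the credit-space analysis quantifies.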
2308.07170
Jeremy Cochoy
Jeremy Cochoy
Human Voice Pitch Estimation: A Convolutional Network with Auto-Labeled and Synthetic Data
null
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
In the domain of music and sound processing, pitch extraction plays a pivotal role. Our research presents a specialized convolutional neural network designed for pitch extraction, particularly from the human singing voice in a cappella performances. Notably, our approach combines synthetic data with auto-labeled a cappella sung audio, creating a robust training environment. Evaluation across datasets comprising synthetic sounds, opera recordings, and time-stretched vowels demonstrates its efficacy. This work paves the way for enhanced pitch extraction in both music and voice settings.
[ { "created": "Mon, 14 Aug 2023 14:26:52 GMT", "version": "v1" }, { "created": "Sun, 17 Dec 2023 17:46:27 GMT", "version": "v2" } ]
2023-12-19
[ [ "Cochoy", "Jeremy", "" ] ]
In the domain of music and sound processing, pitch extraction plays a pivotal role. Our research presents a specialized convolutional neural network designed for pitch extraction, particularly from the human singing voice in a cappella performances. Notably, our approach combines synthetic data with auto-labeled a cappella sung audio, creating a robust training environment. Evaluation across datasets comprising synthetic sounds, opera recordings, and time-stretched vowels demonstrates its efficacy. This work paves the way for enhanced pitch extraction in both music and voice settings.
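The synthetic-data half of such a training setup can be sketched very simply; this is a guess at the most basic generator (a harmonic tone whose fundamental is known by construction and therefore serves as the label), not the paper's actual pipeline:

```python
import numpy as np

def synth_example(f0, sr=16000, dur=0.5, harmonics=4):
    """Return a harmonic tone whose label is its fundamental frequency f0."""
    t = np.arange(int(sr * dur)) / sr
    x = sum((0.5 ** k) * np.sin(2 * np.pi * (k + 1) * f0 * t)
            for k in range(harmonics))
    return x / np.max(np.abs(x)), f0

audio, label = synth_example(f0=220.0)  # A3; the label is exact by construction
print(audio.shape, label)               # (8000,) 220.0
```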
1404.6784
Joao Leite
Martin Slota, Martin Bal\'az, Jo\~ao Leite
On Strong and Default Negation in Logic Program Updates (Extended Version)
14 pages, extended version of the paper to appear in the online supplement of Theory and Practice of Logic Programming (TPLP), and presented at the 15th International Workshop on Non-Monotonic Reasoning (NMR 2014) and at the 30th International Conference on Logic Programming (ICLP 2014)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing semantics for answer-set program updates fall into two categories: either they consider only strong negation in heads of rules, or they primarily rely on default negation in heads of rules and optionally provide support for strong negation by means of a syntactic transformation. In this paper we pinpoint the limitations of both these approaches and argue that both types of negation should be first-class citizens in the context of updates. We identify principles that plausibly constrain their interaction but are not simultaneously satisfied by any existing rule update semantics. Then we extend one of the most advanced semantics with direct support for strong negation and show that it satisfies the outlined principles as well as a variety of other desirable properties.
[ { "created": "Sun, 27 Apr 2014 16:33:42 GMT", "version": "v1" }, { "created": "Thu, 8 May 2014 10:46:56 GMT", "version": "v2" }, { "created": "Wed, 11 Jun 2014 23:30:20 GMT", "version": "v3" }, { "created": "Wed, 9 Jul 2014 16:05:40 GMT", "version": "v4" } ]
2014-07-10
[ [ "Slota", "Martin", "" ], [ "Baláz", "Martin", "" ], [ "Leite", "João", "" ] ]
Existing semantics for answer-set program updates fall into two categories: either they consider only strong negation in heads of rules, or they primarily rely on default negation in heads of rules and optionally provide support for strong negation by means of a syntactic transformation. In this paper we pinpoint the limitations of both these approaches and argue that both types of negation should be first-class citizens in the context of updates. We identify principles that plausibly constrain their interaction but are not simultaneously satisfied by any existing rule update semantics. Then we extend one of the most advanced semantics with direct support for strong negation and show that it satisfies the outlined principles as well as a variety of other desirable properties.
1610.00813
Joel Mathias
Joel Mathias, Ana Bu\v{s}i\'c, Sean Meyn
Demand Dispatch with Heterogeneous Intelligent Loads
Extended version of the paper that was published in Proc. 50th Annual Hawaii International Conference on System Sciences (HICSS), 2017. This version contains an extended appendix that provides details relevant to the simulations, including: (i) design of the optimal linear inverse filter in Appendix A2, and (ii) the creation of nominal transition matrices for TCLs using Monte Carlo in Appendix A3
Proc. 50th Annual Hawaii International Conference on System Sciences (HICSS), 2017
10.24251/HICSS.2017.380
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A distributed control architecture is presented that is intended to make a collection of heterogeneous loads appear to the grid operator as a nearly perfect battery. Local control is based on randomized decision rules advocated in prior research, and extended in this paper to any load with a discrete number of power states. Additional linear filtering at the load ensures that the aggregate has a nearly flat input-output response: the behavior of an ideal, multi-GW battery system.
[ { "created": "Tue, 4 Oct 2016 01:14:00 GMT", "version": "v1" }, { "created": "Thu, 24 Oct 2019 02:48:49 GMT", "version": "v2" } ]
2019-10-25
[ [ "Mathias", "Joel", "" ], [ "Bušić", "Ana", "" ], [ "Meyn", "Sean", "" ] ]
A distributed control architecture is presented that is intended to make a collection of heterogeneous loads appear to the grid operator as a nearly perfect battery. Local control is based on randomized decision rules advocated in prior research, and extended in this paper to any load with a discrete number of power states. Additional linear filtering at the load ensures that the aggregate has a nearly flat input-output response: the behavior of an ideal, multi-GW battery system.
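A toy version of the local randomized control idea, stripped of the paper's linear filtering: two-state loads flip on or off with probabilities nudged by a broadcast grid signal, and only the population average responds to the signal. All constants here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_loads = 10_000
state = rng.random(n_loads) < 0.5   # True = load is on
p_base = 0.05                       # nominal per-step switching probability

def local_step(state, zeta):
    """zeta in [-1, 1] is the broadcast signal; zeta > 0 asks for more power.
    Each load flips state independently with a signal-biased probability."""
    p_on = np.clip(p_base * (1 + zeta), 0.0, 1.0)   # off -> on
    p_off = np.clip(p_base * (1 - zeta), 0.0, 1.0)  # on -> off
    u = rng.random(state.size)
    return np.where(state, u >= p_off, u < p_on)

trace = []
for t in range(400):
    zeta = 0.8 * np.sin(2 * np.pi * t / 200)  # reference to be tracked
    state = local_step(state, zeta)
    trace.append(state.mean())                # aggregate power (fraction on)

print(min(trace), max(trace))  # the aggregate swings around 0.5 with the signal
```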
2006.08292
Liangchen Hu
Liangchen Hu and Wensheng Zhang
Robust Locality-Aware Regression for Labeled Data Classification
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the dramatic increase in the dimensionality of data representations, extracting latent low-dimensional features becomes of the utmost importance for efficient classification. To address the unclear margin representation and the difficulty of revealing the data manifold structure found in most existing linear discriminant methods, we propose a new discriminant feature extraction framework, namely Robust Locality-Aware Regression (RLAR). In our model, we introduce a retargeted regression to perform the marginal representation learning adaptively instead of using the general average inter-class margin. In addition, we formulate a new strategy for enhancing the local intra-class compactness of the data manifold, which achieves joint learning of a locality-aware graph structure and a desirable projection matrix. To alleviate the disturbance of outliers and prevent overfitting, we measure the regression term and the locality-aware term, together with the regularization term, by the L2,1 norm. Further, forcing row sparsity on the projection matrix through the L2,1 norm achieves the cooperation of feature selection and feature extraction. We then derive an effective iterative algorithm for solving the proposed model. Experimental results over a range of UCI data sets and other benchmark databases demonstrate that the proposed RLAR outperforms some state-of-the-art approaches.
[ { "created": "Mon, 15 Jun 2020 11:36:59 GMT", "version": "v1" } ]
2020-06-16
[ [ "Hu", "Liangchen", "" ], [ "Zhang", "Wensheng", "" ] ]
With the dramatic increase in the dimensionality of data representations, extracting latent low-dimensional features becomes of the utmost importance for efficient classification. To address the unclear margin representation and the difficulty of revealing the data manifold structure found in most existing linear discriminant methods, we propose a new discriminant feature extraction framework, namely Robust Locality-Aware Regression (RLAR). In our model, we introduce a retargeted regression to perform the marginal representation learning adaptively instead of using the general average inter-class margin. In addition, we formulate a new strategy for enhancing the local intra-class compactness of the data manifold, which achieves joint learning of a locality-aware graph structure and a desirable projection matrix. To alleviate the disturbance of outliers and prevent overfitting, we measure the regression term and the locality-aware term, together with the regularization term, by the L2,1 norm. Further, forcing row sparsity on the projection matrix through the L2,1 norm achieves the cooperation of feature selection and feature extraction. We then derive an effective iterative algorithm for solving the proposed model. Experimental results over a range of UCI data sets and other benchmark databases demonstrate that the proposed RLAR outperforms some state-of-the-art approaches.
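For readers unfamiliar with the L2,1 norm doing the robustness work above: it is the sum of the Euclidean norms of the matrix rows, and penalizing it drives whole rows to zero. A one-liner illustration:

```python
import numpy as np

def l21_norm(M):
    """L2,1 norm: the sum of the Euclidean norms of the rows of M."""
    return np.linalg.norm(M, axis=1).sum()

M = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])
print(l21_norm(M))  # 5 + 0 + 1 = 6.0; all-zero rows are what row sparsity buys
```

Applied to a projection matrix, zeroed rows correspond to discarded input features, which is how the same penalty yields feature selection alongside feature extraction.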
2012.12411
Charlie C.L. Wang Prof. Dr.
Rob B.N. Scharff, Guoxin Fang, Yingjun Tian, Jun Wu, Jo M.P. Geraedts, Charlie C.L. Wang
Sensing and Reconstruction of 3D Deformation on Pneumatic Soft Robots
8 pages, 10 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-time proprioception is a challenging problem for soft robots, which have almost infinite degrees of freedom in body deformation. When multiple actuators are used, it becomes more difficult, as deformation can also occur on the actuators themselves because of their interaction with each other. To tackle this problem, we present a method in this paper to sense and reconstruct 3D deformation on pneumatic soft robots by first integrating multiple low-cost sensors inside the chambers of the pneumatic actuators and then using machine learning to convert the captured signals into shape parameters of the soft robots. An exterior motion capture system is employed to generate the datasets for both training and testing. With the help of a good shape parameterization, the 3D shape of a soft robot can be accurately reconstructed from signals obtained from multiple sensors. We demonstrate the effectiveness of this approach on two designs of soft robots: a robotic joint and a deformable membrane. After parameterizing the deformation of these soft robots into compact shape parameters, we can effectively train the neural networks to reconstruct the 3D deformation from the sensor signals. The sensing and shape prediction pipeline can run at 50 Hz in real time on a consumer-level device.
[ { "created": "Tue, 22 Dec 2020 23:18:49 GMT", "version": "v1" } ]
2020-12-24
[ [ "Scharff", "Rob B. N.", "" ], [ "Fang", "Guoxin", "" ], [ "Tian", "Yingjun", "" ], [ "Wu", "Jun", "" ], [ "Geraedts", "Jo M. P.", "" ], [ "Wang", "Charlie C. L.", "" ] ]
Real-time proprioception is a challenging problem for soft robots, which have almost infinite degrees of freedom in body deformation. When multiple actuators are used, it becomes more difficult, as deformation can also occur on the actuators themselves because of their interaction with each other. To tackle this problem, we present a method in this paper to sense and reconstruct 3D deformation on pneumatic soft robots by first integrating multiple low-cost sensors inside the chambers of the pneumatic actuators and then using machine learning to convert the captured signals into shape parameters of the soft robots. An exterior motion capture system is employed to generate the datasets for both training and testing. With the help of a good shape parameterization, the 3D shape of a soft robot can be accurately reconstructed from signals obtained from multiple sensors. We demonstrate the effectiveness of this approach on two designs of soft robots: a robotic joint and a deformable membrane. After parameterizing the deformation of these soft robots into compact shape parameters, we can effectively train the neural networks to reconstruct the 3D deformation from the sensor signals. The sensing and shape prediction pipeline can run at 50 Hz in real time on a consumer-level device.
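The learning step, mapping in-chamber sensor readings to compact shape parameters, is a plain regression problem; a toy stand-in (the sensor and parameter counts are invented, the synthetic linear mapping replaces real motion-capture labels, and scikit-learn stands in for whatever network the authors trained):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
signals = rng.normal(size=(1000, 6))        # 6 in-chamber sensor readings
shape = signals @ rng.normal(size=(6, 3))   # 3 shape parameters (toy mapping)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000)
net.fit(signals[:800], shape[:800])
print(net.score(signals[800:], shape[800:]))  # R^2 on held-out readings
```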
1907.09019
Eric Sun
Eric D. Sun and Ron Dekel
ImageNet-trained deep neural network exhibits illusion-like response to the Scintillating Grid
Supplementary material at end of document
null
null
null
cs.CV cs.LG eess.IV q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural network (DNN) models for computer vision are now capable of human-level object recognition. Consequently, similarities in the performance and vulnerabilities of DNN and human vision are of great interest. Here we characterize the response of the VGG-19 DNN to images of the Scintillating Grid visual illusion, in which white dots are perceived to be partially black. We observed a significant deviation from the expected monotonic relation between VGG-19 representational dissimilarity and dot whiteness in the Scintillating Grid. That is, a linear increase in dot whiteness leads to a non-linear increase and then, remarkably, a decrease (non-monotonicity) in representational dissimilarity. In control images, mostly monotonic relations between representational dissimilarity and dot whiteness were observed. Furthermore, the dot whiteness level corresponding to the maximal representational dissimilarity (i.e. onset of non-monotonic dissimilarity) matched closely with that corresponding to the onset of illusion perception in human observers. As such, the non-monotonic response in the DNN is a potential model correlate for human illusion perception.
[ { "created": "Sun, 21 Jul 2019 19:14:47 GMT", "version": "v1" }, { "created": "Mon, 5 Aug 2019 02:13:38 GMT", "version": "v2" } ]
2019-08-06
[ [ "Sun", "Eric D.", "" ], [ "Dekel", "Ron", "" ] ]
Deep neural network (DNN) models for computer vision are now capable of human-level object recognition. Consequently, similarities in the performance and vulnerabilities of DNN and human vision are of great interest. Here we characterize the response of the VGG-19 DNN to images of the Scintillating Grid visual illusion, in which white dots are perceived to be partially black. We observed a significant deviation from the expected monotonic relation between VGG-19 representational dissimilarity and dot whiteness in the Scintillating Grid. That is, a linear increase in dot whiteness leads to a non-linear increase and then, remarkably, a decrease (non-monotonicity) in representational dissimilarity. In control images, mostly monotonic relations between representational dissimilarity and dot whiteness were observed. Furthermore, the dot whiteness level corresponding to the maximal representational dissimilarity (i.e. onset of non-monotonic dissimilarity) matched closely with that corresponding to the onset of illusion perception in human observers. As such, the non-monotonic response in the DNN is a potential model correlate for human illusion perception.
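The key measurement above, representational dissimilarity as a function of dot whiteness, can be emulated with any feature extractor. A hedged sketch follows: correlation-based dissimilarity between a reference image's features and each variant's, plus a simple monotonicity check; random vectors stand in for VGG-19 activations, which are not loaded here.

```python
import numpy as np

def dissimilarity(f_a, f_b):
    """1 - Pearson correlation between two flattened feature vectors."""
    return 1.0 - np.corrcoef(f_a.ravel(), f_b.ravel())[0, 1]

def is_monotonic(curve, tol=1e-9):
    d = np.diff(curve)
    return bool(np.all(d >= -tol) or np.all(d <= tol))

# Stand-in features for 11 dot-whiteness levels; swap in real DNN activations.
rng = np.random.default_rng(2)
f_ref = rng.normal(size=4096)
curve = [dissimilarity(f_ref, f_ref + w * rng.normal(size=4096))
         for w in np.linspace(0.0, 1.0, 11)]
print(is_monotonic(curve))
```

In the paper's terms, a False result on the real activation curve, rising and then falling with whiteness, is the illusion-like signature.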
0709.0426
Noelle Carbonell
No\"elle Carbonell (INRIA Rocquencourt / INRIA Lorraine - LORIA), Suzanne Kieffer (INRIA Rocquencourt / INRIA Lorraine - LORIA)
Do oral messages help visual search?
26 pages
Advances in Natural Multimodal Dialogue Systems, Dordrecht (NL) Springer (Ed.) (2005) pp. 131-157
null
null
cs.HC
null
A preliminary experimental study is presented that aims to elicit the contribution of oral messages to facilitating visual search tasks on crowded visual displays. Results of quantitative and qualitative analyses suggest that appropriate verbal messages can improve both target selection time and accuracy. In particular, multimodal messages including a visual presentation of the isolated target together with absolute spatial oral information on its location in the displayed scene seem most effective. These messages also received top-ranking ratings from most subjects.
[ { "created": "Tue, 4 Sep 2007 13:23:40 GMT", "version": "v1" } ]
2007-09-05
[ [ "Carbonell", "Noëlle", "", "INRIA Rocquencourt / INRIA Lorraine - LORIA" ], [ "Kieffer", "Suzanne", "", "INRIA Rocquencourt / INRIA Lorraine - LORIA" ] ]
A preliminary experimental study is presented that aims to elicit the contribution of oral messages to facilitating visual search tasks on crowded visual displays. Results of quantitative and qualitative analyses suggest that appropriate verbal messages can improve both target selection time and accuracy. In particular, multimodal messages including a visual presentation of the isolated target together with absolute spatial oral information on its location in the displayed scene seem most effective. These messages also received top-ranking ratings from most subjects.
2105.06314
Ismini Psychoula
Ismini Psychoula, Andreas Gutmann, Pradip Mainali, S. H. Lee, Paul Dunphy, Fabien A. P. Petitcolas
Explainable Machine Learning for Fraud Detection
To be published in IEEE Computer Special Issue on Explainable AI and Machine Learning, 12 pages, 7 figures
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The application of machine learning to support the processing of large datasets holds promise in many industries, including financial services. However, practical issues remain for the full adoption of machine learning, chief among them the ability to understand and explain the decisions and predictions made by complex models. In this paper, we explore explainability methods in the domain of real-time fraud detection by investigating the selection of appropriate background datasets and runtime trade-offs on both supervised and unsupervised models.
[ { "created": "Thu, 13 May 2021 14:12:02 GMT", "version": "v1" } ]
2021-05-14
[ [ "Psychoula", "Ismini", "" ], [ "Gutmann", "Andreas", "" ], [ "Mainali", "Pradip", "" ], [ "Lee", "S. H.", "" ], [ "Dunphy", "Paul", "" ], [ "Petitcolas", "Fabien A. P.", "" ] ]
The application of machine learning to support the processing of large datasets holds promise in many industries, including financial services. However, practical issues remain for the full adoption of machine learning, chief among them the ability to understand and explain the decisions and predictions made by complex models. In this paper, we explore explainability methods in the domain of real-time fraud detection by investigating the selection of appropriate background datasets and runtime trade-offs on both supervised and unsupervised models.
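The background-dataset question above is exactly the choice one faces with SHAP's KernelExplainer. A hedged sketch of comparing a random sample against a k-means summary as background: shap.kmeans and shap.KernelExplainer are real shap APIs, but the model, the data, and whether either background suits fraud workloads are placeholders, the latter being precisely what the paper investigates.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
X = rng.random((500, 8))                   # stand-in transaction features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # stand-in fraud labels
model = RandomForestClassifier(n_estimators=50).fit(X, y)

bg_random = X[rng.choice(len(X), 50, replace=False)]  # random-sample background
bg_kmeans = shap.kmeans(X, 10)                        # k-means summary background

for bg in (bg_random, bg_kmeans):
    explainer = shap.KernelExplainer(model.predict_proba, bg)
    shap_values = explainer.shap_values(X[:3], nsamples=100)  # explain 3 rows
```

The smaller summarized background trades attribution fidelity for runtime, the kind of trade-off that matters in a real-time setting.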
0910.1757
Laurent Tapie
Laurent Tapie (LURPA), Kwamiwi Mawussi (LURPA)
Decomposition of forging die for high speed machining
null
IDMME - Virtual Concept 2008, Beijing : China (2008)
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today's forging die manufacturing process must be adapted to several evolutions in machining process generation: CAD/CAM models, CAM software solutions and High Speed Machining (HSM). In this context, the adequacy between die shape and the HSM process is at the core of machining preparation and process planning approaches. This paper deals with an original approach to machining preparation that integrates this adequacy in the main tasks carried out. In this approach, the design of the machining process is based on two levels of decomposition of the geometrical model of a given die with respect to HSM cutting conditions (cutting speed and feed rate) and technological constraints (tool selection, feature accessibility). This decomposition assists the machining assistant in generating an HSM process. The result of this decomposition is the identification of machining features.
[ { "created": "Fri, 9 Oct 2009 14:33:17 GMT", "version": "v1" } ]
2009-10-12
[ [ "Tapie", "Laurent", "", "LURPA" ], [ "Mawussi", "Kwamiwi", "", "LURPA" ] ]
Today's forging die manufacturing process must be adapted to several evolutions in machining process generation: CAD/CAM models, CAM software solutions and High Speed Machining (HSM). In this context, the adequacy between die shape and the HSM process is at the core of machining preparation and process planning approaches. This paper deals with an original approach to machining preparation that integrates this adequacy in the main tasks carried out. In this approach, the design of the machining process is based on two levels of decomposition of the geometrical model of a given die with respect to HSM cutting conditions (cutting speed and feed rate) and technological constraints (tool selection, feature accessibility). This decomposition assists the machining assistant in generating an HSM process. The result of this decomposition is the identification of machining features.
1503.03244
Baotian Hu
Baotian Hu, Zhengdong Lu, Hang Li, Qingcai Chen
Convolutional Neural Network Architectures for Matching Natural Language Sentences
null
null
null
null
cs.CL cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semantic matching is of central importance to many natural language tasks \cite{bordes2014semantic,RetrievalQA}. A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge of language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study demonstrates the efficacy of the proposed models on a variety of matching tasks and their superiority to competitor models.
[ { "created": "Wed, 11 Mar 2015 09:46:36 GMT", "version": "v1" } ]
2015-03-12
[ [ "Hu", "Baotian", "" ], [ "Lu", "Zhengdong", "" ], [ "Li", "Hang", "" ], [ "Chen", "Qingcai", "" ] ]
Semantic matching is of central importance to many natural language tasks \cite{bordes2014semantic,RetrievalQA}. A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge of language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study demonstrates the efficacy of the proposed models on a variety of matching tasks and their superiority to competitor models.
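A minimal sketch of the Siamese-style variant of convolutional sentence matching (encode each sentence by convolution and pooling, then score the pair); the layer sizes and this exact wiring are my assumptions, not the paper's architectures:

```python
import torch
import torch.nn as nn

class ConvMatcher(nn.Module):
    """Encode each sentence with 1-D convolution + max-pooling,
    then score the pair with an MLP on the concatenated codes."""
    def __init__(self, vocab=10000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 1))

    def encode(self, ids):                    # ids: (batch, seq_len)
        x = self.emb(ids).transpose(1, 2)     # (batch, dim, seq_len)
        return torch.relu(self.conv(x)).max(dim=2).values

    def forward(self, ids_a, ids_b):
        return self.mlp(torch.cat([self.encode(ids_a),
                                   self.encode(ids_b)], dim=1))

m = ConvMatcher()
score = m(torch.randint(0, 10000, (2, 12)), torch.randint(0, 10000, (2, 12)))
print(score.shape)  # torch.Size([2, 1])
```

Encoding each sentence independently, as here, defers all interaction to the final layers; the paper's point is that letting the sentences interact earlier captures richer matching patterns.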
2104.11670
Nitzan Tur
Roy Schwartz, Nitzan Tur
The Metric Relaxation for $0$-Extension Admits an $\Omega(\log^{2/3}{k})$ Gap
27 pages, 3 figures, will appear in STOC 2021
null
10.1145/3406325.3451071
null
cs.DS math.MG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the $0$-Extension problem, where we are given an undirected graph $\mathcal{G}=(V,E)$ equipped with non-negative edge weights $w:E\rightarrow \mathbb{R}^+$, a collection $T=\{ t_1,\ldots,t_k\}\subseteq V$ of $k$ special vertices called terminals, and a semi-metric $D$ over $T$. The goal is to assign every non-terminal vertex to a terminal while minimizing the sum over all edges of the weight of the edge multiplied by the distance in $D$ between the terminals to which the endpoints of the edge are assigned. $0$-Extension admits two known algorithms, achieving approximations of $O(\log{k})$ [C{\u{a}}linescu-Karloff-Rabani SICOMP '05] and $O(\log{k}/\log{\log{k}})$ [Fakcharoenphol-Harrelson-Rao-Talwar SODA '03]. Both known algorithms are based on rounding a natural linear programming relaxation called the metric relaxation, in which $D$ is extended from $T$ to all of $V$. The current best known integrality gap for the metric relaxation is $\Omega (\sqrt{\log{k}})$. In this work we present an improved integrality gap of $\Omega(\log^{\frac{2}{3}}k)$ for the metric relaxation. Our construction is based on the randomized extension of one graph by another, a notion that captures lifts of graphs as a special case and might be of independent interest. Inspired by algebraic topology, our analysis of the gap instance is based on proving that no continuous section (in the topological sense) exists in the randomized extension.
[ { "created": "Fri, 23 Apr 2021 15:53:06 GMT", "version": "v1" } ]
2021-04-26
[ [ "Schwartz", "Roy", "" ], [ "Tur", "Nitzan", "" ] ]
We consider the $0$-Extension problem, where we are given an undirected graph $\mathcal{G}=(V,E)$ equipped with non-negative edge weights $w:E\rightarrow \mathbb{R}^+$, a collection $T=\{ t_1,\ldots,t_k\}\subseteq V$ of $k$ special vertices called terminals, and a semi-metric $D$ over $T$. The goal is to assign every non-terminal vertex to a terminal while minimizing the sum over all edges of the weight of the edge multiplied by the distance in $D$ between the terminals to which the endpoints of the edge are assigned. $0$-Extension admits two known algorithms, achieving approximations of $O(\log{k})$ [C{\u{a}}linescu-Karloff-Rabani SICOMP '05] and $O(\log{k}/\log{\log{k}})$ [Fakcharoenphol-Harrelson-Rao-Talwar SODA '03]. Both known algorithms are based on rounding a natural linear programming relaxation called the metric relaxation, in which $D$ is extended from $T$ to all of $V$. The current best known integrality gap for the metric relaxation is $\Omega (\sqrt{\log{k}})$. In this work we present an improved integrality gap of $\Omega(\log^{\frac{2}{3}}k)$ for the metric relaxation. Our construction is based on the randomized extension of one graph by another, a notion that captures lifts of graphs as a special case and might be of independent interest. Inspired by algebraic topology, our analysis of the gap instance is based on proving that no continuous section (in the topological sense) exists in the randomized extension.
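For reference, the metric relaxation that both cited algorithms round, and whose integrality gap this work improves, can be written out explicitly from the definitions in the abstract:

```latex
\begin{align*}
\text{minimize}\quad & \sum_{(u,v)\in E} w(u,v)\,\delta(u,v) \\
\text{subject to}\quad & \delta \text{ is a semi-metric on } V, \\
& \delta(t,t') = D(t,t') \quad \text{for all } t,t' \in T.
\end{align*}
```

An integral solution corresponds to a $\delta$ induced by an assignment of vertices to terminals, so the optimal LP value can only be smaller; the integrality gap measures how much smaller.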
2311.02692
Zhelun Shi
Zhelun Shi, Zhipin Wang, Hongxing Fan, Zhenfei Yin, Lu Sheng, Yu Qiao, Jing Shao
ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models
39 pages, 26 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal Large Language Models (MLLMs) have shown impressive abilities in interacting with visual content, with myriad potential downstream tasks. However, even though a number of benchmarks have been proposed, the capabilities and limitations of MLLMs are still not comprehensively understood, due to the lack of a standardized and holistic evaluation framework. To this end, we present the first Comprehensive Evaluation Framework (ChEF) that can holistically profile each MLLM and fairly compare different MLLMs. First, we structure ChEF as four modular components, i.e., Scenario as scalable multimodal datasets, Instruction as flexible instruction retrieving formulae, Inferencer as reliable question answering strategies, and Metric as indicative task-specific score functions. Based on them, ChEF facilitates versatile evaluations in a standardized framework, and new evaluations can be built by designing new Recipes (systematic selection of these four components). Notably, current MLLM benchmarks can be readily summarized as recipes of ChEF. Second, we introduce 6 new recipes to quantify competent MLLMs' desired capabilities (also called desiderata, i.e., calibration, in-context learning, instruction following, language performance, hallucination, and robustness) as reliable agents that can perform real-world multimodal interactions. Third, we conduct a large-scale evaluation of 9 prominent MLLMs on 9 scenarios and 6 desiderata. Our evaluation summarizes over 20 valuable observations concerning the generalizability of MLLMs across various scenarios and the composite capability of MLLMs required for multimodal interactions. We will publicly release all the detailed implementations for further analysis, as well as an easy-to-use modular toolkit for the integration of new recipes and models, so that ChEF can be a growing evaluation framework for the MLLM community.
[ { "created": "Sun, 5 Nov 2023 16:01:40 GMT", "version": "v1" } ]
2023-11-07
[ [ "Shi", "Zhelun", "" ], [ "Wang", "Zhipin", "" ], [ "Fan", "Hongxing", "" ], [ "Yin", "Zhenfei", "" ], [ "Sheng", "Lu", "" ], [ "Qiao", "Yu", "" ], [ "Shao", "Jing", "" ] ]
Multimodal Large Language Models (MLLMs) have shown impressive abilities in interacting with visual content, with myriad potential downstream tasks. However, even though a number of benchmarks have been proposed, the capabilities and limitations of MLLMs are still not comprehensively understood, due to the lack of a standardized and holistic evaluation framework. To this end, we present the first Comprehensive Evaluation Framework (ChEF) that can holistically profile each MLLM and fairly compare different MLLMs. First, we structure ChEF as four modular components, i.e., Scenario as scalable multimodal datasets, Instruction as flexible instruction retrieving formulae, Inferencer as reliable question answering strategies, and Metric as indicative task-specific score functions. Based on them, ChEF facilitates versatile evaluations in a standardized framework, and new evaluations can be built by designing new Recipes (systematic selection of these four components). Notably, current MLLM benchmarks can be readily summarized as recipes of ChEF. Second, we introduce 6 new recipes to quantify competent MLLMs' desired capabilities (also called desiderata, i.e., calibration, in-context learning, instruction following, language performance, hallucination, and robustness) as reliable agents that can perform real-world multimodal interactions. Third, we conduct a large-scale evaluation of 9 prominent MLLMs on 9 scenarios and 6 desiderata. Our evaluation summarizes over 20 valuable observations concerning the generalizability of MLLMs across various scenarios and the composite capability of MLLMs required for multimodal interactions. We will publicly release all the detailed implementations for further analysis, as well as an easy-to-use modular toolkit for the integration of new recipes and models, so that ChEF can be a growing evaluation framework for the MLLM community.
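The four-component structure lends itself to a plain "recipe" abstraction. A speculative sketch of how such a framework might be wired follows; every class and field name here is invented for illustration and is not ChEF's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Recipe:
    scenario: Sequence      # (sample, reference) pairs from a multimodal dataset
    instruction: Callable   # sample -> prompt
    inferencer: Callable    # (model, prompt, sample) -> answer
    metric: Callable        # (answers, references) -> score

    def evaluate(self, model) -> float:
        answers, refs = [], []
        for sample, ref in self.scenario:
            answers.append(self.inferencer(model, self.instruction(sample), sample))
            refs.append(ref)
        return self.metric(answers, refs)
```

Under this reading, an existing benchmark is one fixed assignment of the four fields, and a new evaluation is simply a new Recipe.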
1801.01757
Fulvio Mastrogiovanni
Alessio Capitanelli, Marco Maratea, Fulvio Mastrogiovanni, Mauro Vallati
On the manipulation of articulated objects in human-robot cooperation scenarios
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Articulated and flexible objects constitute a challenge for robot manipulation tasks but are present in different real-world settings, including home and industrial environments. Current approaches to the manipulation of articulated and flexible objects employ ad hoc strategies to sequence and perform actions on them depending on a number of physical or geometrical characteristics related to those objects, as well as on an a priori classification of target object configurations. In this paper, we propose an action planning and execution framework, which (i) considers abstract representations of articulated or flexible objects, (ii) integrates action planning to reason upon such configurations and to sequence an appropriate set of actions with the aim of obtaining a target configuration provided as a goal, and (iii) is able to cooperate with humans to collaboratively carry out the plan. On the one hand, we show that a trade-off exists between the way articulated or flexible objects are perceived and how the system represents them. Such a trade-off greatly impacts on the complexity of the planning process. On the other hand, we demonstrate the system's capabilities in allowing humans to interrupt robot action execution, and - in general - to contribute to the whole manipulation process. Results related to planning performance are discussed, and examples of a Baxter dual-arm manipulator performing actions collaboratively with humans are shown.
[ { "created": "Fri, 5 Jan 2018 14:08:21 GMT", "version": "v1" }, { "created": "Sat, 13 Jan 2018 07:54:49 GMT", "version": "v2" } ]
2018-01-16
[ [ "Capitanelli", "Alessio", "" ], [ "Maratea", "Marco", "" ], [ "Mastrogiovanni", "Fulvio", "" ], [ "Vallati", "Mauro", "" ] ]
Articulated and flexible objects constitute a challenge for robot manipulation tasks but are present in different real-world settings, including home and industrial environments. Current approaches to the manipulation of articulated and flexible objects employ ad hoc strategies to sequence and perform actions on them depending on a number of physical or geometrical characteristics related to those objects, as well as on an a priori classification of target object configurations. In this paper, we propose an action planning and execution framework, which (i) considers abstract representations of articulated or flexible objects, (ii) integrates action planning to reason upon such configurations and to sequence an appropriate set of actions with the aim of obtaining a target configuration provided as a goal, and (iii) is able to cooperate with humans to collaboratively carry out the plan. On the one hand, we show that a trade-off exists between the way articulated or flexible objects are perceived and how the system represents them. Such a trade-off greatly impacts on the complexity of the planning process. On the other hand, we demonstrate the system's capabilities in allowing humans to interrupt robot action execution, and - in general - to contribute to the whole manipulation process. Results related to planning performance are discussed, and examples of a Baxter dual-arm manipulator performing actions collaboratively with humans are shown.
2203.10853
Mingkui Tan
Shuaicheng Niu and Jiaxiang Wu and Yifan Zhang and Guanghui Xu and Haokun Li and Peilin Zhao and Junzhou Huang and Yaowei Wang and Mingkui Tan
Boost Test-Time Performance with Closed-Loop Inference
10 pages, 10 figures, conference
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional deep models predict a test sample with a single forward propagation, which, however, may not be sufficient for predicting hard-classified samples. In contrast, we human beings may need to check a sample carefully many times before making a final decision. During the recheck process, one may refine/adjust the prediction by referring to related samples. Motivated by this, we propose to predict those hard-classified test samples in a looped manner to boost the model performance. However, this idea may pose a critical challenge: how to construct looped inference, so that the original erroneous predictions on these hard test samples can be corrected with little additional effort. To address this, we propose a general Closed-Loop Inference (CLI) method. Specifically, we first devise a filtering criterion to identify those hard-classified test samples that need additional inference loops. For each hard sample, we construct an additional auxiliary learning task based on its original top-$K$ predictions to calibrate the model, and then use the calibrated model to obtain the final prediction. Promising results on ImageNet (in-distribution test samples) and ImageNet-C (out-of-distribution test samples) demonstrate the effectiveness of CLI in improving the performance of any pre-trained model.
[ { "created": "Mon, 21 Mar 2022 10:20:21 GMT", "version": "v1" }, { "created": "Sat, 26 Mar 2022 12:10:32 GMT", "version": "v2" } ]
2022-03-29
[ [ "Niu", "Shuaicheng", "" ], [ "Wu", "Jiaxiang", "" ], [ "Zhang", "Yifan", "" ], [ "Xu", "Guanghui", "" ], [ "Li", "Haokun", "" ], [ "Zhao", "Peilin", "" ], [ "Huang", "Junzhou", "" ], [ "Wang", "Yaowei", "" ], [ "Tan", "Mingkui", "" ] ]
Conventional deep models predict a test sample with a single forward propagation, which, however, may not be sufficient for predicting hard-classified samples. In contrast, we human beings may need to check a sample carefully many times before making a final decision. During the recheck process, one may refine/adjust the prediction by referring to related samples. Motivated by this, we propose to predict those hard-classified test samples in a looped manner to boost the model performance. However, this idea may pose a critical challenge: how to construct looped inference, so that the original erroneous predictions on these hard test samples can be corrected with little additional effort. To address this, we propose a general Closed-Loop Inference (CLI) method. Specifically, we first devise a filtering criterion to identify those hard-classified test samples that need additional inference loops. For each hard sample, we construct an additional auxiliary learning task based on its original top-$K$ predictions to calibrate the model, and then use the calibrated model to obtain the final prediction. Promising results on ImageNet (in-distribution test samples) and ImageNet-C (out-of-distribution test samples) demonstrate the effectiveness of CLI in improving the performance of any pre-trained model.
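One simple instantiation of a filtering criterion like the one described (the paper's exact criterion may differ; here hard samples are flagged by low maximum softmax probability):

```python
import numpy as np

def flag_hard(probs, threshold=0.5):
    """probs: (n, n_classes) softmax outputs from the single forward pass.
    Returns a mask of samples to route into additional inference loops."""
    return probs.max(axis=1) < threshold

probs = np.array([[0.90, 0.05, 0.05],   # confident: keep the prediction
                  [0.40, 0.35, 0.25]])  # ambiguous: recheck in a loop
print(flag_hard(probs))                 # [False  True]
```

The threshold trades extra compute against the chance of correcting an erroneous prediction, which is the central budget question for any closed-loop scheme.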
0706.3412
Blai Bonet
Nerio Borges, Blai Bonet
On Canonical Forms of Complete Problems via First-order Projections
9 pages
null
null
null
cs.CC
null
The class of problems complete for NP via first-order reductions is known to be characterized by existential second-order sentences of a fixed form. All such sentences are built around the so-called generalized IS-form of the sentence that defines Independent-Set. This result can also be understood as saying that every sentence that defines an NP-complete problem P can be decomposed into two disjuncts, such that the first characterizes a fragment of P as hard as Independent-Set and the second the rest of P. That is, a decomposition that divides every such sentence into a quotient and residue modulo Independent-Set. In this paper, we show that this result can be generalized over a wide collection of complexity classes, including the so-called nice classes. Moreover, we show that such a decomposition can be done for any complete problem with respect to the given class, and that two such decompositions are non-equivalent in general. Interestingly, our results are based on simple and well-known properties of first-order reductions.
[ { "created": "Fri, 22 Jun 2007 21:27:06 GMT", "version": "v1" } ]
2007-06-26
[ [ "Borges", "Nerio", "" ], [ "Bonet", "Blai", "" ] ]
The class of problems complete for NP via first-order reductions is known to be characterized by existential second-order sentences of a fixed form. All such sentences are built around the so-called generalized IS-form of the sentence that defines Independent-Set. This result can also be understood as saying that every sentence that defines an NP-complete problem P can be decomposed into two disjuncts, such that the first characterizes a fragment of P as hard as Independent-Set and the second the rest of P. That is, a decomposition that divides every such sentence into a quotient and residue modulo Independent-Set. In this paper, we show that this result can be generalized over a wide collection of complexity classes, including the so-called nice classes. Moreover, we show that such a decomposition can be done for any complete problem with respect to the given class, and that two such decompositions are non-equivalent in general. Interestingly, our results are based on simple and well-known properties of first-order reductions.
1809.07296
Michael Baddeley
Michael Baddeley, Reza Nejabati, George Oikonomou, Mahesh Sooriyabandara, Dimitra Simeonidou
Evolving SDN for Low-Power IoT Networks
null
null
10.1109/NETSOFT.2018.8460125
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software Defined Networking (SDN) offers a flexible and scalable architecture that abstracts decision making away from individual devices and provides a programmable network platform. However, implementing a centralized SDN architecture within the constraints of a low-power wireless network faces considerable challenges. Not only is controller traffic subject to jitter due to unreliable links and network contention, but the overhead generated by SDN can severely affect the performance of other traffic. This paper addresses the challenge of bringing high-overhead SDN architecture to IEEE 802.15.4 networks. We explore how traditional SDN needs to evolve in order to overcome the constraints of low-power wireless networks, and discuss protocol and architectural optimizations necessary to reduce SDN control overhead - the main barrier to successful implementation. We argue that interoperability with the existing protocol stack is necessary to provide a platform for controller discovery and coexistence with legacy networks. We consequently introduce {\mu}SDN, a lightweight SDN framework for Contiki, with both IPv6 and underlying routing protocol interoperability, as well as optimizing a number of elements within the SDN architecture to reduce control overhead to practical levels. We evaluate {\mu}SDN in terms of latency, energy, and packet delivery. Through this evaluation we show how the cost of SDN control overhead (both bootstrapping and management) can be reduced to a point where comparable performance and scalability is achieved against an IEEE 802.15.4-2012 RPL-based network. Additionally, we demonstrate {\mu}SDN through simulation: providing a use-case where the SDN configurability can be used to provide Quality of Service (QoS) for critical network flows experiencing interference, and we achieve considerable reductions in delay and jitter in comparison to a scenario without SDN.
[ { "created": "Wed, 19 Sep 2018 16:46:10 GMT", "version": "v1" }, { "created": "Tue, 8 Jan 2019 12:38:59 GMT", "version": "v2" }, { "created": "Wed, 29 May 2019 14:31:21 GMT", "version": "v3" } ]
2019-05-30
[ [ "Baddeley", "Michael", "" ], [ "Nejabati", "Reza", "" ], [ "Oikonomou", "George", "" ], [ "Sooriyabandara", "Mahesh", "" ], [ "Simeonidou", "Dimitra", "" ] ]
Software Defined Networking (SDN) offers a flexible and scalable architecture that abstracts decision making away from individual devices and provides a programmable network platform. However, implementing a centralized SDN architecture within the constraints of a low-power wireless network faces considerable challenges. Not only is controller traffic subject to jitter due to unreliable links and network contention, but the overhead generated by SDN can severely affect the performance of other traffic. This paper addresses the challenge of bringing high-overhead SDN architecture to IEEE 802.15.4 networks. We explore how traditional SDN needs to evolve in order to overcome the constraints of low-power wireless networks, and discuss protocol and architectural optimizations necessary to reduce SDN control overhead - the main barrier to successful implementation. We argue that interoperability with the existing protocol stack is necessary to provide a platform for controller discovery and coexistence with legacy networks. We consequently introduce {\mu}SDN, a lightweight SDN framework for Contiki, with both IPv6 and underlying routing protocol interoperability, as well as optimizing a number of elements within the SDN architecture to reduce control overhead to practical levels. We evaluate {\mu}SDN in terms of latency, energy, and packet delivery. Through this evaluation we show how the cost of SDN control overhead (both bootstrapping and management) can be reduced to a point where comparable performance and scalability is achieved against an IEEE 802.15.4-2012 RPL-based network. Additionally, we demonstrate {\mu}SDN through simulation: providing a use-case where the SDN configurability can be used to provide Quality of Service (QoS) for critical network flows experiencing interference, and we achieve considerable reductions in delay and jitter in comparison to a scenario without SDN.
1203.2973
Sigal Oren
David Bindel, Jon Kleinberg and Sigal Oren
How Bad is Forming Your Own Opinion?
null
null
null
null
cs.GT physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The question of how people form their opinion has fascinated economists and sociologists for quite some time. In many of the models, a group of people in a social network, each holding a numerical opinion, arrive at a shared opinion through repeated averaging with their neighbors in the network. Motivated by the observation that consensus is rarely reached in real opinion dynamics, we study a related sociological model in which individuals' intrinsic beliefs counterbalance the averaging process and yield a diversity of opinions. By interpreting the repeated averaging as best-response dynamics in an underlying game with natural payoffs, and the limit of the process as an equilibrium, we are able to study the cost of disagreement in these models relative to a social optimum. We provide a tight bound on the cost at equilibrium relative to the optimum; our analysis draws a connection between these agreement models and extremal problems that lead to generalized eigenvalues. We also consider a natural network design problem in this setting: which links can we add to the underlying network to reduce the cost of disagreement at equilibrium?
[ { "created": "Tue, 13 Mar 2012 23:14:40 GMT", "version": "v1" } ]
2012-03-15
[ [ "Bindel", "David", "" ], [ "Kleinberg", "Jon", "" ], [ "Oren", "Sigal", "" ] ]
The question of how people form their opinion has fascinated economists and sociologists for quite some time. In many of the models, a group of people in a social network, each holding a numerical opinion, arrive at a shared opinion through repeated averaging with their neighbors in the network. Motivated by the observation that consensus is rarely reached in real opinion dynamics, we study a related sociological model in which individuals' intrinsic beliefs counterbalance the averaging process and yield a diversity of opinions. By interpreting the repeated averaging as best-response dynamics in an underlying game with natural payoffs, and the limit of the process as an equilibrium, we are able to study the cost of disagreement in these models relative to a social optimum. We provide a tight bound on the cost at equilibrium relative to the optimum; our analysis draws a connection between these agreement models and extremal problems that lead to generalized eigenvalues. We also consider a natural network design problem in this setting: which links can we add to the underlying network to reduce the cost of disagreement at equilibrium?
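The repeated-averaging-with-intrinsic-beliefs process described above is commonly written as the update z_i = (s_i + sum_j w_ij z_j) / (1 + sum_j w_ij), which is each agent's best response to the cost (z_i - s_i)^2 + sum_j w_ij (z_i - z_j)^2. A small simulation on an invented 3-node path:

```python
import numpy as np

W = np.array([[0., 1., 0.],   # symmetric non-negative interaction weights
              [1., 0., 1.],
              [0., 1., 0.]])
s = np.array([0.0, 0.5, 1.0])  # intrinsic beliefs
z = s.copy()                   # expressed opinions

for _ in range(100):
    # Best response to cost (z_i - s_i)^2 + sum_j w_ij (z_i - z_j)^2
    z = (s + W @ z) / (1.0 + W.sum(axis=1))

print(z)  # [0.25 0.5 0.75]: a diverse equilibrium, not consensus
```

The equilibrium keeps opinions spread out precisely because each z_i is anchored to its own s_i, which is the disagreement whose social cost the paper bounds.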
1306.3727
Lin Chen
Lin Chen, Deshi Ye, Guochuan Zhang
A note on scheduling with low rank processing times
14 pages
null
null
null
cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the classical minimum makespan scheduling problem, where the processing time of job $j$ on machine $i$ is $p_{ij}$, and the matrix $P=(p_{ij})_{m\times n}$ is of a low rank. It is proved in (Bhaskara et al., SODA 2013) that rank 7 scheduling is NP-hard to approximate within a factor of $3/2-\epsilon$, and rank 4 scheduling is APX-hard (NP-hard to approximate within a factor of $1.03-\epsilon$). We improve this result by showing that rank 4 scheduling is already NP-hard to approximate within a factor of $3/2-\epsilon$, and that rank 3 scheduling is APX-hard.
[ { "created": "Mon, 17 Jun 2013 02:19:11 GMT", "version": "v1" } ]
2013-06-18
[ [ "Chen", "Lin", "" ], [ "Ye", "Deshi", "" ], [ "Zhang", "Guochuan", "" ] ]
We consider the classical minimum makespan scheduling problem, where the processing time of job $j$ on machine $i$ is $p_{ij}$, and the matrix $P=(p_{ij})_{m\times n}$ is of a low rank. It is proved in (Bhaskara et al., SODA 2013) that rank 7 scheduling is NP-hard to approximate within a factor of $3/2-\epsilon$, and rank 4 scheduling is APX-hard (NP-hard to approximate within a factor of $1.03-\epsilon$). We improve this result by showing that rank 4 scheduling is already NP-hard to approximate within a factor of $3/2-\epsilon$, and that rank 3 scheduling is APX-hard.
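For intuition about what a rank-r processing-time matrix means: $P$ factors into per-machine and per-job feature vectors of length $r$, with $p_{ij}$ their inner product. A quick illustration (the instance is invented):

```python
import numpy as np

m, n, r = 3, 5, 2
rng = np.random.default_rng(3)
U = rng.random((m, r))   # one r-dimensional feature vector per machine
V = rng.random((r, n))   # one r-dimensional feature vector per job
P = U @ V                # p_ij = <u_i, v_j>, hence rank(P) <= r

print(np.linalg.matrix_rank(P))  # 2
```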
2404.05281
Boshko Koloski
Syrielle Montariol and Matej Martinc and Andra\v{z} Pelicon and Senja Pollak and Boshko Koloski and Igor Lon\v{c}arski and Aljo\v{s}a Valentin\v{c}i\v{c}
Multi-Task Learning for Features Extraction in Financial Annual Reports
Accepted at MIDAS Workshop at ECML-PKDD 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
For assessing various performance indicators of companies, the focus is shifting from strictly financial (quantitative) publicly disclosed information to qualitative (textual) information. This textual data can provide valuable weak signals, for example through stylistic features, which can complement the quantitative data on financial performance or on Environmental, Social and Governance (ESG) criteria. In this work, we use various multi-task learning methods for financial text classification with the focus on financial sentiment, objectivity, forward-looking sentence prediction and ESG-content detection. We propose different methods to combine the information extracted from training jointly on different tasks; our best-performing method highlights the positive effect of explicitly adding auxiliary task predictions as features for the final target task during the multi-task training. Next, we use these classifiers to extract textual features from annual reports of FTSE350 companies and investigate the link between ESG quantitative scores and these features.
[ { "created": "Mon, 8 Apr 2024 08:13:40 GMT", "version": "v1" } ]
2024-04-09
[ [ "Montariol", "Syrielle", "" ], [ "Martinc", "Matej", "" ], [ "Pelicon", "Andraž", "" ], [ "Pollak", "Senja", "" ], [ "Koloski", "Boshko", "" ], [ "Lončarski", "Igor", "" ], [ "Valentinčič", "Aljoša", "" ] ]
For assessing various performance indicators of companies, the focus is shifting from strictly financial (quantitative) publicly disclosed information to qualitative (textual) information. This textual data can provide valuable weak signals, for example through stylistic features, which can complement the quantitative data on financial performance or on Environmental, Social and Governance (ESG) criteria. In this work, we use various multi-task learning methods for financial text classification with the focus on financial sentiment, objectivity, forward-looking sentence prediction and ESG-content detection. We propose different methods to combine the information extracted from training jointly on different tasks; our best-performing method highlights the positive effect of explicitly adding auxiliary task predictions as features for the final target task during the multi-task training. Next, we use these classifiers to extract textual features from annual reports of FTSE350 companies and investigate the link between ESG quantitative scores and these features.
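A hedged sketch of the best-performing idea reported above: feeding auxiliary-task predictions to the target task as extra features. The tasks, synthetic data, and scikit-learn models are stand-ins, not the paper's actual multi-task setup; in practice one would use out-of-fold auxiliary predictions to avoid leakage.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                  # stand-in text embeddings
y_aux = (X[:, 0] > 0).astype(int)               # hypothetical auxiliary label (e.g., objectivity)
y_main = ((X[:, 0] + X[:, 1]) > 0).astype(int)  # hypothetical target label (e.g., sentiment)

aux_clf = LogisticRegression().fit(X, y_aux)
aux_probs = aux_clf.predict_proba(X)            # auxiliary-task predictions
X_enriched = np.hstack([X, aux_probs])          # appended as explicit features

main_clf = LogisticRegression().fit(X_enriched, y_main)
print(main_clf.score(X_enriched, y_main))       # target task now sees the aux signal
```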
2304.12139
Jimmy Lin
Xueguang Ma, Tommaso Teofili, Jimmy Lin
Anserini Gets Dense Retrieval: Integration of Lucene's HNSW Indexes
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anserini is a Lucene-based toolkit for reproducible information retrieval research in Java that has been gaining traction in the community. It provides retrieval capabilities for both "traditional" bag-of-words retrieval models such as BM25 as well as retrieval using learned sparse representations such as SPLADE. With Pyserini, which provides a Python interface to Anserini, users gain access to both sparse and dense retrieval models, as Pyserini implements bindings to the Faiss vector search library alongside Lucene inverted indexes in a uniform, consistent interface. Nevertheless, hybrid fusion techniques that integrate sparse and dense retrieval models need to stitch together results from two completely different "software stacks", which creates unnecessary complexities and inefficiencies. However, the introduction of HNSW indexes for dense vector search in Lucene promises the integration of both dense and sparse retrieval within a single software framework. We explore exactly this integration in the context of Anserini. Experiments on the MS MARCO passage and BEIR datasets show that our Anserini HNSW integration supports (reasonably) effective and (reasonably) efficient approximate nearest neighbor search for dense retrieval models, using only Lucene.
[ { "created": "Mon, 24 Apr 2023 14:44:27 GMT", "version": "v1" } ]
2023-04-25
[ [ "Ma", "Xueguang", "" ], [ "Teofili", "Tommaso", "" ], [ "Lin", "Jimmy", "" ] ]
Anserini is a Lucene-based toolkit for reproducible information retrieval research in Java that has been gaining traction in the community. It provides retrieval capabilities for both "traditional" bag-of-words retrieval models such as BM25 as well as retrieval using learned sparse representations such as SPLADE. With Pyserini, which provides a Python interface to Anserini, users gain access to both sparse and dense retrieval models, as Pyserini implements bindings to the Faiss vector search library alongside Lucene inverted indexes in a uniform, consistent interface. Nevertheless, hybrid fusion techniques that integrate sparse and dense retrieval models need to stitch together results from two completely different "software stacks", which creates unnecessary complexities and inefficiencies. However, the introduction of HNSW indexes for dense vector search in Lucene promises the integration of both dense and sparse retrieval within a single software framework. We explore exactly this integration in the context of Anserini. Experiments on the MS MARCO passage and BEIR datasets show that our Anserini HNSW integration supports (reasonably) effective and (reasonably) efficient approximate nearest neighbor search for dense retrieval models, using only Lucene.
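The following is not Anserini's Java API; it is only a conceptual sketch of the underlying idea -- approximate nearest-neighbor search over dense embeddings with an HNSW index -- using the hnswlib library, with random vectors standing in for learned passage representations.

```python
import numpy as np
import hnswlib

dim, n_docs = 128, 10_000
docs = np.random.rand(n_docs, dim).astype(np.float32)   # stand-in passage embeddings

index = hnswlib.Index(space="ip", dim=dim)               # inner-product similarity
index.init_index(max_elements=n_docs, ef_construction=200, M=16)
index.add_items(docs, np.arange(n_docs))
index.set_ef(64)                                         # query-time accuracy/speed knob

query = np.random.rand(1, dim).astype(np.float32)        # stand-in query embedding
labels, distances = index.knn_query(query, k=10)
print(labels[0])                                         # ids of the top-10 passages
```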
2404.03704
Luis Sigcha
Luis Sigcha, Luigi Borz\`i, Ignacio Pav\'on, N\'elson Costa, Susana Costa, Pedro Arezes, Juan-Manuel L\'opez, Guillermo De Arcas
Improvement of Performance in Freezing of Gait detection in Parkinson's Disease using Transformer networks and a single waist-worn triaxial accelerometer
null
Engineering Applications of Artificial Intelligence Volume 116, November 2022, 105482
10.1016/j.engappai.2022.105482
null
cs.LG cs.AI eess.SP
http://creativecommons.org/licenses/by/4.0/
Freezing of gait (FOG) is one of the most incapacitating symptoms in Parkinson's disease, affecting more than 50 percent of patients in advanced stages of the disease. The presence of FOG may lead to falls and a loss of independence, with a consequent reduction in quality of life. Wearable technology and artificial intelligence have been used for automatic FOG detection to optimize monitoring. However, differences between laboratory and daily-life conditions present challenges for the implementation of reliable detection systems. Consequently, improvement of FOG detection methods remains important to provide accurate monitoring mechanisms intended for free-living and real-time use. This paper presents advances in automatic FOG detection using a single body-worn triaxial accelerometer and a novel classification algorithm based on Transformers and convolutional networks. This study was performed with data from 21 patients who manifested FOG episodes while performing activities of daily living in a home setting. Results indicate that the proposed FOG-Transformer can bring a significant improvement in FOG detection using leave-one-subject-out cross-validation (LOSO CV). These results bring opportunities for the implementation of accurate monitoring systems for use in ambulatory or home settings.
[ { "created": "Thu, 4 Apr 2024 09:02:17 GMT", "version": "v1" } ]
2024-04-08
[ [ "Sigcha", "Luis", "" ], [ "Borzì", "Luigi", "" ], [ "Pavón", "Ignacio", "" ], [ "Costa", "Nélson", "" ], [ "Costa", "Susana", "" ], [ "Arezes", "Pedro", "" ], [ "López", "Juan-Manuel", "" ], [ "De Arcas", "Guillermo", "" ] ]
Freezing of gait (FOG) is one of the most incapacitating symptoms in Parkinson's disease, affecting more than 50 percent of patients in advanced stages of the disease. The presence of FOG may lead to falls and a loss of independence, with a consequent reduction in quality of life. Wearable technology and artificial intelligence have been used for automatic FOG detection to optimize monitoring. However, differences between laboratory and daily-life conditions present challenges for the implementation of reliable detection systems. Consequently, improvement of FOG detection methods remains important to provide accurate monitoring mechanisms intended for free-living and real-time use. This paper presents advances in automatic FOG detection using a single body-worn triaxial accelerometer and a novel classification algorithm based on Transformers and convolutional networks. This study was performed with data from 21 patients who manifested FOG episodes while performing activities of daily living in a home setting. Results indicate that the proposed FOG-Transformer can bring a significant improvement in FOG detection using leave-one-subject-out cross-validation (LOSO CV). These results bring opportunities for the implementation of accurate monitoring systems for use in ambulatory or home settings.
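A minimal PyTorch sketch loosely inspired by the convolution-plus-Transformer design described above; the layer sizes, input format (batch, 3 accelerometer axes, time), and pooling are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FOGClassifier(nn.Module):
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        # convolutional front-end turns raw triaxial samples into tokens
        self.conv = nn.Conv1d(3, d_model, kernel_size=5, stride=2, padding=2)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 2)   # FOG vs. no-FOG

    def forward(self, x):                   # x: (batch, 3, time)
        h = self.conv(x).transpose(1, 2)    # (batch, time', d_model)
        h = self.encoder(h).mean(dim=1)     # pool over time
        return self.head(h)

logits = FOGClassifier()(torch.randn(8, 3, 256))  # 8 windows of 256 samples
print(logits.shape)                               # torch.Size([8, 2])
```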
2306.09298
Leonhard Horstmeyer
Leonhard Horstmeyer
Lakat: An open and permissionless architecture for continuous integration academic publishing
23 pages, 5 figures, 1 table
null
null
null
cs.NI
http://creativecommons.org/licenses/by-sa/4.0/
In this paper, we present three contributions to the field of academic publishing. Firstly, we introduce Lakat, a novel base layer for a publishing system that fosters collaboration, pluralism and permissionless participation. Drawing inspiration from the philosophy of Imre Lakatos, Lakat is designed as a peer-to-peer process- and conflict-oriented system that supports continuous integration across multiple branches. This architecture provides a robust foundation for the integration of existing reputation systems and incentive structures or the development of new ones. Secondly, we propose a new consensus mechanism, called Proof of Review, which ensures the integrity and quality of the content while promoting active participation from the community. Lastly, we present Lignification, a new finality gadget specifically designed for branched, permissionless systems. Lignification provides a deterministic way to find the consensual state in these systems, ensuring the system's robustness and reliability in handling complex scenarios where multiple contributors may be proposing changes simultaneously. Together, these contributions aim to provide a convenient starting point to tackle some of the issues in traditional paper-formatted publishing of research output. By prioritizing collaboration, process-orientation, and pluralism, Lakat aims to improve the way research is conducted and disseminated and ultimately hopes to contribute to a healthier and more productive academic culture.
[ { "created": "Thu, 15 Jun 2023 17:27:16 GMT", "version": "v1" } ]
2023-06-16
[ [ "Horstmeyer", "Leonhard", "" ] ]
In this paper, we present three contributions to the field of academic publishing. Firstly, we introduce Lakat, a novel base layer for a publishing system that fosters collaboration, pluralism and permissionless participation. Drawing inspiration from the philosophy of Imre Lakatos, Lakat is designed as a peer-to-peer process- and conflict-oriented system that supports continuous integration across multiple branches. This architecture provides a robust foundation for the integration of existing reputation systems and incentive structures or the development of new ones. Secondly, we propose a new consensus mechanism, called Proof of Review, which ensures the integrity and quality of the content while promoting active participation from the community. Lastly, we present Lignification, a new finality gadget specifically designed for branched, permissionless systems. Lignification provides a deterministic way to find the consensual state in these systems, ensuring the system's robustness and reliability in handling complex scenarios where multiple contributors may be proposing changes simultaneously. Together, these contributions aim to provide a convenient starting point to tackle some of the issues in traditional paper-formatted publishing of research output. By prioritizing collaboration, process-orientation, and pluralism, Lakat aims to improve the way research is conducted and disseminated and ultimately hopes to contribute to a healthier and more productive academic culture.
1708.01944
Abram Handler
Abram Handler, Brendan O'Connor
Rookie: A unique approach for exploring news archives
Presented at KDD 2017: Data Science + Journalism workshop
null
null
null
cs.HC cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
News archives are an invaluable primary source for placing current events in historical context. But current search engine tools do a poor job at uncovering broad themes and narratives across documents. We present Rookie: a practical software system which uses natural language processing (NLP) to help readers, reporters and editors uncover broad stories in news archives. Unlike prior work, Rookie's design emerged from 18 months of iterative development in consultation with editors and computational journalists. This process led to a dramatically different approach from previous academic systems with similar goals. Our efforts offer a generalizable case study for others building real-world journalism software using NLP.
[ { "created": "Sun, 6 Aug 2017 22:20:02 GMT", "version": "v1" } ]
2017-08-08
[ [ "Handler", "Abram", "" ], [ "O'Connor", "Brendan", "" ] ]
News archives are an invaluable primary source for placing current events in historical context. But current search engine tools do a poor job at uncovering broad themes and narratives across documents. We present Rookie: a practical software system which uses natural language processing (NLP) to help readers, reporters and editors uncover broad stories in news archives. Unlike prior work, Rookie's design emerged from 18 months of iterative development in consultation with editors and computational journalists. This process led to a dramatically different approach from previous academic systems with similar goals. Our efforts offer a generalizable case study for others building real-world journalism software using NLP.
1809.10508
Sebastien Ratel
Victor Chepoi and Arnaud Labourel and Sebastien Ratel
Distance and routing labeling schemes for cube-free median graphs
34 pages, 10 figures
null
null
null
cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distance labeling schemes are schemes that label the vertices of a graph with short labels in such a way that the distance between any two vertices $u$ and $v$ can be determined efficiently by merely inspecting the labels of $u$ and $v$, without using any other information. Similarly, routing labeling schemes label the vertices of a graph in such a way that given the labels of a source node and a destination node, it is possible to compute efficiently the port number of the edge from the source that heads in the direction of the destination. One of the important problems is finding natural classes of graphs admitting distance and/or routing labeling schemes with labels of polylogarithmic size. In this paper, we show that the class of cube-free median graphs on $n$ nodes enjoys distance and routing labeling schemes with labels of $O(\log^3 n)$ bits.
[ { "created": "Thu, 27 Sep 2018 13:30:48 GMT", "version": "v1" }, { "created": "Mon, 6 Jul 2020 13:20:44 GMT", "version": "v2" } ]
2020-07-07
[ [ "Chepoi", "Victor", "" ], [ "Labourel", "Arnaud", "" ], [ "Ratel", "Sebastien", "" ] ]
Distance labeling schemes are schemes that label the vertices of a graph with short labels in such a way that the distance between any two vertices $u$ and $v$ can be determined efficiently by merely inspecting the labels of $u$ and $v$, without using any other information. Similarly, routing labeling schemes label the vertices of a graph in such a way that given the labels of a source node and a destination node, it is possible to compute efficiently the port number of the edge from the source that heads in the direction of the destination. One of the important problems is finding natural classes of graphs admitting distance and/or routing labeling schemes with labels of polylogarithmic size. In this paper, we show that the class of cube-free median graphs on $n$ nodes enjoys distance and routing labeling schemes with labels of $O(\log^3 n)$ bits.
2306.14401
Jon Butler
Jon T. Butler, Tsutomu Sasao, and Shinobu Nagayama
On the distribution of sensitivities of symmetric Boolean functions
5 pages, 0 figures The submitted paper is a journal version of "Enumeration of Symmetric Boolean Functions By Sensitivity" by J. Butler, T. Sasao, and S. Nagayama presented at the Reed-Muller Workshop, Matsue, Japan on May 24, 2023. Paper was presented, but not distributed. Authors retained copyright
null
null
null
cs.CC
http://creativecommons.org/licenses/by/4.0/
A Boolean function $f({\vec x})$ is sensitive to bit $x_i$ if there is at least one input vector $\vec x$ and one bit $x_i$ in $\vec x$, such that changing $x_i$ changes $f$. A function has sensitivity $s$ if among all input vectors, the largest number of bits to which $f$ is sensitive is $s$. We count the $n$-variable symmetric Boolean functions that have maximum sensitivity. We show that most such functions have the largest possible sensitivity, $n$. This suggests sensitivity is limited as a complexity measure for symmetric Boolean functions.
[ { "created": "Mon, 26 Jun 2023 03:29:54 GMT", "version": "v1" } ]
2023-06-27
[ [ "Butler", "Jon T.", "" ], [ "Sasao", "Tsutomu", "" ], [ "Nagayama", "Shinobu", "" ] ]
A Boolean function $f({\vec x})$ is sensitive to bit $x_i$ if there is at least one input vector $\vec x$ and one bit $x_i$ in $\vec x$, such that changing $x_i$ changes $f$. A function has sensitivity $s$ if among all input vectors, the largest number of bits to which $f$ is sensitive is $s$. We count the $n$-variable symmetric Boolean functions that have maximum sensitivity. We show that most such functions have the largest possible sensitivity, $n$. This suggests sensitivity is limited as a complexity measure for symmetric Boolean functions.
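A brute-force check of the definition above, for a symmetric function given by its value vector v[0..n] (its value on inputs of Hamming weight k); this naive helper is only practical for small n and is not the paper's counting argument.

```python
from itertools import product

def sensitivity(v):
    """Maximum, over all inputs x, of the number of bit flips that change f(x)."""
    n = len(v) - 1
    best = 0
    for x in product([0, 1], repeat=n):
        w = sum(x)
        # flipping a 0-bit moves the weight to w+1, flipping a 1-bit to w-1
        s = sum(1 for i in range(n) if v[w] != v[w + 1 - 2 * x[i]])
        best = max(best, s)
    return best

print(sensitivity([0, 1, 0, 1]))  # parity on 3 bits: every flip matters, so 3
```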
1404.7610
Takeaki Uno
Takeaki Uno, Hiroko Satoh
An Efficient Algorithm for Enumerating Chordless Cycles and Chordless Paths
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A chordless cycle (induced cycle) $C$ of a graph is a cycle without any chord, meaning that there is no edge outside the cycle connecting two vertices of the cycle. A chordless path is defined similarly. In this paper, we consider the problems of enumerating chordless cycles/paths of a given graph $G=(V,E)$, and propose algorithms taking $O(|E|)$ time for each chordless cycle/path. These problems had not been deeply studied in theoretical computer science, and no output-polynomial time algorithm had been proposed. Our experiments showed that the computation time of our algorithms is constant per chordless cycle/path for non-dense random graphs and real-world graphs. They also showed that the number of chordless cycles is much smaller than the number of cycles. We applied the algorithm to the prediction of NMR (Nuclear Magnetic Resonance) spectra, and increased the accuracy of the prediction.
[ { "created": "Wed, 30 Apr 2014 06:57:09 GMT", "version": "v1" } ]
2014-05-01
[ [ "Uno", "Takeaki", "" ], [ "Satoh", "Hiroko", "" ] ]
A chordless cycle (induced cycle) $C$ of a graph is a cycle without any chord, meaning that there is no edge outside the cycle connecting two vertices of the cycle. A chordless path is defined similarly. In this paper, we consider the problems of enumerating chordless cycles/paths of a given graph $G=(V,E)$, and propose algorithms taking $O(|E|)$ time for each chordless cycle/path. These problems had not been deeply studied in theoretical computer science, and no output-polynomial time algorithm had been proposed. Our experiments showed that the computation time of our algorithms is constant per chordless cycle/path for non-dense random graphs and real-world graphs. They also showed that the number of chordless cycles is much smaller than the number of cycles. We applied the algorithm to the prediction of NMR (Nuclear Magnetic Resonance) spectra, and increased the accuracy of the prediction.
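The helper below only illustrates the definition above -- a cycle is chordless iff no edge joins two non-consecutive cycle vertices -- and is not the paper's enumeration algorithm.

```python
import networkx as nx

def is_chordless(G, cycle):
    """cycle: list of vertices in cyclic order; returns True iff it has no chord."""
    k = len(cycle)
    for i in range(k):
        for j in range(i + 2, k):
            if i == 0 and j == k - 1:       # these two are consecutive around the cycle
                continue
            if G.has_edge(cycle[i], cycle[j]):
                return False                # found a chord
    return True

G = nx.cycle_graph(5)                       # C5 is itself chordless
print(is_chordless(G, list(range(5))))      # True
G.add_edge(0, 2)                            # add a chord
print(is_chordless(G, list(range(5))))      # False
```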
1202.4833
EPTCS
Vanda Santos (CISUC/ESTGV - IPV), Pedro Quaresma (CISUC/Department of Mathematics, University of Coimbra)
Integrating DGSs and GATPs in an Adaptative and Collaborative Blended-Learning Web-Environment
In Proceedings THedu'11, arXiv:1202.4535
EPTCS 79, 2012, pp. 111-123
10.4204/EPTCS.79.7
null
cs.CG cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Geometry, with its very strong and appealing visual content and the equally strong connection between that content and its formal specification, is an area where computational tools can significantly enhance learning environments. Dynamic geometry software systems (DGSs) can be used to explore the visual content of geometry. These already mature tools allow the easy construction of geometric figures built from free objects and elementary constructions. Geometric automated theorem provers (GATPs) allow formal deductive reasoning about geometric constructions, extending reasoning via concrete instances in a given model to formal deductive reasoning in a geometric theory. An adaptive and collaborative blended-learning environment where DGS and GATP features could be fully explored would be, in our opinion, a very rich and challenging learning environment for teachers and students. In this text we describe the Web Geometry Laboratory, a Web environment incorporating a DGS and a repository of geometric problems that can be used in a synchronous and asynchronous fashion and has some adaptive and collaborative features. As future work we want to enhance the adaptive and collaborative aspects of the environment and also to incorporate a GATP, constructing a dynamic and individualised learning environment for geometry.
[ { "created": "Wed, 22 Feb 2012 06:42:02 GMT", "version": "v1" } ]
2012-02-23
[ [ "Santos", "Vanda", "", "CISUC/ESTGV - IPV" ], [ "Quaresma", "Pedro", "", "CISUC/Department of\n Mathematics, University of Coimbra" ] ]
Geometry, with its very strong and appealing visual content and the equally strong connection between that content and its formal specification, is an area where computational tools can significantly enhance learning environments. Dynamic geometry software systems (DGSs) can be used to explore the visual content of geometry. These already mature tools allow the easy construction of geometric figures built from free objects and elementary constructions. Geometric automated theorem provers (GATPs) allow formal deductive reasoning about geometric constructions, extending reasoning via concrete instances in a given model to formal deductive reasoning in a geometric theory. An adaptive and collaborative blended-learning environment where DGS and GATP features could be fully explored would be, in our opinion, a very rich and challenging learning environment for teachers and students. In this text we describe the Web Geometry Laboratory, a Web environment incorporating a DGS and a repository of geometric problems that can be used in a synchronous and asynchronous fashion and has some adaptive and collaborative features. As future work we want to enhance the adaptive and collaborative aspects of the environment and also to incorporate a GATP, constructing a dynamic and individualised learning environment for geometry.
1901.03768
Taejoon Byun
Taejoon Byun, Vaibhav Sharma, Abhishek Vijayakumar, Sanjai Rayadurgam, Darren Cofer
Input Prioritization for Testing Neural Networks
null
null
null
null
cs.SE cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) are increasingly being adopted for sensing and control functions in a variety of safety and mission-critical systems such as self-driving cars, autonomous air vehicles, medical diagnostics, and industrial robotics. Failures of such systems can lead to loss of life or property, which necessitates stringent verification and validation for providing high assurance. Though formal verification approaches are being investigated, testing remains the primary technique for assessing the dependability of such systems. Due to the nature of the tasks handled by DNNs, the cost of obtaining test oracle data---the expected output, a.k.a. label, for a given input---is high, which significantly impacts the amount and quality of testing that can be performed. Thus, prioritizing input data for testing DNNs in meaningful ways to reduce the cost of labeling can go a long way in increasing testing efficacy. This paper proposes using gauges of the DNN's sentiment derived from the computation performed by the model, as a means to identify inputs that are likely to reveal weaknesses. We empirically assessed the efficacy of three such sentiment measures for prioritization---confidence, uncertainty, and surprise---and compared their effectiveness in terms of their fault-revealing capability and retraining effectiveness. The results indicate that sentiment measures can effectively flag inputs that expose unacceptable DNN behavior. For MNIST models, the average percentage of inputs correctly flagged ranged from 88% to 94.8%.
[ { "created": "Fri, 11 Jan 2019 23:13:47 GMT", "version": "v1" } ]
2019-01-15
[ [ "Byun", "Taejoon", "" ], [ "Sharma", "Vaibhav", "" ], [ "Vijayakumar", "Abhishek", "" ], [ "Rayadurgam", "Sanjai", "" ], [ "Cofer", "Darren", "" ] ]
Deep neural networks (DNNs) are increasingly being adopted for sensing and control functions in a variety of safety and mission-critical systems such as self-driving cars, autonomous air vehicles, medical diagnostics, and industrial robotics. Failures of such systems can lead to loss of life or property, which necessitates stringent verification and validation for providing high assurance. Though formal verification approaches are being investigated, testing remains the primary technique for assessing the dependability of such systems. Due to the nature of the tasks handled by DNNs, the cost of obtaining test oracle data---the expected output, a.k.a. label, for a given input---is high, which significantly impacts the amount and quality of testing that can be performed. Thus, prioritizing input data for testing DNNs in meaningful ways to reduce the cost of labeling can go a long way in increasing testing efficacy. This paper proposes using gauges of the DNN's sentiment derived from the computation performed by the model, as a means to identify inputs that are likely to reveal weaknesses. We empirically assessed the efficacy of three such sentiment measures for prioritization---confidence, uncertainty, and surprise---and compared their effectiveness in terms of their fault-revealing capability and retraining effectiveness. The results indicate that sentiment measures can effectively flag inputs that expose unacceptable DNN behavior. For MNIST models, the average percentage of inputs correctly flagged ranged from 88% to 94.8%.
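A hedged sketch of the simplest of the three measures discussed above: ranking unlabeled inputs from least to most confident softmax output, so that labeling effort goes to the inputs most likely to reveal weaknesses. The model outputs here are made up.

```python
import numpy as np

def prioritize_by_confidence(softmax_outputs):
    """Return input indices ordered from least to most confident."""
    confidence = softmax_outputs.max(axis=1)   # probability of the top class
    return np.argsort(confidence)              # low-confidence inputs first

probs = np.array([[0.90, 0.10],                # confident -> label last
                  [0.55, 0.45],                # borderline -> label first
                  [0.70, 0.30]])
print(prioritize_by_confidence(probs))         # -> [1 2 0]
```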
2007.02833
Amelia Pollard
Amelia Elizabeth Pollard and Jonathan L. Shapiro
Eliminating Catastrophic Interference with Biased Competition
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present here a model to take advantage of the multi-task nature of complex datasets by learning to separate tasks and subtasks in an end-to-end manner by biasing competitive interactions in the network. This method does not require additional labelling or reformatting of data in a dataset. We propose an alternate view to the monolithic one-task-fits-all learning of multi-task problems, and describe a model based on a theory of neuronal attention from neuroscience, proposed by Desimone. We create and exhibit a new toy dataset, based on the MNIST dataset, which we call MNIST-QA, for testing Visual Question Answering architectures in a low-dimensional environment while preserving the more difficult components of the Visual Question Answering task, and demonstrate the proposed network architecture on this new dataset, as well as on COCO-QA and DAQUAR-FULL. We then demonstrate that this model eliminates catastrophic interference between tasks on a newly created toy dataset and provides competitive results in the Visual Question Answering space. We provide further evidence that Visual Question Answering can be approached as a multi-task problem, and demonstrate that this new architecture based on the Biased Competition model is capable of learning to separate and learn the tasks in an end-to-end fashion without the need for task labels.
[ { "created": "Fri, 3 Jul 2020 16:15:15 GMT", "version": "v1" } ]
2020-07-07
[ [ "Pollard", "Amelia Elizabeth", "" ], [ "Shapiro", "Jonathan L.", "" ] ]
We present here a model to take advantage of the multi-task nature of complex datasets by learning to separate tasks and subtasks in an end-to-end manner by biasing competitive interactions in the network. This method does not require additional labelling or reformatting of data in a dataset. We propose an alternate view to the monolithic one-task-fits-all learning of multi-task problems, and describe a model based on a theory of neuronal attention from neuroscience, proposed by Desimone. We create and exhibit a new toy dataset, based on the MNIST dataset, which we call MNIST-QA, for testing Visual Question Answering architectures in a low-dimensional environment while preserving the more difficult components of the Visual Question Answering task, and demonstrate the proposed network architecture on this new dataset, as well as on COCO-QA and DAQUAR-FULL. We then demonstrate that this model eliminates catastrophic interference between tasks on a newly created toy dataset and provides competitive results in the Visual Question Answering space. We provide further evidence that Visual Question Answering can be approached as a multi-task problem, and demonstrate that this new architecture based on the Biased Competition model is capable of learning to separate and learn the tasks in an end-to-end fashion without the need for task labels.
1609.09253
Ivan Grechikhin
Ivan S. Grechikhin
Heuristic with elements of tabu search for Truck and Trailer Routing Problem
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Vehicle Routing Problem is a well-known problem in logistics and transportation, and the variety of such problems is explained by the fact that they occur in many real-life situations. It is an NP-hard combinatorial optimization problem, and finding an exact optimal solution is practically impossible. In this work, the Site-Dependent Truck and Trailer Routing Problem with hard and soft Time Windows and Split Deliveries (SDTTRPTWSD) is considered. In this article, we develop a heuristic with elements of Tabu Search for solving SDTTRPTWSD. The heuristic uses the concept of neighborhoods and visits infeasible solutions during the search. A greedy heuristic is applied to construct an initial solution.
[ { "created": "Thu, 29 Sep 2016 08:37:48 GMT", "version": "v1" } ]
2016-09-30
[ [ "Grechikhin", "Ivan S.", "" ] ]
The Vehicle Routing Problem is a well-known problem in logistics and transportation, and the variety of such problems is explained by the fact that they occur in many real-life situations. It is an NP-hard combinatorial optimization problem, and finding an exact optimal solution is practically impossible. In this work, the Site-Dependent Truck and Trailer Routing Problem with hard and soft Time Windows and Split Deliveries (SDTTRPTWSD) is considered. In this article, we develop a heuristic with elements of Tabu Search for solving SDTTRPTWSD. The heuristic uses the concept of neighborhoods and visits infeasible solutions during the search. A greedy heuristic is applied to construct an initial solution.
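A generic tabu-search skeleton in the spirit of the heuristic above; the neighborhood, cost function, and tabu attributes of the actual routing problem are placeholders (here: a toy one-dimensional minimization over integers).

```python
def tabu_search(start, cost, neighbors, iters=100, tenure=5):
    current, best = start, start
    tabu = []                                  # short-term memory of recent solutions
    for _ in range(iters):
        # best non-tabu neighbor; infeasible moves could be allowed here with a
        # penalized cost, as the heuristic above does
        cands = [n for n in neighbors(current) if n not in tabu]
        if not cands:
            break
        current = min(cands, key=cost)         # accept even if worse (escapes optima)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        if cost(current) < cost(best):
            best = current
    return best

cost = lambda x: (x - 7) ** 2                  # toy objective with optimum at 7
neighbors = lambda x: [x - 1, x + 1]
print(tabu_search(0, cost, neighbors))         # -> 7
```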
2203.13412
Zengjie Song
Zengjie Song, Yuxi Wang, Junsong Fan, Tieniu Tan, Zhaoxiang Zhang
Self-Supervised Predictive Learning: A Negative-Free Method for Sound Source Localization in Visual Scenes
Camera-ready, CVPR 2022. Code: https://github.com/zjsong/SSPL
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sound source localization in visual scenes aims to localize objects emitting the sound in a given image. Recent works showing impressive localization performance typically rely on the contrastive learning framework. However, the random sampling of negatives, as commonly adopted in these methods, can result in misalignment between audio and visual features, thus inducing ambiguity in localization. In this paper, instead of following previous literature, we propose Self-Supervised Predictive Learning (SSPL), a negative-free method for sound localization via explicit positive mining. Specifically, we first devise a three-stream network to elegantly associate sound source with two augmented views of one corresponding video frame, leading to semantically coherent similarities between audio and visual features. Second, we introduce a novel predictive coding module for audio-visual feature alignment. Such a module assists SSPL to focus on target objects in a progressive manner and effectively lowers the positive-pair learning difficulty. Experiments show surprising results that SSPL outperforms the state-of-the-art approach on two standard sound localization benchmarks. In particular, SSPL achieves significant improvements of 8.6% cIoU and 3.4% AUC on SoundNet-Flickr compared to the previous best. Code is available at: https://github.com/zjsong/SSPL.
[ { "created": "Fri, 25 Mar 2022 01:42:42 GMT", "version": "v1" } ]
2022-03-28
[ [ "Song", "Zengjie", "" ], [ "Wang", "Yuxi", "" ], [ "Fan", "Junsong", "" ], [ "Tan", "Tieniu", "" ], [ "Zhang", "Zhaoxiang", "" ] ]
Sound source localization in visual scenes aims to localize objects emitting the sound in a given image. Recent works showing impressive localization performance typically rely on the contrastive learning framework. However, the random sampling of negatives, as commonly adopted in these methods, can result in misalignment between audio and visual features, thus inducing ambiguity in localization. In this paper, instead of following previous literature, we propose Self-Supervised Predictive Learning (SSPL), a negative-free method for sound localization via explicit positive mining. Specifically, we first devise a three-stream network to elegantly associate sound source with two augmented views of one corresponding video frame, leading to semantically coherent similarities between audio and visual features. Second, we introduce a novel predictive coding module for audio-visual feature alignment. Such a module assists SSPL to focus on target objects in a progressive manner and effectively lowers the positive-pair learning difficulty. Experiments show surprising results that SSPL outperforms the state-of-the-art approach on two standard sound localization benchmarks. In particular, SSPL achieves significant improvements of 8.6% cIoU and 3.4% AUC on SoundNet-Flickr compared to the previous best. Code is available at: https://github.com/zjsong/SSPL.
1709.06745
Huiju Wang Dr
Huiju Wang, Zhengkui Wang, Kian-Lee Tan, Chee-Yong Chan, Qi Fan, Xiao Yue
VCExplorer: An Interactive Graph Exploration Framework Based on Hub Vertices with Graph Consolidation
11 pages, 8 figures and 2 tables
null
null
null
cs.DB cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graphs have been widely used to model different information networks, such as the Web, biological networks and social networks (e.g. Twitter). Due to the size and complexity of these graphs, how to explore and utilize these graphs has become a very challenging problem. In this paper, we propose VCExplorer, a new interactive graph exploration framework that integrates the strengths of graph visualization and graph summarization. Unlike existing graph visualization tools where vertices of a graph may be clustered into a smaller collection of super/virtual vertices, VCExplorer displays a small number of actual source graph vertices (called hubs) and summaries of the information between these vertices. We refer to such a graph as a HA-graph (Hub-based Aggregation Graph). This allows users to appreciate the relationship between the hubs, rather than super/virtual vertices. Users can navigate through the HA-graph by "drilling down" into the summaries between hubs to display more hubs. We illustrate how the graph aggregation techniques can be integrated into the exploration framework to present consolidated information to users. In addition, we propose efficient graph aggregation algorithms over multiple subgraphs via computation sharing. Extensive experimental evaluations have been conducted using both real and synthetic datasets, and the results indicate the effectiveness and efficiency of VCExplorer for exploration.
[ { "created": "Wed, 20 Sep 2017 07:23:03 GMT", "version": "v1" } ]
2017-09-21
[ [ "Wang", "Huiju", "" ], [ "Wang", "Zhengkui", "" ], [ "Tan", "Kian-Lee", "" ], [ "Chan", "Chee-Yong", "" ], [ "Fan", "Qi", "" ], [ "Yue", "Xiao", "" ] ]
Graphs have been widely used to model different information networks, such as the Web, biological networks and social networks (e.g. Twitter). Due to the size and complexity of these graphs, how to explore and utilize these graphs has become a very challenging problem. In this paper, we propose VCExplorer, a new interactive graph exploration framework that integrates the strengths of graph visualization and graph summarization. Unlike existing graph visualization tools where vertices of a graph may be clustered into a smaller collection of super/virtual vertices, VCExplorer displays a small number of actual source graph vertices (called hubs) and summaries of the information between these vertices. We refer to such a graph as a HA-graph (Hub-based Aggregation Graph). This allows users to appreciate the relationship between the hubs, rather than super/virtual vertices. Users can navigate through the HA-graph by "drilling down" into the summaries between hubs to display more hubs. We illustrate how the graph aggregation techniques can be integrated into the exploration framework to present consolidated information to users. In addition, we propose efficient graph aggregation algorithms over multiple subgraphs via computation sharing. Extensive experimental evaluations have been conducted using both real and synthetic datasets, and the results indicate the effectiveness and efficiency of VCExplorer for exploration.
2407.06624
EPTCS
Gabriele Cecilia (Universit\`a degli Studi di Milano), Alberto Momigliano (Universit\`a degli Studi di Milano)
A Beluga Formalization of the Harmony Lemma in the $\pi$-Calculus
In Proceedings LFMTP 2024, arXiv:2407.05822
EPTCS 404, 2024, pp. 1-17
10.4204/EPTCS.404.1
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
The "Harmony Lemma", as formulated by Sangiorgi & Walker, establishes the equivalence between the labelled transition semantics and the reduction semantics in the $\pi$-calculus. Despite being a widely known and accepted result for the standard $\pi$-calculus, this assertion has never been rigorously proven, formally or informally. Hence, its validity may not be immediately apparent when considering extensions of the $\pi$-calculus. Contributing to the second challenge of the Concurrent Calculi Formalization Benchmark -- a set of challenges tackling the main issues related to the mechanization of concurrent systems -- we present a formalization of this result for the fragment of the $\pi$-calculus examined in the Benchmark. Our formalization is implemented in Beluga and draws inspiration from the HOAS formalization of the LTS semantics popularized by Honsell et al. In passing, we introduce a couple of useful encoding techniques for handling telescopes and lexicographic induction.
[ { "created": "Tue, 9 Jul 2024 07:51:33 GMT", "version": "v1" } ]
2024-07-10
[ [ "Cecilia", "Gabriele", "", "Università degli Studi di Milano" ], [ "Momigliano", "Alberto", "", "Università degli Studi di Milano" ] ]
The "Harmony Lemma", as formulated by Sangiorgi & Walker, establishes the equivalence between the labelled transition semantics and the reduction semantics in the $\pi$-calculus. Despite being a widely known and accepted result for the standard $\pi$-calculus, this assertion has never been rigorously proven, formally or informally. Hence, its validity may not be immediately apparent when considering extensions of the $\pi$-calculus. Contributing to the second challenge of the Concurrent Calculi Formalization Benchmark -- a set of challenges tackling the main issues related to the mechanization of concurrent systems -- we present a formalization of this result for the fragment of the $\pi$-calculus examined in the Benchmark. Our formalization is implemented in Beluga and draws inspiration from the HOAS formalization of the LTS semantics popularized by Honsell et al. In passing, we introduce a couple of useful encoding techniques for handling telescopes and lexicographic induction.
2110.11269
Alyssa Kody
Alyssa Kody, Samuel Chevalier, Spyros Chatzivasileiadis, Daniel Molzahn
Modeling the AC Power Flow Equations with Optimally Compact Neural Networks: Application to Unit Commitment
added acknowledgement, first two authors equally contributed, 8 pages, 3 figures, 1 table
null
null
null
cs.LG cs.SY eess.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nonlinear power flow constraints render a variety of power system optimization problems computationally intractable. Emerging research shows, however, that the nonlinear AC power flow equations can be successfully modeled using Neural Networks (NNs). These NNs can be exactly transformed into Mixed Integer Linear Programs (MILPs) and embedded inside challenging optimization problems, thus replacing nonlinearities that are intractable for many applications with tractable piecewise linear approximations. Such approaches, though, suffer from an explosion of the number of binary variables needed to represent the NN. Accordingly, this paper develops a technique for training an "optimally compact" NN, i.e., one that can represent the power flow equations with a sufficiently high degree of accuracy while still maintaining a tractable number of binary variables. We show that the resulting NN model is more expressive than both the DC and linearized power flow approximations when embedded inside of a challenging optimization problem (i.e., the AC unit commitment problem).
[ { "created": "Thu, 21 Oct 2021 16:51:43 GMT", "version": "v1" }, { "created": "Thu, 28 Oct 2021 18:18:59 GMT", "version": "v2" } ]
2021-11-01
[ [ "Kody", "Alyssa", "" ], [ "Chevalier", "Samuel", "" ], [ "Chatzivasileiadis", "Spyros", "" ], [ "Molzahn", "Daniel", "" ] ]
Nonlinear power flow constraints render a variety of power system optimization problems computationally intractable. Emerging research shows, however, that the nonlinear AC power flow equations can be successfully modeled using Neural Networks (NNs). These NNs can be exactly transformed into Mixed Integer Linear Programs (MILPs) and embedded inside challenging optimization problems, thus replacing nonlinearities that are intractable for many applications with tractable piecewise linear approximations. Such approaches, though, suffer from an explosion of the number of binary variables needed to represent the NN. Accordingly, this paper develops a technique for training an "optimally compact" NN, i.e., one that can represent the power flow equations with a sufficiently high degree of accuracy while still maintaining a tractable number of binary variables. We show that the resulting NN model is more expressive than both the DC and linearized power flow approximations when embedded inside of a challenging optimization problem (i.e., the AC unit commitment problem).
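For reference, a standard exact MILP encoding of a single ReLU unit $y = \max(0, \hat{y})$ with known bounds $L \le \hat{y} \le U$ introduces one binary variable $z$; this is the textbook big-M construction, not necessarily the paper's exact formulation: $y \ge \hat{y}$, $y \ge 0$, $y \le \hat{y} - L(1-z)$, $y \le Uz$, $z \in \{0,1\}$. Setting $z=1$ forces $y=\hat{y}$ and $z=0$ forces $y=0$, so the constraints reproduce the ReLU exactly; since every ReLU consumes one binary variable, the binary count grows with network size, which is precisely the explosion that the "optimally compact" training above targets.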
2403.06100
Yusuke Yasuda
Yusuke Yasuda and Tomoki Toda
Automatic design optimization of preference-based subjective evaluation with online learning in crowdsourcing environment
null
null
null
null
cs.HC cs.CL cs.LG eess.AS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A preference-based subjective evaluation is a key method for evaluating generative media reliably. However, the huge number of pair combinations prohibits it from being applied to large-scale evaluation using crowdsourcing. To address this issue, we propose an automatic optimization method for preference-based subjective evaluation in terms of pair combination selections and allocation of evaluation volumes with online learning in a crowdsourcing environment. We use a preference-based online learning method based on a sorting algorithm to identify the total order of evaluation targets with minimum sample volumes. Our online learning algorithm supports parallel and asynchronous execution under fixed-budget conditions required for crowdsourcing. Our experiment on preference-based subjective evaluation of synthetic speech shows that our method successfully optimizes the test by reducing pair combinations from 351 to 83 and allocating optimal evaluation volumes for each pair ranging from 30 to 663, without compromising evaluation accuracy or wasting budget allocations.
[ { "created": "Sun, 10 Mar 2024 05:55:00 GMT", "version": "v1" } ]
2024-03-12
[ [ "Yasuda", "Yusuke", "" ], [ "Toda", "Tomoki", "" ] ]
A preference-based subjective evaluation is a key method for evaluating generative media reliably. However, the huge number of pair combinations prohibits it from being applied to large-scale evaluation using crowdsourcing. To address this issue, we propose an automatic optimization method for preference-based subjective evaluation in terms of pair combination selections and allocation of evaluation volumes with online learning in a crowdsourcing environment. We use a preference-based online learning method based on a sorting algorithm to identify the total order of evaluation targets with minimum sample volumes. Our online learning algorithm supports parallel and asynchronous execution under fixed-budget conditions required for crowdsourcing. Our experiment on preference-based subjective evaluation of synthetic speech shows that our method successfully optimizes the test by reducing pair combinations from 351 to 83 and allocating optimal evaluation volumes for each pair ranging from 30 to 663, without compromising evaluation accuracy or wasting budget allocations.
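A toy version of the sorting-with-noisy-votes idea above: each pairwise comparison is resolved by majority over a fixed number of simulated crowd votes, and an insertion sort orders the targets. The vote counts, error model, and fixed (rather than adaptively allocated) number of votes are illustrative simplifications, not the paper's algorithm.

```python
import random

def noisy_prefer(a, b, true_score, votes=15, p_err=0.2):
    """Majority of `votes` noisy annotators on 'is a preferred to b?'."""
    truth = true_score[a] > true_score[b]
    yes = sum((random.random() > p_err) == truth for _ in range(votes))
    return yes > votes / 2

def rank(items, true_score):
    ordered = []
    for it in items:                           # insertion sort with a noisy comparator
        pos = 0
        while pos < len(ordered) and noisy_prefer(ordered[pos], it, true_score):
            pos += 1
        ordered.insert(pos, it)
    return ordered

random.seed(1)
scores = {"sysA": 3.1, "sysB": 4.2, "sysC": 2.5}   # hypothetical hidden quality
print(rank(list(scores), scores))                  # best-first, with high probability
```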
2112.02380
Salman Parsa
Erin Wolf Chambers, Salman Parsa, Hannah Schreiber
On Complexity of Computing Bottleneck and Lexicographic Optimal Cycles in a Homology Class
null
null
null
null
cs.CG cs.CC
http://creativecommons.org/licenses/by/4.0/
Homology features of spaces which appear in applications, for instance 3D meshes, are among the most important topological properties of these objects. Given a non-trivial cycle in a homology class, we consider the problem of computing a representative in that homology class which is optimal. We study two measures of optimality, namely, the lexicographic order of cycles (the lex-optimal cycle) and the bottleneck norm (a bottleneck-optimal cycle). We give a simple algorithm for computing the lex-optimal cycle for a 1-homology class in a closed orientable surface. In contrast to this, our main result is that, in the case of 3-manifolds of size $n^2$ in the Euclidean 3-space, the problem of finding a bottleneck optimal cycle cannot be solved more efficiently than solving a system of linear equations with an $n \times n$ sparse matrix. From this reduction, we deduce several hardness results. Most notably, we show that for 3-manifolds given as a subset of the 3-space of size $n^2$, persistent homology computations are at least as hard as rank computation (for sparse matrices), while ordinary homology computations can be done in $O(n^2 \log n)$ time. This is the first such distinction between these two computations. Moreover, it follows that the same disparity exists between the height persistent homology computation and general sub-level set persistent homology computation for simplicial complexes in the 3-space.
[ { "created": "Sat, 4 Dec 2021 16:42:48 GMT", "version": "v1" }, { "created": "Wed, 16 Mar 2022 21:20:21 GMT", "version": "v2" } ]
2022-03-18
[ [ "Chambers", "Erin Wolf", "" ], [ "Parsa", "Salman", "" ], [ "Schreiber", "Hannah", "" ] ]
Homology features of spaces which appear in applications, for instance 3D meshes, are among the most important topological properties of these objects. Given a non-trivial cycle in a homology class, we consider the problem of computing a representative in that homology class which is optimal. We study two measures of optimality, namely, the lexicographic order of cycles (the lex-optimal cycle) and the bottleneck norm (a bottleneck-optimal cycle). We give a simple algorithm for computing the lex-optimal cycle for a 1-homology class in a closed orientable surface. In contrast to this, our main result is that, in the case of 3-manifolds of size $n^2$ in the Euclidean 3-space, the problem of finding a bottleneck optimal cycle cannot be solved more efficiently than solving a system of linear equations with an $n \times n$ sparse matrix. From this reduction, we deduce several hardness results. Most notably, we show that for 3-manifolds given as a subset of the 3-space of size $n^2$, persistent homology computations are at least as hard as rank computation (for sparse matrices), while ordinary homology computations can be done in $O(n^2 \log n)$ time. This is the first such distinction between these two computations. Moreover, it follows that the same disparity exists between the height persistent homology computation and general sub-level set persistent homology computation for simplicial complexes in the 3-space.
1312.0932
Inaki Estella
I\~naki Estella Aguerri and Deniz G\"und\"uz
Joint Source-Channel Coding with Time-Varying Channel and Side-Information
Submitted to IEEE Transactions on Information Theory
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transmission of a Gaussian source over a time-varying Gaussian channel is studied in the presence of time-varying correlated side information at the receiver. A block fading model is considered for both the channel and the side information, whose states are assumed to be known only at the receiver. The optimality of separate source and channel coding in terms of average end-to-end distortion is shown when the channel is static while the side information state follows a discrete or a continuous and quasiconcave distribution. When both the channel and side information states are time-varying, separate source and channel coding is suboptimal in general. A partially informed encoder lower bound is studied by providing the channel state information to the encoder. Several achievable transmission schemes are proposed based on uncoded transmission, separate source and channel coding, joint decoding as well as hybrid digital-analog transmission. Uncoded transmission is shown to be optimal for a class of continuous and quasiconcave side information state distributions, while the channel gain may have an arbitrary distribution. To the best of our knowledge, this is the first example in which the uncoded transmission achieves the optimal performance thanks to the time-varying nature of the states, while it is suboptimal in the static version of the same problem. Then, the optimal \emph{distortion exponent}, that quantifies the exponential decay rate of the expected distortion in the high SNR regime, is characterized for Nakagami distributed channel and side information states, and it is shown to be achieved by hybrid digital-analog and joint decoding schemes in certain cases, illustrating the suboptimality of pure digital or analog transmission in general.
[ { "created": "Tue, 3 Dec 2013 20:53:25 GMT", "version": "v1" }, { "created": "Tue, 26 May 2015 12:22:25 GMT", "version": "v2" } ]
2015-05-27
[ [ "Aguerri", "Iñaki Estella", "" ], [ "Gündüz", "Deniz", "" ] ]
Transmission of a Gaussian source over a time-varying Gaussian channel is studied in the presence of time-varying correlated side information at the receiver. A block fading model is considered for both the channel and the side information, whose states are assumed to be known only at the receiver. The optimality of separate source and channel coding in terms of average end-to-end distortion is shown when the channel is static while the side information state follows a discrete or a continuous and quasiconcave distribution. When both the channel and side information states are time-varying, separate source and channel coding is suboptimal in general. A partially informed encoder lower bound is studied by providing the channel state information to the encoder. Several achievable transmission schemes are proposed based on uncoded transmission, separate source and channel coding, joint decoding as well as hybrid digital-analog transmission. Uncoded transmission is shown to be optimal for a class of continuous and quasiconcave side information state distributions, while the channel gain may have an arbitrary distribution. To the best of our knowledge, this is the first example in which the uncoded transmission achieves the optimal performance thanks to the time-varying nature of the states, while it is suboptimal in the static version of the same problem. Then, the optimal \emph{distortion exponent}, that quantifies the exponential decay rate of the expected distortion in the high SNR regime, is characterized for Nakagami distributed channel and side information states, and it is shown to be achieved by hybrid digital-analog and joint decoding schemes in certain cases, illustrating the suboptimality of pure digital or analog transmission in general.
1608.07872
Monowar Hasan
Monowar Hasan, Sibin Mohan, Rakesh B. Bobba and Rodolfo Pellizzoni
Exploring Opportunistic Execution for Integrating Security into Legacy Hard Real-Time Systems
Accepted for publication, IEEE RTSS 2016
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to physical isolation as well as use of proprietary hardware and protocols, traditional real-time systems (RTS) were considered to be invulnerable to security breaches and external attacks. However, this assumption is being challenged by recent attacks that highlight the vulnerabilities in such systems. In this paper, we focus on integrating security mechanisms into RTS (especially legacy RTS) and provide a metric to measure the effectiveness of such mechanisms. We combine opportunistic execution with hierarchical scheduling to maintain compatibility with legacy systems while still providing flexibility. The proposed approach is shown to increase the security posture of RTS systems without impacting their temporal constraints.
[ { "created": "Mon, 29 Aug 2016 00:27:53 GMT", "version": "v1" }, { "created": "Tue, 30 Aug 2016 00:28:11 GMT", "version": "v2" } ]
2016-08-31
[ [ "Hasan", "Monowar", "" ], [ "Mohan", "Sibin", "" ], [ "Bobba", "Rakesh B.", "" ], [ "Pellizzoni", "Rodolfo", "" ] ]
Due to physical isolation as well as use of proprietary hardware and protocols, traditional real-time systems (RTS) were considered to be invulnerable to security breaches and external attacks. However, this assumption is being challenged by recent attacks that highlight the vulnerabilities in such systems. In this paper, we focus on integrating security mechanisms into RTS (especially legacy RTS) and provide a metric to measure the effectiveness of such mechanisms. We combine opportunistic execution with hierarchical scheduling to maintain compatibility with legacy systems while still providing flexibility. The proposed approach is shown to increase the security posture of RTS systems without impacting their temporal constraints.
2010.13540
Zhesong Yu
Zhesong Yu, Xingjian Du, Bilei Zhu, Zejun Ma
Contrastive Unsupervised Learning for Audio Fingerprinting
5 pages
null
null
null
cs.SD cs.LG cs.MM eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rise of video-sharing platforms has attracted more and more people to shoot videos and upload them to the Internet. These videos mostly contain a carefully-edited background audio track, where serious speech change, pitch shifting and various types of audio effects may be involved, and existing audio identification systems may fail to recognize the audio. To solve this problem, in this paper, we introduce the idea of contrastive learning to the task of audio fingerprinting (AFP). Contrastive learning is an unsupervised approach to learn representations that can effectively group similar samples and discriminate dissimilar ones. In our work, we consider an audio track and its differently distorted versions as similar while considering different audio tracks as dissimilar. Based on the momentum contrast (MoCo) framework, we devise a contrastive learning method for AFP, which can generate fingerprints that are both discriminative and robust. A set of experiments showed that our AFP method is effective for audio identification, with robustness to serious audio distortions, including the challenging speed change and pitch shifting.
[ { "created": "Mon, 26 Oct 2020 12:49:39 GMT", "version": "v1" } ]
2020-10-27
[ [ "Yu", "Zhesong", "" ], [ "Du", "Xingjian", "" ], [ "Zhu", "Bilei", "" ], [ "Ma", "Zejun", "" ] ]
The rise of video-sharing platforms has attracted more and more people to shoot videos and upload them to the Internet. These videos mostly contain a carefully edited background audio track that may involve substantial speed changes, pitch shifting, and various other audio effects, so existing audio identification systems may fail to recognize the audio. To solve this problem, in this paper, we introduce the idea of contrastive learning to the task of audio fingerprinting (AFP). Contrastive learning is an unsupervised approach to learning representations that can effectively group similar samples and discriminate dissimilar ones. In our work, we consider an audio track and its differently distorted versions as similar, while considering different audio tracks as dissimilar. Based on the momentum contrast (MoCo) framework, we devise a contrastive learning method for AFP that can generate fingerprints that are both discriminative and robust. A set of experiments showed that our AFP method is effective for audio identification, with robustness to serious audio distortions, including the challenging speed change and pitch shifting.
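As an illustration of the MoCo-style objective the abstract describes, the following is a minimal sketch of an InfoNCE loss in which an audio segment's embedding is pulled towards that of its distorted version and pushed away from a queue of other tracks; the names and queue mechanics are assumptions for illustration, not code from the paper.

import torch
import torch.nn.functional as F

def info_nce_loss(q, k_pos, queue, temperature=0.07):
    # q: (N, D) query embeddings of audio segments
    # k_pos: (N, D) key embeddings of their distorted versions (positives)
    # queue: (K, D) momentum-encoder embeddings of other tracks (negatives)
    q = F.normalize(q, dim=1)
    k_pos = F.normalize(k_pos, dim=1)
    queue = F.normalize(queue, dim=1)
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)        # (N, 1) similarities
    l_neg = q @ queue.t()                               # (N, K) similarities
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)   # positive sits at index 0
    return F.cross_entropy(logits, labels)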
0912.3098
Loet Leydesdorff
Loet Leydesdorff and Alkim Almila Akdag Salah
Maps on the basis of the Arts & Humanities Citation Index: The journals Leonardo and Art Journal versus "Digital Humanities" as a topic
null
null
null
null
cs.DL physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The possibilities of using the Arts & Humanities Citation Index (A&HCI) for journal mapping have not been sufficiently recognized because of the absence of a Journal Citations Report (JCR) for this database. A quasi-JCR for the A&HCI (2008) was constructed from the data contained in the Web-of-Science and is used for the evaluation of two journals as examples: Leonardo and Art Journal. The maps on the basis of the aggregated journal-journal citations within this domain can be compared with maps including references to journals in the Science Citation Index and Social Science Citation Index. Art journals are cited by (social) science journals more than by other art journals, but these journals draw upon one another in terms of their own references. This cultural impact in terms of being cited is not found when documents with a topic such as "digital humanities" are analyzed. This community of practice functions more as an intellectual organizer than a journal.
[ { "created": "Wed, 16 Dec 2009 10:53:03 GMT", "version": "v1" } ]
2009-12-17
[ [ "Leydesdorff", "Loet", "" ], [ "Salah", "Alkim Almila Akdag", "" ] ]
The possibilities of using the Arts & Humanities Citation Index (A&HCI) for journal mapping have not been sufficiently recognized because of the absence of a Journal Citations Report (JCR) for this database. A quasi-JCR for the A&HCI (2008) was constructed from the data contained in the Web-of-Science and is used for the evaluation of two journals as examples: Leonardo and Art Journal. The maps on the basis of the aggregated journal-journal citations within this domain can be compared with maps including references to journals in the Science Citation Index and Social Science Citation Index. Art journals are cited by (social) science journals more than by other art journals, but these journals draw upon one another in terms of their own references. This cultural impact in terms of being cited is not found when documents with a topic such as "digital humanities" are analyzed. This community of practice functions more as an intellectual organizer than a journal.
0909.2058
Sihem Amer-Yahia
Sihem Amer-Yahia (Yahoo! Research), Laks Lakshmanan (UBC), Cong Yu (Yahoo! Research)
SocialScope: Enabling Information Discovery on Social Content Sites
CIDR 2009
null
null
null
cs.DB cs.HC cs.IR cs.PL
http://creativecommons.org/licenses/by/3.0/
Recently, many content sites have started encouraging their users to engage in social activities such as adding buddies on Yahoo! Travel and sharing articles with their friends on New York Times. This has led to the emergence of {\em social content sites}, which is being facilitated by initiatives like OpenID (http://www.openid.net/) and OpenSocial (http://www.opensocial.org/). These community standards enable open access to users' social profiles and connections by individual content sites and are bringing content-oriented sites and social networking sites ever closer. The integration of content and social information raises new challenges for {\em information management and discovery} over such sites. We propose a logical architecture, named \kw{SocialScope}, consisting of three layers, for tackling the challenges. The {\em content management} layer is responsible for integrating, maintaining and physically accessing the content and social data. The {\em information discovery} layer takes care of analyzing content to derive interesting new information, and interpreting and processing the user's information need to identify relevant information. Finally, the {\em information presentation} layer explores the discovered information and helps users better understand it in a principled way. We describe the challenges in each layer and propose solutions for some of those challenges. In particular, we propose a uniform algebraic framework, which can be leveraged to uniformly and flexibly specify many of the information discovery and analysis tasks and provide the foundation for the optimization of those tasks.
[ { "created": "Thu, 10 Sep 2009 22:08:17 GMT", "version": "v1" } ]
2016-09-08
[ [ "Amer-Yahia", "Sihem", "", "Yahoo! Research" ], [ "Lakshmanan", "Laks", "", "UBC" ], [ "Yu", "Cong", "", "Yahoo! Research" ] ]
Recently, many content sites have started encouraging their users to engage in social activities such as adding buddies on Yahoo! Travel and sharing articles with their friends on New York Times. This has led to the emergence of {\em social content sites}, which is being facilitated by initiatives like OpenID (http://www.openid.net/) and OpenSocial (http://www.opensocial.org/). These community standards enable open access to users' social profiles and connections by individual content sites and are bringing content-oriented sites and social networking sites ever closer. The integration of content and social information raises new challenges for {\em information management and discovery} over such sites. We propose a logical architecture, named \kw{SocialScope}, consisting of three layers, for tackling the challenges. The {\em content management} layer is responsible for integrating, maintaining and physically accessing the content and social data. The {\em information discovery} layer takes care of analyzing content to derive interesting new information, and interpreting and processing the user's information need to identify relevant information. Finally, the {\em information presentation} layer explores the discovered information and helps users better understand it in a principled way. We describe the challenges in each layer and propose solutions for some of those challenges. In particular, we propose a uniform algebraic framework, which can be leveraged to uniformly and flexibly specify many of the information discovery and analysis tasks and provide the foundation for the optimization of those tasks.
1608.00936
Saad Nadeem
Saad Nadeem and Arie Kaufman
Multimodal Brain Visualization
SPIE Medical Imaging 2016, Proc. SPIE Medical Imaging: Biomedical Applications in Molecular, Structural, and Functional Imaging, 2016
SPIE Medical Imaging, pp. 97881Y-97881Y. 2016
10.1117/12.2217003
null
cs.GR q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current connectivity diagrams of human brain image data are either overly complex or overly simplistic. In this work we introduce simple yet accurate interactive visual representations of multiple brain image structures and the connectivity among them. We map cortical surfaces extracted from human brain magnetic resonance imaging (MRI) data onto 2D surfaces that preserve shape (angle), extent (area), and spatial (neighborhood) information for 2D (circular disk) and 3D (spherical) mapping, split these surfaces into separate patches, and cluster functional and diffusion tractography MRI connections between pairs of these patches. The resulting visualizations are easier to compute on and more visually intuitive to interact with than the original data, and facilitate simultaneous exploration of multiple data sets, modalities, and statistical maps.
[ { "created": "Tue, 2 Aug 2016 19:02:40 GMT", "version": "v1" }, { "created": "Sat, 6 Aug 2016 17:01:31 GMT", "version": "v2" }, { "created": "Tue, 9 Aug 2016 19:55:31 GMT", "version": "v3" }, { "created": "Thu, 1 Sep 2016 14:51:28 GMT", "version": "v4" } ]
2016-09-02
[ [ "Nadeem", "Saad", "" ], [ "Kaufman", "Arie", "" ] ]
Current connectivity diagrams of human brain image data are either overly complex or overly simplistic. In this work we introduce simple yet accurate interactive visual representations of multiple brain image structures and the connectivity among them. We map cortical surfaces extracted from human brain magnetic resonance imaging (MRI) data onto 2D surfaces that preserve shape (angle), extent (area), and spatial (neighborhood) information for 2D (circular disk) and 3D (spherical) mapping, split these surfaces into separate patches, and cluster functional and diffusion tractography MRI connections between pairs of these patches. The resulting visualizations are easier to compute on and more visually intuitive to interact with than the original data, and facilitate simultaneous exploration of multiple data sets, modalities, and statistical maps.
1911.12525
Min Ye
Min Ye
New constructions of cooperative MSR codes: Reducing node size to $\exp(O(n))$
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of multiple-node repair in distributed storage systems under the cooperative model, where the repair bandwidth includes the amount of data exchanged between any two different storage nodes. Recently, explicit constructions of MDS codes with optimal cooperative repair bandwidth for all possible parameters were given by Ye and Barg (IEEE Transactions on Information Theory, 2019). The node size (or sub-packetization) in this construction scales as $\exp(\Theta(n^h))$, where $h$ is the number of failed nodes and $n$ is the code length. In this paper, we give new explicit constructions of optimal MDS codes for all possible parameters under the cooperative model, and the node size of our new constructions only scales as $\exp(O(n))$ for any number of failed nodes. Furthermore, it is known that any optimal MDS code under the cooperative model (including, in particular, our new code construction) also achieves optimal repair bandwidth under the centralized model, where the amount of data exchanged between failed nodes is not included in the repair bandwidth. We further show that the node size of our new construction is also much smaller than that of the best known MDS code constructions for the centralized model.
[ { "created": "Thu, 28 Nov 2019 04:41:10 GMT", "version": "v1" } ]
2019-12-02
[ [ "Ye", "Min", "" ] ]
We consider the problem of multiple-node repair in distributed storage systems under the cooperative model, where the repair bandwidth includes the amount of data exchanged between any two different storage nodes. Recently, explicit constructions of MDS codes with optimal cooperative repair bandwidth for all possible parameters were given by Ye and Barg (IEEE Transactions on Information Theory, 2019). The node size (or sub-packetization) in this construction scales as $\exp(\Theta(n^h))$, where $h$ is the number of failed nodes and $n$ is the code length. In this paper, we give new explicit constructions of optimal MDS codes for all possible parameters under the cooperative model, and the node size of our new constructions only scales as $\exp(O(n))$ for any number of failed nodes. Furthermore, it is known that any optimal MDS code under the cooperative model (including, in particular, our new code construction) also achieves optimal repair bandwidth under the centralized model, where the amount of data exchanged between failed nodes is not included in the repair bandwidth. We further show that the node size of our new construction is also much smaller than that of the best known MDS code constructions for the centralized model.
2101.09333
Shenjie Huang
Shenjie Huang and Majid Safari
SPAD-Based Optical Wireless Communication with Signal Pre-Distortion and Noise Normalization
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
In recent years, there has been a growing interest in exploring the application of the single-photon avalanche diode (SPAD) in optical wireless communication (OWC). As a photon-counting detector, the SPAD can provide much higher sensitivity than other commonly used photodetectors. However, SPAD-based receivers suffer from significant dead-time-induced non-linear distortion and signal-dependent noise. In this work, we propose a novel SPAD-based OWC system in which the non-linear distortion caused by dead time can be successfully eliminated by pre-distortion of the signal at the transmitter. In addition, another system with joint pre-distortion and noise normalization functionality is proposed. Thanks to the additional noise normalization process, the originally signal-dependent noise of the transformed signal at the receiver becomes signal-independent, so that conventional signal detection techniques designed for AWGN channels can be employed to decode the signal. Our numerical results demonstrate the superiority of the proposed SPAD-based systems compared to existing systems in terms of BER performance and achievable data rate.
[ { "created": "Fri, 22 Jan 2021 21:11:27 GMT", "version": "v1" }, { "created": "Thu, 10 Feb 2022 22:36:20 GMT", "version": "v2" } ]
2022-02-14
[ [ "Huang", "Shenjie", "" ], [ "Safari", "Majid", "" ] ]
In recent years, there has been a growing interest in exploring the application of the single-photon avalanche diode (SPAD) in optical wireless communication (OWC). As a photon-counting detector, the SPAD can provide much higher sensitivity than other commonly used photodetectors. However, SPAD-based receivers suffer from significant dead-time-induced non-linear distortion and signal-dependent noise. In this work, we propose a novel SPAD-based OWC system in which the non-linear distortion caused by dead time can be successfully eliminated by pre-distortion of the signal at the transmitter. In addition, another system with joint pre-distortion and noise normalization functionality is proposed. Thanks to the additional noise normalization process, the originally signal-dependent noise of the transformed signal at the receiver becomes signal-independent, so that conventional signal detection techniques designed for AWGN channels can be employed to decode the signal. Our numerical results demonstrate the superiority of the proposed SPAD-based systems compared to existing systems in terms of BER performance and achievable data rate.
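The paper's exact pre-distortion scheme is not reproduced in this record; as a hedged illustration, under the common non-paralyzable dead-time model the detected count rate saturates as m = r / (1 + r * tau), and pre-distortion amounts to inverting this response at the transmitter. The function name and the model itself are assumptions for illustration only.

def predistort_rate(target_rate, dead_time):
    # Invert the non-paralyzable dead-time response m = r / (1 + r * tau):
    # return the incident photon rate r that yields the desired detected
    # rate; only valid below the SPAD saturation rate 1 / dead_time.
    assert target_rate * dead_time < 1.0, "target rate exceeds SPAD saturation"
    return target_rate / (1.0 - target_rate * dead_time)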
1602.04878
Clayton Davis
Clayton A Davis, Julia Heiman, Erick Janssen, Stephanie Sanders, Justin Garcia, Filippo Menczer
Kinsey Reporter: Citizen Science for Sex Research
Let's Talk About Sex (Apps) Workshop at CSCW 2015
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Kinsey Reporter is a global mobile app to share, explore, and visualize anonymous data about sex. Reports are submitted via smartphone, then visualized on a website or downloaded for offline analysis. In this paper we present the major features of the Kinsey Reporter citizen science platform designed to preserve the anonymity of its contributors, and preliminary data analyses that suggest questions for future research.
[ { "created": "Tue, 16 Feb 2016 01:07:32 GMT", "version": "v1" } ]
2016-02-17
[ [ "Davis", "Clayton A", "" ], [ "Heiman", "Julia", "" ], [ "Janssen", "Erick", "" ], [ "Sanders", "Stephanie", "" ], [ "Garcia", "Justin", "" ], [ "Menczer", "Filippo", "" ] ]
Kinsey Reporter is a global mobile app to share, explore, and visualize anonymous data about sex. Reports are submitted via smartphone, then visualized on a website or downloaded for offline analysis. In this paper we present the major features of the Kinsey Reporter citizen science platform designed to preserve the anonymity of its contributors, and preliminary data analyses that suggest questions for future research.
2304.10391
Avital Boruchovsky
Avital Boruchovsky, Daniella Bar-Lev and Eitan Yaakobi
DNA-Correcting Codes: End-to-end Correction in DNA Storage Systems
Extended version of the paper that appeared in ISIT 2023
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper introduces a new solution to DNA storage that integrates all three steps of retrieval, namely clustering, reconstruction, and error correction. DNA-correcting codes are presented as a unique solution to the problem of ensuring that the output of the storage system is unique for any valid set of input strands. To this end, we introduce a novel distance metric to capture the unique behavior of the DNA storage system and provide necessary and sufficient conditions for DNA-correcting codes. The paper also includes several bounds and constructions of DNA-correcting codes.
[ { "created": "Thu, 20 Apr 2023 15:27:14 GMT", "version": "v1" }, { "created": "Sun, 30 Jun 2024 10:56:10 GMT", "version": "v2" } ]
2024-07-02
[ [ "Boruchovsky", "Avital", "" ], [ "Bar-Lev", "Daniella", "" ], [ "Yaakobi", "Eitan", "" ] ]
This paper introduces a new solution to DNA storage that integrates all three steps of retrieval, namely clustering, reconstruction, and error correction. DNA-correcting codes are presented as a unique solution to the problem of ensuring that the output of the storage system is unique for any valid set of input strands. To this end, we introduce a novel distance metric to capture the unique behavior of the DNA storage system and provide necessary and sufficient conditions for DNA-correcting codes. The paper also includes several bounds and constructions of DNA-correcting codes.
2306.01346
Beatriz Soret
Beatriz Soret, Israel Leyva-Mayorga, Federico Lozano-Cuadra, and Mathias D. Thorsager
Q-learning for distributed routing in LEO satellite constellations
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
End-to-end routing in Low Earth Orbit (LEO) satellite constellations (LSatCs) is a complex and dynamic problem. The topology, of finite size, is dynamic and predictable, the traffic from/to Earth and transiting the space segment is highly imbalanced, and the delay is dominated by the propagation time in non-congested routes and by the queueing time at Inter-Satellite Links (ISLs) in congested routes. Traditional routing algorithms depend on excessive communication with the ground or other satellites, and oversimplify the characterization of the path links towards the destination. We model the problem as a multi-agent Partially Observable Markov Decision Process (POMDP) in which the nodes (i.e., the satellites) interact only with nearby nodes. We propose a distributed Q-learning solution that leverages the knowledge of the neighbours and the correlation of the routing decisions of each node. We compare our results to two centralized algorithms based on the shortest path: one aiming at using the highest data rate links and a second genie algorithm that knows the instantaneous queueing delays at all satellites. The results of our proposal are positive on every front: (1) it experiences delays that are comparable to the benchmarks in steady-state conditions; (2) it increases the supported traffic load without congestion; and (3) it can be easily implemented in an LSatC as it does not depend on the ground segment and minimizes the signaling overhead among satellites.
[ { "created": "Fri, 2 Jun 2023 08:18:43 GMT", "version": "v1" } ]
2023-06-05
[ [ "Soret", "Beatriz", "" ], [ "Leyva-Mayorga", "Israel", "" ], [ "Lozano-Cuadra", "Federico", "" ], [ "Thorsager", "Mathias D.", "" ] ]
End-to-end routing in Low Earth Orbit (LEO) satellite constellations (LSatCs) is a complex and dynamic problem. The topology, of finite size, is dynamic and predictable, the traffic from/to Earth and transiting the space segment is highly imbalanced, and the delay is dominated by the propagation time in non-congested routes and by the queueing time at Inter-Satellite Links (ISLs) in congested routes. Traditional routing algorithms depend on excessive communication with the ground or other satellites, and oversimplify the characterization of the path links towards the destination. We model the problem as a multi-agent Partially Observable Markov Decision Process (POMDP) in which the nodes (i.e., the satellites) interact only with nearby nodes. We propose a distributed Q-learning solution that leverages the knowledge of the neighbours and the correlation of the routing decisions of each node. We compare our results to two centralized algorithms based on the shortest path: one aiming at using the highest data rate links and a second genie algorithm that knows the instantaneous queueing delays at all satellites. The results of our proposal are positive on every front: (1) it experiences delays that are comparable to the benchmarks in steady-state conditions; (2) it increases the supported traffic load without congestion; and (3) it can be easily implemented in an LSatC as it does not depend on the ground segment and minimizes the signaling overhead among satellites.
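The abstract's distributed Q-learning routing can be sketched as one epsilon-greedy agent per satellite that maintains a per-destination Q-table and updates it from neighbour feedback; the sketch below is a generic illustration under assumed names and reward shaping (negative per-hop delay), not the paper's implementation.

import random
from collections import defaultdict

class SatelliteAgent:
    """Per-satellite Q-learning routing agent (illustrative sketch)."""

    def __init__(self, neighbors, alpha=0.1, gamma=0.9, eps=0.1):
        self.neighbors = neighbors           # reachable next-hop satellites
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.q = defaultdict(float)          # Q[(destination, next_hop)]

    def choose_next_hop(self, dest):
        if random.random() < self.eps:       # epsilon-greedy exploration
            return random.choice(self.neighbors)
        return max(self.neighbors, key=lambda n: self.q[(dest, n)])

    def update(self, dest, next_hop, hop_delay, neighbor_best_q):
        # One-step Q-learning update: the reward is the negative delay
        # (propagation plus queueing on the chosen ISL), and
        # neighbor_best_q = max_a Q_next_hop(dest, a) is reported by the
        # neighbour itself, keeping the scheme fully distributed.
        target = -hop_delay + self.gamma * neighbor_best_q
        key = (dest, next_hop)
        self.q[key] += self.alpha * (target - self.q[key])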
1906.01010
Glorianna Jagfeld
Glorianna Jagfeld
A computational linguistic study of personal recovery in bipolar disorder
ACL Student Research Workshop 2019, research proposal
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mental health research can benefit increasingly fruitfully from computational linguistics methods, given the abundant availability of language data on the internet and advances in computational tools. This interdisciplinary project will collect and analyse social media data of individuals diagnosed with bipolar disorder with regard to their recovery experiences. Personal recovery - living a satisfying and contributing life alongside the symptoms of severe mental health issues - has so far only been investigated qualitatively with structured interviews and quantitatively with standardised questionnaires, with mainly English-speaking participants in Western countries. Complementary to this evidence, computational linguistic methods allow us to analyse first-person accounts shared online in large quantities, representing unstructured settings and a more heterogeneous, multilingual population, to draw a more complete picture of the aspects and mechanisms of personal recovery in bipolar disorder.
[ { "created": "Mon, 3 Jun 2019 18:17:09 GMT", "version": "v1" } ]
2019-06-05
[ [ "Jagfeld", "Glorianna", "" ] ]
Mental health research can benefit increasingly fruitfully from computational linguistics methods, given the abundant availability of language data on the internet and advances in computational tools. This interdisciplinary project will collect and analyse social media data of individuals diagnosed with bipolar disorder with regard to their recovery experiences. Personal recovery - living a satisfying and contributing life alongside the symptoms of severe mental health issues - has so far only been investigated qualitatively with structured interviews and quantitatively with standardised questionnaires, with mainly English-speaking participants in Western countries. Complementary to this evidence, computational linguistic methods allow us to analyse first-person accounts shared online in large quantities, representing unstructured settings and a more heterogeneous, multilingual population, to draw a more complete picture of the aspects and mechanisms of personal recovery in bipolar disorder.
2008.02840
Siddharth Reddy
Siddharth Reddy, Sergey Levine, Anca D. Dragan
Assisted Perception: Optimizing Observations to Communicate State
null
null
null
null
cs.LG cs.HC cs.RO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairments, where users may have systematic biases that lead to suboptimal behavior: they might struggle to process observations from multiple sensors simultaneously, receive delayed observations, or overestimate distances to obstacles. While we cannot directly change the user's internal beliefs or their internal state estimation process, our insight is that we can still assist them by modifying the user's observations. Instead of showing the user their true observations, we synthesize new observations that lead to more accurate internal state estimates when processed by the user. We refer to this method as assistive state estimation (ASE): an automated assistant uses the true observations to infer the state of the world, then generates a modified observation for the user to consume (e.g., through an augmented reality interface), and optimizes the modification to induce the user's new beliefs to match the assistant's current beliefs. We evaluate ASE in a user study with 12 participants who each perform four tasks: two tasks with known user biases -- bandwidth-limited image classification and a driving video game with observation delay -- and two with unknown biases that our method has to learn -- guided 2D navigation and a lunar lander teleoperation video game. A different assistance strategy emerges in each domain, such as quickly revealing informative pixels to speed up image classification, using a dynamics model to undo observation delay in driving, identifying nearby landmarks for navigation, and exaggerating a visual indicator of tilt in the lander game. The results show that ASE substantially improves the task performance of users with bandwidth constraints, observation delay, and other unknown biases.
[ { "created": "Thu, 6 Aug 2020 19:08:05 GMT", "version": "v1" } ]
2020-08-10
[ [ "Reddy", "Siddharth", "" ], [ "Levine", "Sergey", "" ], [ "Dragan", "Anca D.", "" ] ]
We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairments, where users may have systematic biases that lead to suboptimal behavior: they might struggle to process observations from multiple sensors simultaneously, receive delayed observations, or overestimate distances to obstacles. While we cannot directly change the user's internal beliefs or their internal state estimation process, our insight is that we can still assist them by modifying the user's observations. Instead of showing the user their true observations, we synthesize new observations that lead to more accurate internal state estimates when processed by the user. We refer to this method as assistive state estimation (ASE): an automated assistant uses the true observations to infer the state of the world, then generates a modified observation for the user to consume (e.g., through an augmented reality interface), and optimizes the modification to induce the user's new beliefs to match the assistant's current beliefs. We evaluate ASE in a user study with 12 participants who each perform four tasks: two tasks with known user biases -- bandwidth-limited image classification and a driving video game with observation delay -- and two with unknown biases that our method has to learn -- guided 2D navigation and a lunar lander teleoperation video game. A different assistance strategy emerges in each domain, such as quickly revealing informative pixels to speed up image classification, using a dynamics model to undo observation delay in driving, identifying nearby landmarks for navigation, and exaggerating a visual indicator of tilt in the lander game. The results show that ASE substantially improves the task performance of users with bandwidth constraints, observation delay, and other unknown biases.
2310.19360
Yifei Wang
Yifei Wang, Liangchen Li, Jiansheng Yang, Zhouchen Lin, Yisen Wang
Balance, Imbalance, and Rebalance: Understanding Robust Overfitting from a Minimax Game Perspective
Accepted by NeurIPS 2023
null
null
null
cs.LG cs.AI cs.CV stat.ML
http://creativecommons.org/licenses/by/4.0/
Adversarial Training (AT) has become arguably the state-of-the-art algorithm for extracting robust features. However, researchers have recently noticed that AT suffers from severe robust overfitting problems, particularly after learning rate (LR) decay. In this paper, we explain this phenomenon by viewing adversarial training as a dynamic minimax game between the model trainer and the attacker. Specifically, we analyze how LR decay breaks the balance of this minimax game by empowering the trainer with a stronger memorization ability, and show that such imbalance induces robust overfitting as a result of memorizing non-robust features. We validate this understanding with extensive experiments, and provide a holistic view of robust overfitting from the dynamics of both game players. This understanding further inspires us to alleviate robust overfitting by rebalancing the two players, either by regularizing the trainer's capacity or by improving the attack strength. Experiments show that the proposed ReBalanced Adversarial Training (ReBAT) can attain good robustness and does not suffer from robust overfitting even after very long training. Code is available at https://github.com/PKU-ML/ReBAT.
[ { "created": "Mon, 30 Oct 2023 09:00:11 GMT", "version": "v1" } ]
2023-10-31
[ [ "Wang", "Yifei", "" ], [ "Li", "Liangchen", "" ], [ "Yang", "Jiansheng", "" ], [ "Lin", "Zhouchen", "" ], [ "Wang", "Yisen", "" ] ]
Adversarial Training (AT) has become arguably the state-of-the-art algorithm for extracting robust features. However, researchers have recently noticed that AT suffers from severe robust overfitting problems, particularly after learning rate (LR) decay. In this paper, we explain this phenomenon by viewing adversarial training as a dynamic minimax game between the model trainer and the attacker. Specifically, we analyze how LR decay breaks the balance of this minimax game by empowering the trainer with a stronger memorization ability, and show that such imbalance induces robust overfitting as a result of memorizing non-robust features. We validate this understanding with extensive experiments, and provide a holistic view of robust overfitting from the dynamics of both game players. This understanding further inspires us to alleviate robust overfitting by rebalancing the two players, either by regularizing the trainer's capacity or by improving the attack strength. Experiments show that the proposed ReBalanced Adversarial Training (ReBAT) can attain good robustness and does not suffer from robust overfitting even after very long training. Code is available at https://github.com/PKU-ML/ReBAT.
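For context on the attacker side of the minimax game described above, a standard PGD inner maximization (generic adversarial training, not ReBAT's rebalancing itself) can be sketched as follows; the hyperparameters are typical defaults, assumed rather than taken from the paper.

import torch

def pgd_attack(model, x, y, loss_fn, eps=8/255, alpha=2/255, steps=10):
    # Projected gradient descent within an L-infinity ball of radius eps,
    # assuming inputs normalized to [0, 1].
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to the ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()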
1810.04650
Ozan Sener
Ozan Sener, Vladlen Koltun
Multi-Task Learning as Multi-Objective Optimization
In Neural Information Processing Systems (NeurIPS) 2018
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In multi-task learning, multiple tasks are solved jointly, sharing inductive bias between them. Multi-task learning is inherently a multi-objective problem because different tasks may conflict, necessitating a trade-off. A common compromise is to optimize a proxy objective that minimizes a weighted linear combination of per-task losses. However, this workaround is only valid when the tasks do not compete, which is rarely the case. In this paper, we explicitly cast multi-task learning as multi-objective optimization, with the overall objective of finding a Pareto optimal solution. To this end, we use algorithms developed in the gradient-based multi-objective optimization literature. These algorithms are not directly applicable to large-scale learning problems since they scale poorly with the dimensionality of the gradients and the number of tasks. We therefore propose an upper bound for the multi-objective loss and show that it can be optimized efficiently. We further prove that optimizing this upper bound yields a Pareto optimal solution under realistic assumptions. We apply our method to a variety of multi-task deep learning problems including digit classification, scene understanding (joint semantic segmentation, instance segmentation, and depth estimation), and multi-label classification. Our method produces higher-performing models than recent multi-task learning formulations or per-task training.
[ { "created": "Wed, 10 Oct 2018 17:18:09 GMT", "version": "v1" }, { "created": "Fri, 11 Jan 2019 12:57:32 GMT", "version": "v2" } ]
2019-01-14
[ [ "Sener", "Ozan", "" ], [ "Koltun", "Vladlen", "" ] ]
In multi-task learning, multiple tasks are solved jointly, sharing inductive bias between them. Multi-task learning is inherently a multi-objective problem because different tasks may conflict, necessitating a trade-off. A common compromise is to optimize a proxy objective that minimizes a weighted linear combination of per-task losses. However, this workaround is only valid when the tasks do not compete, which is rarely the case. In this paper, we explicitly cast multi-task learning as multi-objective optimization, with the overall objective of finding a Pareto optimal solution. To this end, we use algorithms developed in the gradient-based multi-objective optimization literature. These algorithms are not directly applicable to large-scale learning problems since they scale poorly with the dimensionality of the gradients and the number of tasks. We therefore propose an upper bound for the multi-objective loss and show that it can be optimized efficiently. We further prove that optimizing this upper bound yields a Pareto optimal solution under realistic assumptions. We apply our method to a variety of multi-task deep learning problems including digit classification, scene understanding (joint semantic segmentation, instance segmentation, and depth estimation), and multi-label classification. Our method produces higher-performing models than recent multi-task learning formulations or per-task training.
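The gradient-based multi-objective update the abstract builds on reduces, in the two-task case, to a closed-form min-norm combination of the task gradients; the sketch below shows only that two-task building block (a standard MGDA-style step, with names assumed for illustration).

import numpy as np

def min_norm_two_tasks(g1, g2):
    # Closed-form minimizer of min_{a in [0,1]} ||a*g1 + (1-a)*g2||^2,
    # the two-task min-norm point used in MGDA-style updates.
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:                  # identical gradients: any a is optimal
        return 0.5
    a = ((g2 - g1) @ g2) / denom
    return float(np.clip(a, 0.0, 1.0))

# The shared parameters are then updated along d = a*g1 + (1-a)*g2,
# a common descent direction for both tasks whenever one exists.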
2005.05276
Raoul Heese
Raoul Heese, Lukas Morand, Dirk Helm, Michael Bortz
CupNet -- Pruning a network for geometric data
4 pages, 2 figures, 1 table
Artificial Neural Networks and Machine Learning - ICANN 2021, pp 29-33
10.1007/978-3-030-86380-7_3
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using data from a simulated cup drawing process, we demonstrate how the inherent geometrical structure of cup meshes can be used to effectively prune an artificial neural network in a straightforward way.
[ { "created": "Mon, 11 May 2020 17:21:23 GMT", "version": "v1" }, { "created": "Mon, 13 Sep 2021 13:37:31 GMT", "version": "v2" } ]
2021-09-14
[ [ "Heese", "Raoul", "" ], [ "Morand", "Lukas", "" ], [ "Helm", "Dirk", "" ], [ "Bortz", "Michael", "" ] ]
Using data from a simulated cup drawing process, we demonstrate how the inherent geometrical structure of cup meshes can be used to effectively prune an artificial neural network in a straightforward way.
1904.11580
Matthias Kahl
Matthias Kahl, Thomas Kriechbaumer, Daniel Jorde, Anwar Ul Haq and Hans-Arno Jacobsen
Appliance Event Detection -- A Multivariate, Supervised Classification Approach
null
null
null
null
cs.OH cs.SY eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-intrusive load monitoring (NILM) is a modern and still expanding technique that helps to understand fundamental energy consumption patterns and appliance characteristics. Appliance event detection is an elementary step in the NILM pipeline. Unfortunately, several types of appliances (e.g., switching mode power supply (SMPS) or multi-state) are known to challenge state-of-the-art event detection systems due to their noisy consumption profiles. Classical rule-based event detection systems become infeasible and complex for these appliances. By stepping away from distinct event definitions, we can learn from a consumer-configured event model to differentiate between relevant and irrelevant event transients. We introduce a boosting-oriented adaptive training that uses false positives from the initial training area to reduce the number of false positives on the test area substantially. The results show a false-positive decrease by more than a factor of eight on a dataset that has a strong focus on SMPS-driven appliances. To obtain a stable event detection system, we applied several experiments on different parameters to measure its performance. These experiments include the evaluation of six event features from the spectral and time domain, different types of feature space normalization to eliminate undesired feature weighting, the conventional and adaptive training, and two common classifiers with their optimal parameter settings. The evaluations are performed on two publicly available energy datasets with high sampling rates: BLUED and BLOND-50.
[ { "created": "Wed, 24 Apr 2019 15:17:55 GMT", "version": "v1" } ]
2019-04-29
[ [ "Kahl", "Matthias", "" ], [ "Kriechbaumer", "Thomas", "" ], [ "Jorde", "Daniel", "" ], [ "Haq", "Anwar Ul", "" ], [ "Jacobsen", "Hans-Arno", "" ] ]
Non-intrusive load monitoring (NILM) is a modern and still expanding technique that helps to understand fundamental energy consumption patterns and appliance characteristics. Appliance event detection is an elementary step in the NILM pipeline. Unfortunately, several types of appliances (e.g., switching mode power supply (SMPS) or multi-state) are known to challenge state-of-the-art event detection systems due to their noisy consumption profiles. Classical rule-based event detection systems become infeasible and complex for these appliances. By stepping away from distinct event definitions, we can learn from a consumer-configured event model to differentiate between relevant and irrelevant event transients. We introduce a boosting-oriented adaptive training that uses false positives from the initial training area to reduce the number of false positives on the test area substantially. The results show a false-positive decrease by more than a factor of eight on a dataset that has a strong focus on SMPS-driven appliances. To obtain a stable event detection system, we applied several experiments on different parameters to measure its performance. These experiments include the evaluation of six event features from the spectral and time domain, different types of feature space normalization to eliminate undesired feature weighting, the conventional and adaptive training, and two common classifiers with their optimal parameter settings. The evaluations are performed on two publicly available energy datasets with high sampling rates: BLUED and BLOND-50.
2001.05609
Silei Xu
Silei Xu, Giovanni Campagna, Jian Li and Monica S. Lam
Schema2QA: High-Quality and Low-Cost Q&A Agents for the Structured Web
null
null
10.1145/3340531.3411974
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Building a question-answering agent currently requires large annotated datasets, which are prohibitively expensive. This paper proposes Schema2QA, an open-source toolkit that can generate a Q&A system from a database schema augmented with a few annotations for each field. The key concept is to cover the space of possible compound queries on the database with a large number of in-domain questions synthesized with the help of a corpus of generic query templates. The synthesized data and a small paraphrase set are used to train a novel neural network based on the BERT pretrained model. We use Schema2QA to generate Q&A systems for five Schema.org domains, restaurants, people, movies, books and music, and obtain an overall accuracy between 64% and 75% on crowdsourced questions for these domains. Once annotations and paraphrases are obtained for a Schema.org schema, no additional manual effort is needed to create a Q&A agent for any website that uses the same schema. Furthermore, we demonstrate that learning can be transferred from the restaurant to the hotel domain, obtaining a 64% accuracy on crowdsourced questions with no manual effort. Schema2QA achieves an accuracy of 60% on popular restaurant questions that can be answered using Schema.org. Its performance is comparable to Google Assistant, 7% lower than Siri, and 15% higher than Alexa. It outperforms all these assistants by at least 18% on more complex, long-tail questions.
[ { "created": "Thu, 16 Jan 2020 01:49:16 GMT", "version": "v1" }, { "created": "Mon, 27 Jan 2020 20:32:49 GMT", "version": "v2" }, { "created": "Sun, 17 May 2020 19:44:06 GMT", "version": "v3" }, { "created": "Tue, 19 May 2020 17:13:27 GMT", "version": "v4" }, { "created": "Mon, 24 Aug 2020 21:35:26 GMT", "version": "v5" }, { "created": "Tue, 8 Jun 2021 01:30:11 GMT", "version": "v6" } ]
2023-05-03
[ [ "Xu", "Silei", "" ], [ "Campagna", "Giovanni", "" ], [ "Li", "Jian", "" ], [ "Lam", "Monica S.", "" ] ]
Building a question-answering agent currently requires large annotated datasets, which are prohibitively expensive. This paper proposes Schema2QA, an open-source toolkit that can generate a Q&A system from a database schema augmented with a few annotations for each field. The key concept is to cover the space of possible compound queries on the database with a large number of in-domain questions synthesized with the help of a corpus of generic query templates. The synthesized data and a small paraphrase set are used to train a novel neural network based on the BERT pretrained model. We use Schema2QA to generate Q&A systems for five Schema.org domains, restaurants, people, movies, books and music, and obtain an overall accuracy between 64% and 75% on crowdsourced questions for these domains. Once annotations and paraphrases are obtained for a Schema.org schema, no additional manual effort is needed to create a Q&A agent for any website that uses the same schema. Furthermore, we demonstrate that learning can be transferred from the restaurant to the hotel domain, obtaining a 64% accuracy on crowdsourced questions with no manual effort. Schema2QA achieves an accuracy of 60% on popular restaurant questions that can be answered using Schema.org. Its performance is comparable to Google Assistant, 7% lower than Siri, and 15% higher than Alexa. It outperforms all these assistants by at least 18% on more complex, long-tail questions.
2110.05183
Chuanting Zhang
Chuanting Zhang, Shuping Dang, Basem Shihada, Mohamed-Slim Alouini
Dual Attention-Based Federated Learning for Wireless Traffic Prediction
IEEE INFOCOM 2021 - IEEE Conference on Computer Communications
null
10.1109/INFOCOM42981.2021.9488883
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless traffic prediction is essential for cellular networks to realize intelligent network operations, such as load-aware resource management and predictive control. Existing prediction approaches usually adopt centralized training architectures and require the transferring of huge amounts of traffic data, which may raise delay and privacy concerns for certain scenarios. In this work, we propose a novel wireless traffic prediction framework named \textit{Dual Attention-Based Federated Learning} (FedDA), by which a high-quality prediction model is trained collaboratively by multiple edge clients. To simultaneously capture the various wireless traffic patterns and keep raw data locally, FedDA first groups the clients into different clusters by using a small augmentation dataset. Then, a quasi-global model is trained and shared among clients as prior knowledge, aiming to address the statistical heterogeneity challenge in federated learning. To construct the global model, a dual attention scheme is further proposed by aggregating the intra- and inter-cluster models, instead of simply averaging the weights of local models. We conduct extensive experiments on two real-world wireless traffic datasets and results show that FedDA outperforms state-of-the-art methods. The average mean squared error performance gains on the two datasets are up to 10\% and 30\%, respectively.
[ { "created": "Mon, 11 Oct 2021 12:00:21 GMT", "version": "v1" } ]
2021-10-12
[ [ "Zhang", "Chuanting", "" ], [ "Dang", "Shuping", "" ], [ "Shihada", "Basem", "" ], [ "Alouini", "Mohamed-Slim", "" ] ]
Wireless traffic prediction is essential for cellular networks to realize intelligent network operations, such as load-aware resource management and predictive control. Existing prediction approaches usually adopt centralized training architectures and require the transferring of huge amounts of traffic data, which may raise delay and privacy concerns for certain scenarios. In this work, we propose a novel wireless traffic prediction framework named \textit{Dual Attention-Based Federated Learning} (FedDA), by which a high-quality prediction model is trained collaboratively by multiple edge clients. To simultaneously capture the various wireless traffic patterns and keep raw data locally, FedDA first groups the clients into different clusters by using a small augmentation dataset. Then, a quasi-global model is trained and shared among clients as prior knowledge, aiming to address the statistical heterogeneity challenge in federated learning. To construct the global model, a dual attention scheme is further proposed by aggregating the intra- and inter-cluster models, instead of simply averaging the weights of local models. We conduct extensive experiments on two real-world wireless traffic datasets and results show that FedDA outperforms state-of-the-art methods. The average mean squared error performance gains on the two datasets are up to 10\% and 30\%, respectively.
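FedDA's dual-attention aggregation is not reproduced here; as a baseline point of comparison, the plain weighted averaging it improves upon (a FedAvg-style intra-cluster step) can be sketched as follows, with all names assumed for illustration.

import numpy as np

def weighted_average(client_models, weights):
    # Aggregate client models (dicts of parameter-name -> numpy array)
    # by weighted averaging; FedDA replaces this plain average with a
    # dual-attention combination of intra- and inter-cluster models.
    total = float(sum(weights))
    return {
        name: sum(w * m[name] for m, w in zip(client_models, weights)) / total
        for name in client_models[0]
    }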
2405.18375
Phakphum Artkaew
Phakphum Artkaew
Thai Winograd Schemas: A Benchmark for Thai Commonsense Reasoning
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Commonsense reasoning is one of the important aspects of natural language understanding, with several benchmarks developed to evaluate it. However, only a few of these benchmarks are available in languages other than English. Developing parallel benchmarks facilitates cross-lingual evaluation, enabling a better understanding of different languages. This research introduces a collection of Winograd Schemas in Thai, a novel dataset designed to evaluate commonsense reasoning capabilities in the context of the Thai language. Through a methodology involving native speakers, professional translators, and thorough validation, the schemas aim to closely reflect Thai language nuances, idioms, and cultural references while maintaining ambiguity and commonsense challenges. We evaluate the performance of popular large language models on this benchmark, revealing their strengths, limitations, and providing insights into the current state-of-the-art. Results indicate that while models like GPT-4 and Claude-3-Opus achieve high accuracy in English, their performance significantly drops in Thai, highlighting the need for further advancements in multilingual commonsense reasoning.
[ { "created": "Tue, 28 May 2024 17:14:02 GMT", "version": "v1" } ]
2024-05-29
[ [ "Artkaew", "Phakphum", "" ] ]
Commonsense reasoning is one of the important aspects of natural language understanding, with several benchmarks developed to evaluate it. However, only a few of these benchmarks are available in languages other than English. Developing parallel benchmarks facilitates cross-lingual evaluation, enabling a better understanding of different languages. This research introduces a collection of Winograd Schemas in Thai, a novel dataset designed to evaluate commonsense reasoning capabilities in the context of the Thai language. Through a methodology involving native speakers, professional translators, and thorough validation, the schemas aim to closely reflect Thai language nuances, idioms, and cultural references while maintaining ambiguity and commonsense challenges. We evaluate the performance of popular large language models on this benchmark, revealing their strengths, limitations, and providing insights into the current state-of-the-art. Results indicate that while models like GPT-4 and Claude-3-Opus achieve high accuracy in English, their performance significantly drops in Thai, highlighting the need for further advancements in multilingual commonsense reasoning.
1611.02806
Yu Wang
Yu Wang and Yang Feng and Xiyang Zhang and Jiebo Luo
Gender Politics in the 2016 U.S. Presidential Election: A Computer Vision Approach
null
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gender is playing an important role in the 2016 U.S. presidential election, especially with Hillary Clinton becoming the first female presidential nominee and Donald Trump being frequently accused of sexism. In this paper, we introduce computer vision to the study of gender politics and present an image-driven method that can measure the effects of gender in an accurate and timely manner. We first collect all the profile images of the candidates' Twitter followers. Then we train a convolutional neural network using images that contain gender labels. Lastly, we classify all the follower and unfollower images. Through two case studies, one on the `woman card' controversy and one on Sanders followers, we demonstrate how gender is informing the 2016 presidential election. Our framework of analysis can be readily generalized to other case studies and elections.
[ { "created": "Wed, 9 Nov 2016 03:42:13 GMT", "version": "v1" } ]
2016-11-10
[ [ "Wang", "Yu", "" ], [ "Feng", "Yang", "" ], [ "Zhang", "Xiyang", "" ], [ "Luo", "Jiebo", "" ] ]
Gender is playing an important role in the 2016 U.S. presidential election, especially with Hillary Clinton becoming the first female presidential nominee and Donald Trump being frequently accused of sexism. In this paper, we introduce computer vision to the study of gender politics and present an image-driven method that can measure the effects of gender in an accurate and timely manner. We first collect all the profile images of the candidates' Twitter followers. Then we train a convolutional neural network using images that contain gender labels. Lastly, we classify all the follower and unfollower images. Through two case studies, one on the `woman card' controversy and one on Sanders followers, we demonstrate how gender is informing the 2016 presidential election. Our framework of analysis can be readily generalized to other case studies and elections.
cs/9907012
Guido Minnen
Guido Minnen (University of Sussex)
Selective Magic HPSG Parsing
9 pages, LaTeX with 4 postscript figures (uses avm.sty, eaclap.sty and psfig-scale.sty)
Proceedings of EACL99, Bergen, Norway, June 8-11
null
null
cs.CL
null
We propose a parser for constraint-logic grammars implementing HPSG that combines the advantages of dynamic bottom-up and advanced top-down control. The parser allows the user to apply magic compilation to specific constraints in a grammar which as a result can be processed dynamically in a bottom-up and goal-directed fashion. State of the art top-down processing techniques are used to deal with the remaining constraints. We discuss various aspects concerning the implementation of the parser as part of a grammar development system.
[ { "created": "Thu, 8 Jul 1999 09:46:37 GMT", "version": "v1" } ]
2007-05-23
[ [ "Minnen", "Guido", "", "University of Sussex" ] ]
We propose a parser for constraint-logic grammars implementing HPSG that combines the advantages of dynamic bottom-up and advanced top-down control. The parser allows the user to apply magic compilation to specific constraints in a grammar which as a result can be processed dynamically in a bottom-up and goal-directed fashion. State of the art top-down processing techniques are used to deal with the remaining constraints. We discuss various aspects concerning the implementation of the parser as part of a grammar development system.
2010.05953
Jena Hwang
Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, Yejin Choi
COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs
null
Proceedings of the AAAI Conference on Artificial Intelligence (2021), 35(7), 6384-6392
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent years have brought about a renewed interest in commonsense representation and reasoning in the field of natural language understanding. The development of new commonsense knowledge graphs (CSKG) has been central to these advances as their diverse facts can be used and referenced by machine learning models for tackling new and challenging tasks. At the same time, there remain questions about the quality and coverage of these resources due to the massive scale required to comprehensively encompass general commonsense knowledge. In this work, we posit that manually constructed CSKGs will never achieve the coverage necessary to be applicable in all situations encountered by NLP agents. Therefore, we propose a new evaluation framework for testing the utility of KGs based on how effectively implicit knowledge representations can be learned from them. With this new goal, we propose ATOMIC 2020, a new CSKG of general-purpose commonsense knowledge containing knowledge that is not readily available in pretrained language models. We evaluate its properties in comparison with other leading CSKGs, performing the first large-scale pairwise study of commonsense knowledge resources. Next, we show that ATOMIC 2020 is better suited for training knowledge models that can generate accurate, representative knowledge for new, unseen entities and events. Finally, through human evaluation, we show that the few-shot performance of GPT-3 (175B parameters), while impressive, remains ~12 absolute points lower than a BART-based knowledge model trained on ATOMIC 2020 despite using over 430x fewer parameters.
[ { "created": "Mon, 12 Oct 2020 18:27:05 GMT", "version": "v1" }, { "created": "Thu, 16 Dec 2021 18:57:18 GMT", "version": "v2" } ]
2021-12-17
[ [ "Hwang", "Jena D.", "" ], [ "Bhagavatula", "Chandra", "" ], [ "Bras", "Ronan Le", "" ], [ "Da", "Jeff", "" ], [ "Sakaguchi", "Keisuke", "" ], [ "Bosselut", "Antoine", "" ], [ "Choi", "Yejin", "" ] ]
Recent years have brought about a renewed interest in commonsense representation and reasoning in the field of natural language understanding. The development of new commonsense knowledge graphs (CSKG) has been central to these advances as their diverse facts can be used and referenced by machine learning models for tackling new and challenging tasks. At the same time, there remain questions about the quality and coverage of these resources due to the massive scale required to comprehensively encompass general commonsense knowledge. In this work, we posit that manually constructed CSKGs will never achieve the coverage necessary to be applicable in all situations encountered by NLP agents. Therefore, we propose a new evaluation framework for testing the utility of KGs based on how effectively implicit knowledge representations can be learned from them. With this new goal, we propose ATOMIC 2020, a new CSKG of general-purpose commonsense knowledge containing knowledge that is not readily available in pretrained language models. We evaluate its properties in comparison with other leading CSKGs, performing the first large-scale pairwise study of commonsense knowledge resources. Next, we show that ATOMIC 2020 is better suited for training knowledge models that can generate accurate, representative knowledge for new, unseen entities and events. Finally, through human evaluation, we show that the few-shot performance of GPT-3 (175B parameters), while impressive, remains ~12 absolute points lower than a BART-based knowledge model trained on ATOMIC 2020 despite using over 430x fewer parameters.
1809.10745
Shafie Gholizadeh
Shafie Gholizadeh and Wlodek Zadrozny
A Short Survey of Topological Data Analysis in Time Series and Systems Analysis
null
null
null
null
cs.IR cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Topological Data Analysis (TDA) is the collection of mathematical tools that capture the structure of shapes in data. Although well established in computational topology and computational geometry, the utilization of TDA in time series and signal processing is relatively new. In some recent contributions, TDA has been utilized as an alternative to the conventional signal processing methods. Specifically, TDA has been considered to deal with noisy signals and time series. In these applications, TDA is used to find the shapes in data as the main properties of interest, while the other properties are assumed to be much less informative. In this paper, we will review recent developments and contributions where topological data analysis, especially persistent homology, has been applied to time series analysis, dynamical systems and signal processing. We will cover problem statements such as stability determination, risk analysis, systems behaviour, and predicting critical transitions in financial markets.
[ { "created": "Thu, 27 Sep 2018 19:53:16 GMT", "version": "v1" }, { "created": "Sat, 20 Oct 2018 17:40:31 GMT", "version": "v2" } ]
2018-10-23
[ [ "Gholizadeh", "Shafie", "" ], [ "Zadrozny", "Wlodek", "" ] ]
Topological Data Analysis (TDA) is the collection of mathematical tools that capture the structure of shapes in data. In contrast to its established role in computational topology and computational geometry, the utilization of TDA in time series and signal processing is relatively new. In some recent contributions, TDA has been utilized as an alternative to the conventional signal processing methods. Specifically, TDA has been considered to deal with noisy signals and time series. In these applications, TDA is used to find the shapes in data as the main properties, while the other properties are assumed to be much less informative. In this paper, we review recent developments and contributions where topological data analysis, especially persistent homology, has been applied to time series analysis, dynamical systems and signal processing. We cover problem statements such as stability determination, risk analysis, systems behaviour, and predicting critical transitions in financial markets.
2002.06512
Vinod Ganapathy
Rakesh Rajan Beck and Abhishek Vijeev and Vinod Ganapathy
Privaros: A Framework for Privacy-Compliant Delivery Drones
null
null
10.1145/3372297.3417858
null
cs.CR cs.OS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present Privaros, a framework to enforce privacy policies on drones. Privaros is designed for commercial delivery drones, such as the ones that will likely be used by Amazon Prime Air. Such drones visit a number of host airspaces, each of which may have different privacy requirements. Privaros provides an information flow control framework to enforce the policies of these hosts on the guest delivery drones. The mechanisms in Privaros are built on top of ROS, a middleware popular in many drone platforms. This paper presents the design and implementation of these mechanisms, describes how policies are specified, and shows that Privaros's policy specification can be integrated with India's Digital Sky portal. Our evaluation shows that a drone running Privaros can robustly enforce various privacy policies specified by hosts, and that its core mechanisms only marginally increase communication latency and power consumption.
[ { "created": "Sun, 16 Feb 2020 05:51:41 GMT", "version": "v1" }, { "created": "Wed, 5 Aug 2020 04:42:57 GMT", "version": "v2" }, { "created": "Thu, 13 Aug 2020 18:00:46 GMT", "version": "v3" } ]
2020-08-17
[ [ "Beck", "Rakesh Rajan", "" ], [ "Vijeev", "Abhishek", "" ], [ "Ganapathy", "Vinod", "" ] ]
We present Privaros, a framework to enforce privacy policies on drones. Privaros is designed for commercial delivery drones, such as the ones that will likely be used by Amazon Prime Air. Such drones visit a number of host airspaces, each of which may have different privacy requirements. Privaros provides an information flow control framework to enforce the policies of these hosts on the guest delivery drones. The mechanisms in Privaros are built on top of ROS, a middleware popular in many drone platforms. This paper presents the design and implementation of these mechanisms, describes how policies are specified, and shows that Privaros's policy specification can be integrated with India's Digital Sky portal. Our evaluation shows that a drone running Privaros can robustly enforce various privacy policies specified by hosts, and that its core mechanisms only marginally increase communication latency and power consumption.
1301.0552
Ionut Aron
Ionut Aron, Pascal Van Hentenryck
A constraint satisfaction approach to the robust spanning tree problem with interval data
Appears in Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI2002)
null
null
UAI-P-2002-PG-18-25
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust optimization is one of the fundamental approaches to deal with uncertainty in combinatorial optimization. This paper considers the robust spanning tree problem with interval data, which arises in a variety of telecommunication applications. It proposes a constraint satisfaction approach using a combinatorial lower bound, a pruning component that removes infeasible and suboptimal edges, as well as a search strategy exploring the most uncertain edges first. The resulting algorithm is shown to produce dramatic improvements over the mathematical programming approach of Yaman et al. and to enlarge considerably the class of problems amenable to effective solutions.
[ { "created": "Wed, 12 Dec 2012 15:55:09 GMT", "version": "v1" } ]
2013-01-07
[ [ "Aron", "Ionut", "" ], [ "Van Hentenryck", "Pascal", "" ] ]
Robust optimization is one of the fundamental approaches to deal with uncertainty in combinatorial optimization. This paper considers the robust spanning tree problem with interval data, which arises in a variety of telecommunication applications. It proposes a constraint satisfaction approach using a combinatorial lower bound, a pruning component that removes infeasible and suboptimal edges, as well as a search strategy exploring the most uncertain edges first. The resulting algorithm is shown to produce dramatic improvements over the mathematical programming approach of Yaman et al. and to enlarge considerably the class of problems amenable to effective solutions.
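The bounding step above rests on a known property of the interval model: the regret-maximizing scenario for a candidate tree fixes its own edges at their upper cost bounds and all other edges at their lower bounds. Below is a minimal Python sketch of evaluating a tree's regret this way using networkx; it illustrates the problem setting, not the paper's constraint satisfaction algorithm, and the 'lo'/'hi' edge attributes are assumed names.

# Sketch: robust deviation (regret) of a spanning tree under interval edge costs.
# Illustrative only; assumes each edge carries 'lo'/'hi' interval bounds.
import networkx as nx

def regret(G: nx.Graph, tree_edges) -> float:
    tree = set(frozenset(e) for e in tree_edges)
    # Worst-case scenario for the tree: its edges at the upper bound,
    # all remaining edges at the lower bound.
    H = nx.Graph()
    for u, v, d in G.edges(data=True):
        cost = d["hi"] if frozenset((u, v)) in tree else d["lo"]
        H.add_edge(u, v, weight=cost)
    tree_cost = sum(H[u][v]["weight"] for u, v in tree_edges)
    mst_cost = sum(d["weight"] for _, _, d in
                   nx.minimum_spanning_tree(H).edges(data=True))
    return tree_cost - mst_cost

G = nx.Graph()
G.add_edge("a", "b", lo=1, hi=3)
G.add_edge("b", "c", lo=2, hi=2)
G.add_edge("a", "c", lo=1, hi=4)
print(regret(G, [("a", "b"), ("b", "c")]))  # -> 2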
2211.16891
Lennart Reimann
Lennart M. Reimann, Sarp Erd\"onmez, Dominik Sisejkovic and Rainer Leupers
Quantitative Information Flow for Hardware: Advancing the Attack Landscape
4 pages, accepted at IEEE Latin American Symposium on Circuits and Systems (LASCAS), 2023
null
null
null
cs.CR cs.AR
http://creativecommons.org/licenses/by/4.0/
Security still remains an afterthought in modern Electronic Design Automation (EDA) tools, which solely focus on enhancing performance and reducing the chip size. Typically, the security analysis is conducted by hand, leading to vulnerabilities in the design remaining unnoticed. Security-aware EDA tools assist the designer in the identification and removal of security threats while keeping performance and area in mind. State-of-the-art approaches utilize information flow analysis to spot unintended information leakages in design structures. However, the classification of such threats is binary, resulting in negligible leakages being listed as well. A novel quantitative analysis allows the application of a metric to determine a numeric value for a leakage. Nonetheless, current approximations to quantify the leakage are still prone to overlooking leakages. The mathematical model 2D-QModel introduced in this work aims to overcome this shortcoming. Additionally, as previous work only includes a limited threat model, multiple threat models can be applied using the provided approach. Open-source benchmarks are used to show the capabilities of 2D-QModel to identify hardware Trojans in the design while ignoring insignificant leakages.
[ { "created": "Wed, 30 Nov 2022 10:44:54 GMT", "version": "v1" } ]
2022-12-01
[ [ "Reimann", "Lennart M.", "" ], [ "Erdönmez", "Sarp", "" ], [ "Sisejkovic", "Dominik", "" ], [ "Leupers", "Rainer", "" ] ]
Security still remains an afterthought in modern Electronic Design Automation (EDA) tools, which solely focus on enhancing performance and reducing the chip size. Typically, the security analysis is conducted by hand, leading to vulnerabilities in the design remaining unnoticed. Security-aware EDA tools assist the designer in the identification and removal of security threats while keeping performance and area in mind. State-of-the-art approaches utilize information flow analysis to spot unintended information leakages in design structures. However, the classification of such threats is binary, resulting in negligible leakages being listed as well. A novel quantitative analysis allows the application of a metric to determine a numeric value for a leakage. Nonetheless, current approximations to quantify the leakage are still prone to overlooking leakages. The mathematical model 2D-QModel introduced in this work aims to overcome this shortcoming. Additionally, as previous work only includes a limited threat model, multiple threat models can be applied using the provided approach. Open-source benchmarks are used to show the capabilities of 2D-QModel to identify hardware Trojans in the design while ignoring insignificant leakages.
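As background for the quantitative analysis discussed above, the classical information-flow baseline measures leakage as the mutual information I(S;O) between the secret and the observable output, which for a deterministic design with a uniform secret reduces to the output entropy H(O). A toy Python sketch of that standard baseline follows; it is not the paper's 2D-QModel.

# Sketch: Shannon leakage H(O) of a deterministic circuit under a uniform secret.
# Standard quantitative-information-flow baseline; not 2D-QModel.
from collections import Counter
from math import log2

def shannon_leakage(f, n_bits: int) -> float:
    outputs = Counter(f(s) for s in range(2 ** n_bits))
    total = 2 ** n_bits
    return -sum((c / total) * log2(c / total) for c in outputs.values())

# Example: a 4-bit "circuit" that reveals only the two low-order bits.
print(shannon_leakage(lambda s: s & 0b11, 4))  # -> 2.0 bits leaked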
1910.03422
Katja Tuma
Katja Tuma, Christian Sandberg, Urban Thorsson, Mathias Widman, Riccardo Scandariato
Finding Security Threats That Matter: An Industrial Case Study
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent trends in software engineering (e.g., Agile, DevOps) have shortened the development life-cycle, limiting the resources spent on security analysis of software designs. In this context, architecture models are (often manually) analyzed for potential security threats. Risk-last threat analysis suggests identifying all security threats before prioritizing them. In contrast, risk-first threat analysis suggests identifying the risks before the threats, by-passing threat prioritization. This seems promising for organizations where development speed is of great importance. Yet, little empirical evidence exists about the effect of sacrificing systematicity for high-priority threats on the performance and execution of threat analysis. To this aim, we conduct a case study with industrial experts from the automotive domain, where we empirically compare a risk-first technique to a risk-last technique. In this study, we consciously trade the number of participants for a more realistic simulation of threat analysis sessions in practice. This allows us to closely observe industrial experts and gain deep insights into the industrial practice. This work contributes with: (i) a quantitative comparison of performance, (ii) a quantitative and qualitative comparison of execution, and (iii) a comparative discussion of the two techniques. We find no differences in the productivity and timeliness of discovering high-priority security threats. Yet, we find differences in analysis execution. In particular, participants using the risk-first technique found twice as many high-priority threats, developed detailed attack scenarios, and discussed threat feasibility in detail. On the other hand, participants using the risk-last technique found more medium and low-priority threats and finished early.
[ { "created": "Tue, 8 Oct 2019 14:29:21 GMT", "version": "v1" } ]
2019-10-09
[ [ "Tuma", "Katja", "" ], [ "Sandberg", "Christian", "" ], [ "Thorsson", "Urban", "" ], [ "Widman", "Mathias", "" ], [ "Scandariato", "Riccardo", "" ] ]
Recent trends in software engineering (e.g., Agile, DevOps) have shortened the development life-cycle, limiting the resources spent on security analysis of software designs. In this context, architecture models are (often manually) analyzed for potential security threats. Risk-last threat analysis suggests identifying all security threats before prioritizing them. In contrast, risk-first threat analysis suggests identifying the risks before the threats, by-passing threat prioritization. This seems promising for organizations where development speed is of great importance. Yet, little empirical evidence exists about the effect of sacrificing systematicity for high-priority threats on the performance and execution of threat analysis. To this aim, we conduct a case study with industrial experts from the automotive domain, where we empirically compare a risk-first technique to a risk-last technique. In this study, we consciously trade the number of participants for a more realistic simulation of threat analysis sessions in practice. This allows us to closely observe industrial experts and gain deep insights into the industrial practice. This work contributes with: (i) a quantitative comparison of performance, (ii) a quantitative and qualitative comparison of execution, and (iii) a comparative discussion of the two techniques. We find no differences in the productivity and timeliness of discovering high-priority security threats. Yet, we find differences in analysis execution. In particular, participants using the risk-first technique found twice as many high-priority threats, developed detailed attack scenarios, and discussed threat feasibility in detail. On the other hand, participants using the risk-last technique found more medium and low-priority threats and finished early.
2402.14536
Siyin Wang
Siyin Wang, Jie Zhou, Qin Chen, Qi Zhang, Tao Gui, Xuanjing Huang
Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Domain adaptation has been widely adopted for cross-domain sentiment analysis to transfer knowledge from the source domain to the target domain. However, most methods are proposed under the assumption that the target (test) domain is known, making them fail to generalize well on unknown test data that is not always available in practice. In this paper, we focus on the problem of domain generalization for cross-domain sentiment analysis. Specifically, we propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations that play essential roles in tackling domain shift. First, we rethink the cross-domain sentiment analysis task in a causal view to model the cause-and-effect relationships among different variables. Then, to learn an invariant feature representation, we remove the effect of domain confounders (e.g., domain knowledge) using the backdoor adjustment. A series of experiments over many homologous and diverse datasets show the strong performance and robustness of our model by comparing it with the state-of-the-art domain generalization baselines.
[ { "created": "Thu, 22 Feb 2024 13:26:56 GMT", "version": "v1" } ]
2024-02-23
[ [ "Wang", "Siyin", "" ], [ "Zhou", "Jie", "" ], [ "Chen", "Qin", "" ], [ "Zhang", "Qi", "" ], [ "Gui", "Tao", "" ], [ "Huang", "Xuanjing", "" ] ]
Domain adaptation has been widely adopted for cross-domain sentiment analysis to transfer knowledge from the source domain to the target domain. However, most methods are proposed under the assumption that the target (test) domain is known, making them fail to generalize well on unknown test data that is not always available in practice. In this paper, we focus on the problem of domain generalization for cross-domain sentiment analysis. Specifically, we propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations that play essential roles in tackling domain shift. First, we rethink the cross-domain sentiment analysis task in a causal view to model the cause-and-effect relationships among different variables. Then, to learn an invariant feature representation, we remove the effect of domain confounders (e.g., domain knowledge) using the backdoor adjustment. A series of experiments over many homologous and diverse datasets show the strong performance and robustness of our model by comparing it with the state-of-the-art domain generalization baselines.
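For reference, the backdoor adjustment invoked above is the standard causal identity: when the confounders z block every backdoor path from the input x to the label y,

    P(y \mid \mathrm{do}(x)) = \sum_{z} P(y \mid x, z)\, P(z),

so the invariant predictor targets the interventional distribution rather than the confounded conditional P(y | x). (Standard notation; the paper's own symbols may differ.)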
2012.13219
Mustafa Hashmi
Ho-Pun Lam and Mustafa Hashmi and Akhil Kumar
Towards a Formal Framework for Partial Compliance of Business Processes
15 pages, 4 figures, 2 tables; under consideration at AICOL 2020, co-located with Jurix
null
null
null
cs.AI cs.LO
http://creativecommons.org/licenses/by/4.0/
Binary "YES-NO" notions of process compliance are not very helpful to managers for assessing the operational performance of their company because a large number of cases fall in the grey area of partial compliance. Hence, it is necessary to have ways to quantify partial compliance in terms of metrics and be able to classify actual cases by assigning a numeric value of compliance to them. In this paper, we formulate an evaluation framework to quantify the level of compliance of business processes across different levels of abstraction (such as task,trace and process level) and across multiple dimensions of each task (such as temporal, monetary, role-, data-, and quality-related) to provide managers more useful information about their operations and to help them improve their decision making processes. Our approach can also add social value by making social services provided by local, state and federal governments more flexible and improving the lives of citizens.
[ { "created": "Thu, 24 Dec 2020 12:38:40 GMT", "version": "v1" } ]
2020-12-25
[ [ "Lam", "Ho-Pun", "" ], [ "Hashmi", "Mustafa", "" ], [ "Kumar", "Akhil", "" ] ]
Binary "YES-NO" notions of process compliance are not very helpful to managers for assessing the operational performance of their company because a large number of cases fall in the grey area of partial compliance. Hence, it is necessary to have ways to quantify partial compliance in terms of metrics and be able to classify actual cases by assigning a numeric value of compliance to them. In this paper, we formulate an evaluation framework to quantify the level of compliance of business processes across different levels of abstraction (such as task,trace and process level) and across multiple dimensions of each task (such as temporal, monetary, role-, data-, and quality-related) to provide managers more useful information about their operations and to help them improve their decision making processes. Our approach can also add social value by making social services provided by local, state and federal governments more flexible and improving the lives of citizens.
2010.10573
Hoang Nguyen Hung Van
Hoang Van, David Kauchak, Gondy Leroy
AutoMeTS: The Autocomplete for Medical Text Simplification
9 pages, 3 figures, and 8 tables; accepted to COLING 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of text simplification (TS) is to transform difficult text into a version that is easier to understand and more broadly accessible to a wide variety of readers. In some domains, such as healthcare, fully automated approaches cannot be used since information must be accurately preserved. Instead, semi-automated approaches can be used that assist a human writer in simplifying text faster and at a higher quality. In this paper, we examine the application of autocomplete to text simplification in the medical domain. We introduce a new parallel medical data set consisting of English Wikipedia sentences aligned with Simple English Wikipedia sentences, and examine the application of pretrained neural language models (PNLMs) on this dataset. We compare four PNLMs (BERT, RoBERTa, XLNet, and GPT-2), and show how the additional context of the sentence to be simplified can be incorporated to achieve better results (6.17% absolute improvement over the best individual model). We also introduce an ensemble model that combines the four PNLMs and outperforms the best individual model by 2.1%, resulting in an overall word prediction accuracy of 64.52%.
[ { "created": "Tue, 20 Oct 2020 19:20:29 GMT", "version": "v1" } ]
2020-10-22
[ [ "Van", "Hoang", "" ], [ "Kauchak", "David", "" ], [ "Leroy", "Gondy", "" ] ]
The goal of text simplification (TS) is to transform difficult text into a version that is easier to understand and more broadly accessible to a wide variety of readers. In some domains, such as healthcare, fully automated approaches cannot be used since information must be accurately preserved. Instead, semi-automated approaches can be used that assist a human writer in simplifying text faster and at a higher quality. In this paper, we examine the application of autocomplete to text simplification in the medical domain. We introduce a new parallel medical data set consisting of English Wikipedia sentences aligned with Simple English Wikipedia sentences, and examine the application of pretrained neural language models (PNLMs) on this dataset. We compare four PNLMs (BERT, RoBERTa, XLNet, and GPT-2), and show how the additional context of the sentence to be simplified can be incorporated to achieve better results (6.17% absolute improvement over the best individual model). We also introduce an ensemble model that combines the four PNLMs and outperforms the best individual model by 2.1%, resulting in an overall word prediction accuracy of 64.52%.
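The ensembling of next-word distributions can be sketched for the causal-LM case with Hugging Face transformers. The two models below are stand-ins that share the GPT-2 vocabulary (a requirement for naive probability averaging); the paper's combination of its four PNLMs may differ.

# Sketch: next-word suggestion by averaging next-token distributions from two
# causal LMs that share a vocabulary. Stand-in models; not the paper's ensemble.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

names = ["gpt2", "distilgpt2"]  # both use the GPT-2 vocabulary
toks = [AutoTokenizer.from_pretrained(n) for n in names]
lms = [AutoModelForCausalLM.from_pretrained(n).eval() for n in names]

def suggest(prefix: str, k: int = 5):
    probs = None
    for tok, lm in zip(toks, lms):
        ids = tok(prefix, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = lm(ids).logits[0, -1]
        p = torch.softmax(logits, dim=-1)
        probs = p if probs is None else probs + p   # unweighted average
    top = torch.topk(probs, k).indices
    return [toks[0].decode(int(t)) for t in top]

print(suggest("The patient was diagnosed with"))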
2404.05073
Stefano Scanzio
Stefano Scanzio, Gianluca Cena, Adriano Valenzano
QRscript: Embedding a Programming Language in QR codes to support Decision and Management
preprint, 8 pages
27th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2022)
10.1109/ETFA52439.2022.9921530
null
cs.NI cs.CL
http://creativecommons.org/licenses/by/4.0/
Embedding a programming language in a QR code is a new and extremely promising opportunity, as it makes devices and objects smarter without necessarily requiring an Internet connection. In this paper, all the steps needed to translate a program written in a high-level programming language to its binary representation encoded in a QR code, and the opposite process that, starting from the QR code, executes it by means of a virtual machine, are carefully detailed. The proposed programming language is named QRscript, and can be easily extended to integrate new features. One of the main design goals was to produce a very compact target binary code. In particular, in this work we propose a specific sub-language (a dialect) that is aimed at encoding decision trees. Besides industrial scenarios, this is useful in many other application fields. The reported example, related to the configuration of an industrial networked device, highlights the potential of the proposed technology and permits a better understanding of all the translation steps.
[ { "created": "Sun, 7 Apr 2024 21:02:55 GMT", "version": "v1" } ]
2024-04-09
[ [ "Scanzio", "Stefano", "" ], [ "Cena", "Gianluca", "" ], [ "Valenzano", "Adriano", "" ] ]
Embedding a programming language in a QR code is a new and extremely promising opportunity, as it makes devices and objects smarter without necessarily requiring an Internet connection. In this paper, all the steps needed to translate a program written in a high-level programming language to its binary representation encoded in a QR code, and the opposite process that, starting from the QR code, executes it by means of a virtual machine, are carefully detailed. The proposed programming language is named QRscript, and can be easily extended to integrate new features. One of the main design goals was to produce a very compact target binary code. In particular, in this work we propose a specific sub-language (a dialect) that is aimed at encoding decision trees. Besides industrial scenarios, this is useful in many other application fields. The reported example, related to the configuration of an industrial networked device, highlights the potential of the proposed technology and permits a better understanding of all the translation steps.
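To make the decision-tree dialect concrete, here is a toy round trip in the same spirit: serialize a small decision tree to bytes (which a QR library could then embed) and execute it with a tiny interpreter. The byte format below is invented for illustration and is not QRscript's actual encoding.

# Sketch: a toy decision-tree bytecode and interpreter, in the spirit of the
# QRscript dialect. The byte format is invented here, not QRscript's encoding.
import struct

def encode(node) -> bytes:
    # node: ("leaf", answer) or ("ask", question_id, threshold, yes, no)
    if node[0] == "leaf":
        return b"L" + struct.pack("B", node[1])
    _, q, thr, yes, no = node
    left = encode(yes)
    return b"A" + struct.pack("BBH", q, thr, len(left)) + left + encode(no)

def run(code: bytes, answers: dict) -> int:
    if code[0:1] == b"L":
        return code[1]
    q, thr, skip = struct.unpack("BBH", code[1:5])
    branch = code[5:5 + skip] if answers[q] <= thr else code[5 + skip:]
    return run(branch, answers)

tree = ("ask", 0, 10, ("leaf", 1), ("ask", 1, 3, ("leaf", 2), ("leaf", 3)))
blob = encode(tree)              # bytes that could be placed in a QR code
print(run(blob, {0: 42, 1: 2}))  # -> 2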
2309.03720
Radek Svoboda
Radek Svoboda, Sebastian Basterrech, Jedrzej Kozal, Jan Platos, Michal Wozniak
A Natural Gas Consumption Forecasting System for Continual Learning Scenarios based on Hoeffding Trees with Change Point Detection Mechanism
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Forecasting natural gas consumption, considering seasonality and trends, is crucial in planning its supply and consumption and optimizing the cost of obtaining it, mainly by industrial entities. However, in times of threats to its supply, it is also a critical element that guarantees the supply of this raw material to meet individual consumers' needs, ensuring society's energy security. This article introduces a novel multistep-ahead forecasting system for natural gas consumption that integrates change point detection for model collection selection, with continual learning capabilities based on data stream processing. The performance of the forecasting models based on the proposed approach is evaluated in a complex real-world use case of natural gas consumption forecasting. We employed Hoeffding tree predictors as forecasting models and the Pruned Exact Linear Time (PELT) algorithm for the change point detection procedure. The change point detection integration enables selecting a different model collection for successive time frames. Thus, three model collection selection procedures (with and without an error feedback loop) are defined and evaluated for forecasting scenarios with various densities of detected change points. These models were compared with change point agnostic baseline approaches. Our experiments show that fewer change points result in a lower forecasting error regardless of the model collection selection procedure employed. Also, simpler model collection selection procedures that omit forecasting error feedback lead to more robust forecasting models suitable for continual learning tasks.
[ { "created": "Thu, 7 Sep 2023 13:52:20 GMT", "version": "v1" }, { "created": "Thu, 30 Nov 2023 12:48:13 GMT", "version": "v2" }, { "created": "Mon, 4 Mar 2024 13:52:35 GMT", "version": "v3" }, { "created": "Mon, 12 Aug 2024 08:27:48 GMT", "version": "v4" } ]
2024-08-13
[ [ "Svoboda", "Radek", "" ], [ "Basterrech", "Sebastian", "" ], [ "Kozal", "Jedrzej", "" ], [ "Platos", "Jan", "" ], [ "Wozniak", "Michal", "" ] ]
Forecasting natural gas consumption, considering seasonality and trends, is crucial in planning its supply and consumption and optimizing the cost of obtaining it, mainly by industrial entities. However, in times of threats to its supply, it is also a critical element that guarantees the supply of this raw material to meet individual consumers' needs, ensuring society's energy security. This article introduces a novel multistep-ahead forecasting system for natural gas consumption that integrates change point detection for model collection selection, with continual learning capabilities based on data stream processing. The performance of the forecasting models based on the proposed approach is evaluated in a complex real-world use case of natural gas consumption forecasting. We employed Hoeffding tree predictors as forecasting models and the Pruned Exact Linear Time (PELT) algorithm for the change point detection procedure. The change point detection integration enables selecting a different model collection for successive time frames. Thus, three model collection selection procedures (with and without an error feedback loop) are defined and evaluated for forecasting scenarios with various densities of detected change points. These models were compared with change point agnostic baseline approaches. Our experiments show that fewer change points result in a lower forecasting error regardless of the model collection selection procedure employed. Also, simpler model collection selection procedures that omit forecasting error feedback lead to more robust forecasting models suitable for continual learning tasks.
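A minimal sketch of wiring the two named components together: PELT change point detection (via the ruptures library) decides which incremental Hoeffding tree (via river) is trained per regime, assuming a univariate consumption series. The paper's model collection selection procedures are richer than this.

# Sketch: PELT change points (ruptures) selecting which Hoeffding tree (river)
# handles each regime. Illustrative wiring only; the paper's procedures differ.
import numpy as np
import ruptures as rpt
from river import tree

y = np.r_[np.random.normal(10, 1, 200), np.random.normal(20, 1, 200)]

# Offline change point detection with Pruned Exact Linear Time.
breaks = rpt.Pelt(model="rbf").fit(y.reshape(-1, 1)).predict(pen=10)

models, start = [], 0
for end in breaks:                      # one incremental model per regime
    m = tree.HoeffdingTreeRegressor()
    for t in range(start + 1, end):     # lag-1 feature -> next value
        m.learn_one({"lag1": float(y[t - 1])}, float(y[t]))
    models.append(m)
    start = end

print(models[-1].predict_one({"lag1": float(y[-1])}))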
1703.06941
Kostas Peppas P
K. Denia Kanellopoulou and Kostas P. Peppas and P. Takis Mathiopoulos
A Unified Effective Capacity Performance Analysis of Lp-norm Diversity Reception over Arbitrary and Correlated Generalized Fading Channels
This manuscript was submitted on Sept. 30, 2017, for possible publication in the IEEE TCOM as TCOM-TPS-17-1021
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The effective capacity (EC) has recently been established as a rigorous alternative to the classical Shannon's ergodic capacity since it accounts for the delay constraints imposed by future wireless applications and their impact on the overall system performance. This paper presents a novel moment generating function (MGF)-based framework for the unified EC performance analysis of a generic Lp-norm diversity combining scheme operating over arbitrary and correlated generalized fading channels and a maximum delay constraint. The Lp-norm diversity is a generic diversity structure which includes as special cases various well-known diversity schemes such as equal gain combining (EGC) and maximal ratio combining (MRC). For MRC, the proposed methodology reduces to a previously published MGF-based approach for the evaluation of the EC, whereas, for EGC, the analytical approach presented is novel and the associated performance evaluation results have not been published previously in the open technical literature. Based on this methodology, novel analytical closed-form expressions for the EC performance of dual-branch Lp-norm diversity receivers operating over Gamma shadowed generalized Nakagami-m fading channels are deduced. For diversity order greater than two, a novel analytical approach for the asymptotic EC performance analysis is also developed and evaluated, revealing how basic system parameters affect the overall system performance. The overall mathematical formalism is validated with selected numerical and equivalent simulation performance evaluation results, thus confirming the correctness of the proposed unified analytical methodology.
[ { "created": "Mon, 20 Mar 2017 19:34:29 GMT", "version": "v1" }, { "created": "Sat, 2 Feb 2019 21:45:50 GMT", "version": "v2" }, { "created": "Wed, 23 Oct 2019 19:10:20 GMT", "version": "v3" } ]
2019-10-25
[ [ "Kanellopoulou", "K. Denia", "" ], [ "Peppas", "Kostas P.", "" ], [ "Mathiopoulos", "P. Takis", "" ] ]
The effective capacity (EC) has recently been established as a rigorous alternative to the classical Shannon's ergodic capacity since it accounts for the delay constraints imposed by future wireless applications and their impact on the overall system performance. This paper presents a novel moment generating function (MGF)-based framework for the unified EC performance analysis of a generic Lp-norm diversity combining scheme operating over arbitrary and correlated generalized fading channels and a maximum delay constraint. The Lp-norm diversity is a generic diversity structure which includes as special cases various well-known diversity schemes such as equal gain combining (EGC) and maximal ratio combining (MRC). For MRC, the proposed methodology reduces to a previously published MGF-based approach for the evaluation of the EC, whereas, for EGC, the analytical approach presented is novel and the associated performance evaluation results have not been published previously in the open technical literature. Based on this methodology, novel analytical closed-form expressions for the EC performance of dual-branch Lp-norm diversity receivers operating over Gamma shadowed generalized Nakagami-m fading channels are deduced. For diversity order greater than two, a novel analytical approach for the asymptotic EC performance analysis is also developed and evaluated, revealing how basic system parameters affect the overall system performance. The overall mathematical formalism is validated with selected numerical and equivalent simulation performance evaluation results, thus confirming the correctness of the proposed unified analytical methodology.
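For orientation, the effective capacity at the heart of this analysis is conventionally defined through the log-moment generating function of the service rate R, with QoS exponent \theta > 0:

    C_E(\theta) = -\frac{1}{\theta} \ln \mathbb{E}\!\left[ e^{-\theta R} \right]

It approaches the ergodic capacity as \theta \to 0 and tightens toward the delay-constrained regime as \theta grows. (Standard form; the paper's normalization over block length and bandwidth may differ.)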
1704.03105
EPTCS
Yingfu Zeng (Rice University), Ferenc Bartha (Rice University), Walid Taha (Halmstad University)
Compile-Time Extensions to Hybrid ODEs
In Proceedings SNR 2017, arXiv:1704.02421
EPTCS 247, 2017, pp. 52-70
10.4204/EPTCS.247.5
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reachability analysis for hybrid systems is an active area of development and has resulted in many promising prototype tools. Most of these tools allow users to express hybrid systems as automata with a set of ordinary differential equations (ODEs) associated with each state, as well as rules for transitions between states. Significant effort goes into developing, verifying, and correctly implementing those tools. As such, it is desirable to expand the scope of applicability of such tools as far as possible. With this goal, we show how compile-time transformations can be used to extend the basic hybrid ODE formalism traditionally supported in hybrid reachability tools such as SpaceEx or Flow*. The extension supports certain types of partial derivatives and equational constraints. These extensions allow users to express, among other things, the Euler-Lagrange equation, and to capture practically relevant constraints that arise naturally in mechanical systems. Achieving this level of expressiveness requires using a binding-time analysis (BTA), program differentiation, symbolic Gaussian elimination, and abstract interpretation using interval analysis. Except for BTA, the other components are either readily available or can be easily added to most reachability tools. The paper therefore focuses on presenting both the declarative and algorithmic specifications for the BTA phase, and establishes the soundness of the algorithmic specifications with respect to the declarative one.
[ { "created": "Tue, 11 Apr 2017 00:57:40 GMT", "version": "v1" } ]
2017-04-12
[ [ "Zeng", "Yingfu", "", "Rice University" ], [ "Bartha", "Ferenc", "", "Rice University" ], [ "Taha", "Walid", "", "Halmstad University" ] ]
Reachability analysis for hybrid systems is an active area of development and has resulted in many promising prototype tools. Most of these tools allow users to express hybrid systems as automata with a set of ordinary differential equations (ODEs) associated with each state, as well as rules for transitions between states. Significant effort goes into developing, verifying, and correctly implementing those tools. As such, it is desirable to expand the scope of applicability of such tools as far as possible. With this goal, we show how compile-time transformations can be used to extend the basic hybrid ODE formalism traditionally supported in hybrid reachability tools such as SpaceEx or Flow*. The extension supports certain types of partial derivatives and equational constraints. These extensions allow users to express, among other things, the Euler-Lagrange equation, and to capture practically relevant constraints that arise naturally in mechanical systems. Achieving this level of expressiveness requires using a binding-time analysis (BTA), program differentiation, symbolic Gaussian elimination, and abstract interpretation using interval analysis. Except for BTA, the other components are either readily available or can be easily added to most reachability tools. The paper therefore focuses on presenting both the declarative and algorithmic specifications for the BTA phase, and establishes the soundness of the algorithmic specifications with respect to the declarative one.
1804.05212
Avi Segal
Avi Segal, Yossi Ben David, Joseph Jay Williams, Kobi Gal, Yaar Shalom
Combining Difficulty Ranking with Multi-Armed Bandits to Sequence Educational Content
null
null
10.1016/j.physletb.2019.04.047
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As e-learning systems become more prevalent, there is a growing need for them to accommodate individual differences between students. This paper addresses the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions, MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of questions in the target set which is used in two ways: First, to obtain initial estimates over the learning gains for the set of questions. Second, to update the estimates over time based on the students' responses. We show in simulations that MAPLE was able to improve students' learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising results. This work demonstrates the efficacy of using stochastic approaches to the sequencing problem when augmented with information about question difficulty.
[ { "created": "Sat, 14 Apr 2018 12:36:00 GMT", "version": "v1" } ]
2019-04-24
[ [ "Segal", "Avi", "" ], [ "David", "Yossi Ben", "" ], [ "Williams", "Joseph Jay", "" ], [ "Gal", "Kobi", "" ], [ "Shalom", "Yaar", "" ] ]
As e-learning systems become more prevalent, there is a growing need for them to accommodate individual differences between students. This paper addresses the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions, MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of questions in the target set which is used in two ways: First, to obtain initial estimates over the learning gains for the set of questions. Second, to update the estimates over time based on the students' responses. We show in simulations that MAPLE was able to improve students' learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising results. This work demonstrates the efficacy of using stochastic approaches to the sequencing problem when augmented with information about question difficulty.
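The exploration-exploitation core can be sketched as Gaussian Thompson sampling over questions, with priors seeded from the difficulty ranking and posteriors updated from observed learning gains. This is a simplified stand-in for MAPLE; the prior constants below are invented.

# Sketch: Thompson sampling over questions, priors seeded from a difficulty
# ranking. A simplified stand-in for MAPLE; all constants are invented.
import numpy as np

rng = np.random.default_rng(0)

class QuestionBandit:
    def __init__(self, difficulty_rank):
        n = len(difficulty_rank)
        # Prior mean learning gain: higher for mid-ranked difficulties.
        self.mu = np.array([1.0 - abs(r / (n - 1) - 0.5) for r in difficulty_rank])
        self.var = np.full(n, 1.0)   # prior uncertainty
        self.obs_var = 0.25          # assumed noise in observed gains

    def pick(self) -> int:
        # Sample a plausible gain per question; pose the argmax question.
        return int(np.argmax(rng.normal(self.mu, np.sqrt(self.var))))

    def update(self, q: int, gain: float) -> None:
        # Conjugate Gaussian posterior update for question q.
        prec = 1 / self.var[q] + 1 / self.obs_var
        self.mu[q] = (self.mu[q] / self.var[q] + gain / self.obs_var) / prec
        self.var[q] = 1 / prec

bandit = QuestionBandit(difficulty_rank=[0, 1, 2, 3, 4])
q = bandit.pick()
bandit.update(q, gain=0.6)   # observed pre/post improvement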
2007.13809
Andra Lutu
Andra Lutu, Byunjin Jun, Fabian Bustamante, Diego Perino, Marcelo Bagnulo, Carlos Gamboa Bontje
A first look at the IP eXchange Ecosystem
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The IPX Network interconnects about 800 Mobile Network Operators (MNOs) worldwide and a range of other service providers (such as cloud and content providers). It forms the core that enables global data roaming while supporting emerging applications, from VoLTE and video streaming to IoT verticals. This paper presents the first characterization of this, so-far opaque, IPX ecosystem and a first-of-its-kind in-depth analysis of an IPX Provider (IPX-P). The IPX Network is a private network formed by a small set of tightly interconnected IPX-Ps. We analyze an operational dataset from a large IPX-P that includes BGP data as well as statistics from signaling. We shed light on the structure of the IPX Network as well as on the temporal, structural and geographic features of the IPX traffic. Our results are a first step in understanding the IPX Network at its core, key to fully understanding the global mobile Internet.
[ { "created": "Mon, 27 Jul 2020 18:48:49 GMT", "version": "v1" } ]
2020-07-29
[ [ "Lutu", "Andra", "" ], [ "Jun", "Byunjin", "" ], [ "Bustamante", "Fabian", "" ], [ "Perino", "Diego", "" ], [ "Bagnulo", "Marcelo", "" ], [ "Bontje", "Carlos Gamboa", "" ] ]
The IPX Network interconnects about 800 Mobile Network Operators (MNOs) worldwide and a range of other service providers (such as cloud and content providers). It forms the core that enables global data roaming while supporting emerging applications, from VoLTE and video streaming to IoT verticals. This paper presents the first characterization of this, so-far opaque, IPX ecosystem and a first-of-its-kind in-depth analysis of an IPX Provider (IPX-P). The IPX Network is a private network formed by a small set of tightly interconnected IPX-Ps. We analyze an operational dataset from a large IPX-P that includes BGP data as well as statistics from signaling. We shed light on the structure of the IPX Network as well as on the temporal, structural and geographic features of the IPX traffic. Our results are a first step in understanding the IPX Network at its core, key to fully understanding the global mobile Internet.
1412.5619
Yoshihiro Terasawa
Yoshihiro Terasawa
A Simple construction of the Pseudorandom Generator from Permutation
I want to rewrite
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A simple construction of a pseudorandom generator is presented. This pseudorandom generator consistently passes the NIST statistical test suite. This paper reports that a pseudorandom number generator with good properties can be constructed using only permutation and data rewriting by XOR.
[ { "created": "Tue, 16 Dec 2014 13:44:39 GMT", "version": "v1" }, { "created": "Sun, 13 Aug 2023 11:45:40 GMT", "version": "v2" } ]
2023-08-15
[ [ "Terasawa", "Yoshihiro", "" ] ]
A simple construction of a pseudorandom generator is presented. This pseudorandom generator consistently passes the NIST statistical test suite. This paper reports that a pseudorandom number generator with good properties can be constructed using only permutation and data rewriting by XOR.
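The ingredients named in the abstract, a permutation of the state followed by XOR rewriting, can be illustrated with a toy state-update loop. This is a didactic sketch only; it is not the paper's construction and must never be used for cryptography.

# Toy sketch of a permutation + XOR state update. Didactic only:
# NOT the paper's construction and NOT cryptographically secure.

PERM = [3, 0, 7, 4, 1, 6, 2, 5]          # fixed permutation of 8 state bytes

def step(state: list) -> list:
    permuted = [state[PERM[i]] for i in range(8)]                    # permutation
    return [permuted[i] ^ permuted[(i + 1) % 8] for i in range(8)]   # XOR rewrite

def prg(seed: list, n_bytes: int) -> bytes:
    state, out = list(seed), []
    while len(out) < n_bytes:
        state = step(state)
        out.extend(state)
    return bytes(b & 0xFF for b in out[:n_bytes])

print(prg([1, 2, 3, 4, 5, 6, 7, 8], 16).hex())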
2112.08460
Molly Jane Nicholas
Molly Jane Nicholas, Brian A. Smith, Rajan Vaish
Friendscope: Exploring In-the-Moment Experience Sharing on Camera Glasses via a Shared Camera
ACM CSCW 2022
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
We introduce Friendscope, an instant, in-the-moment experience sharing system for lightweight commercial camera glasses. Friendscope explores a new concept called a shared camera. This concept allows a wearer to share control of their camera with a remote friend, making it possible for both people to capture photos/videos from the camera in the moment. Through a user study with 48 participants, we found that users felt connected to each other, describing the shared camera as a more intimate form of livestreaming. Moreover, even privacy-sensitive users were able to retain their sense of privacy and control with the shared camera. Friendscope's different shared camera configurations give wearers ultimate control over who they share the camera with and what photos/videos they share. We conclude with design implications for future experience sharing systems.
[ { "created": "Wed, 15 Dec 2021 20:15:11 GMT", "version": "v1" } ]
2021-12-17
[ [ "Nicholas", "Molly Jane", "" ], [ "Smith", "Brian A.", "" ], [ "Vaish", "Rajan", "" ] ]
We introduce Friendscope, an instant, in-the-moment experience sharing system for lightweight commercial camera glasses. Friendscope explores a new concept called a shared camera. This concept allows a wearer to share control of their camera with a remote friend, making it possible for both people to capture photos/videos from the camera in the moment. Through a user study with 48 participants, we found that users felt connected to each other, describing the shared camera as a more intimate form of livestreaming. Moreover, even privacy-sensitive users were able to retain their sense of privacy and control with the shared camera. Friendscope's different shared camera configurations give wearers ultimate control over who they share the camera with and what photos/videos they share. We conclude with design implications for future experience sharing systems.
2205.13248
Qingpeng Cai
Qingpeng Cai, Ruohan Zhan, Chi Zhang, Jie Zheng, Guangwei Ding, Pinghua Gong, Dong Zheng, Peng Jiang
Constrained Reinforcement Learning for Short Video Recommendation
null
null
null
null
cs.LG cs.IR
http://creativecommons.org/licenses/by/4.0/
The wide popularity of short videos on social media poses new opportunities and challenges to optimize recommender systems on the video-sharing platforms. Users provide complex and multi-faceted responses towards recommendations, including watch time and various types of interactions with videos. As a result, established recommendation algorithms that concern a single objective are not adequate to meet this new demand of optimizing comprehensive user experiences. In this paper, we formulate the problem of short video recommendation as a constrained Markov Decision Process (MDP), where platforms want to optimize the main goal of user watch time in the long term, with the constraint of accommodating the auxiliary responses of user interactions such as sharing/downloading videos. To solve the constrained MDP, we propose a two-stage reinforcement learning approach based on the actor-critic framework. At stage one, we learn individual policies to optimize each auxiliary response. At stage two, we learn a policy to (i) optimize the main response and (ii) stay close to policies learned at the first stage, which effectively guarantees the performance of this main policy on the auxiliaries. Through extensive simulations, we demonstrate the effectiveness of our approach over alternatives in both optimizing the main goal as well as balancing the others. We further show the advantage of our approach in live experiments of short video recommendations, where it significantly outperforms other baselines in terms of watch time and interactions from video views. Our approach has been fully launched in the production system to optimize user experiences on the platform.
[ { "created": "Thu, 26 May 2022 09:36:20 GMT", "version": "v1" } ]
2022-05-27
[ [ "Cai", "Qingpeng", "" ], [ "Zhan", "Ruohan", "" ], [ "Zhang", "Chi", "" ], [ "Zheng", "Jie", "" ], [ "Ding", "Guangwei", "" ], [ "Gong", "Pinghua", "" ], [ "Zheng", "Dong", "" ], [ "Jiang", "Peng", "" ] ]
The wide popularity of short videos on social media poses new opportunities and challenges to optimize recommender systems on the video-sharing platforms. Users provide complex and multi-faceted responses towards recommendations, including watch time and various types of interactions with videos. As a result, established recommendation algorithms that concern a single objective are not adequate to meet this new demand of optimizing comprehensive user experiences. In this paper, we formulate the problem of short video recommendation as a constrained Markov Decision Process (MDP), where platforms want to optimize the main goal of user watch time in the long term, with the constraint of accommodating the auxiliary responses of user interactions such as sharing/downloading videos. To solve the constrained MDP, we propose a two-stage reinforcement learning approach based on the actor-critic framework. At stage one, we learn individual policies to optimize each auxiliary response. At stage two, we learn a policy to (i) optimize the main response and (ii) stay close to policies learned at the first stage, which effectively guarantees the performance of this main policy on the auxiliaries. Through extensive simulations, we demonstrate the effectiveness of our approach over alternatives in both optimizing the main goal as well as balancing the others. We further show the advantage of our approach in live experiments of short video recommendations, where it significantly outperforms other baselines in terms of watch time and interactions from video views. Our approach has been fully launched in the production system to optimize user experiences on the platform.
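The stage-two objective, optimizing the main response while staying close to the frozen stage-one auxiliary policies, can be written as a policy-gradient surrogate plus KL regularizers. A minimal PyTorch sketch with invented coefficients follows; it is not the paper's production implementation.

# Sketch of a stage-two loss: main-response advantage plus KL terms pulling the
# main policy toward frozen stage-one auxiliary policies. Coefficients invented.
import torch
import torch.nn.functional as F

def stage_two_loss(main_logits, aux_logits_list, actions, advantages,
                   kl_coefs=(0.1, 0.1)):
    logp = F.log_softmax(main_logits, dim=-1)
    # (i) maximize the main response: policy-gradient surrogate.
    pg = -(advantages * logp.gather(1, actions.unsqueeze(1)).squeeze(1)).mean()
    # (ii) stay close to each frozen auxiliary policy via KL(main || aux_k).
    kl = sum(c * F.kl_div(F.log_softmax(a.detach(), dim=-1), logp.exp(),
                          reduction="batchmean")
             for c, a in zip(kl_coefs, aux_logits_list))
    return pg + kl

B, A = 32, 10
loss = stage_two_loss(torch.randn(B, A, requires_grad=True),
                      [torch.randn(B, A), torch.randn(B, A)],
                      torch.randint(0, A, (B,)), torch.randn(B))
loss.backward()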
2102.03237
Jinseok Kim
Jinseok Kim and Jason Owen-Smith
ORCID-linked labeled data for evaluating author name disambiguation at scale
A pre-print of a paper accepted for publication in the journal Scientometrics
null
10.1007/s11192-020-03826-6
null
cs.DL cs.IR
http://creativecommons.org/licenses/by/4.0/
How can we evaluate the performance of a disambiguation method implemented on big bibliographic data? This study suggests that the open researcher profile system, ORCID, can be used as an authority source to label name instances at scale. This study demonstrates the potential by evaluating the disambiguation performances of Author-ity2009 (which algorithmically disambiguates author names in MEDLINE) using 3 million name instances that are automatically labeled through linkage to 5 million ORCID researcher profiles. Results show that although ORCID-linked labeled data do not effectively represent the population of name instances in Author-ity2009, they do effectively capture the 'high precision over high recall' performances of Author-ity2009. In addition, ORCID-linked labeled data can provide nuanced details about Author-ity2009's performance when name instances are evaluated within and across ethnicity categories. As ORCID continues to be expanded to include more researchers, labeled data via ORCID-linkage can be improved in representing the population of the whole disambiguated data and updated on a regular basis. This can benefit author name disambiguation researchers and practitioners who need large-scale labeled data but lack resources for manual labeling or access to other authority sources for linkage-based labeling. The ORCID-linked labeled data for Author-ity2009 are publicly available for validation and reuse.
[ { "created": "Fri, 5 Feb 2021 15:34:08 GMT", "version": "v1" } ]
2021-02-08
[ [ "Kim", "Jinseok", "" ], [ "Owen-Smith", "Jason", "" ] ]
How can we evaluate the performance of a disambiguation method implemented on big bibliographic data? This study suggests that the open researcher profile system, ORCID, can be used as an authority source to label name instances at scale. This study demonstrates the potential by evaluating the disambiguation performances of Author-ity2009 (which algorithmically disambiguates author names in MEDLINE) using 3 million name instances that are automatically labeled through linkage to 5 million ORCID researcher profiles. Results show that although ORCID-linked labeled data do not effectively represent the population of name instances in Author-ity2009, they do effectively capture the 'high precision over high recall' performances of Author-ity2009. In addition, ORCID-linked labeled data can provide nuanced details about Author-ity2009's performance when name instances are evaluated within and across ethnicity categories. As ORCID continues to be expanded to include more researchers, labeled data via ORCID-linkage can be improved in representing the population of the whole disambiguated data and updated on a regular basis. This can benefit author name disambiguation researchers and practitioners who need large-scale labeled data but lack resources for manual labeling or access to other authority sources for linkage-based labeling. The ORCID-linked labeled data for Author-ity2009 are publicly available for validation and reuse.
1806.01214
Ivan Geffner
Ittai Abraham, Danny Dolev, Ivan Geffner, Joseph Y. Halpern
Implementing Mediators with Asynchronous Cheap Talk
null
null
null
null
cs.DC cs.CR cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A mediator can help non-cooperative agents obtain an equilibrium that may otherwise not be possible. We study the ability of players to obtain the same equilibrium without a mediator, using only cheap talk, that is, nonbinding pre-play communication. Previous work has considered this problem in a synchronous setting. Here we consider the effect of asynchrony on the problem, and provide upper bounds for implementing mediators. Considering asynchronous environments introduces new subtleties, including exactly what solution concept is most appropriate and determining what move is played if the cheap talk goes on forever. Different results are obtained depending on whether the move after such "infinite play" is under the control of the players or part of the description of the game.
[ { "created": "Mon, 4 Jun 2018 16:55:07 GMT", "version": "v1" } ]
2018-06-05
[ [ "Abraham", "Ittai", "" ], [ "Dolev", "Danny", "" ], [ "Geffner", "Ivan", "" ], [ "Halpern", "Joseph Y.", "" ] ]
A mediator can help non-cooperative agents obtain an equilibrium that may otherwise not be possible. We study the ability of players to obtain the same equilibrium without a mediator, using only cheap talk, that is, nonbinding pre-play communication. Previous work has considered this problem in a synchronous setting. Here we consider the effect of asynchrony on the problem, and provide upper bounds for implementing mediators. Considering asynchronous environments introduces new subtleties, including exactly what solution concept is most appropriate and determining what move is played if the cheap talk goes on forever. Different results are obtained depending on whether the move after such "infinite play" is under the control of the players or part of the description of the game.
2112.09690
Yinghao Xu
Yinghao Xu, Fangyun Wei, Xiao Sun, Ceyuan Yang, Yujun Shen, Bo Dai, Bolei Zhou, Stephen Lin
Cross-Model Pseudo-Labeling for Semi-Supervised Action Recognition
CVPR 2022 camera-ready, Project webpage: https://justimyhxu.github.io/projects/cmpl/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semi-supervised action recognition is a challenging but important task due to the high cost of data annotation. A common approach to this problem is to assign unlabeled data with pseudo-labels, which are then used as additional supervision in training. Typically in recent work, the pseudo-labels are obtained by training a model on the labeled data, and then using confident predictions from the model to teach itself. In this work, we propose a more effective pseudo-labeling scheme, called Cross-Model Pseudo-Labeling (CMPL). Concretely, we introduce a lightweight auxiliary network in addition to the primary backbone, and ask them to predict pseudo-labels for each other. We observe that, due to their different structural biases, these two models tend to learn complementary representations from the same video clips. Each model can thus benefit from its counterpart by utilizing cross-model predictions as supervision. Experiments on different data partition protocols demonstrate the significant improvement of our framework over existing alternatives. For example, CMPL achieves $17.6\%$ and $25.1\%$ Top-1 accuracy on Kinetics-400 and UCF-101 using only the RGB modality and $1\%$ labeled data, outperforming our baseline model, FixMatch, by $9.0\%$ and $10.3\%$, respectively.
[ { "created": "Fri, 17 Dec 2021 18:59:41 GMT", "version": "v1" }, { "created": "Mon, 18 Apr 2022 12:03:08 GMT", "version": "v2" } ]
2022-04-19
[ [ "Xu", "Yinghao", "" ], [ "Wei", "Fangyun", "" ], [ "Sun", "Xiao", "" ], [ "Yang", "Ceyuan", "" ], [ "Shen", "Yujun", "" ], [ "Dai", "Bo", "" ], [ "Zhou", "Bolei", "" ], [ "Lin", "Stephen", "" ] ]
Semi-supervised action recognition is a challenging but important task due to the high cost of data annotation. A common approach to this problem is to assign unlabeled data with pseudo-labels, which are then used as additional supervision in training. Typically in recent work, the pseudo-labels are obtained by training a model on the labeled data, and then using confident predictions from the model to teach itself. In this work, we propose a more effective pseudo-labeling scheme, called Cross-Model Pseudo-Labeling (CMPL). Concretely, we introduce a lightweight auxiliary network in addition to the primary backbone, and ask them to predict pseudo-labels for each other. We observe that, due to their different structural biases, these two models tend to learn complementary representations from the same video clips. Each model can thus benefit from its counterpart by utilizing cross-model predictions as supervision. Experiments on different data partition protocols demonstrate the significant improvement of our framework over existing alternatives. For example, CMPL achieves $17.6\%$ and $25.1\%$ Top-1 accuracy on Kinetics-400 and UCF-101 using only the RGB modality and $1\%$ labeled data, outperforming our baseline model, FixMatch, by $9.0\%$ and $10.3\%$, respectively.
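The cross-model supervision loop can be sketched in a few lines: each network treats the other's confident predictions on unlabeled clips as pseudo-labels. The PyTorch sketch below uses toy linear stand-ins for the two video backbones and an invented confidence threshold; the paper's full recipe includes augmentations and labeled-data losses.

# Sketch: one unlabeled-batch step of Cross-Model Pseudo-Labeling. The primary
# and auxiliary nets teach each other with confident predictions. Threshold and
# linear backbones are stand-ins, not the paper's video architectures.
import torch
import torch.nn.functional as F

def cmpl_unlabeled_loss(primary, auxiliary, x, thresh=0.95):
    losses = []
    for teacher, student in ((auxiliary, primary), (primary, auxiliary)):
        with torch.no_grad():
            probs = F.softmax(teacher(x), dim=-1)
            conf, pseudo = probs.max(dim=-1)
            mask = conf >= thresh                 # keep confident clips only
        ce = F.cross_entropy(student(x), pseudo, reduction="none")
        losses.append((ce * mask.float()).mean())
    return sum(losses)

primary = torch.nn.Linear(128, 10)      # stand-ins for the two backbones
auxiliary = torch.nn.Linear(128, 10)
loss = cmpl_unlabeled_loss(primary, auxiliary, torch.randn(16, 128))
loss.backward()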
2102.06944
Saman Motamed
Saman Motamed and Farzad Khalvati
Multi-class Generative Adversarial Nets for Semi-supervised Image Classification
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
From generating never-before-seen images to domain adaptation, applications of Generative Adversarial Networks (GANs) spread wide in the domain of vision and graphics problems. With the remarkable ability of GANs in learning the distribution and generating images of a particular class, they can be used for semi-supervised classification tasks. However, the problem is that if two classes of images share similar characteristics, the GAN might learn to generalize and hinder the classification of the two classes. In this paper, we use various images from MNIST and Fashion-MNIST datasets to illustrate how similar images cause the GAN to generalize, leading to the poor classification of images. We propose a modification to the traditional training of GANs that allows for improved multi-class classification in similar classes of images in a semi-supervised learning framework.
[ { "created": "Sat, 13 Feb 2021 15:26:17 GMT", "version": "v1" }, { "created": "Mon, 22 Feb 2021 16:25:31 GMT", "version": "v2" } ]
2021-02-23
[ [ "Motamed", "Saman", "" ], [ "Khalvati", "Farzad", "" ] ]
From generating never-before-seen images to domain adaptation, applications of Generative Adversarial Networks (GANs) spread wide in the domain of vision and graphics problems. With the remarkable ability of GANs to learn the distribution and generate images of a particular class, they can be used for semi-supervised classification tasks. However, the problem is that if two classes of images share similar characteristics, the GAN might learn to generalize and hinder the classification of the two classes. In this paper, we use various images from the MNIST and Fashion-MNIST datasets to illustrate how similar images cause the GAN to generalize, leading to the poor classification of images. We propose a modification to the traditional training of GANs that allows for improved multi-class classification in similar classes of images in a semi-supervised learning framework.
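The abstract does not spell out the proposed modification, but the semi-supervised GAN setup it builds on is standard: the discriminator predicts over K real classes plus one extra "fake" class. The sketch below shows only that baseline setup, with a toy MLP for MNIST-sized inputs; the architecture and loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 128), nn.LeakyReLU(0.2),
        )
        # K real classes plus one extra "fake" class.
        self.head = nn.Linear(128, num_classes + 1)

    def forward(self, x):
        return self.head(self.features(x))

def d_losses(disc, x_labeled, y, x_fake, num_classes=10):
    sup = F.cross_entropy(disc(x_labeled), y)         # labeled real data
    fake_y = torch.full((x_fake.size(0),), num_classes,
                        dtype=torch.long, device=x_fake.device)
    unsup = F.cross_entropy(disc(x_fake), fake_y)     # generated samples
    return sup + unsup
```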
1707.05589
G\'abor Melis
G\'abor Melis, Chris Dyer, Phil Blunsom
On the State of the Art of Evaluation in Neural Language Models
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing code bases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.
[ { "created": "Tue, 18 Jul 2017 12:35:53 GMT", "version": "v1" }, { "created": "Mon, 20 Nov 2017 17:57:58 GMT", "version": "v2" } ]
2017-11-21
[ [ "Melis", "Gábor", "" ], [ "Dyer", "Chris", "" ], [ "Blunsom", "Phil", "" ] ]
Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing code bases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.
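The "properly regularised" LSTM baselines the abstract refers to combine a few standard ingredients. A minimal word-level sketch in PyTorch is given below, with dropout and input/output weight tying as representative regularisers; the layer sizes are illustrative assumptions, not the tuned settings from the paper.

```python
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=512, hidden=512, layers=2, p=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.drop = nn.Dropout(p)
        self.lstm = nn.LSTM(emb_dim, hidden, layers,
                            dropout=p, batch_first=True)
        self.decoder = nn.Linear(hidden, vocab_size)
        # Weight tying: share input embedding and output projection
        # (requires hidden == emb_dim).
        self.decoder.weight = self.embed.weight

    def forward(self, tokens, state=None):
        x = self.drop(self.embed(tokens))
        out, state = self.lstm(x, state)
        return self.decoder(self.drop(out)), state
```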
0909.4756
Brendan Lucier
Jason D. Hartline and Brendan Lucier
Bayesian Algorithmic Mechanism Design
null
null
10.1257/aer.20130712
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatible mechanisms, namely that of Vickrey, Clarke, and Groves, and the central approach for circumventing computational obstacles, that of approximation algorithms, are fundamentally incompatible: natural applications of the VCG approach to an approximation algorithm fail to yield an incentive compatible mechanism. We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics). For welfare maximization in single-parameter agent settings, we give a general black-box reduction that turns any approximation algorithm into a Bayesian incentive compatible mechanism with essentially the same approximation factor.
[ { "created": "Fri, 25 Sep 2009 18:00:59 GMT", "version": "v1" }, { "created": "Wed, 23 Feb 2011 22:16:38 GMT", "version": "v2" } ]
2017-08-21
[ [ "Hartline", "Jason D.", "" ], [ "Lucier", "Brendan", "" ] ]
The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatible mechanisms, namely that of Vickrey, Clarke, and Groves, and the central approach for circumventing computational obstacles, that of approximation algorithms, are fundamentally incompatible: natural applications of the VCG approach to an approximation algorithm fail to yield an incentive compatible mechanism. We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics). For welfare maximization in single-parameter agent settings, we give a general black-box reduction that turns any approximation algorithm into a Bayesian incentive compatible mechanism with essentially the same approximation factor.
1910.12073
Jillian Tompkins
Jillian Tompkins
Disinformation Detection: A review of linguistic feature selection and classification models in news veracity assessments
null
null
null
null
cs.CL cs.CY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the past couple of years, the topic of "fake news" and its influence over people's opinions has become a growing cause for concern. Although the spread of disinformation on the Internet is not a new phenomenon, the widespread use of social media has exacerbated its effects, providing more channels for dissemination and the potential to "go viral." Nowhere was this more evident than during the 2016 United States Presidential Election. Although the current of disinformation spread via trolls, bots, and hyperpartisan media outlets likely reinforced existing biases rather than swayed undecided voters, the effects of this deluge of disinformation are by no means trivial. The consequences range in severity from an overall distrust in news media, to an ill-informed citizenry, and in extreme cases, provocation of violent action. It is clear that the human ability to discern lies from truth is flawed at best. As such, greater attention has been given to applying machine learning approaches to detect deliberately deceptive news articles. This paper looks at the work that has already been done in this area.
[ { "created": "Sat, 26 Oct 2019 14:29:37 GMT", "version": "v1" } ]
2019-10-29
[ [ "Tompkins", "Jillian", "" ] ]
Over the past couple of years, the topic of "fake news" and its influence over people's opinions has become a growing cause for concern. Although the spread of disinformation on the Internet is not a new phenomenon, the widespread use of social media has exacerbated its effects, providing more channels for dissemination and the potential to "go viral." Nowhere was this more evident than during the 2016 United States Presidential Election. Although the current of disinformation spread via trolls, bots, and hyperpartisan media outlets likely reinforced existing biases rather than swayed undecided voters, the effects of this deluge of disinformation are by no means trivial. The consequences range in severity from an overall distrust in news media, to an ill-informed citizenry, and in extreme cases, provocation of violent action. It is clear that the human ability to discern lies from truth is flawed at best. As such, greater attention has been given to applying machine learning approaches to detect deliberately deceptive news articles. This paper looks at the work that has already been done in this area.
0804.4750
Icius Committee
Heru Tjahjana, Iwan Pranoto, Hari Muhammad, J. Naiborhu, and Miswanto
The Numerical Control Design for a Pair of Dubins Vehicles
Uploaded by ICIUS2007 Conference Organizer on behalf of the author(s). 3 pages, 2 figures
Proceedings of the International Conference on Intelligent Unmanned System (ICIUS 2007), Bali, Indonesia, October 24-25, 2007, Paper No. ICIUS2007-C003
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a model of a pair of Dubins vehicles is considered. The vehicles move from an initial position and orientation to a final position and orientation. Along the motion, the two vehicles are not allowed to collide, yet they must not move too far from each other. The optimal control of the vehicles is found using Pontryagin's Maximum Principle (PMP). The PMP leads to a Hamiltonian system consisting of a system of differential equations and its adjoint. The original differential equation has initial and final conditions, but the adjoint system has no boundary conditions. This classical difficulty is resolved numerically by the steepest descent method. Some simulation results are presented in this paper.
[ { "created": "Wed, 30 Apr 2008 08:03:05 GMT", "version": "v1" } ]
2008-05-01
[ [ "Tjahjana", "Heru", "" ], [ "Pranoto", "Iwan", "" ], [ "Muhammad", "Hari", "" ], [ "Naiborhu", "J.", "" ], [ "Miswanto", "", "" ] ]
In this paper, a model of a pair of Dubins vehicles is considered. The vehicles move from an initial position and orientation to a final position and orientation. Along the motion, the two vehicles are not allowed to collide, yet they must not move too far from each other. The optimal control of the vehicles is found using Pontryagin's Maximum Principle (PMP). The PMP leads to a Hamiltonian system consisting of a system of differential equations and its adjoint. The original differential equation has initial and final conditions, but the adjoint system has no boundary conditions. This classical difficulty is resolved numerically by the steepest descent method. Some simulation results are presented in this paper.
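For concreteness, the kinematic model underlying this abstract can be simulated directly. The sketch below integrates a pair of Dubins vehicles with Euler steps and checks the twin separation constraints (no collision, but not too far apart); the speed, turn-rate controls, and separation bounds are illustrative assumptions, and the PMP-based optimisation itself is not reproduced.

```python
import numpy as np

def step(state, u, v=1.0, dt=0.05):
    """One Euler step of Dubins kinematics; state = (x, y, heading)."""
    x, y, th = state
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + u * dt])

s1 = np.array([0.0, 0.0, 0.0])   # vehicle 1: position and heading
s2 = np.array([0.0, 2.0, 0.0])   # vehicle 2 starts 2 units away
d_min, d_max = 0.5, 4.0          # collision-free, but not too far apart
for _ in range(200):
    # With matched controls the pair turns in formation, so both
    # separation constraints stay satisfied along the whole motion.
    s1, s2 = step(s1, 0.1), step(s2, 0.1)
    d = float(np.hypot(*(s1[:2] - s2[:2])))
    assert d_min < d < d_max, "separation constraint violated"
```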
2203.15636
Guillaume Jeanneret
Guillaume Jeanneret, Lo\"ic Simon and Fr\'ed\'eric Jurie
Diffusion Models for Counterfactual Explanations
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Counterfactual explanations have shown promising results as a post-hoc framework to make image classifiers more explainable. In this paper, we propose DiME, a method allowing the generation of counterfactual images using the recent diffusion models. By leveraging the guided generative diffusion process, our proposed methodology shows how to use the gradients of the target classifier to generate counterfactual explanations of input instances. Further, we analyze current approaches to evaluate spurious correlations and extend the evaluation measurements by proposing a new metric: Correlation Difference. Our experimental validations show that the proposed algorithm surpasses previous State-of-the-Art results on 5 out of 6 metrics on CelebA.
[ { "created": "Tue, 29 Mar 2022 14:59:31 GMT", "version": "v1" } ]
2022-03-30
[ [ "Jeanneret", "Guillaume", "" ], [ "Simon", "Loïc", "" ], [ "Jurie", "Frédéric", "" ] ]
Counterfactual explanations have shown promising results as a post-hoc framework to make image classifiers more explainable. In this paper, we propose DiME, a method allowing the generation of counterfactual images using the recent diffusion models. By leveraging the guided generative diffusion process, our proposed methodology shows how to use the gradients of the target classifier to generate counterfactual explanations of input instances. Further, we analyze current approaches to evaluate spurious correlations and extend the evaluation measurements by proposing a new metric: Correlation Difference. Our experimental validations show that the proposed algorithm surpasses previous State-of-the-Art results on 5 out of 6 metrics on CelebA.
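The classifier-guided generation that DiME builds on can be sketched as a single reverse-diffusion step in which the target classifier's gradient steers the noise estimate toward the counterfactual class. In the sketch below, `denoiser` (predicting the added noise), `classifier`, and the cumulative schedule tensor `alpha_bar` are assumed given; the guidance scale and the deterministic DDIM-style update are illustrative choices rather than the paper's exact procedure.

```python
import torch

@torch.no_grad()
def guided_step(x_t, t, denoiser, classifier, target, alpha_bar, scale=3.0):
    """One guided reverse step toward class `target` (requires t >= 1)."""
    # Classifier gradient of log p(target | x_t) with respect to x_t.
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_p = torch.log_softmax(classifier(x_in), dim=1)
        grad = torch.autograd.grad(log_p[:, target].sum(), x_in)[0]

    eps = denoiser(x_t, t)  # predicted noise
    # Shift the noise estimate along the classifier gradient (classifier guidance).
    eps = eps - torch.sqrt(1 - alpha_bar[t]) * scale * grad
    # DDIM-style deterministic update using the shifted noise estimate.
    x0_hat = (x_t - torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alpha_bar[t])
    return (torch.sqrt(alpha_bar[t - 1]) * x0_hat
            + torch.sqrt(1 - alpha_bar[t - 1]) * eps)
```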
2212.05063
Asma Bensalah
Alicia Forn\'es, Asma Bensalah, Cristina Carmona-Duarte, Jialuo Chen, Miguel A. Ferrer, Andreas Fischer, Josep Llad\'os, Cristina Mart\'in, Eloy Opisso, R\'ejean Plamondon, Anna Scius-Bertrand, and Josep Maria Tormos
The RPM3D project: 3D Kinematics for Remote Patient Monitoring
null
null
10.1007/978-3-031-19745-1_16
null
cs.HC cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
This project explores the feasibility of remote patient monitoring based on the analysis of 3D movements captured with smartwatches. We base our analysis on the Kinematic Theory of Rapid Human Movement. We have validated our research in a real case scenario for stroke rehabilitation at the Guttmann Institute (neurorehabilitation hospital), showing promising results. Our work could have a great impact in remote healthcare applications, improving medical efficiency and reducing healthcare costs. Future steps include more clinical validation, developing multi-modal analysis architectures (analysing data from sensors, images, audio, etc.), and exploring the application of our technology to monitor other neurodegenerative diseases.
[ { "created": "Fri, 9 Dec 2022 14:16:32 GMT", "version": "v1" } ]
2022-12-13
[ [ "Fornés", "Alicia", "" ], [ "Bensalah", "Asma", "" ], [ "Carmona-Duarte", "Cristina", "" ], [ "Chen", "Jialuo", "" ], [ "Ferrer", "Miguel A.", "" ], [ "Fischer", "Andreas", "" ], [ "Lladós", "Josep", "" ], [ "Martín", "Cristina", "" ], [ "Opisso", "Eloy", "" ], [ "Plamondon", "Réjean", "" ], [ "Scius-Bertrand", "Anna", "" ], [ "Tormos", "Josep Maria", "" ] ]
This project explores the feasibility of remote patient monitoring based on the analysis of 3D movements captured with smartwatches. We base our analysis on the Kinematic Theory of Rapid Human Movement. We have validated our research in a real case scenario for stroke rehabilitation at the Guttmann Institute (neurorehabilitation hospital), showing promising results. Our work could have a great impact in remote healthcare applications, improving medical efficiency and reducing healthcare costs. Future steps include more clinical validation, developing multi-modal analysis architectures (analysing data from sensors, images, audio, etc.), and exploring the application of our technology to monitor other neurodegenerative diseases.
1811.07468
Jianing Li
Jianing Li, Shiliang Zhang, Tiejun Huang
Multi-scale 3D Convolution Network for Video Based Person Re-Identification
AAAI, 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a two-stream convolution network to extract spatial and temporal cues for video based person Re-Identification (ReID). A temporal stream in this network is constructed by inserting several Multi-scale 3D (M3D) convolution layers into a 2D CNN network. The resulting M3D convolution network introduces a fraction of parameters into the 2D CNN, but gains the ability of multi-scale temporal feature learning. With this compact architecture, the M3D convolution network is also more efficient and easier to optimize than existing 3D convolution networks. The temporal stream further involves Residual Attention Layers (RAL) to refine the temporal features. By jointly learning spatial-temporal attention masks in a residual manner, RAL identifies the discriminative spatial regions and temporal cues. The other stream in our network is implemented with a 2D CNN for spatial feature extraction. The spatial and temporal features from the two streams are finally fused for video based person ReID. Evaluations on three widely used benchmark datasets, i.e., MARS, PRID2011, and iLIDS-VID, demonstrate the substantial advantages of our method over existing 3D convolution networks and state-of-the-art methods.
[ { "created": "Mon, 19 Nov 2018 02:40:32 GMT", "version": "v1" } ]
2018-11-20
[ [ "Li", "Jianing", "" ], [ "Zhang", "Shiliang", "" ], [ "Huang", "Tiejun", "" ] ]
This paper proposes a two-stream convolution network to extract spatial and temporal cues for video based person Re-Identification (ReID). A temporal stream in this network is constructed by inserting several Multi-scale 3D (M3D) convolution layers into a 2D CNN network. The resulting M3D convolution network introduces a fraction of parameters into the 2D CNN, but gains the ability of multi-scale temporal feature learning. With this compact architecture, the M3D convolution network is also more efficient and easier to optimize than existing 3D convolution networks. The temporal stream further involves Residual Attention Layers (RAL) to refine the temporal features. By jointly learning spatial-temporal attention masks in a residual manner, RAL identifies the discriminative spatial regions and temporal cues. The other stream in our network is implemented with a 2D CNN for spatial feature extraction. The spatial and temporal features from the two streams are finally fused for video based person ReID. Evaluations on three widely used benchmark datasets, i.e., MARS, PRID2011, and iLIDS-VID, demonstrate the substantial advantages of our method over existing 3D convolution networks and state-of-the-art methods.
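A multi-scale temporal layer in the spirit of the M3D design can be prototyped as a spatial convolution (kernel size 1 in time) combined with parallel temporal convolutions at several dilation rates. The sketch below is a reading of the abstract rather than the paper's exact layer: the kernel sizes, the dilation set, and the additive fusion of spatial and multi-scale temporal responses are all illustrative assumptions.

```python
import torch.nn as nn

class M3DLayer(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        # 2D spatial convolution expressed as a 3D conv with kernel 1 in time.
        self.spatial = nn.Conv3d(channels, channels, (1, 3, 3), padding=(0, 1, 1))
        # Parallel temporal convolutions capture motion at multiple scales.
        self.temporal = nn.ModuleList(
            nn.Conv3d(channels, channels, (3, 1, 1),
                      padding=(d, 0, 0), dilation=(d, 1, 1))
            for d in dilations
        )

    def forward(self, x):                 # x: (N, C, T, H, W)
        out = self.spatial(x)
        # Additive fusion of the spatial response with multi-scale temporal cues.
        return out + sum(branch(x) for branch in self.temporal)
```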
0712.2959
Te Sun Han
Te Sun Han
Joint Source-Channel Coding Revisited: Information-Spectrum Approach
null
null
null
null
cs.IT math.IT
null
Given a general source with countably infinite source alphabet and a general channel with arbitrary abstract channel input/channel output alphabets, we study the joint source-channel coding problem from the information-spectrum point of view. First, we generalize Feinstein's lemma (direct part) and Verdu-Han's lemma (converse part) so as to be applicable to the general joint source-channel coding problem. Based on these lemmas, we establish a sufficient condition as well as a necessary condition for the source to be reliably transmissible over the channel with asymptotically vanishing probability of error. It is shown that our sufficient condition is equivalent to the sufficient condition derived by Vembu, Verdu and Steinberg, whereas our necessary condition is shown to be stronger than or equivalent to the necessary condition derived by them. It turns out, as a direct consequence, that the separation principle in a relevantly generalized sense holds for a wide class of sources and channels, as was shown in a quite different manner by Vembu, Verdu and Steinberg. It should also be remarked that a nice duality is found between our necessary and sufficient conditions, whereas we cannot fully enjoy such a duality between the necessary condition and the sufficient condition by Vembu, Verdu and Steinberg. In addition, we demonstrate a sufficient condition as well as a necessary condition for the epsilon-transmissibility. Finally, the separation theorem of the traditional standard form is shown to hold for the class of sources and channels that satisfy the semi-strong converse property.
[ { "created": "Tue, 18 Dec 2007 13:33:58 GMT", "version": "v1" } ]
2007-12-19
[ [ "Han", "Te Sun", "" ] ]
Given a general source with countably infinite source alphabet and a general channel with arbitrary abstract channel input/channel output alphabets, we study the joint source-channel coding problem from the information-spectrum point of view. First, we generalize Feinstein's lemma (direct part) and Verdu-Han's lemma (converse part) so as to be applicable to the general joint source-channel coding problem. Based on these lemmas, we establish a sufficient condition as well as a necessary condition for the source to be reliably transmissible over the channel with asymptotically vanishing probability of error. It is shown that our sufficient condition is equivalent to the sufficient condition derived by Vembu, Verdu and Steinberg, whereas our necessary condition is shown to be stronger than or equivalent to the necessary condition derived by them. It turns out, as a direct consequence, that the separation principle in a relevantly generalized sense holds for a wide class of sources and channels, as was shown in a quite different manner by Vembu, Verdu and Steinberg. It should also be remarked that a nice duality is found between our necessary and sufficient conditions, whereas we cannot fully enjoy such a duality between the necessary condition and the sufficient condition by Vembu, Verdu and Steinberg. In addition, we demonstrate a sufficient condition as well as a necessary condition for the epsilon-transmissibility. Finally, the separation theorem of the traditional standard form is shown to hold for the class of sources and channels that satisfy the semi-strong converse property.
2404.19467
Harshini Gangapuram
Harshini Gangapuram and Vidya Manian
Bayesian Functional Connectivity and Graph Convolutional Network for Working Memory Load Classification
null
null
null
null
cs.LG eess.SP q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
Brain responses related to working memory originate from distinct brain areas and oscillate at different frequencies. EEG signals with high temporal correlation can effectively capture these responses. Therefore, estimating the functional connectivity of EEG for working memory protocols in different frequency bands plays a significant role in analyzing the brain dynamics with increasing memory and cognitive loads, which remains largely unexplored. The present study introduces a Bayesian structure learning algorithm to learn the functional connectivity of EEG in sensor space. Next, the functional connectivity graphs are taken as input to the graph convolutional network to classify the working memory loads. The intrasubject (subject-specific) classification performed on 154 subjects for six different verbal working memory loads produced the highest classification accuracy of 96% and average classification accuracy of 89%, outperforming state-of-the-art classification models proposed in the literature. Furthermore, the proposed Bayesian structure learning algorithm is compared with state-of-the-art functional connectivity estimation methods through intersubject and intrasubject statistical analysis of variance. The results also show that the alpha and theta bands have better classification accuracy than the beta band.
[ { "created": "Tue, 30 Apr 2024 11:31:07 GMT", "version": "v1" } ]
2024-05-01
[ [ "Gangapuram", "Harshini", "" ], [ "Manian", "Vidya", "" ] ]
Brain responses related to working memory originate from distinct brain areas and oscillate at different frequencies. EEG signals with high temporal correlation can effectively capture these responses. Therefore, estimating the functional connectivity of EEG for working memory protocols in different frequency bands plays a significant role in analyzing the brain dynamics with increasing memory and cognitive loads, which remains largely unexplored. The present study introduces a Bayesian structure learning algorithm to learn the functional connectivity of EEG in sensor space. Next, the functional connectivity graphs are taken as input to the graph convolutional network to classify the working memory loads. The intrasubject (subject-specific) classification performed on 154 subjects for six different verbal working memory loads produced the highest classification accuracy of 96% and average classification accuracy of 89%, outperforming state-of-the-art classification models proposed in the literature. Furthermore, the proposed Bayesian structure learning algorithm is compared with state-of-the-art functional connectivity estimation methods through intersubject and intrasubject statistical analysis of variance. The results also show that the alpha and theta bands have better classification accuracy than the beta band.
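The classification stage described above, a graph convolutional network applied to learned functional-connectivity graphs, can be sketched compactly. Below, EEG electrodes are graph nodes and the connectivity matrix is the adjacency; the two-layer design, hidden width, and mean pooling are illustrative assumptions, and the Bayesian structure learning step is assumed to have already produced `adj`.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Symmetrically normalised adjacency with self-loops: D^-1/2 (A+I) D^-1/2.
        a = adj + torch.eye(adj.size(-1), device=adj.device)
        d = a.sum(-1).clamp(min=1e-6).pow(-0.5)
        a = d.unsqueeze(-1) * a * d.unsqueeze(-2)
        return torch.relu(self.lin(a @ x))

class LoadClassifier(nn.Module):
    def __init__(self, n_feats, n_loads=6):  # six working memory loads
        super().__init__()
        self.g1, self.g2 = GCNLayer(n_feats, 64), GCNLayer(64, 64)
        self.out = nn.Linear(64, n_loads)

    def forward(self, x, adj):                # x: (batch, electrodes, features)
        h = self.g2(self.g1(x, adj), adj)
        return self.out(h.mean(dim=1))        # mean-pool electrodes, then classify
```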
1404.7060
Mitsuru Kusumoto
Mitsuru Kusumoto and Yuichi Yoshida
Testing Forest-Isomorphism in the Adjacency List Model
ICALP 2014 to appear
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of testing if two input forests are isomorphic or are far from being so. An algorithm is called an $\varepsilon$-tester for forest-isomorphism if, given oracle access to two forests $G$ and $H$ in the adjacency list model, it accepts with high probability if $G$ and $H$ are isomorphic and rejects if we must modify at least $\varepsilon n$ edges to make $G$ isomorphic to $H$. We show an $\varepsilon$-tester for forest-isomorphism with a query complexity of $\mathrm{polylog}(n)$ and a lower bound of $\Omega(\sqrt{\log{n}})$. Further, with the aid of the tester, we show that every graph property is testable in the adjacency list model with $\mathrm{polylog}(n)$ queries if the input graph is a forest.
[ { "created": "Mon, 28 Apr 2014 17:10:35 GMT", "version": "v1" } ]
2014-04-29
[ [ "Kusumoto", "Mitsuru", "" ], [ "Yoshida", "Yuichi", "" ] ]
We consider the problem of testing if two input forests are isomorphic or are far from being so. An algorithm is called an $\varepsilon$-tester for forest-isomorphism if, given oracle access to two forests $G$ and $H$ in the adjacency list model, it accepts with high probability if $G$ and $H$ are isomorphic and rejects if we must modify at least $\varepsilon n$ edges to make $G$ isomorphic to $H$. We show an $\varepsilon$-tester for forest-isomorphism with a query complexity of $\mathrm{polylog}(n)$ and a lower bound of $\Omega(\sqrt{\log{n}})$. Further, with the aid of the tester, we show that every graph property is testable in the adjacency list model with $\mathrm{polylog}(n)$ queries if the input graph is a forest.
2310.16677
Niki Maria Foteinopoulou
Niki Maria Foteinopoulou, Ioannis Patras
Machine Learning Approaches for Fine-Grained Symptom Estimation in Schizophrenia: A Comprehensive Review
19 pages, 5 figures
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Schizophrenia is a severe yet treatable mental disorder; it is diagnosed using a multitude of primary and secondary symptoms. Diagnosis and treatment for each individual depend on the severity of the symptoms; therefore, there is a need for accurate, personalised assessments. However, the process can be both time-consuming and subjective; hence, there is a motivation to explore automated methods that can offer consistent diagnosis and precise symptom assessments, thereby complementing the work of healthcare practitioners. Machine Learning has demonstrated impressive capabilities across numerous domains, including medicine; the use of Machine Learning in patient assessment holds great promise for healthcare professionals and patients alike, as it can lead to more consistent and accurate symptom estimation. This survey aims to review methodologies that utilise Machine Learning for the diagnosis and assessment of schizophrenia. Contrary to previous reviews that primarily focused on binary classification, this work recognises the complexity of the condition and instead offers an overview of Machine Learning methods designed for fine-grained symptom estimation. We cover multiple modalities, namely Medical Imaging, Electroencephalograms and Audio-Visual, as the illness symptoms can manifest themselves both in a patient's pathology and behaviour. Finally, we analyse the datasets and methodologies used in the studies and identify trends, gaps, as well as opportunities for future research.
[ { "created": "Wed, 25 Oct 2023 14:42:58 GMT", "version": "v1" } ]
2023-10-26
[ [ "Foteinopoulou", "Niki Maria", "" ], [ "Patras", "Ioannis", "" ] ]
Schizophrenia is a severe yet treatable mental disorder; it is diagnosed using a multitude of primary and secondary symptoms. Diagnosis and treatment for each individual depend on the severity of the symptoms; therefore, there is a need for accurate, personalised assessments. However, the process can be both time-consuming and subjective; hence, there is a motivation to explore automated methods that can offer consistent diagnosis and precise symptom assessments, thereby complementing the work of healthcare practitioners. Machine Learning has demonstrated impressive capabilities across numerous domains, including medicine; the use of Machine Learning in patient assessment holds great promise for healthcare professionals and patients alike, as it can lead to more consistent and accurate symptom estimation. This survey aims to review methodologies that utilise Machine Learning for the diagnosis and assessment of schizophrenia. Contrary to previous reviews that primarily focused on binary classification, this work recognises the complexity of the condition and instead offers an overview of Machine Learning methods designed for fine-grained symptom estimation. We cover multiple modalities, namely Medical Imaging, Electroencephalograms and Audio-Visual, as the illness symptoms can manifest themselves both in a patient's pathology and behaviour. Finally, we analyse the datasets and methodologies used in the studies and identify trends, gaps, as well as opportunities for future research.
2109.02625
Guande Wu
Guande Wu, Jianzhe Lin, Claudio T. Silva
ERA: Entity Relationship Aware Video Summarization with Wasserstein GAN
8 pages, 3 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video summarization aims to simplify large-scale video browsing by generating concise, short summaries that differ from but well represent the original video. Due to the scarcity of video annotations, recent progress in video summarization concentrates on unsupervised methods, among which GAN-based methods are most prevalent. This type of method includes a summarizer and a discriminator. The summarized video from the summarizer will be assumed as the final output, only if the video reconstructed from this summary cannot be discriminated from the original one by the discriminator. The primary problems of these GAN-based methods are twofold. First, the summarized video in this way is a subset of the original video that has low redundancy and contains high-priority events/entities. This summarization criterion is not enough. Second, the training of the GAN framework is not stable. This paper proposes a novel Entity Relationship Aware video summarization method (ERA) to address the above problems. To be more specific, we introduce an Adversarial Spatio-Temporal network to construct the relationship among entities, which we think should also be given high priority in the summarization. The GAN training problem is solved by introducing the Wasserstein GAN and two newly proposed video patch/score-sum losses. In addition, the score-sum loss can also relieve the model's sensitivity to varying video lengths, which is an inherent problem for most current video analysis tasks. Our method substantially lifts the performance on the target benchmark datasets and exceeds the current leaderboard Rank 1 state-of-the-art method CSNet (2.1% F1-score increase on TVSum and 3.1% F1-score increase on SumMe). We hope our straightforward yet effective approach will shed some light on future research in unsupervised video summarization.
[ { "created": "Mon, 6 Sep 2021 17:46:59 GMT", "version": "v1" } ]
2021-09-07
[ [ "Wu", "Guande", "" ], [ "Lin", "Jianzhe", "" ], [ "Silva", "Claudio T.", "" ] ]
Video summarization aims to simplify large-scale video browsing by generating concise, short summaries that differ from but well represent the original video. Due to the scarcity of video annotations, recent progress in video summarization concentrates on unsupervised methods, among which GAN-based methods are most prevalent. This type of method includes a summarizer and a discriminator. The summarized video from the summarizer will be assumed as the final output, only if the video reconstructed from this summary cannot be discriminated from the original one by the discriminator. The primary problems of these GAN-based methods are twofold. First, the summarized video in this way is a subset of the original video that has low redundancy and contains high-priority events/entities. This summarization criterion is not enough. Second, the training of the GAN framework is not stable. This paper proposes a novel Entity Relationship Aware video summarization method (ERA) to address the above problems. To be more specific, we introduce an Adversarial Spatio-Temporal network to construct the relationship among entities, which we think should also be given high priority in the summarization. The GAN training problem is solved by introducing the Wasserstein GAN and two newly proposed video patch/score-sum losses. In addition, the score-sum loss can also relieve the model's sensitivity to varying video lengths, which is an inherent problem for most current video analysis tasks. Our method substantially lifts the performance on the target benchmark datasets and exceeds the current leaderboard Rank 1 state-of-the-art method CSNet (2.1% F1-score increase on TVSum and 3.1% F1-score increase on SumMe). We hope our straightforward yet effective approach will shed some light on future research in unsupervised video summarization.
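The Wasserstein GAN component invoked above is standard; a gradient-penalty variant is sketched below for concreteness. The summarizer-specific video patch and score-sum losses are not reproduced, and `critic` is an assumed nn.Module scoring (reconstructed) videos.

```python
import torch

def critic_loss(critic, real, fake, gp_weight=10.0):
    """WGAN-GP critic objective: Wasserstein estimate plus gradient penalty."""
    w_dist = critic(fake).mean() - critic(real).mean()  # critic minimises this
    # Gradient penalty on random interpolates between real and fake samples.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(mix).sum(), mix, create_graph=True)[0]
    penalty = ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
    return w_dist + gp_weight * penalty

def generator_loss(critic, fake):
    return -critic(fake).mean()
```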
2209.07302
Xiaomin Li
Jianrong Wang, Xiaomin Li, Xuewei Li, Mei Yu, Qiang Fang, Li Liu
MVNet: Memory Assistance and Vocal Reinforcement Network for Speech Enhancement
ICONIP 2022
null
null
null
cs.SD eess.AS
http://creativecommons.org/publicdomain/zero/1.0/
Speech enhancement improves speech quality and promotes the performance of various downstream tasks. However, most current speech enhancement work has mainly been devoted to improving the performance of downstream automatic speech recognition (ASR), while only a relatively small amount of work has focused on the automatic speaker verification (ASV) task. In this work, we propose MVNet, consisting of a memory assistance module, which improves the performance of downstream ASR, and a vocal reinforcement module, which boosts the performance of ASV. In addition, we design a new loss function to improve speaker vocal similarity. Experimental results on the Libri2mix dataset show that our method outperforms baseline methods in several metrics, including speech quality, intelligibility, and speaker vocal similarity.
[ { "created": "Thu, 15 Sep 2022 13:57:48 GMT", "version": "v1" } ]
2022-09-16
[ [ "Wang", "Jianrong", "" ], [ "Li", "Xiaomin", "" ], [ "Li", "Xuewei", "" ], [ "Yu", "Mei", "" ], [ "Fang", "Qiang", "" ], [ "Liu", "Li", "" ] ]
Speech enhancement improves speech quality and promotes the performance of various downstream tasks. However, most current speech enhancement work has mainly been devoted to improving the performance of downstream automatic speech recognition (ASR), while only a relatively small amount of work has focused on the automatic speaker verification (ASV) task. In this work, we propose MVNet, consisting of a memory assistance module, which improves the performance of downstream ASR, and a vocal reinforcement module, which boosts the performance of ASV. In addition, we design a new loss function to improve speaker vocal similarity. Experimental results on the Libri2mix dataset show that our method outperforms baseline methods in several metrics, including speech quality, intelligibility, and speaker vocal similarity.
2301.02711
Thomas Thuesen Enevoldsen
Thomas T. Enevoldsen, Mogens Blanke, Roberto Galeazzi
Autonomy for Ferries and Harbour Buses: a Collision Avoidance Perspective
Accepted for presentation at the IFAC World Congress 2023
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper provides a collision avoidance perspective on maritime autonomy, in the shift towards Maritime Autonomous Surface Ships (MASS). In particular, the paper presents the developments related to the Greenhopper, Denmark's first autonomous harbour bus. The collision and grounding avoidance scheme, called the Short Horizon Planner (SHP), is described and discussed in detail. Furthermore, the required autonomy stack for facilitating safe and rule-compliant collision avoidance is presented. The inherent difficulties related to adhering to the COLREGs are outlined, highlighting some of the operational constraints and challenges within the space of autonomous ferries and harbour buses. Finally, collision and grounding avoidance is demonstrated using a simulation of the whole Greenhopper autonomy stack.
[ { "created": "Fri, 6 Jan 2023 20:57:47 GMT", "version": "v1" }, { "created": "Thu, 20 Apr 2023 09:05:48 GMT", "version": "v2" } ]
2023-04-21
[ [ "Enevoldsen", "Thomas T.", "" ], [ "Blanke", "Mogens", "" ], [ "Galeazzi", "Roberto", "" ] ]
This paper provides a collision avoidance perspective on maritime autonomy, in the shift towards Maritime Autonomous Surface Ships (MASS). In particular, the paper presents the developments related to the Greenhopper, Denmark's first autonomous harbour bus. The collision and grounding avoidance scheme, called the Short Horizon Planner (SHP), is described and discussed in detail. Furthermore, the required autonomy stack for facilitating safe and rule-compliant collision avoidance is presented. The inherent difficulties related to adhering to the COLREGs are outlined, highlighting some of the operational constraints and challenges within the space of autonomous ferries and harbour buses. Finally, collision and grounding avoidance is demonstrated using a simulation of the whole Greenhopper autonomy stack.
2110.05747
Khizar Hayat
Tanzila Qazi, Mushtaq Ali and Khizar Hayat
Seamless Copy Move Manipulation in Digital Images
9 pages and 9 figures (most having subfigures)
J. Imaging 2022, 8(3), 69
10.3390/jimaging8030069
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The importance and relevance of digital image forensics have attracted researchers to establish different techniques for creating as well as detecting forgeries. The core category in passive image forgery is copy-move image forgery, which affects the originality of an image by applying a different transformation. In this paper, a frequency-domain image manipulation method is presented. The method exploits the localized nature of the discrete wavelet transform (DWT) to get hold of the region of the host image to be manipulated. Both the patch and host image are subjected to DWT at the same level $l$ to get $3l + 1$ sub-bands, and each sub-band of the patch is pasted to the identified region in the corresponding sub-band of the host image. The resultant manipulated host sub-bands are then subjected to inverse DWT to get the final manipulated host image. The proposed method shows good resistance against detection by two frequency-domain forgery detection methods from the literature. The purpose of this research work is to create the forgery and highlight the need to produce forgery detection methods that are robust against malicious copy-move forgery.
[ { "created": "Tue, 12 Oct 2021 05:35:26 GMT", "version": "v1" } ]
2022-03-11
[ [ "Qazi", "Tanzila", "" ], [ "Ali", "Mushtaq", "" ], [ "Hayat", "Khizar", "" ] ]
The importance and relevance of digital image forensics have attracted researchers to establish different techniques for creating as well as detecting forgeries. The core category in passive image forgery is copy-move image forgery, which affects the originality of an image by applying a different transformation. In this paper, a frequency-domain image manipulation method is presented. The method exploits the localized nature of the discrete wavelet transform (DWT) to get hold of the region of the host image to be manipulated. Both the patch and host image are subjected to DWT at the same level $l$ to get $3l + 1$ sub-bands, and each sub-band of the patch is pasted to the identified region in the corresponding sub-band of the host image. The resultant manipulated host sub-bands are then subjected to inverse DWT to get the final manipulated host image. The proposed method shows good resistance against detection by two frequency-domain forgery detection methods from the literature. The purpose of this research work is to create the forgery and highlight the need to produce forgery detection methods that are robust against malicious copy-move forgery.
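The sub-band pasting step can be sketched with PyWavelets: decompose both patch and host to the same level $l$, overwrite the target region in each of the $3l + 1$ host sub-bands with the corresponding patch sub-band, and invert. The wavelet choice, decomposition level, and the assumption that the paste offsets are multiples of $2^l$ are illustrative.

```python
import numpy as np
import pywt

def dwt_paste(host, patch, row, col, wavelet="haar", level=2):
    """Paste every DWT sub-band of `patch` into `host` at (row, col)."""
    h = pywt.wavedec2(host, wavelet, level=level)
    p = pywt.wavedec2(patch, wavelet, level=level)
    for i in range(len(h)):
        # Index 0 is the level-`level` approximation; index i >= 1 holds the
        # (cH, cV, cD) details of level `level - i + 1`.
        scale = 2 ** level if i == 0 else 2 ** (level - i + 1)
        r, c = row // scale, col // scale
        hbands = (h[i],) if i == 0 else h[i]
        pbands = (p[i],) if i == 0 else p[i]
        for hb, pb in zip(hbands, pbands):
            hb[r:r + pb.shape[0], c:c + pb.shape[1]] = pb
    return pywt.waverec2(h, wavelet)

# Illustrative usage with random grayscale images (256x256 host, 64x64 patch).
host = np.random.rand(256, 256)
patch = np.random.rand(64, 64)
forged = dwt_paste(host, patch, row=64, col=64)
```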
2006.16471
Mazin Hnewa
Mazin Hnewa and Hayder Radha
Object Detection Under Rainy Conditions for Autonomous Vehicles: A Review of State-of-the-Art and Emerging Techniques
null
IEEE Signal Processing Magazine, vol. 38, no. 1, pp. 53-67, Jan. 2021
10.1109/MSP.2020.2984801
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advanced automotive active-safety systems, in general, and autonomous vehicles, in particular, rely heavily on visual data to classify and localize objects such as pedestrians, traffic signs and lights, and other nearby cars, to assist the corresponding vehicles in maneuvering safely in their environments. However, the performance of object detection methods could degrade rather significantly under challenging weather scenarios including rainy conditions. Despite major advancements in the development of deraining approaches, the impact of rain on object detection has largely been understudied, especially in the context of autonomous driving. The main objective of this paper is to present a tutorial on state-of-the-art and emerging techniques that represent leading candidates for mitigating the influence of rainy conditions on an autonomous vehicle's ability to detect objects. Our goal includes surveying and analyzing the performance of object detection methods trained and tested using visual data captured under clear and rainy conditions. Moreover, we survey and evaluate the efficacy and limitations of leading deraining approaches, deep-learning based domain adaptation, and image translation frameworks that are being considered for addressing the problem of object detection under rainy conditions. Experimental results of a variety of the surveyed techniques are presented as part of this tutorial.
[ { "created": "Tue, 30 Jun 2020 02:05:10 GMT", "version": "v1" }, { "created": "Wed, 8 Jul 2020 19:06:43 GMT", "version": "v2" }, { "created": "Fri, 10 Jul 2020 19:51:52 GMT", "version": "v3" }, { "created": "Fri, 12 Feb 2021 02:16:15 GMT", "version": "v4" } ]
2021-02-17
[ [ "Hnewa", "Mazin", "" ], [ "Radha", "Hayder", "" ] ]
Advanced automotive active-safety systems, in general, and autonomous vehicles, in particular, rely heavily on visual data to classify and localize objects such as pedestrians, traffic signs and lights, and other nearby cars, to assist the corresponding vehicles in maneuvering safely in their environments. However, the performance of object detection methods could degrade rather significantly under challenging weather scenarios including rainy conditions. Despite major advancements in the development of deraining approaches, the impact of rain on object detection has largely been understudied, especially in the context of autonomous driving. The main objective of this paper is to present a tutorial on state-of-the-art and emerging techniques that represent leading candidates for mitigating the influence of rainy conditions on an autonomous vehicle's ability to detect objects. Our goal includes surveying and analyzing the performance of object detection methods trained and tested using visual data captured under clear and rainy conditions. Moreover, we survey and evaluate the efficacy and limitations of leading deraining approaches, deep-learning based domain adaptation, and image translation frameworks that are being considered for addressing the problem of object detection under rainy conditions. Experimental results of a variety of the surveyed techniques are presented as part of this tutorial.