Dataset columns (name, type, observed length range):

  id              string   length 9 to 10
  submitter       string   length 1 to 64
  authors         string   length 4 to 20.7k
  title           string   length 4 to 246
  comments        string   length 1 to 523
  journal-ref     string   length 4 to 404
  doi             string   length 11 to 153
  report-no       string   length 2 to 254
  categories      string   length 5 to 98
  license         string   9 distinct values
  orig_abstract   string   length 14 to 3.35k
  versions        list     length 1 to 60
  update_date     string   length 10 (fixed)
  authors_parsed  list     length 1 to 1.35k
  abstract        string   length 11 to 3.34k

Records follow, one field per line, in the column order above.
2308.06480
Yunshan Ma
Yunshan Ma, Chenchen Ye, Zijian Wu, Xiang Wang, Yixin Cao, Tat-Seng Chua
Context-aware Event Forecasting via Graph Disentanglement
KDD 2023, 9 pages, 7 figures, 4 tables
null
10.1145/3580305.3599285
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Event forecasting has been a demanding and challenging task throughout human history. It plays a pivotal role in crisis alerting and disaster prevention across many aspects of society. The task of event forecasting aims to model the relational and temporal patterns in historical events and to forecast what will happen in the future. Most existing studies on event forecasting formulate it as a problem of link prediction on temporal event graphs. However, such a purely structured formulation suffers from two main limitations: 1) most events fall into general and high-level types in the event ontology, and therefore they tend to be coarse-grained and offer little utility, which inevitably harms forecasting accuracy; and 2) events defined by a fixed ontology are unable to retain out-of-ontology contextual information. To address these limitations, we propose a novel task of context-aware event forecasting, which incorporates auxiliary contextual information. First, the categorical context provides supplementary fine-grained information for the coarse-grained events. Second, and more importantly, the context provides additional information about the specific situation and condition, which is crucial or even decisive for what will happen next. However, it is challenging to properly integrate context into the event forecasting framework, given the complex patterns in the multi-context scenario. To this end, we design a novel framework named Separation and Collaboration Graph Disentanglement (SeCoGD for short) for context-aware event forecasting. Since no dataset is available for this novel task, we construct three large-scale datasets based on GDELT. Experimental results demonstrate that our model outperforms a range of SOTA methods.
[ { "created": "Sat, 12 Aug 2023 06:23:41 GMT", "version": "v1" } ]
2023-08-15
[ [ "Ma", "Yunshan", "" ], [ "Ye", "Chenchen", "" ], [ "Wu", "Zijian", "" ], [ "Wang", "Xiang", "" ], [ "Cao", "Yixin", "" ], [ "Chua", "Tat-Seng", "" ] ]
Event forecasting has been a demanding and challenging task throughout human history. It plays a pivotal role in crisis alerting and disaster prevention across many aspects of society. The task of event forecasting aims to model the relational and temporal patterns in historical events and to forecast what will happen in the future. Most existing studies on event forecasting formulate it as a problem of link prediction on temporal event graphs. However, such a purely structured formulation suffers from two main limitations: 1) most events fall into general and high-level types in the event ontology, and therefore they tend to be coarse-grained and offer little utility, which inevitably harms forecasting accuracy; and 2) events defined by a fixed ontology are unable to retain out-of-ontology contextual information. To address these limitations, we propose a novel task of context-aware event forecasting, which incorporates auxiliary contextual information. First, the categorical context provides supplementary fine-grained information for the coarse-grained events. Second, and more importantly, the context provides additional information about the specific situation and condition, which is crucial or even decisive for what will happen next. However, it is challenging to properly integrate context into the event forecasting framework, given the complex patterns in the multi-context scenario. To this end, we design a novel framework named Separation and Collaboration Graph Disentanglement (SeCoGD for short) for context-aware event forecasting. Since no dataset is available for this novel task, we construct three large-scale datasets based on GDELT. Experimental results demonstrate that our model outperforms a range of SOTA methods.
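For readers unfamiliar with the task formulation above, the following minimal Python sketch shows what a context-aware event record might look like: a (subject, relation, object, timestamp) quadruple from a temporal event graph, augmented with out-of-ontology context text. The field names and values are hypothetical illustrations, not the paper's SeCoGD data schema.

```python
# Illustrative only (not the paper's code or schema): a GDELT-style event as a
# temporal-graph quadruple, plus free-text context that a fixed ontology cannot capture.
from dataclasses import dataclass

@dataclass
class ContextualEvent:
    subject: str    # actor entity
    relation: str   # CAMEO-style event type (often coarse-grained)
    obj: str        # target entity
    timestamp: str  # event time
    context: str    # out-of-ontology text, e.g. the source news snippet

event = ContextualEvent(
    subject="Government_A",
    relation="Make_Statement",          # coarse ontology type
    obj="Opposition_Group_B",
    timestamp="2023-08-12",
    context="Statement issued after border clashes over disputed territory.",
)

# Context-aware forecasting then asks: given past ContextualEvents, predict the
# missing object in a future (subject, relation, ?, timestamp) query.
print(event)
```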
2104.05091
Daniel Ting
Daniel Ting
Simple, Optimal Algorithms for Random Sampling Without Replacement
null
null
null
null
cs.DS stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Consider the fundamental problem of drawing a simple random sample of size k without replacement from [n] := {1, . . . , n}. Although a number of classical algorithms exist for this problem, we construct algorithms that are even simpler, easier to implement, and have optimal space and time complexity.
[ { "created": "Sun, 11 Apr 2021 20:06:13 GMT", "version": "v1" } ]
2021-04-13
[ [ "Ting", "Daniel", "" ] ]
Consider the fundamental problem of drawing a simple random sample of size k without replacement from [n] := {1, . . . , n}. Although a number of classical algorithms exist for this problem, we construct algorithms that are even simpler, easier to implement, and have optimal space and time complexity.
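The abstract does not spell out the algorithms themselves; as a point of reference, here is a minimal sketch of one classical simple method for the same problem, Robert Floyd's sampling algorithm, which already uses O(k) time and space in expectation. It is illustrative only and is not claimed to be the paper's construction.

```python
import random

def floyd_sample(n, k):
    """Draw a simple random sample of size k from {1, ..., n} without replacement
    using Robert Floyd's classical algorithm."""
    assert 0 <= k <= n
    sample = set()
    for j in range(n - k + 1, n + 1):
        t = random.randint(1, j)        # uniform in {1, ..., j}
        # If t was already chosen, take j instead; this keeps all k-subsets equally likely.
        sample.add(j if t in sample else t)
    return sample

print(sorted(floyd_sample(1000, 10)))
```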
1609.05561
Ricardo Fabbri
Anil Usumezbas and Ricardo Fabbri and Benjamin B. Kimia
From Multiview Image Curves to 3D Drawings
Expanded ECCV 2016 version with tweaked figures and including an overview of the supplementary material available at multiview-3d-drawing.sourceforge.net
Lecture Notes in Computer Science, 9908, pp 70-87, september 2016
10.1007/978-3-319-46493-0_5
null
cs.CV cs.CG cs.GR cs.RO
http://creativecommons.org/licenses/by/4.0/
Reconstructing 3D scenes from multiple views has made impressive strides in recent years, chiefly by correlating isolated feature points, intensity patterns, or curvilinear structures. In the general setting - without controlled acquisition, abundant texture, curves and surfaces following specific models or limiting scene complexity - most methods produce unorganized point clouds, meshes, or voxel representations, with some exceptions producing unorganized clouds of 3D curve fragments. Ideally, many applications require structured representations of curves, surfaces and their spatial relationships. This paper presents a step in this direction by formulating an approach that combines 2D image curves into a collection of 3D curves, with topological connectivity between them represented as a 3D graph. This results in a 3D drawing, which is complementary to surface representations in the same sense as a 3D scaffold complements a tent taut over it. We evaluate our results against truth on synthetic and real datasets.
[ { "created": "Sun, 18 Sep 2016 22:20:35 GMT", "version": "v1" } ]
2016-09-20
[ [ "Usumezbas", "Anil", "" ], [ "Fabbri", "Ricardo", "" ], [ "Kimia", "Benjamin B.", "" ] ]
Reconstructing 3D scenes from multiple views has made impressive strides in recent years, chiefly by correlating isolated feature points, intensity patterns, or curvilinear structures. In the general setting - without controlled acquisition, abundant texture, curves and surfaces following specific models or limiting scene complexity - most methods produce unorganized point clouds, meshes, or voxel representations, with some exceptions producing unorganized clouds of 3D curve fragments. Ideally, many applications require structured representations of curves, surfaces and their spatial relationships. This paper presents a step in this direction by formulating an approach that combines 2D image curves into a collection of 3D curves, with topological connectivity between them represented as a 3D graph. This results in a 3D drawing, which is complementary to surface representations in the same sense as a 3D scaffold complements a tent taut over it. We evaluate our results against truth on synthetic and real datasets.
2407.11861
Muzhaffar Hazman
Muzhaffar Hazman, Susan McKeever, Josephine Griffith
What Makes a Meme a Meme? Identifying Memes for Memetics-Aware Dataset Creation
Accepted for Publication at AAAI-ICWSM 2025
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Warning: This paper contains memes that may be offensive to some readers. Multimodal Internet Memes are now a ubiquitous fixture in online discourse. One strand of meme-based research is the classification of memes according to various affects, such as sentiment and hate, supported by manually compiled meme datasets. Understanding the unique characteristics of memes is crucial for meme classification. Unlike other user-generated content, memes spread via memetics, i.e. the process by which memes are imitated and transformed into symbols used to create new memes. In effect, there exists an ever-evolving pool of visual and linguistic symbols that underpin meme culture and are crucial to interpreting the meaning of individual memes. The current approach of training supervised learning models on static datasets, without taking memetics into account, limits the depth and accuracy of meme interpretation. We argue that meme datasets must contain genuine memes, as defined via memetics, so that effective meme classifiers can be built. In this work, we develop a meme identification protocol which distinguishes memes from non-memetic content by recognising the memetics within it. We apply our protocol to random samplings of the leading 7 meme classification datasets and observe that more than half (50.4%) of the evaluated samples were found to contain no signs of memetics. Our work also provides a meme typology grounded in memetics, providing the basis for more effective approaches to the interpretation of memes and the creation of meme datasets.
[ { "created": "Tue, 16 Jul 2024 15:48:36 GMT", "version": "v1" } ]
2024-07-17
[ [ "Hazman", "Muzhaffar", "" ], [ "McKeever", "Susan", "" ], [ "Griffith", "Josephine", "" ] ]
Warning: This paper contains memes that may be offensive to some readers. Multimodal Internet Memes are now a ubiquitous fixture in online discourse. One strand of meme-based research is the classification of memes according to various affects, such as sentiment and hate, supported by manually compiled meme datasets. Understanding the unique characteristics of memes is crucial for meme classification. Unlike other user-generated content, memes spread via memetics, i.e. the process by which memes are imitated and transformed into symbols used to create new memes. In effect, there exists an ever-evolving pool of visual and linguistic symbols that underpin meme culture and are crucial to interpreting the meaning of individual memes. The current approach of training supervised learning models on static datasets, without taking memetics into account, limits the depth and accuracy of meme interpretation. We argue that meme datasets must contain genuine memes, as defined via memetics, so that effective meme classifiers can be built. In this work, we develop a meme identification protocol which distinguishes memes from non-memetic content by recognising the memetics within it. We apply our protocol to random samplings of the leading 7 meme classification datasets and observe that more than half (50.4%) of the evaluated samples were found to contain no signs of memetics. Our work also provides a meme typology grounded in memetics, providing the basis for more effective approaches to the interpretation of memes and the creation of meme datasets.
1911.06985
Dominik K\"oppl
Hideo Bannai and Juha K\"arkk\"ainen and Dominik K\"oppl and Marcin Pi\c{a}tkowski
Constructing the Bijective and the Extended Burrows-Wheeler Transform in Linear Time
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Burrows-Wheeler transform (BWT) is a permutation whose applications are prevalent in data compression and text indexing. The bijective BWT (BBWT) is a bijective variant of it. Although it is known that the BWT can be constructed in linear time for integer alphabets by using a linear time suffix array construction algorithm, it was up to now only conjectured that the BBWT can also be constructed in linear time. We confirm this conjecture by proposing a construction algorithm that is based on SAIS, improving the best known result of $O(n \lg n /\lg \lg n)$ time to linear.
[ { "created": "Sat, 16 Nov 2019 08:04:25 GMT", "version": "v1" }, { "created": "Thu, 22 Apr 2021 06:48:12 GMT", "version": "v2" } ]
2021-04-23
[ [ "Bannai", "Hideo", "" ], [ "Kärkkäinen", "Juha", "" ], [ "Köppl", "Dominik", "" ], [ "Picatkowski", "Marcin", "" ] ]
The Burrows-Wheeler transform (BWT) is a permutation whose applications are prevalent in data compression and text indexing. The bijective BWT (BBWT) is a bijective variant of it. Although it is known that the BWT can be constructed in linear time for integer alphabets by using a linear time suffix array construction algorithm, it was up to now only conjectured that the BBWT can also be constructed in linear time. We confirm this conjecture by proposing a construction algorithm that is based on SAIS, improving the best known result of $O(n \lg n /\lg \lg n)$ time to linear.
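As background for the abstract above, the short sketch below computes the ordinary BWT by sorting all rotations. It only illustrates what the transform produces; the paper's contribution, a linear-time SAIS-based construction of the bijective and extended variants, is not implemented here.

```python
def bwt_naive(text, sentinel="$"):
    """Compute the ordinary Burrows-Wheeler transform by sorting all rotations
    of text + sentinel (an O(n^2 log n) illustration, not a linear-time method)."""
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt_naive("banana"))  # -> "annb$aa"
```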
1909.06251
Eric Horton
Eric Horton, Chris Parnin
V2: Fast Detection of Configuration Drift in Python
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Code snippets are prevalent, but are hard to reuse because they often lack an accompanying environment configuration. Most are not actively maintained, allowing for drift between the most recent possible configuration and the code snippet as the snippet becomes out-of-date over time. Recent work has identified the problem of validating and detecting out-of-date code snippets as the most important consideration for code reuse. However, determining if a snippet is correct, but simply out-of-date, is a non-trivial task. In the best case, breaking changes are well documented, allowing developers to manually determine when a code snippet contains an out-of-date API usage. In the worst case, determining if and when a breaking change was made requires an exhaustive search through previous dependency versions. We present V2, a strategy for determining if a code snippet is out-of-date by detecting discrete instances of configuration drift, where the snippet uses an API which has since undergone a breaking change. Each instance of configuration drift is classified by a failure encountered during validation and a configuration patch, consisting of dependency version changes, which fixes the underlying fault. V2 uses feedback-directed search to explore the possible configuration space for a code snippet, reducing the number of potential environment configurations that need to be validated. When run on a corpus of public Python snippets from prior research, V2 identifies 248 instances of configuration drift.
[ { "created": "Fri, 13 Sep 2019 14:25:06 GMT", "version": "v1" } ]
2019-09-16
[ [ "Horton", "Eric", "" ], [ "Parnin", "Chris", "" ] ]
Code snippets are prevalent, but are hard to reuse because they often lack an accompanying environment configuration. Most are not actively maintained, allowing for drift between the most recent possible configuration and the code snippet as the snippet becomes out-of-date over time. Recent work has identified the problem of validating and detecting out-of-date code snippets as the most important consideration for code reuse. However, determining if a snippet is correct, but simply out-of-date, is a non-trivial task. In the best case, breaking changes are well documented, allowing developers to manually determine when a code snippet contains an out-of-date API usage. In the worst case, determining if and when a breaking change was made requires an exhaustive search through previous dependency versions. We present V2, a strategy for determining if a code snippet is out-of-date by detecting discrete instances of configuration drift, where the snippet uses an API which has since undergone a breaking change. Each instance of configuration drift is classified by a failure encountered during validation and a configuration patch, consisting of dependency version changes, which fixes the underlying fault. V2 uses feedback-directed search to explore the possible configuration space for a code snippet, reducing the number of potential environment configurations that need to be validated. When run on a corpus of public Python snippets from prior research, V2 identifies 248 instances of configuration drift.
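As a rough illustration of feedback-directed configuration search (not V2's actual implementation or API), the sketch below changes only the dependency implicated by the latest validation failure, pruning the space of candidate configurations. The `validate` callable, `versions_by_pkg` map, and the toy demo are hypothetical stand-ins.

```python
def search_configuration(validate, versions_by_pkg, initial_config):
    """Feedback-directed search over dependency versions (illustrative sketch).
    validate(config) returns (ok, failing_pkg): whether the snippet ran and,
    if it failed, which dependency the failure implicates."""
    config = dict(initial_config)
    tried = set(config.items())
    while True:
        ok, failing_pkg = validate(config)
        if ok:
            return config                                  # working configuration found
        candidates = [v for v in versions_by_pkg.get(failing_pkg, [])
                      if (failing_pkg, v) not in tried]
        if not candidates:
            return None                                    # no fix among the candidates
        config[failing_pkg] = candidates[-1]               # e.g. try an older release next
        tried.add((failing_pkg, config[failing_pkg]))

# Toy demo: pretend the snippet only runs with requests 2.25.1.
def toy_validate(cfg):
    ok = cfg.get("requests") == "2.25.1"
    return ok, (None if ok else "requests")

print(search_configuration(
    toy_validate,
    {"requests": ["2.31.0", "2.28.0", "2.25.1"]},
    {"requests": "2.31.0"},
))
```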
2310.12337
Luke Geeson
Luke Geeson, Lee Smith
Compiler Testing With Relaxed Memory Models
12 pages, Accepted to IEEE/ACM International Symposium on Code Generation and Optimization
null
null
null
cs.PL cs.AR cs.SE
http://creativecommons.org/licenses/by-sa/4.0/
Finding bugs is key to the correctness of compilers in wide use today. If the behaviour of a compiled program, as allowed by its architecture memory model, is not a behaviour of the source program under its source model, then there is a bug. This holds for all programs, but we focus on concurrency bugs that occur only with two or more threads of execution. We focus on testing techniques that detect such bugs in C/C++ compilers. We seek a testing technique that automatically covers concurrency bugs up to fixed bounds on program sizes and that scales to find bugs in compiled programs with many lines of code. Otherwise, a testing technique can miss bugs. Unfortunately, the state-of-the-art techniques are yet to satisfy all of these properties. We present the T\'el\'echat compiler testing tool for concurrent programs. T\'el\'echat compiles a concurrent C/C++ program and compares source and compiled program behaviours using source and architecture memory models. We make three claims: T\'el\'echat improves the state-of-the-art at finding bugs in code generation for multi-threaded execution, it is the first public description of a compiler testing tool for concurrency that is deployed in industry, and it is the first tool that takes a significant step towards the desired properties. We provide experimental evidence suggesting T\'el\'echat finds bugs missed by other state-of-the-art techniques, case studies indicating that T\'el\'echat satisfies the properties, and reports of our experience deploying T\'el\'echat in industry regression testing.
[ { "created": "Wed, 18 Oct 2023 21:24:26 GMT", "version": "v1" }, { "created": "Fri, 15 Dec 2023 17:02:39 GMT", "version": "v2" }, { "created": "Mon, 29 Jan 2024 20:38:43 GMT", "version": "v3" } ]
2024-01-31
[ [ "Geeson", "Luke", "" ], [ "Smith", "Lee", "" ] ]
Finding bugs is key to the correctness of compilers in wide use today. If the behaviour of a compiled program, as allowed by its architecture memory model, is not a behaviour of the source program under its source model, then there is a bug. This holds for all programs, but we focus on concurrency bugs that occur only with two or more threads of execution. We focus on testing techniques that detect such bugs in C/C++ compilers. We seek a testing technique that automatically covers concurrency bugs up to fixed bounds on program sizes and that scales to find bugs in compiled programs with many lines of code. Otherwise, a testing technique can miss bugs. Unfortunately, the state-of-the-art techniques are yet to satisfy all of these properties. We present the T\'el\'echat compiler testing tool for concurrent programs. T\'el\'echat compiles a concurrent C/C++ program and compares source and compiled program behaviours using source and architecture memory models. We make three claims: T\'el\'echat improves the state-of-the-art at finding bugs in code generation for multi-threaded execution, it is the first public description of a compiler testing tool for concurrency that is deployed in industry, and it is the first tool that takes a significant step towards the desired properties. We provide experimental evidence suggesting T\'el\'echat finds bugs missed by other state-of-the-art techniques, case studies indicating that T\'el\'echat satisfies the properties, and reports of our experience deploying T\'el\'echat in industry regression testing.
2203.14814
Raghul Parthipan
Raghul Parthipan, Hannah M. Christensen, J. Scott Hosking, Damon J. Wischik
Using Probabilistic Machine Learning to Better Model Temporal Patterns in Parameterizations: a case study with the Lorenz 96 model
Submitted to Geoscientific Model Development (GMD). 26 pages, 10 figures. The manuscript was revised following helpful comments from the reviewers after rejection from the Journal of Advances in Modeling Earth Systems (JAMES). These included further experimental results and changes to the narrative, amongst other revisions. New version created to include grant numbers of funding bodies
null
null
null
cs.LG physics.ao-ph
http://creativecommons.org/licenses/by/4.0/
The modelling of small-scale processes is a major source of error in climate models, hindering the accuracy of low-cost models which must approximate such processes through parameterization. Red noise is essential to many operational parameterization schemes, helping model temporal correlations. We show how to build on the successes of red noise by combining the known benefits of stochasticity with machine learning. This is done using a physically-informed recurrent neural network within a probabilistic framework. Our model is competitive and often superior to both a bespoke baseline and an existing probabilistic machine learning approach (GAN) when applied to the Lorenz 96 atmospheric simulation. This is due to its superior ability to model temporal patterns compared to standard first-order autoregressive schemes. It also generalises to unseen scenarios. We evaluate across a number of metrics from the literature, and also discuss the benefits of using the probabilistic metric of hold-out likelihood.
[ { "created": "Mon, 28 Mar 2022 14:51:42 GMT", "version": "v1" }, { "created": "Sat, 3 Sep 2022 23:56:19 GMT", "version": "v2" }, { "created": "Fri, 9 Sep 2022 10:17:37 GMT", "version": "v3" }, { "created": "Mon, 12 Sep 2022 11:01:05 GMT", "version": "v4" } ]
2022-09-13
[ [ "Parthipan", "Raghul", "" ], [ "Christensen", "Hannah M.", "" ], [ "Hosking", "J. Scott", "" ], [ "Wischik", "Damon J.", "" ] ]
The modelling of small-scale processes is a major source of error in climate models, hindering the accuracy of low-cost models which must approximate such processes through parameterization. Red noise is essential to many operational parameterization schemes, helping model temporal correlations. We show how to build on the successes of red noise by combining the known benefits of stochasticity with machine learning. This is done using a physically-informed recurrent neural network within a probabilistic framework. Our model is competitive and often superior to both a bespoke baseline and an existing probabilistic machine learning approach (GAN) when applied to the Lorenz 96 atmospheric simulation. This is due to its superior ability to model temporal patterns compared to standard first-order autoregressive schemes. It also generalises to unseen scenarios. We evaluate across a number of metrics from the literature, and also discuss the benefits of using the probabilistic metric of hold-out likelihood.
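For context, the "red noise" baseline mentioned above is a first-order autoregressive (AR(1)) process. The sketch below (assuming NumPy) generates such a temporally correlated stochastic term; it is the standard baseline scheme, not the paper's probabilistic recurrent neural network.

```python
import numpy as np

def ar1_red_noise(n_steps, phi=0.9, sigma=0.3, seed=0):
    """First-order autoregressive (red-noise) process:
    e_{t+1} = phi * e_t + sigma * sqrt(1 - phi**2) * w_t, with w_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    e = np.zeros(n_steps)
    for t in range(n_steps - 1):
        e[t + 1] = phi * e[t] + sigma * np.sqrt(1 - phi**2) * rng.standard_normal()
    return e

print(ar1_red_noise(5))
```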
2211.02348
Nasim Rahaman
Nasim Rahaman and Martin Weiss and Frederik Tr\"auble and Francesco Locatello and Alexandre Lacoste and Yoshua Bengio and Chris Pal and Li Erran Li and Bernhard Sch\"olkopf
A General Purpose Neural Architecture for Geospatial Systems
Presented at AI + HADR Workshop at NeurIPS 2022
null
null
null
cs.LG cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Geospatial Information Systems are used by researchers and Humanitarian Assistance and Disaster Response (HADR) practitioners to support a wide variety of important applications. However, collaboration between these actors is difficult due to the heterogeneous nature of geospatial data modalities (e.g., multi-spectral images of various resolutions, timeseries, weather data) and diversity of tasks (e.g., regression of human activity indicators or detecting forest fires). In this work, we present a roadmap towards the construction of a general-purpose neural architecture (GPNA) with a geospatial inductive bias, pre-trained on large amounts of unlabelled earth observation data in a self-supervised manner. We envision how such a model may facilitate cooperation between members of the community. We show preliminary results on the first step of the roadmap, where we instantiate an architecture that can process a wide variety of geospatial data modalities and demonstrate that it can achieve competitive performance with domain-specific architectures on tasks relating to the U.N.'s Sustainable Development Goals.
[ { "created": "Fri, 4 Nov 2022 09:58:57 GMT", "version": "v1" } ]
2022-11-07
[ [ "Rahaman", "Nasim", "" ], [ "Weiss", "Martin", "" ], [ "Träuble", "Frederik", "" ], [ "Locatello", "Francesco", "" ], [ "Lacoste", "Alexandre", "" ], [ "Bengio", "Yoshua", "" ], [ "Pal", "Chris", "" ], [ "Li", "Li Erran", "" ], [ "Schölkopf", "Bernhard", "" ] ]
Geospatial Information Systems are used by researchers and Humanitarian Assistance and Disaster Response (HADR) practitioners to support a wide variety of important applications. However, collaboration between these actors is difficult due to the heterogeneous nature of geospatial data modalities (e.g., multi-spectral images of various resolutions, timeseries, weather data) and diversity of tasks (e.g., regression of human activity indicators or detecting forest fires). In this work, we present a roadmap towards the construction of a general-purpose neural architecture (GPNA) with a geospatial inductive bias, pre-trained on large amounts of unlabelled earth observation data in a self-supervised manner. We envision how such a model may facilitate cooperation between members of the community. We show preliminary results on the first step of the roadmap, where we instantiate an architecture that can process a wide variety of geospatial data modalities and demonstrate that it can achieve competitive performance with domain-specific architectures on tasks relating to the U.N.'s Sustainable Development Goals.
2005.13749
Shuangyi Wang
Shuangyi Wang, Xilong Hou, Richard Housden, Zengguang Hou, Davinder Singh, Kawal Rhode
IoT-based Remote Control Study of a Robotic Trans-esophageal Ultrasound Probe via LAN and 5G
9 pages, 5 figures, to be submitted to MICCAI ASMUS 2020 workshop
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A robotic trans-esophageal echocardiography (TEE) probe has been recently developed to address the problems with manual control in the X-ray environment when a conventional probe is used for interventional procedure guidance. However, the robot was exclusively to be used in local areas and the effectiveness of remote control has not been scientifically tested. In this study, we implemented an Internet-of-things (IoT)-based configuration to the TEE robot so the system can set up a local area network (LAN) or be configured to connect to an internet cloud over 5G. To investigate the remote control, backlash hysteresis effects were measured and analysed. A joystick-based device and a button-based gamepad were then employed and compared with the manual control in a target reaching experiment for the two steering axes. The results indicated different hysteresis curves for the left-right and up-down steering axes with the input wheel's deadbands found to be 15 deg and deg, respectively. Similar magnitudes of positioning errors at approximately 0.5 deg and maximum overshoots at around 2.5 deg were found when manually and robotically controlling the TEE probe. The amount of time to finish the task indicated a better performance using the button-based gamepad over joystick-based device, although both were worse than the manual control. It is concluded that the IoT-based remote control of the TEE probe is feasible and a trained user can accurately manipulate the probe. The main identified problem was the backlash hysteresis in the steering axes, which can result in continuous oscillations and overshoots.
[ { "created": "Thu, 28 May 2020 02:43:31 GMT", "version": "v1" } ]
2020-05-29
[ [ "Wang", "Shuangyi", "" ], [ "Hou", "Xilong", "" ], [ "Housden", "Richard", "" ], [ "Hou", "Zengguang", "" ], [ "Singh", "Davinder", "" ], [ "Rhode", "Kawal", "" ] ]
A robotic trans-esophageal echocardiography (TEE) probe has been recently developed to address the problems with manual control in the X-ray environment when a conventional probe is used for interventional procedure guidance. However, the robot was exclusively to be used in local areas and the effectiveness of remote control has not been scientifically tested. In this study, we implemented an Internet-of-things (IoT)-based configuration to the TEE robot so the system can set up a local area network (LAN) or be configured to connect to an internet cloud over 5G. To investigate the remote control, backlash hysteresis effects were measured and analysed. A joystick-based device and a button-based gamepad were then employed and compared with the manual control in a target reaching experiment for the two steering axes. The results indicated different hysteresis curves for the left-right and up-down steering axes with the input wheel's deadbands found to be 15 deg and deg, respectively. Similar magnitudes of positioning errors at approximately 0.5 deg and maximum overshoots at around 2.5 deg were found when manually and robotically controlling the TEE probe. The amount of time to finish the task indicated a better performance using the button-based gamepad over joystick-based device, although both were worse than the manual control. It is concluded that the IoT-based remote control of the TEE probe is feasible and a trained user can accurately manipulate the probe. The main identified problem was the backlash hysteresis in the steering axes, which can result in continuous oscillations and overshoots.
1512.02727
MohammadHossein Bateni
Kevin Aydin and MohammadHossein Bateni and Vahab Mirrokni
Distributed Balanced Partitioning via Linear Embedding
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Balanced partitioning is often a crucial first step in solving large-scale graph optimization problems, e.g., in some cases, a big graph can be chopped into pieces that fit on one machine to be processed independently before stitching the results together. In other cases, links between different parts may show up in the running time and/or network communications cost. We study a distributed balanced partitioning problem where the goal is to partition the vertices of a given graph into k pieces so as to minimize the total cut size. Our algorithm is composed of a few steps that are easily implementable in distributed computation frameworks. The algorithm first embeds nodes of the graph onto a line, and then processes nodes in a distributed manner guided by the linear embedding order. We examine various ways to find the first embedding, e.g., via a hierarchical clustering or Hilbert curves. Then we apply four different techniques including local swaps, minimum cuts on the boundaries of partitions, as well as contraction and dynamic programming. As our empirical study, we compare the above techniques with each other, and also to previous work in distributed graph algorithms, e.g., a label propagation method, FENNEL and Spinner. We report our results both on a private map graph and several public social networks, and show that our results beat previous distributed algorithms: e.g., compared to the label propagation algorithm, we report an improvement of 15-25% in the cut value. We also observe that our algorithms allow for scalable distributed implementation for any number of partitions. Finally, we apply our techniques for the Google Maps Driving Directions to minimize the number of multi-shard queries with the goal of saving in CPU usage. During live experiments, we observe an ~40% drop in the number of multi-shard queries when comparing our method with a standard geography-based method.
[ { "created": "Wed, 9 Dec 2015 02:44:51 GMT", "version": "v1" } ]
2015-12-10
[ [ "Aydin", "Kevin", "" ], [ "Bateni", "MohammadHossein", "" ], [ "Mirrokni", "Vahab", "" ] ]
Balanced partitioning is often a crucial first step in solving large-scale graph optimization problems, e.g., in some cases, a big graph can be chopped into pieces that fit on one machine to be processed independently before stitching the results together. In other cases, links between different parts may show up in the running time and/or network communications cost. We study a distributed balanced partitioning problem where the goal is to partition the vertices of a given graph into k pieces so as to minimize the total cut size. Our algorithm is composed of a few steps that are easily implementable in distributed computation frameworks. The algorithm first embeds nodes of the graph onto a line, and then processes nodes in a distributed manner guided by the linear embedding order. We examine various ways to find the first embedding, e.g., via a hierarchical clustering or Hilbert curves. Then we apply four different techniques including local swaps, minimum cuts on the boundaries of partitions, as well as contraction and dynamic programming. As our empirical study, we compare the above techniques with each other, and also to previous work in distributed graph algorithms, e.g., a label propagation method, FENNEL and Spinner. We report our results both on a private map graph and several public social networks, and show that our results beat previous distributed algorithms: e.g., compared to the label propagation algorithm, we report an improvement of 15-25% in the cut value. We also observe that our algorithms allow for scalable distributed implementation for any number of partitions. Finally, we apply our techniques for the Google Maps Driving Directions to minimize the number of multi-shard queries with the goal of saving in CPU usage. During live experiments, we observe an ~40% drop in the number of multi-shard queries when comparing our method with a standard geography-based method.
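A minimal sketch of the "embed nodes on a line, then cut the line into k chunks" idea follows, assuming the networkx library and using a BFS ordering as a crude stand-in for the paper's hierarchical-clustering or Hilbert-curve embeddings; the refinement stages (local swaps, boundary min-cuts, contraction and dynamic programming) are omitted.

```python
import networkx as nx

def balanced_partition_by_order(graph, k):
    """Place nodes in a linear order (here BFS order) and cut it into k
    equal-size chunks; a simplified illustration, not the paper's pipeline."""
    start = next(iter(graph.nodes))
    order = [start] + [v for _, v in nx.bfs_edges(graph, start)]
    size = -(-len(order) // k)                      # ceiling division
    return [order[i * size:(i + 1) * size] for i in range(k)]

def cut_size(graph, parts):
    """Number of edges whose endpoints land in different parts."""
    label = {v: i for i, part in enumerate(parts) for v in part}
    return sum(1 for u, v in graph.edges if label[u] != label[v])

g = nx.grid_2d_graph(8, 8)                          # toy 64-node graph
parts = balanced_partition_by_order(g, k=4)
print([len(p) for p in parts], "cut size:", cut_size(g, parts))
```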
2408.04767
Zachary Daniels
Saurabh Farkya, Zachary Alan Daniels, Aswin Raghavan, Gooitzen van der Wal, Michael Isnardi, Michael Piacentino, David Zhang
Data-Driven Pixel Control: Challenges and Prospects
Accepted to the Conference on Dynamic Data-Driven Applications Systems (DDDAS2024)
null
null
null
cs.CV cs.AI cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent advancements in sensors have led to high resolution and high data throughput at the pixel level. Simultaneously, the adoption of increasingly large (deep) neural networks (NNs) has led to significant progress in computer vision. Currently, visual intelligence comes at increasingly high computational complexity, energy, and latency. We study a data-driven system that combines dynamic sensing at the pixel level with computer vision analytics at the video level and propose a feedback control loop to minimize data movement between the sensor front-end and computational back-end without compromising detection and tracking precision. Our contributions are threefold: (1) We introduce anticipatory attention and show that it leads to high precision prediction with sparse activation of pixels; (2) Leveraging the feedback control, we show that the dimensionality of learned feature vectors can be significantly reduced with increased sparsity; and (3) We emulate analog design choices (such as varying RGB or Bayer pixel format and analog noise) and study their impact on the key metrics of the data-driven system. Comparative analysis with traditional pixel and deep learning models shows significant performance enhancements. Our system achieves a 10X reduction in bandwidth and a 15-30X improvement in Energy-Delay Product (EDP) when activating only 30% of pixels, with a minor reduction in object detection and tracking precision. Based on analog emulation, our system can achieve a throughput of 205 megapixels/sec (MP/s) with a power consumption of only 110 mW per MP, i.e., a theoretical improvement of ~30X in EDP.
[ { "created": "Thu, 8 Aug 2024 21:49:19 GMT", "version": "v1" } ]
2024-08-12
[ [ "Farkya", "Saurabh", "" ], [ "Daniels", "Zachary Alan", "" ], [ "Raghavan", "Aswin", "" ], [ "van der Wal", "Gooitzen", "" ], [ "Isnardi", "Michael", "" ], [ "Piacentino", "Michael", "" ], [ "Zhang", "David", "" ] ]
Recent advancements in sensors have led to high resolution and high data throughput at the pixel level. Simultaneously, the adoption of increasingly large (deep) neural networks (NNs) has led to significant progress in computer vision. Currently, visual intelligence comes at increasingly high computational complexity, energy, and latency. We study a data-driven system that combines dynamic sensing at the pixel level with computer vision analytics at the video level and propose a feedback control loop to minimize data movement between the sensor front-end and computational back-end without compromising detection and tracking precision. Our contributions are threefold: (1) We introduce anticipatory attention and show that it leads to high precision prediction with sparse activation of pixels; (2) Leveraging the feedback control, we show that the dimensionality of learned feature vectors can be significantly reduced with increased sparsity; and (3) We emulate analog design choices (such as varying RGB or Bayer pixel format and analog noise) and study their impact on the key metrics of the data-driven system. Comparative analysis with traditional pixel and deep learning models shows significant performance enhancements. Our system achieves a 10X reduction in bandwidth and a 15-30X improvement in Energy-Delay Product (EDP) when activating only 30% of pixels, with a minor reduction in object detection and tracking precision. Based on analog emulation, our system can achieve a throughput of 205 megapixels/sec (MP/s) with a power consumption of only 110 mW per MP, i.e., a theoretical improvement of ~30X in EDP.
2205.02689
Cam Nguyen Van
Van-Cam Nguyen, Hong-Tuan-Dinh Le, Huu-Thuan Huynh
Hardware System Implementation for Human Detection using HOG and SVM Algorithm
null
null
null
null
cs.AR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human detection is a popular task and has been widely used in many applications. However, its computational complexity makes human detection systems hard to implement in real-time applications. This paper presents the hardware architecture of a human detection system that was simulated in the ModelSim tool. The system was built as a co-processor to off-load work from the Central Processing Unit (CPU) and speed up computation. Features are extracted from a 130x66-pixel RGB static input image using the Histogram of Oriented Gradients (HOG) algorithm and classified using the Support Vector Machine (SVM) algorithm. As a result, the accuracy rate of this system reaches 84.35 percent, and the detection time decreases to 0.757 ms at a 50 MHz frequency (54 times faster than a software implementation of this system in the Matlab tool).
[ { "created": "Thu, 5 May 2022 14:54:37 GMT", "version": "v1" } ]
2022-05-06
[ [ "Nguyen", "Van-Cam", "" ], [ "Le", "Hong-Tuan-Dinh", "" ], [ "Huynh", "Huu-Thuan", "" ] ]
Human detection is a popular task and has been widely used in many applications. However, its computational complexity makes human detection systems hard to implement in real-time applications. This paper presents the hardware architecture of a human detection system that was simulated in the ModelSim tool. The system was built as a co-processor to off-load work from the Central Processing Unit (CPU) and speed up computation. Features are extracted from a 130x66-pixel RGB static input image using the Histogram of Oriented Gradients (HOG) algorithm and classified using the Support Vector Machine (SVM) algorithm. As a result, the accuracy rate of this system reaches 84.35 percent, and the detection time decreases to 0.757 ms at a 50 MHz frequency (54 times faster than a software implementation of this system in the Matlab tool).
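For reference, a small software sketch of the HOG + SVM pipeline described above is given below, assuming scikit-image and scikit-learn and using random grayscale arrays in place of a real pedestrian dataset; it is not the paper's hardware design.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

# Random data stands in for real 130x66 detection windows (grayscale for simplicity).
rng = np.random.default_rng(0)
images = rng.random((40, 66, 130))          # 40 windows, height 66, width 130
labels = rng.integers(0, 2, size=40)        # 1 = person, 0 = background

# Extract HOG descriptors for each window.
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in images
])

# Train a linear SVM classifier on the descriptors.
clf = LinearSVC().fit(features, labels)
print("train accuracy:", clf.score(features, labels))
```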
2205.10475
Chenguang Wang
Chenguang Wang, Xiao Liu, Zui Chen, Haoyun Hong, Jie Tang, Dawn Song
DeepStruct: Pretraining of Language Models for Structure Prediction
ACL 2022
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
We introduce a method for improving the structural understanding abilities of language models. Unlike previous approaches that finetune the models with task-specific augmentation, we pretrain language models on a collection of task-agnostic corpora to generate structures from text. Our structure pretraining enables zero-shot transfer of the learned knowledge that models have about the structure tasks. We study the performance of this approach on 28 datasets, spanning 10 structure prediction tasks including open information extraction, joint entity and relation extraction, named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, factual probe, intent detection, and dialogue state tracking. We further enhance the pretraining with the task-specific training sets. We show that a 10B parameter language model transfers non-trivially to most tasks and obtains state-of-the-art performance on 21 of 28 datasets that we evaluate.
[ { "created": "Sat, 21 May 2022 00:58:22 GMT", "version": "v1" }, { "created": "Mon, 6 Mar 2023 00:49:01 GMT", "version": "v2" } ]
2023-03-07
[ [ "Wang", "Chenguang", "" ], [ "Liu", "Xiao", "" ], [ "Chen", "Zui", "" ], [ "Hong", "Haoyun", "" ], [ "Tang", "Jie", "" ], [ "Song", "Dawn", "" ] ]
We introduce a method for improving the structural understanding abilities of language models. Unlike previous approaches that finetune the models with task-specific augmentation, we pretrain language models on a collection of task-agnostic corpora to generate structures from text. Our structure pretraining enables zero-shot transfer of the learned knowledge that models have about the structure tasks. We study the performance of this approach on 28 datasets, spanning 10 structure prediction tasks including open information extraction, joint entity and relation extraction, named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, factual probe, intent detection, and dialogue state tracking. We further enhance the pretraining with the task-specific training sets. We show that a 10B parameter language model transfers non-trivially to most tasks and obtains state-of-the-art performance on 21 of 28 datasets that we evaluate.
2107.04236
Zahra Fahimi
Z. Fahimi, M. R. Mahmoodi, M. Klachko, H. Nili, H. Kim, and D. B. Strukov
Mitigating Imperfections in Mixed-Signal Neuromorphic Circuits
null
null
null
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The progress in neuromorphic computing is fueled by the development of novel nonvolatile memories capable of storing analog information and implementing neural computation efficiently. However, like most other analog circuits, these devices and circuits are prone to imperfections, such as temperature dependency, noise, tuning error, etc., often leading to considerable performance degradation in neural network implementations. Indeed, imperfections are major obstacles in the path of further progress and ultimate commercialization of these technologies. Hence, a practically viable approach should be developed to deal with these nonidealities and unleash the full potential of nonvolatile memories in neuromorphic systems. Here, for the first time, we report a comprehensive characterization of critical imperfections in two analog-grade memories, namely passively-integrated memristors and redesigned eFlash memories, which both feature long-term retention, high endurance, analog storage, low-power operation, and compact nano-scale footprint. Then, we propose a holistic approach that includes modifications in the training, tuning algorithm, memory state optimization, and circuit design to mitigate these imperfections. Our proposed methodology is corroborated on a hybrid software/experimental framework using two benchmarks: a moderate-size convolutional neural network and ResNet-18 trained on CIFAR-10 and ImageNet datasets, respectively. Our proposed approaches allow 2.5x to 9x improvements in the energy consumption of memory arrays during inference and sub-percent accuracy drop across 25-100 C temperature range. The defect tolerance is improved by >100x, and a sub-percent accuracy drop is demonstrated in deep neural networks built with 64x64 passive memristive crossbars featuring 25% normalized switching threshold variations.
[ { "created": "Fri, 9 Jul 2021 06:18:03 GMT", "version": "v1" } ]
2021-07-12
[ [ "Fahimi", "Z.", "" ], [ "Mahmoodi", "M. R.", "" ], [ "Klachko", "M.", "" ], [ "Nili", "H.", "" ], [ "Kim", "H.", "" ], [ "Strukov", "D. B.", "" ] ]
The progress in neuromorphic computing is fueled by the development of novel nonvolatile memories capable of storing analog information and implementing neural computation efficiently. However, like most other analog circuits, these devices and circuits are prone to imperfections, such as temperature dependency, noise, tuning error, etc., often leading to considerable performance degradation in neural network implementations. Indeed, imperfections are major obstacles in the path of further progress and ultimate commercialization of these technologies. Hence, a practically viable approach should be developed to deal with these nonidealities and unleash the full potential of nonvolatile memories in neuromorphic systems. Here, for the first time, we report a comprehensive characterization of critical imperfections in two analog-grade memories, namely passively-integrated memristors and redesigned eFlash memories, which both feature long-term retention, high endurance, analog storage, low-power operation, and compact nano-scale footprint. Then, we propose a holistic approach that includes modifications in the training, tuning algorithm, memory state optimization, and circuit design to mitigate these imperfections. Our proposed methodology is corroborated on a hybrid software/experimental framework using two benchmarks: a moderate-size convolutional neural network and ResNet-18 trained on CIFAR-10 and ImageNet datasets, respectively. Our proposed approaches allow 2.5x to 9x improvements in the energy consumption of memory arrays during inference and sub-percent accuracy drop across 25-100 C temperature range. The defect tolerance is improved by >100x, and a sub-percent accuracy drop is demonstrated in deep neural networks built with 64x64 passive memristive crossbars featuring 25% normalized switching threshold variations.
2401.04729
Joshua Holstein
Philipp Spitzer and Joshua Holstein and Patrick Hemmer and Michael V\"ossing and Niklas K\"uhl and Dominik Martin and Gerhard Satzger
On the Effect of Contextual Information on Human Delegation Behavior in Human-AI collaboration
null
null
null
null
cs.HC cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
The constantly increasing capabilities of artificial intelligence (AI) open new possibilities for human-AI collaboration. One promising approach to leverage existing complementary capabilities is allowing humans to delegate individual instances to the AI. However, enabling humans to delegate instances effectively requires them to assess both their own and the AI's capabilities in the context of the given task. In this work, we explore the effects of providing contextual information on human decisions to delegate instances to an AI. We find that providing participants with contextual information significantly improves the human-AI team performance. Additionally, we show that the delegation behavior changes significantly when participants receive varying types of contextual information. Overall, this research advances the understanding of human-AI interaction in human delegation and provides actionable insights for designing more effective collaborative systems.
[ { "created": "Tue, 9 Jan 2024 18:59:47 GMT", "version": "v1" } ]
2024-01-10
[ [ "Spitzer", "Philipp", "" ], [ "Holstein", "Joshua", "" ], [ "Hemmer", "Patrick", "" ], [ "Vössing", "Michael", "" ], [ "Kühl", "Niklas", "" ], [ "Martin", "Dominik", "" ], [ "Satzger", "Gerhard", "" ] ]
The constantly increasing capabilities of artificial intelligence (AI) open new possibilities for human-AI collaboration. One promising approach to leverage existing complementary capabilities is allowing humans to delegate individual instances to the AI. However, enabling humans to delegate instances effectively requires them to assess both their own and the AI's capabilities in the context of the given task. In this work, we explore the effects of providing contextual information on human decisions to delegate instances to an AI. We find that providing participants with contextual information significantly improves the human-AI team performance. Additionally, we show that the delegation behavior changes significantly when participants receive varying types of contextual information. Overall, this research advances the understanding of human-AI interaction in human delegation and provides actionable insights for designing more effective collaborative systems.
2402.04249
Mantas Mazeika
Mantas Mazeika, Long Phan, Xuwang Yin, Andy Zou, Zifan Wang, Norman Mu, Elham Sakhaee, Nathaniel Li, Steven Basart, Bo Li, David Forsyth, Dan Hendrycks
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
Website: https://www.harmbench.org
null
null
null
cs.LG cs.AI cs.CL cs.CV
http://creativecommons.org/licenses/by/4.0/
Automated red teaming holds substantial promise for uncovering and mitigating the risks associated with the malicious use of large language models (LLMs), yet the field lacks a standardized evaluation framework to rigorously assess new methods. To address this issue, we introduce HarmBench, a standardized evaluation framework for automated red teaming. We identify several desirable properties previously unaccounted for in red teaming evaluations and systematically design HarmBench to meet these criteria. Using HarmBench, we conduct a large-scale comparison of 18 red teaming methods and 33 target LLMs and defenses, yielding novel insights. We also introduce a highly efficient adversarial training method that greatly enhances LLM robustness across a wide range of attacks, demonstrating how HarmBench enables codevelopment of attacks and defenses. We open source HarmBench at https://github.com/centerforaisafety/HarmBench.
[ { "created": "Tue, 6 Feb 2024 18:59:08 GMT", "version": "v1" }, { "created": "Tue, 27 Feb 2024 04:43:08 GMT", "version": "v2" } ]
2024-02-28
[ [ "Mazeika", "Mantas", "" ], [ "Phan", "Long", "" ], [ "Yin", "Xuwang", "" ], [ "Zou", "Andy", "" ], [ "Wang", "Zifan", "" ], [ "Mu", "Norman", "" ], [ "Sakhaee", "Elham", "" ], [ "Li", "Nathaniel", "" ], [ "Basart", "Steven", "" ], [ "Li", "Bo", "" ], [ "Forsyth", "David", "" ], [ "Hendrycks", "Dan", "" ] ]
Automated red teaming holds substantial promise for uncovering and mitigating the risks associated with the malicious use of large language models (LLMs), yet the field lacks a standardized evaluation framework to rigorously assess new methods. To address this issue, we introduce HarmBench, a standardized evaluation framework for automated red teaming. We identify several desirable properties previously unaccounted for in red teaming evaluations and systematically design HarmBench to meet these criteria. Using HarmBench, we conduct a large-scale comparison of 18 red teaming methods and 33 target LLMs and defenses, yielding novel insights. We also introduce a highly efficient adversarial training method that greatly enhances LLM robustness across a wide range of attacks, demonstrating how HarmBench enables codevelopment of attacks and defenses. We open source HarmBench at https://github.com/centerforaisafety/HarmBench.
2405.07180
Hoang Dau
Thi Xinh Dinh, Ba Thong Le, Son Hoang Dau, Serdar Boztas, Stanislav Kruglik, Han Mao Kiah, Emanuele Viterbo, Tuvi Etzion, and Yeow Meng Chee
Repairing Reed-Solomon Codes with Side Information
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We generalize the problem of recovering a lost/erased symbol in a Reed-Solomon code to the scenario in which some side information about the lost symbol is known. The side information is represented as a set $S$ of linearly independent combinations of the sub-symbols of the lost symbol. When $S = \varnothing$, this reduces to the standard problem of repairing a single codeword symbol. When $S$ is a set of sub-symbols of the erased one, this becomes the repair problem with partially lost/erased symbol. We first establish that the minimum repair bandwidth depends on $|S|$ and not the content of $S$ and construct a lower bound on the repair bandwidth of a linear repair scheme with side information $S$. We then consider the well-known subspace-polynomial repair schemes and show that their repair bandwidths can be optimized by choosing the right subspaces. Finally, we demonstrate several parameter regimes where the optimal bandwidths can be achieved for full-length Reed-Solomon codes.
[ { "created": "Sun, 12 May 2024 06:48:24 GMT", "version": "v1" } ]
2024-05-14
[ [ "Dinh", "Thi Xinh", "" ], [ "Le", "Ba Thong", "" ], [ "Dau", "Son Hoang", "" ], [ "Boztas", "Serdar", "" ], [ "Kruglik", "Stanislav", "" ], [ "Kiah", "Han Mao", "" ], [ "Viterbo", "Emanuele", "" ], [ "Etzion", "Tuvi", "" ], [ "Chee", "Yeow Meng", "" ] ]
We generalize the problem of recovering a lost/erased symbol in a Reed-Solomon code to the scenario in which some side information about the lost symbol is known. The side information is represented as a set $S$ of linearly independent combinations of the sub-symbols of the lost symbol. When $S = \varnothing$, this reduces to the standard problem of repairing a single codeword symbol. When $S$ is a set of sub-symbols of the erased one, this becomes the repair problem with partially lost/erased symbol. We first establish that the minimum repair bandwidth depends on $|S|$ and not the content of $S$ and construct a lower bound on the repair bandwidth of a linear repair scheme with side information $S$. We then consider the well-known subspace-polynomial repair schemes and show that their repair bandwidths can be optimized by choosing the right subspaces. Finally, we demonstrate several parameter regimes where the optimal bandwidths can be achieved for full-length Reed-Solomon codes.
2110.04639
Sami Fakhry
Sami Fakhry (1 and 2) and Romain Couillet (1 and 2 and 3) and Malik Tiomoko (1 and 2) ((1) GIPSA-Lab, (2) Grenoble-Alps University, (3) LIG-Lab)
Multi-task learning on the edge: cost-efficiency and theoretical optimality
4 pages, 5 figures, code to reproduce figure available at: https://github.com/Sami-fak/DistributedMTLSPCA
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article proposes a distributed multi-task learning (MTL) algorithm based on supervised principal component analysis (SPCA) which is: (i) theoretically optimal for Gaussian mixtures, (ii) computationally cheap and scalable. Supporting experiments on synthetic and real benchmark data demonstrate that significant energy gains can be obtained with no performance loss.
[ { "created": "Sat, 9 Oct 2021 19:59:02 GMT", "version": "v1" } ]
2021-10-12
[ [ "Fakhry", "Sami", "", "1 and 2" ], [ "Couillet", "Romain", "", "1 and 2 and 3" ], [ "Tiomoko", "Malik", "", "1 and 2" ] ]
This article proposes a distributed multi-task learning (MTL) algorithm based on supervised principal component analysis (SPCA) which is: (i) theoretically optimal for Gaussian mixtures, (ii) computationally cheap and scalable. Supporting experiments on synthetic and real benchmark data demonstrate that significant energy gains can be obtained with no performance loss.
2011.09845
Feng Li
Youming Tao, Shuzhen Chen, Feng Li, Dongxiao Yu, Jiguo Yu, Hao Sheng
A Distributed Privacy-Preserving Learning Dynamics in General Social Networks
null
null
null
null
cs.SI cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study a distributed privacy-preserving learning problem in social networks with general topology. The agents can communicate with each other over the network, which may result in privacy disclosure, since the trustworthiness of the agents cannot be guaranteed. Given a set of options which yield unknown stochastic rewards, each agent is required to learn the best one, aiming at maximizing the resulting expected average cumulative reward. To serve the above goal, we propose a four-stage distributed algorithm which efficiently exploits the collaboration among the agents while preserving the local privacy of each of them. In particular, our algorithm proceeds iteratively, and in every round, each agent i) randomly perturbs its adoption for privacy-preserving purposes, ii) disseminates the perturbed adoption over the social network in a nearly uniform manner through random walking, iii) selects an option by referring to the perturbed suggestions received from its peers, and iv) decides whether or not to adopt the selected option as preference according to its latest reward feedback. Through solid theoretical analysis, we quantify the trade-off among the number of agents (or communication overhead), privacy preservation and learning utility. We also perform extensive simulations to verify the efficacy of our proposed social learning algorithm.
[ { "created": "Sun, 15 Nov 2020 04:00:45 GMT", "version": "v1" }, { "created": "Fri, 27 Jan 2023 11:57:38 GMT", "version": "v2" } ]
2023-01-30
[ [ "Tao", "Youming", "" ], [ "Chen", "Shuzhen", "" ], [ "Li", "Feng", "" ], [ "Yu", "Dongxiao", "" ], [ "Yu", "Jiguo", "" ], [ "Sheng", "Hao", "" ] ]
In this paper, we study a distributed privacy-preserving learning problem in social networks with general topology. The agents can communicate with each other over the network, which may result in privacy disclosure, since the trustworthiness of the agents cannot be guaranteed. Given a set of options which yield unknown stochastic rewards, each agent is required to learn the best one, aiming at maximizing the resulting expected average cumulative reward. To serve the above goal, we propose a four-stage distributed algorithm which efficiently exploits the collaboration among the agents while preserving the local privacy of each of them. In particular, our algorithm proceeds iteratively, and in every round, each agent i) randomly perturbs its adoption for privacy-preserving purposes, ii) disseminates the perturbed adoption over the social network in a nearly uniform manner through random walking, iii) selects an option by referring to the perturbed suggestions received from its peers, and iv) decides whether or not to adopt the selected option as preference according to its latest reward feedback. Through solid theoretical analysis, we quantify the trade-off among the number of agents (or communication overhead), privacy preservation and learning utility. We also perform extensive simulations to verify the efficacy of our proposed social learning algorithm.
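To make stage i) concrete, here is a generic k-ary randomized-response perturbation of a discrete adoption, the standard local-privacy primitive of this kind. The epsilon parameter and this exact mechanism are illustrative assumptions, not necessarily the paper's own perturbation rule.

```python
# Generic k-ary randomized response: report the true option with probability
# e^eps / (e^eps + K - 1), otherwise report a uniformly random other option.
import math
import random

def perturb_adoption(adoption: int, num_options: int, epsilon: float) -> int:
    p_true = math.exp(epsilon) / (math.exp(epsilon) + num_options - 1)
    if random.random() < p_true:
        return adoption
    others = [o for o in range(num_options) if o != adoption]
    return random.choice(others)

# Example: an agent currently adopting option 2 out of 5, with epsilon = 1.0.
reported = perturb_adoption(adoption=2, num_options=5, epsilon=1.0)
```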
2312.05291
Mohammad Reza Taesiri
Mohammad Reza Taesiri, Tianjun Feng, Anh Nguyen, Cor-Paul Bezemer
GlitchBench: Can large multimodal models detect video game glitches?
CVPR 2024
null
null
null
cs.CV cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Large multimodal models (LMMs) have evolved from large language models (LLMs) to integrate multiple input modalities, such as visual inputs. This integration augments the capacity of LLMs for tasks requiring visual comprehension and reasoning. However, the extent and limitations of their enhanced abilities are not fully understood, especially when it comes to real-world tasks. To address this gap, we introduce GlitchBench, a novel benchmark derived from video game quality assurance tasks, to test and evaluate the reasoning capabilities of LMMs. Our benchmark is curated from a variety of unusual and glitched scenarios from video games and aims to challenge both the visual and linguistic reasoning powers of LMMs in detecting and interpreting out-of-the-ordinary events. We evaluate multiple state-of-the-art LMMs, and we show that GlitchBench presents a new challenge for these models. Code and data are available at: https://glitchbench.github.io/
[ { "created": "Fri, 8 Dec 2023 18:14:21 GMT", "version": "v1" }, { "created": "Fri, 29 Mar 2024 16:49:59 GMT", "version": "v2" } ]
2024-04-01
[ [ "Taesiri", "Mohammad Reza", "" ], [ "Feng", "Tianjun", "" ], [ "Nguyen", "Anh", "" ], [ "Bezemer", "Cor-Paul", "" ] ]
Large multimodal models (LMMs) have evolved from large language models (LLMs) to integrate multiple input modalities, such as visual inputs. This integration augments the capacity of LLMs for tasks requiring visual comprehension and reasoning. However, the extent and limitations of their enhanced abilities are not fully understood, especially when it comes to real-world tasks. To address this gap, we introduce GlitchBench, a novel benchmark derived from video game quality assurance tasks, to test and evaluate the reasoning capabilities of LMMs. Our benchmark is curated from a variety of unusual and glitched scenarios from video games and aims to challenge both the visual and linguistic reasoning powers of LMMs in detecting and interpreting out-of-the-ordinary events. We evaluate multiple state-of-the-art LMMs, and we show that GlitchBench presents a new challenge for these models. Code and data are available at: https://glitchbench.github.io/
1810.04763
Thorsten Wissmann
Pawe{\l} Parys
Recursion Schemes, the MSO Logic, and the U quantifier
null
Logical Methods in Computer Science, Volume 16, Issue 1 (February 18, 2020) lmcs:4885
10.23638/LMCS-16(1:20)2020
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
We study the model-checking problem for recursion schemes: does the tree generated by a given higher-order recursion scheme satisfy a given logical sentence. The problem is known to be decidable for sentences of the MSO logic. We prove decidability for an extension of MSO in which we additionally have an unbounding quantifier U, saying that a subformula is true for arbitrarily large finite sets. This quantifier can be used only for subformulae in which all free variables represent finite sets (while an unrestricted use of the quantifier leads to undecidability). We also show that the logic has the properties of reflection and effective selection for trees generated by recursion schemes.
[ { "created": "Wed, 10 Oct 2018 22:14:01 GMT", "version": "v1" }, { "created": "Fri, 8 Nov 2019 05:01:16 GMT", "version": "v2" }, { "created": "Mon, 17 Feb 2020 06:50:17 GMT", "version": "v3" } ]
2023-06-22
[ [ "Parys", "Paweł", "" ] ]
We study the model-checking problem for recursion schemes: does the tree generated by a given higher-order recursion scheme satisfy a given logical sentence. The problem is known to be decidable for sentences of the MSO logic. We prove decidability for an extension of MSO in which we additionally have an unbounding quantifier U, saying that a subformula is true for arbitrarily large finite sets. This quantifier can be used only for subformulae in which all free variables represent finite sets (while an unrestricted use of the quantifier leads to undecidability). We also show that the logic has the properties of reflection and effective selection for trees generated by recursion schemes.
0909.4692
Frederic Dorn Harald
Frederic Dorn
Planar Subgraph Isomorphism Revisited
13 pages, 4 figures
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of Subgraph Isomorphism is defined as follows: Given a pattern H and a host graph G on n vertices, does G contain a subgraph that is isomorphic to H? Eppstein [SODA 95, J'GAA 99] gives the first linear time algorithm for subgraph isomorphism for a fixed-size pattern, say of order k, and arbitrary planar host graph, improving upon the O(n^\sqrt{k})-time algorithm when using the ``Color-coding'' technique of Alon et al [J'ACM 95]. Eppstein's algorithm runs in time k^O(k) n, that is, the dependency on k is superexponential. We solve an open problem posed in Eppstein's paper and improve the running time to 2^O(k) n, that is, single exponential in k while keeping the term in n linear. Next to deciding subgraph isomorphism, we can construct a solution and enumerate all solutions in the same asymptotic running time. We may list w subgraphs with an additive term O(w k) in the running time of our algorithm. We introduce the technique of "embedded dynamic programming" on a suitably structured graph decomposition, which exploits the topology of the underlying embeddings of the subgraph pattern (rather than of the host graph). To achieve our results, we give an upper bound on the number of partial solutions in each dynamic programming step as a function of pattern size--as it turns out, for the planar subgraph isomorphism problem, that function is single exponential in the number of vertices in the pattern.
[ { "created": "Fri, 25 Sep 2009 13:15:31 GMT", "version": "v1" } ]
2009-09-28
[ [ "Dorn", "Frederic", "" ] ]
The problem of Subgraph Isomorphism is defined as follows: Given a pattern H and a host graph G on n vertices, does G contain a subgraph that is isomorphic to H? Eppstein [SODA 95, J'GAA 99] gives the first linear time algorithm for subgraph isomorphism for a fixed-size pattern, say of order k, and arbitrary planar host graph, improving upon the O(n^\sqrt{k})-time algorithm when using the ``Color-coding'' technique of Alon et al [J'ACM 95]. Eppstein's algorithm runs in time k^O(k) n, that is, the dependency on k is superexponential. We solve an open problem posed in Eppstein's paper and improve the running time to 2^O(k) n, that is, single exponential in k while keeping the term in n linear. Next to deciding subgraph isomorphism, we can construct a solution and enumerate all solutions in the same asymptotic running time. We may list w subgraphs with an additive term O(w k) in the running time of our algorithm. We introduce the technique of "embedded dynamic programming" on a suitably structured graph decomposition, which exploits the topology of the underlying embeddings of the subgraph pattern (rather than of the host graph). To achieve our results, we give an upper bound on the number of partial solutions in each dynamic programming step as a function of pattern size--as it turns out, for the planar subgraph isomorphism problem, that function is single exponential in the number of vertices in the pattern.
1107.5870
Alireza Abbasi
Alireza Abbasi, Liaquat Hossain, Shahadat Uddin, Kim J.R. Rasmussen
Evolutionary Dynamics of Scientific Collaboration Networks: Multi-Levels and Cross-time Analysis
Accepted for publication in Scientometrics
null
null
null
cs.SI cs.DL physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several studies use the scientific literature to compare scientific activities (e.g., productivity and collaboration). In this study, using co-authorship data over the last 40 years, we present the evolutionary dynamics of multi-level (i.e., individual, institutional and national) collaboration networks to explore the emergence of collaborations in the research field of "steel structures". The collaboration network of scientists in the field has been analyzed using author affiliations extracted from Scopus between 1970 and 2009. We have studied collaboration distribution networks at the micro-, meso- and macro-levels over the 40 years. We compared and analyzed a number of properties of these networks (i.e., density, centrality measures, the giant component and the clustering coefficient) to present a longitudinal analysis and statistical validation of the evolutionary dynamics of "steel structures" collaboration networks. At all levels, the scientific collaboration network structures were central in terms of closeness centralization, while betweenness and degree centralization were much lower. In general, network density, connectedness, centralization and the clustering coefficient were highest at the macro-level and decreased, as the network size grew, to their lowest at the micro-level. We also find that the average distance is about two between countries, five between institutes and eight between authors, meaning that only about eight steps are necessary to get from one randomly chosen author to another.
[ { "created": "Fri, 29 Jul 2011 05:16:12 GMT", "version": "v1" } ]
2011-08-01
[ [ "Abbasi", "Alireza", "" ], [ "Hossain", "Liaquat", "" ], [ "Uddin", "Shahadat", "" ], [ "Rasmussen", "Kim J. R.", "" ] ]
Several studies use the scientific literature to compare scientific activities (e.g., productivity and collaboration). In this study, using co-authorship data over the last 40 years, we present the evolutionary dynamics of multi-level (i.e., individual, institutional and national) collaboration networks to explore the emergence of collaborations in the research field of "steel structures". The collaboration network of scientists in the field has been analyzed using author affiliations extracted from Scopus between 1970 and 2009. We have studied collaboration distribution networks at the micro-, meso- and macro-levels over the 40 years. We compared and analyzed a number of properties of these networks (i.e., density, centrality measures, the giant component and the clustering coefficient) to present a longitudinal analysis and statistical validation of the evolutionary dynamics of "steel structures" collaboration networks. At all levels, the scientific collaboration network structures were central in terms of closeness centralization, while betweenness and degree centralization were much lower. In general, network density, connectedness, centralization and the clustering coefficient were highest at the macro-level and decreased, as the network size grew, to their lowest at the micro-level. We also find that the average distance is about two between countries, five between institutes and eight between authors, meaning that only about eight steps are necessary to get from one randomly chosen author to another.
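For readers unfamiliar with the network properties listed above, the following sketch computes them on a toy co-authorship graph with networkx. The graph itself is invented for illustration and has nothing to do with the Scopus data used in the study.

```python
# Density, centrality measures, clustering coefficient, giant component and
# average distance on a toy co-authorship graph (illustrative data only).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),   # one collaboration cluster
    ("D", "E"),                            # a second, disconnected pair
])

density = nx.density(G)
clustering = nx.average_clustering(G)
degree_c = nx.degree_centrality(G)
closeness_c = nx.closeness_centrality(G)
betweenness_c = nx.betweenness_centrality(G)

# Giant component and the average shortest-path length within it.
giant = G.subgraph(max(nx.connected_components(G), key=len))
avg_distance = nx.average_shortest_path_length(giant)
```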
2110.11238
Islem Rekik
Umut Guvercin, Mohammed Amine Gharsallaoui and Islem Rekik
One Representative-Shot Learning Using a Population-Driven Template with Application to Brain Connectivity Classification and Evolution Prediction
null
null
null
null
cs.NE cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Few-shot learning presents a challenging paradigm for training discriminative models on a few training samples representing the target classes to discriminate. However, classification methods based on deep learning are ill-suited for such learning as they need large amounts of training data --let alone one-shot learning. Recently, graph neural networks (GNNs) have been introduced to the field of network neuroscience, where the brain connectivity is encoded in a graph. However, with scarce neuroimaging datasets particularly for rare diseases and low-resource clinical facilities, such data-devouring architectures might fail in learning the target task. In this paper, we take a very different approach in training GNNs, where we aim to learn with one sample and achieve the best performance --a formidable challenge to tackle. Specifically, we present the first one-shot paradigm where a GNN is trained on a single population-driven template --namely a connectional brain template (CBT). A CBT is a compact representation of a population of brain graphs capturing the unique connectivity patterns shared across individuals. It is analogous to brain image atlases for neuroimaging datasets. Using a one-representative CBT as a training sample, we alleviate the training load of GNN models while boosting their performance across a variety of classification and regression tasks. We demonstrate that our method significantly outperformed benchmark one-shot learning methods with downstream classification and time-dependent brain graph data forecasting tasks while competing with the train-on-all conventional training strategy. Our source code can be found at https://github.com/basiralab/one-representative-shot-learning.
[ { "created": "Wed, 6 Oct 2021 08:36:00 GMT", "version": "v1" } ]
2021-10-22
[ [ "Guvercin", "Umut", "" ], [ "Gharsallaoui", "Mohammed Amine", "" ], [ "Rekik", "Islem", "" ] ]
Few-shot learning presents a challenging paradigm for training discriminative models on a few training samples representing the target classes to discriminate. However, classification methods based on deep learning are ill-suited for such learning as they need large amounts of training data --let alone one-shot learning. Recently, graph neural networks (GNNs) have been introduced to the field of network neuroscience, where the brain connectivity is encoded in a graph. However, with scarce neuroimaging datasets particularly for rare diseases and low-resource clinical facilities, such data-devouring architectures might fail in learning the target task. In this paper, we take a very different approach in training GNNs, where we aim to learn with one sample and achieve the best performance --a formidable challenge to tackle. Specifically, we present the first one-shot paradigm where a GNN is trained on a single population-driven template --namely a connectional brain template (CBT). A CBT is a compact representation of a population of brain graphs capturing the unique connectivity patterns shared across individuals. It is analogous to brain image atlases for neuroimaging datasets. Using a one-representative CBT as a training sample, we alleviate the training load of GNN models while boosting their performance across a variety of classification and regression tasks. We demonstrate that our method significantly outperformed benchmark one-shot learning methods with downstream classification and time-dependent brain graph data forecasting tasks while competing with the train-on-all conventional training strategy. Our source code can be found at https://github.com/basiralab/one-representative-shot-learning.
2303.16004
Tristan Bilot
Tristan Bilot, Nour El Madhoun, Khaldoun Al Agha, Anis Zouaoui
A Survey on Malware Detection with Graph Representation Learning
Preprint, submitted to ACM Computing Surveys on March 2023. For any suggestions or improvements, please contact me directly by e-mail
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
Malware detection has become a major concern due to the increasing number and complexity of malware. Traditional detection methods based on signatures and heuristics are used for malware detection, but unfortunately, they suffer from poor generalization to unknown attacks and can be easily circumvented using obfuscation techniques. In recent years, Machine Learning (ML) and notably Deep Learning (DL) achieved impressive results in malware detection by learning useful representations from data and have become a solution preferred over traditional methods. More recently, the application of such techniques on graph-structured data has achieved state-of-the-art performance in various domains and demonstrates promising results in learning more robust representations from malware. Yet, no literature review focusing on graph-based deep learning for malware detection exists. In this survey, we provide an in-depth literature review to summarize and unify existing works under the common approaches and architectures. We notably demonstrate that Graph Neural Networks (GNNs) reach competitive results in learning robust embeddings from malware represented as expressive graph structures, leading to an efficient detection by downstream classifiers. This paper also reviews adversarial attacks that are utilized to fool graph-based detection methods. Challenges and future research directions are discussed at the end of the paper.
[ { "created": "Tue, 28 Mar 2023 14:27:08 GMT", "version": "v1" }, { "created": "Thu, 17 Aug 2023 12:28:57 GMT", "version": "v2" } ]
2023-08-21
[ [ "Bilot", "Tristan", "" ], [ "Madhoun", "Nour El", "" ], [ "Agha", "Khaldoun Al", "" ], [ "Zouaoui", "Anis", "" ] ]
Malware detection has become a major concern due to the increasing number and complexity of malware. Traditional detection methods based on signatures and heuristics are used for malware detection, but unfortunately, they suffer from poor generalization to unknown attacks and can be easily circumvented using obfuscation techniques. In recent years, Machine Learning (ML) and notably Deep Learning (DL) achieved impressive results in malware detection by learning useful representations from data and have become a solution preferred over traditional methods. More recently, the application of such techniques on graph-structured data has achieved state-of-the-art performance in various domains and demonstrates promising results in learning more robust representations from malware. Yet, no literature review focusing on graph-based deep learning for malware detection exists. In this survey, we provide an in-depth literature review to summarize and unify existing works under the common approaches and architectures. We notably demonstrate that Graph Neural Networks (GNNs) reach competitive results in learning robust embeddings from malware represented as expressive graph structures, leading to an efficient detection by downstream classifiers. This paper also reviews adversarial attacks that are utilized to fool graph-based detection methods. Challenges and future research directions are discussed at the end of the paper.
2301.05601
Nicolas Hubert
Nicolas Hubert, Pierre Monnin, Armelle Brun, Davy Monticolo
Sem@$K$: Is my knowledge graph embedding model semantic-aware?
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using knowledge graph embedding models (KGEMs) is a popular approach for predicting links in knowledge graphs (KGs). Traditionally, the performance of KGEMs for link prediction is assessed using rank-based metrics, which evaluate their ability to give high scores to ground-truth entities. However, the literature claims that the KGEM evaluation procedure would benefit from adding supplementary dimensions to assess. That is why, in this paper, we extend our previously introduced metric Sem@K that measures the capability of models to predict valid entities w.r.t. domain and range constraints. In particular, we consider a broad range of KGs and take their respective characteristics into account to propose different versions of Sem@K. We also perform an extensive study to qualify the abilities of KGEMs as measured by our metric. Our experiments show that Sem@K provides a new perspective on KGEM quality. Its joint analysis with rank-based metrics offers different conclusions on the predictive power of models. Regarding Sem@K, some KGEMs are inherently better than others, but this semantic superiority is not indicative of their performance w.r.t. rank-based metrics. In this work, we generalize conclusions about the relative performance of KGEMs w.r.t. rank-based and semantic-oriented metrics at the level of families of models. The joint analysis of the aforementioned metrics gives more insight into the peculiarities of each model. This work paves the way for a more comprehensive evaluation of KGEM adequacy for specific downstream tasks.
[ { "created": "Fri, 13 Jan 2023 15:06:47 GMT", "version": "v1" }, { "created": "Thu, 7 Dec 2023 16:13:24 GMT", "version": "v2" } ]
2023-12-08
[ [ "Hubert", "Nicolas", "" ], [ "Monnin", "Pierre", "" ], [ "Brun", "Armelle", "" ], [ "Monticolo", "Davy", "" ] ]
Using knowledge graph embedding models (KGEMs) is a popular approach for predicting links in knowledge graphs (KGs). Traditionally, the performance of KGEMs for link prediction is assessed using rank-based metrics, which evaluate their ability to give high scores to ground-truth entities. However, the literature claims that the KGEM evaluation procedure would benefit from adding supplementary dimensions to assess. That is why, in this paper, we extend our previously introduced metric Sem@K that measures the capability of models to predict valid entities w.r.t. domain and range constraints. In particular, we consider a broad range of KGs and take their respective characteristics into account to propose different versions of Sem@K. We also perform an extensive study to qualify the abilities of KGEMs as measured by our metric. Our experiments show that Sem@K provides a new perspective on KGEM quality. Its joint analysis with rank-based metrics offers different conclusions on the predictive power of models. Regarding Sem@K, some KGEMs are inherently better than others, but this semantic superiority is not indicative of their performance w.r.t. rank-based metrics. In this work, we generalize conclusions about the relative performance of KGEMs w.r.t. rank-based and semantic-oriented metrics at the level of families of models. The joint analysis of the aforementioned metrics gives more insight into the peculiarities of each model. This work paves the way for a more comprehensive evaluation of KGEM adequacy for specific downstream tasks.
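To give a concrete reading of a Sem@K-style measurement, here is a simplified sketch: the share of the top-K predicted tail entities whose type is compatible with the relation's range. The dictionaries are illustrative stand-ins for a real KG schema and the exact definition in the paper may differ (e.g., across its different Sem@K versions).

```python
# Simplified Sem@K-style metric: fraction of the top-K predicted tail
# entities whose type matches the relation's declared range.
def sem_at_k(ranked_entities, relation, entity_types, relation_range, k=10):
    """ranked_entities: entities sorted by predicted score, best first."""
    top_k = ranked_entities[:k]
    valid = sum(1 for e in top_k
                if relation_range[relation] in entity_types.get(e, set()))
    return valid / k

entity_types = {"Paris": {"City"}, "France": {"Country"}, "Einstein": {"Person"}}
relation_range = {"capitalOf": "Country"}
predictions = ["France", "Paris", "Einstein"]   # ranking for (Paris, capitalOf, ?)
score = sem_at_k(predictions, "capitalOf", entity_types, relation_range, k=3)  # = 1/3
```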
2112.00941
Laurent Valentin Jospin
Laurent Valentin Jospin and Farid Boussaid and Hamid Laga and Mohammed Bennamoun
Generalized Closed-form Formulae for Feature-based Subpixel Alignment in Patch-based Matching
29 pages, 10 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cost-based image patch matching is at the core of various techniques in computer vision, photogrammetry and remote sensing. When the subpixel disparity between the reference patch in the source and target images is required, either the cost function or the target image has to be interpolated. While cost-based interpolation is the easiest to implement, multiple works have shown that image-based interpolation can increase the accuracy of the subpixel matching, but usually at the cost of expensive search procedures. This, however, is problematic, especially for very computation-intensive applications such as stereo matching or optical flow computation. In this paper, we show that closed-form formulae for subpixel disparity computation exist for the case of one-dimensional matching, e.g., in the case of rectified stereo images where the search space is of one dimension, when using the standard NCC, SSD and SAD cost functions. We then demonstrate how to generalize the proposed formulae to the case of high-dimensional search spaces, which is required for unrectified stereo matching and optical flow extraction. We also compare our results with traditional cost-volume interpolation formulae as well as with state-of-the-art cost-based refinement methods, and show that the proposed formulae bring a small improvement over the state-of-the-art cost-based methods in the case of one-dimensional search spaces, and a significant improvement when the search space is two-dimensional.
[ { "created": "Thu, 2 Dec 2021 02:42:58 GMT", "version": "v1" }, { "created": "Mon, 13 Feb 2023 02:26:43 GMT", "version": "v2" } ]
2023-02-14
[ [ "Jospin", "Laurent Valentin", "" ], [ "Boussaid", "Farid", "" ], [ "Laga", "Hamid", "" ], [ "Bennamoun", "Mohammed", "" ] ]
Cost-based image patch matching is at the core of various techniques in computer vision, photogrammetry and remote sensing. When the subpixel disparity between the reference patch in the source and target images is required, either the cost function or the target image has to be interpolated. While cost-based interpolation is the easiest to implement, multiple works have shown that image-based interpolation can increase the accuracy of the subpixel matching, but usually at the cost of expensive search procedures. This, however, is problematic, especially for very computation-intensive applications such as stereo matching or optical flow computation. In this paper, we show that closed-form formulae for subpixel disparity computation exist for the case of one-dimensional matching, e.g., in the case of rectified stereo images where the search space is of one dimension, when using the standard NCC, SSD and SAD cost functions. We then demonstrate how to generalize the proposed formulae to the case of high-dimensional search spaces, which is required for unrectified stereo matching and optical flow extraction. We also compare our results with traditional cost-volume interpolation formulae as well as with state-of-the-art cost-based refinement methods, and show that the proposed formulae bring a small improvement over the state-of-the-art cost-based methods in the case of one-dimensional search spaces, and a significant improvement when the search space is two-dimensional.
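For context, the traditional cost-based refinement that this work compares against is the parabola fit through the matching cost at the best integer disparity and its two neighbours. The sketch below implements that baseline only; it is not the paper's image-based closed-form formulae, and the toy cost curve is invented.

```python
# Baseline cost-volume interpolation: fit a parabola through C(d-1), C(d),
# C(d+1) around the best integer disparity and take its minimum as the
# subpixel offset. This is the traditional method, not the paper's formulae.
import numpy as np

def subpixel_parabola(cost, d_best):
    """cost: 1-D array of matching costs per integer disparity; d_best: argmin index."""
    c_m, c_0, c_p = cost[d_best - 1], cost[d_best], cost[d_best + 1]
    denom = c_m - 2.0 * c_0 + c_p
    if denom == 0.0:
        return float(d_best)
    return d_best + 0.5 * (c_m - c_p) / denom

cost = np.array([9.0, 4.0, 1.5, 2.5, 7.0])   # toy SSD cost curve
d0 = int(np.argmin(cost))                     # = 2
d_sub = subpixel_parabola(cost, d0)           # ~= 2.21
```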
2010.13697
Kristina Wolfe
Kristina Wolfe, Douglas Swanson, Rupert Till
The Frequency Spectrum and Geometry of the Hal Saflieni Hypogeum Appear Tuned
8 pages, 6 figures. Accepted to Journal of Archaeological Science: Reports (2020)
Journal of Archaeological Science: Reports 34 (2020)
10.1016/j.jasrep.2020.102623
null
cs.SD eess.AS physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Hal Saflieni Hypogeum is a unique subterranean Maltese Neolithic sanctuary with a well-documented history of interest in its acoustics. Previous studies have noted its unusual strongly-defined frequency spectrum, but it is unknown if this was coincidental. In this paper, we present evidence that the Hypogeum's creators shaped the site's geometry to create or amplify its frequency spectrum, or another property closely correlated with the spectrum. Specifically, we show that the observed spectrum required jointly fine-tuning the dimensions of multiple non-contiguous cave walls across multiple independent chambers, to a degree that seems unlikely to be coincidental. We also note that the peak frequencies are evenly spaced and resemble a whole-tone scale in music, which is also unlikely to be coincidental and suggests the spectrum itself might have held some cultural significance. Taken together, it suggests acoustic or spectral properties may have played a motivational or cultural role for the site's Neolithic creators. This work identifies one of the earliest known examples of a manmade structure with a significant musical element to its interior architecture.
[ { "created": "Mon, 26 Oct 2020 16:28:49 GMT", "version": "v1" } ]
2020-11-03
[ [ "Wolfe", "Kristina", "" ], [ "Swanson", "Douglas", "" ], [ "Till", "Rupert", "" ] ]
The Hal Saflieni Hypogeum is a unique subterranean Maltese Neolithic sanctuary with a well-documented history of interest in its acoustics. Previous studies have noted its unusual strongly-defined frequency spectrum, but it is unknown if this was coincidental. In this paper, we present evidence that the Hypogeum's creators shaped the site's geometry to create or amplify its frequency spectrum, or another property closely correlated with the spectrum. Specifically, we show that the observed spectrum required jointly fine-tuning the dimensions of multiple non-contiguous cave walls across multiple independent chambers, to a degree that seems unlikely to be coincidental. We also note that the peak frequencies are evenly spaced and resemble a whole-tone scale in music, which is also unlikely to be coincidental and suggests the spectrum itself might have held some cultural significance. Taken together, it suggests acoustic or spectral properties may have played a motivational or cultural role for the site's Neolithic creators. This work identifies one of the earliest known examples of a manmade structure with a significant musical element to its interior architecture.
1603.00806
Florian Strub
Florian Strub (SEQUEL, CRIStAL), Jeremie Mary (CRIStAL, SEQUEL), Romaric Gaudel (LIFL)
Hybrid Collaborative Filtering with Autoencoders
null
null
null
null
cs.IR cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collaborative Filtering aims at exploiting the feedback of users to provide personalised recommendations. Such algorithms look for latent variables in a large sparse matrix of ratings. They can be enhanced by adding side information to tackle the well-known cold start problem. While Neural Networks have tremendous success in image and speech recognition, they have received less attention in Collaborative Filtering. This is all the more surprising given that Neural Networks are able to discover latent variables in large and heterogeneous datasets. In this paper, we introduce a Collaborative Filtering Neural network architecture, aka CFN, which computes a non-linear Matrix Factorization from sparse rating inputs and side information. We show experimentally on the MovieLens and Douban datasets that CFN outperforms the state of the art and benefits from side information. We provide an implementation of the algorithm as a reusable plugin for Torch, a popular Neural Network framework.
[ { "created": "Wed, 2 Mar 2016 17:48:25 GMT", "version": "v1" }, { "created": "Wed, 9 Mar 2016 19:18:09 GMT", "version": "v2" }, { "created": "Tue, 19 Jul 2016 08:10:08 GMT", "version": "v3" } ]
2016-07-20
[ [ "Strub", "Florian", "", "SEQUEL, CRIStAL" ], [ "Mary", "Jeremie", "", "CRIStAL, SEQUEL" ], [ "Gaudel", "Romaric", "", "LIFL" ] ]
Collaborative Filtering aims at exploiting the feedback of users to provide personalised recommendations. Such algorithms look for latent variables in a large sparse matrix of ratings. They can be enhanced by adding side information to tackle the well-known cold start problem. While Neural Networks have tremendous success in image and speech recognition, they have received less attention in Collaborative Filtering. This is all the more surprising given that Neural Networks are able to discover latent variables in large and heterogeneous datasets. In this paper, we introduce a Collaborative Filtering Neural network architecture, aka CFN, which computes a non-linear Matrix Factorization from sparse rating inputs and side information. We show experimentally on the MovieLens and Douban datasets that CFN outperforms the state of the art and benefits from side information. We provide an implementation of the algorithm as a reusable plugin for Torch, a popular Neural Network framework.
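As a rough illustration of the idea of reconstructing sparse rating vectors with an autoencoder, here is a generic masked-reconstruction model written with PyTorch. It is only a sketch of the general technique: it is not the CFN architecture, does not use side information, and is unrelated to the original (Lua) Torch plugin; all sizes and data are made up.

```python
# Generic masked autoencoder for sparse rating vectors; the loss is computed
# only on ratings that were actually observed.
import torch
import torch.nn as nn

class RatingAutoencoder(nn.Module):
    def __init__(self, n_items, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_items, hidden), nn.Tanh())
        self.decoder = nn.Linear(hidden, n_items)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def masked_mse(pred, target, mask):
    diff = (pred - target) * mask
    return diff.pow(2).sum() / mask.sum().clamp(min=1)

n_users, n_items = 32, 100
ratings = torch.rand(n_users, n_items) * 5            # toy ratings
mask = (torch.rand(n_users, n_items) < 0.1).float()   # ~10% observed
model = RatingAutoencoder(n_items)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    opt.zero_grad()
    loss = masked_mse(model(ratings * mask), ratings, mask)
    loss.backward()
    opt.step()
```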
2301.03837
Junming Cao
Junming Cao, Bihuan Chen, Longjie Hu, Jie Gao, Kaifeng Huang, Xin Peng
Understanding the Complexity and Its Impact on Testing in ML-Enabled Systems
null
null
null
null
cs.SE cs.AI
http://creativecommons.org/licenses/by/4.0/
Machine learning (ML) enabled systems are emerging with recent breakthroughs in ML. A model-centric view is widely taken by the literature, focusing only on the analysis of ML models. However, only a small body of work takes a system view that looks at how ML components work with the system and how they affect software engineering for ML-enabled systems. In this paper, we adopt this system view, and conduct a case study on Rasa 3.0, an industrial dialogue system that has been widely adopted by various companies around the world. Our goal is to characterize the complexity of such a large-scale ML-enabled system and to understand the impact of the complexity on testing. Our study reveals practical implications for software engineering for ML-enabled systems.
[ { "created": "Tue, 10 Jan 2023 08:13:24 GMT", "version": "v1" } ]
2023-01-11
[ [ "Cao", "Junming", "" ], [ "Chen", "Bihuan", "" ], [ "Hu", "Longjie", "" ], [ "Gao", "Jie", "" ], [ "Huang", "Kaifeng", "" ], [ "Peng", "Xin", "" ] ]
Machine learning (ML) enabled systems are emerging with recent breakthroughs in ML. A model-centric view is widely taken by the literature, focusing only on the analysis of ML models. However, only a small body of work takes a system view that looks at how ML components work with the system and how they affect software engineering for ML-enabled systems. In this paper, we adopt this system view, and conduct a case study on Rasa 3.0, an industrial dialogue system that has been widely adopted by various companies around the world. Our goal is to characterize the complexity of such a large-scale ML-enabled system and to understand the impact of the complexity on testing. Our study reveals practical implications for software engineering for ML-enabled systems.
2010.08620
Jingfan Meng
Jingfan Meng, Long Gong and Jun (Jim) Xu
Sliding-Window QPS (SW-QPS): A Perfect Parallel Iterative Switching Algorithm for Input-Queued Switches
8 pages, 5 figures, to be published in ACM Performance Evaluation Review (PER)
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we first propose a parallel batch switching algorithm called Small-Batch Queue-Proportional Sampling (SB-QPS). Compared to other batch switching algorithms, SB-QPS significantly reduces the batch size without sacrificing the throughput performance and hence has much lower delay when traffic load is light to moderate. It also achieves the lowest possible time complexity of $O(1)$ per matching computation per port, via parallelization. We then propose another algorithm called Sliding-Window QPS (SW-QPS). SW-QPS retains and enhances all benefits of SB-QPS, and reduces the batching delay to zero via a novel switching framework called sliding-window switching. In addition, SW-QPS computes matchings of much higher qualities, as measured by the resulting throughput and delay performances, than QPS-1, the state-of-the-art regular switching algorithm that builds upon the same underlying bipartite matching algorithm.
[ { "created": "Fri, 16 Oct 2020 20:39:02 GMT", "version": "v1" } ]
2020-10-20
[ [ "Meng", "Jingfan", "", "Jim" ], [ "Gong", "Long", "", "Jim" ], [ "Jun", "", "", "Jim" ], [ "Xu", "", "" ] ]
In this work, we first propose a parallel batch switching algorithm called Small-Batch Queue-Proportional Sampling (SB-QPS). Compared to other batch switching algorithms, SB-QPS significantly reduces the batch size without sacrificing the throughput performance and hence has much lower delay when traffic load is light to moderate. It also achieves the lowest possible time complexity of $O(1)$ per matching computation per port, via parallelization. We then propose another algorithm called Sliding-Window QPS (SW-QPS). SW-QPS retains and enhances all benefits of SB-QPS, and reduces the batching delay to zero via a novel switching framework called sliding-window switching. In addition, SW-QPS computes matchings of much higher qualities, as measured by the resulting throughput and delay performances, than QPS-1, the state-of-the-art regular switching algorithm that builds upon the same underlying bipartite matching algorithm.
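To illustrate the queue-proportional sampling step that the QPS family of algorithms builds on, here is a small sketch of a single input port's proposal: it picks an output port with probability proportional to the corresponding VOQ length. This shows only the sampling step, not the full SB-QPS/SW-QPS batching and matching logic, and the queue lengths below are made up.

```python
# Queue-proportional sampling: an input port proposes to output j with
# probability VOQ_j / sum(VOQ), or proposes nothing if all VOQs are empty.
import random

def qps_proposal(voq_lengths):
    """voq_lengths[j] = number of packets queued from this input to output j."""
    total = sum(voq_lengths)
    if total == 0:
        return None
    r = random.uniform(0, total)
    acc = 0
    for output, q in enumerate(voq_lengths):
        if q == 0:
            continue
        acc += q
        if r <= acc:
            return output
    return len(voq_lengths) - 1

# Example: an input port with packets queued to outputs 0..3.
proposal = qps_proposal([0, 5, 1, 2])   # output 1 is chosen with probability 5/8
```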
2202.04041
Andr\'es Beltr\'an-Pulido
Andr\'es Beltr\'an-Pulido, Ilias Bilionis, Dionysios Aliprantis
Physics-informed neural networks for solving parametric magnetostatic problems
12 pages, 10 figures
null
10.1109/TEC.2022.3180295
null
cs.CE cs.LG physics.comp-ph
http://creativecommons.org/licenses/by/4.0/
The objective of this paper is to investigate the ability of physics-informed neural networks to learn the magnetic field response as a function of design parameters in the context of a two-dimensional (2-D) magnetostatic problem. Our approach is as follows. First, we present a functional whose minimization is equivalent to solving parametric magnetostatic problems. Subsequently, we use a deep neural network (DNN) to represent the magnetic field as a function of space and parameters that describe geometric features and operating points. We train the DNN by minimizing the physics-informed functional using stochastic gradient descent. Lastly, we demonstrate our approach on a \mbox{ten-dimensional} EI-core electromagnet problem with parameterized geometry. We evaluate the accuracy of the DNN by comparing its predictions to those of finite element analysis.
[ { "created": "Tue, 8 Feb 2022 18:12:26 GMT", "version": "v1" }, { "created": "Thu, 29 Sep 2022 16:55:00 GMT", "version": "v2" } ]
2022-09-30
[ [ "Beltrán-Pulido", "Andrés", "" ], [ "Bilionis", "Ilias", "" ], [ "Aliprantis", "Dionysios", "" ] ]
The objective of this paper is to investigate the ability of physics-informed neural networks to learn the magnetic field response as a function of design parameters in the context of a two-dimensional (2-D) magnetostatic problem. Our approach is as follows. First, we present a functional whose minimization is equivalent to solving parametric magnetostatic problems. Subsequently, we use a deep neural network (DNN) to represent the magnetic field as a function of space and parameters that describe geometric features and operating points. We train the DNN by minimizing the physics-informed functional using stochastic gradient descent. Lastly, we demonstrate our approach on a \mbox{ten-dimensional} EI-core electromagnet problem with parameterized geometry. We evaluate the accuracy of the DNN by comparing its predictions to those of finite element analysis.
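The general recipe above (represent the field with a DNN and minimize a physics-informed functional by stochastic gradient descent) can be illustrated with a deep-Ritz-style toy in PyTorch. The 1-D Poisson energy below, the boundary-condition trick and all hyperparameters are illustrative assumptions, not the paper's parametric 2-D magnetostatic functional.

```python
# Deep-Ritz-style toy: minimize E[u] = E_x[ 0.5*u'(x)^2 - f(x)*u(x) ] over a
# small MLP, with u(0)=u(1)=0 hard-coded; the minimizer approaches sin(pi*x).
import math
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def u(x):
    return x * (1.0 - x) * net(x)          # enforce Dirichlet boundary conditions

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)              # collocation points
    ux = u(x)
    du = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    f = (math.pi ** 2) * torch.sin(math.pi * x)
    energy = (0.5 * du.pow(2) - f * ux).mean()               # Monte-Carlo functional
    opt.zero_grad()
    energy.backward()
    opt.step()
```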
2112.08614
Liangke Gui
Liangke Gui, Borui Wang, Qiuyuan Huang, Alex Hauptmann, Yonatan Bisk, Jianfeng Gao
KAT: A Knowledge Augmented Transformer for Vision-and-Language
Accepted by NAACL 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The primary focus of recent work with large-scale transformers has been on optimizing the amount of information packed into the model's parameters. In this work, we ask a different question: Can multimodal transformers leverage explicit knowledge in their reasoning? Existing, primarily unimodal, methods have explored approaches under the paradigm of knowledge retrieval followed by answer prediction, but leave open questions about the quality and relevance of the retrieved knowledge used, and how the reasoning processes over implicit and explicit knowledge should be integrated. To address these challenges, we propose a novel model - Knowledge Augmented Transformer (KAT) - which achieves a strong state-of-the-art result (+6 points absolute) on the open-domain multimodal task of OK-VQA. Our approach integrates implicit and explicit knowledge in an end-to-end encoder-decoder architecture, while still jointly reasoning over both knowledge sources during answer generation. An additional benefit of explicit knowledge integration is seen in improved interpretability of model predictions in our analysis.
[ { "created": "Thu, 16 Dec 2021 04:37:10 GMT", "version": "v1" }, { "created": "Thu, 5 May 2022 04:20:06 GMT", "version": "v2" } ]
2022-05-06
[ [ "Gui", "Liangke", "" ], [ "Wang", "Borui", "" ], [ "Huang", "Qiuyuan", "" ], [ "Hauptmann", "Alex", "" ], [ "Bisk", "Yonatan", "" ], [ "Gao", "Jianfeng", "" ] ]
The primary focus of recent work with large-scale transformers has been on optimizing the amount of information packed into the model's parameters. In this work, we ask a different question: Can multimodal transformers leverage explicit knowledge in their reasoning? Existing, primarily unimodal, methods have explored approaches under the paradigm of knowledge retrieval followed by answer prediction, but leave open questions about the quality and relevance of the retrieved knowledge used, and how the reasoning processes over implicit and explicit knowledge should be integrated. To address these challenges, we propose a novel model - Knowledge Augmented Transformer (KAT) - which achieves a strong state-of-the-art result (+6 points absolute) on the open-domain multimodal task of OK-VQA. Our approach integrates implicit and explicit knowledge in an end-to-end encoder-decoder architecture, while still jointly reasoning over both knowledge sources during answer generation. An additional benefit of explicit knowledge integration is seen in improved interpretability of model predictions in our analysis.
2310.10981
Bin Wang
Bin Wang, Zhengyuan Liu, Nancy F. Chen
Instructive Dialogue Summarization with Query Aggregations
EMNLP 2023 Main Conference - Summarization (update for acknowledgement)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional dialogue summarization methods directly generate summaries and do not consider users' specific interests. This poses challenges in cases where the users are more focused on particular topics or aspects. With the advancement of instruction-finetuned language models, we introduce instruction-tuning to dialogues to expand the capability set of dialogue summarization models. To overcome the scarcity of instructive dialogue summarization data, we propose a three-step approach to synthesize high-quality query-based summarization triples. This process involves summary-anchored query generation, query filtering, and query-based summary generation. By training a unified model called InstructDS (Instructive Dialogue Summarization) on three summarization datasets with multi-purpose instructive triples, we expand the capability of dialogue summarization models. We evaluate our method on four datasets, including dialogue summarization and dialogue reading comprehension. Experimental results show that our approach outperforms the state-of-the-art models and even models with larger sizes. Additionally, our model exhibits higher generalizability and faithfulness, as confirmed by human subjective evaluations.
[ { "created": "Tue, 17 Oct 2023 04:03:00 GMT", "version": "v1" }, { "created": "Sat, 9 Dec 2023 05:38:43 GMT", "version": "v2" }, { "created": "Thu, 1 Aug 2024 09:53:49 GMT", "version": "v3" } ]
2024-08-02
[ [ "Wang", "Bin", "" ], [ "Liu", "Zhengyuan", "" ], [ "Chen", "Nancy F.", "" ] ]
Conventional dialogue summarization methods directly generate summaries and do not consider users' specific interests. This poses challenges in cases where the users are more focused on particular topics or aspects. With the advancement of instruction-finetuned language models, we introduce instruction-tuning to dialogues to expand the capability set of dialogue summarization models. To overcome the scarcity of instructive dialogue summarization data, we propose a three-step approach to synthesize high-quality query-based summarization triples. This process involves summary-anchored query generation, query filtering, and query-based summary generation. By training a unified model called InstructDS (Instructive Dialogue Summarization) on three summarization datasets with multi-purpose instructive triples, we expand the capability of dialogue summarization models. We evaluate our method on four datasets, including dialogue summarization and dialogue reading comprehension. Experimental results show that our approach outperforms the state-of-the-art models and even models with larger sizes. Additionally, our model exhibits higher generalizability and faithfulness, as confirmed by human subjective evaluations.
2007.03640
Rogan Morrow
Rogan Morrow, Wei-Chen Chiu
Benefiting Deep Latent Variable Models via Learning the Prior and Removing Latent Regularization
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There exist many forms of deep latent variable models, such as the variational autoencoder and adversarial autoencoder. Regardless of the specific class of model, there exists an implicit consensus that the latent distribution should be regularized towards the prior, even in the case where the prior distribution is learned. Upon investigating the effect of latent regularization on image generation, our results indicate that in the case where a sufficiently expressive prior is learned, latent regularization is not necessary and may in fact be harmful insofar as image quality is concerned. We additionally investigate the benefit of learned priors on two common problems in computer vision: latent variable disentanglement, and diversity in image-to-image translation.
[ { "created": "Tue, 7 Jul 2020 17:25:37 GMT", "version": "v1" }, { "created": "Thu, 16 Jul 2020 13:05:49 GMT", "version": "v2" } ]
2020-07-17
[ [ "Morrow", "Rogan", "" ], [ "Chiu", "Wei-Chen", "" ] ]
There exist many forms of deep latent variable models, such as the variational autoencoder and adversarial autoencoder. Regardless of the specific class of model, there exists an implicit consensus that the latent distribution should be regularized towards the prior, even in the case where the prior distribution is learned. Upon investigating the effect of latent regularization on image generation, our results indicate that in the case where a sufficiently expressive prior is learned, latent regularization is not necessary and may in fact be harmful insofar as image quality is concerned. We additionally investigate the benefit of learned priors on two common problems in computer vision: latent variable disentanglement, and diversity in image-to-image translation.
2207.01426
Liang Ding
Jun Rao, Liang Ding, Shuhan Qi, Meng Fang, Yang Liu, Li Shen, Dacheng Tao
Dynamic Contrastive Distillation for Image-Text Retrieval
null
null
null
null
cs.MM cs.AI cs.CL cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Although the vision-and-language pretraining (VLP) equipped cross-modal image-text retrieval (ITR) has achieved remarkable progress in the past two years, it suffers from a major drawback: the ever-increasing size of VLP models restricts its deployment to real-world search scenarios (where the high latency is unacceptable). To alleviate this problem, we present a novel plug-in dynamic contrastive distillation (DCD) framework to compress the large VLP models for the ITR task. Technically, we face the following two challenges: 1) the typical uni-modal metric learning approach is difficult to directly apply to the cross-modal tasks, due to the limited GPU memory to optimize too many negative samples during handling cross-modal fusion features. 2) it is inefficient to statically optimize the student network from different hard samples, which have different effects on distillation learning and student network optimization. We try to overcome these challenges from two directions. First, to achieve multi-modal contrastive learning, and balance the training costs and effects, we propose to use a teacher network to estimate the difficult samples for students, making the students absorb the powerful knowledge from pre-trained teachers, and master the knowledge from hard samples. Second, to learn dynamically from hard sample pairs, we propose dynamic distillation to dynamically learn samples of different difficulties, from the perspective of better balancing the difficulty of knowledge and students' self-learning ability. We successfully apply our proposed DCD strategy to two state-of-the-art vision-language pretrained models, i.e. ViLT and METER. Extensive experiments on MS-COCO and Flickr30K benchmarks show the effectiveness and efficiency of our DCD framework. Encouragingly, we can speed up the inference at least 129$\times$ compared to the existing ITR models.
[ { "created": "Mon, 4 Jul 2022 14:08:59 GMT", "version": "v1" } ]
2022-07-05
[ [ "Rao", "Jun", "" ], [ "Ding", "Liang", "" ], [ "Qi", "Shuhan", "" ], [ "Fang", "Meng", "" ], [ "Liu", "Yang", "" ], [ "Shen", "Li", "" ], [ "Tao", "Dacheng", "" ] ]
Although the vision-and-language pretraining (VLP) equipped cross-modal image-text retrieval (ITR) has achieved remarkable progress in the past two years, it suffers from a major drawback: the ever-increasing size of VLP models restricts its deployment to real-world search scenarios (where the high latency is unacceptable). To alleviate this problem, we present a novel plug-in dynamic contrastive distillation (DCD) framework to compress the large VLP models for the ITR task. Technically, we face the following two challenges: 1) the typical uni-modal metric learning approach is difficult to directly apply to the cross-modal tasks, due to the limited GPU memory to optimize too many negative samples during handling cross-modal fusion features. 2) it is inefficient to statically optimize the student network from different hard samples, which have different effects on distillation learning and student network optimization. We try to overcome these challenges from two directions. First, to achieve multi-modal contrastive learning, and balance the training costs and effects, we propose to use a teacher network to estimate the difficult samples for students, making the students absorb the powerful knowledge from pre-trained teachers, and master the knowledge from hard samples. Second, to learn dynamically from hard sample pairs, we propose dynamic distillation to dynamically learn samples of different difficulties, from the perspective of better balancing the difficulty of knowledge and students' self-learning ability. We successfully apply our proposed DCD strategy to two state-of-the-art vision-language pretrained models, i.e. ViLT and METER. Extensive experiments on MS-COCO and Flickr30K benchmarks show the effectiveness and efficiency of our DCD framework. Encouragingly, we can speed up the inference at least 129$\times$ compared to the existing ITR models.
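For background on the teacher-to-student transfer discussed above, here is the standard temperature-scaled soft-target distillation loss in PyTorch. It illustrates plain knowledge distillation only; the dynamic, contrastive weighting of hard samples that defines DCD is not modeled here, and the tensor shapes are arbitrary.

```python
# Standard soft-target distillation: KL divergence between softened teacher
# and student distributions, scaled by T^2 to keep gradient magnitudes stable.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

# Toy example with a batch of 8 similarity-score vectors over 16 candidates.
student = torch.randn(8, 16)
teacher = torch.randn(8, 16)
loss = distillation_loss(student, teacher)
```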
1412.2455
Shihao Yan
Shihao Yan, Robert Malaney, Ido Nevat, and Gareth W. Peters
Location Verification Systems for VANETs in Rician Fading Channels
12 pages, 6 figures
null
10.1109/TVT.2015.2453160
null
cs.NI cs.CR cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we propose and examine Location Verification Systems (LVSs) for Vehicular Ad Hoc Networks (VANETs) in the realistic setting of Rician fading channels. In our LVSs, a single authorized Base Station (BS) equipped with multiple antennas aims to detect a malicious vehicle that is spoofing its claimed location. We first determine the optimal attack strategy of the malicious vehicle, which in turn allows us to analyze the optimal LVS performance as a function of the Rician $K$-factor of the channel between the BS and a legitimate vehicle. Our analysis also allows us to formally prove that the LVS performance limit is independent of the properties of the channel between the BS and the malicious vehicle, provided the malicious vehicle's antenna number is above a specified value. We also investigate how tracking information on a vehicle quantitatively improves the detection performance of an LVS, showing how optimal performance is obtained under the assumption of the tracking length being randomly selected. The work presented here can be readily extended to multiple BS scenarios, and therefore forms the foundation for all optimal location authentication schemes within the context of Rician fading channels. Our study closes important gaps in the current understanding of LVS performance within the context of VANETs, and will be of practical value to certificate revocation schemes within IEEE 1609.2.
[ { "created": "Mon, 8 Dec 2014 05:47:32 GMT", "version": "v1" } ]
2016-09-01
[ [ "Yan", "Shihao", "" ], [ "Malaney", "Robert", "" ], [ "Nevat", "Ido", "" ], [ "Peters", "Gareth W.", "" ] ]
In this work we propose and examine Location Verification Systems (LVSs) for Vehicular Ad Hoc Networks (VANETs) in the realistic setting of Rician fading channels. In our LVSs, a single authorized Base Station (BS) equipped with multiple antennas aims to detect a malicious vehicle that is spoofing its claimed location. We first determine the optimal attack strategy of the malicious vehicle, which in turn allows us to analyze the optimal LVS performance as a function of the Rician $K$-factor of the channel between the BS and a legitimate vehicle. Our analysis also allows us to formally prove that the LVS performance limit is independent of the properties of the channel between the BS and the malicious vehicle, provided the malicious vehicle's antenna number is above a specified value. We also investigate how tracking information on a vehicle quantitatively improves the detection performance of an LVS, showing how optimal performance is obtained under the assumption of the tracking length being randomly selected. The work presented here can be readily extended to multiple BS scenarios, and therefore forms the foundation for all optimal location authentication schemes within the context of Rician fading channels. Our study closes important gaps in the current understanding of LVS performance within the context of VANETs, and will be of practical value to certificate revocation schemes within IEEE 1609.2.
1906.07280
Enrico Santus
Emmanuele Chersoni, Enrico Santus, Ludovica Pannitto, Alessandro Lenci, Philippe Blache, Chu-Ren Huang
A Structured Distributional Model of Sentence Meaning and Processing
accepted at JLNE; Journal of Natural Language Engineering; 26 pages, thematic fit, selectional preference, natural language processing, nlp, ai
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most compositional distributional semantic models represent sentence meaning with a single vector. In this paper, we propose a Structured Distributional Model (SDM) that combines word embeddings with formal semantics and is based on the assumption that sentences represent events and situations. The semantic representation of a sentence is a formal structure derived from Discourse Representation Theory and containing distributional vectors. This structure is dynamically and incrementally built by integrating knowledge about events and their typical participants, as they are activated by lexical items. Event knowledge is modeled as a graph extracted from parsed corpora and encoding roles and relationships between participants that are represented as distributional vectors. SDM is grounded on extensive psycholinguistic research showing that generalized knowledge about events stored in semantic memory plays a key role in sentence comprehension. We evaluate SDM on two recently introduced compositionality datasets, and our results show that combining a simple compositional model with event knowledge constantly improves performances, even with different types of word embeddings.
[ { "created": "Mon, 17 Jun 2019 21:31:40 GMT", "version": "v1" } ]
2019-06-19
[ [ "Chersoni", "Emmanuele", "" ], [ "Santus", "Enrico", "" ], [ "Pannitto", "Ludovica", "" ], [ "Lenci", "Alessandro", "" ], [ "Blache", "Philippe", "" ], [ "Huang", "Chu-Ren", "" ] ]
Most compositional distributional semantic models represent sentence meaning with a single vector. In this paper, we propose a Structured Distributional Model (SDM) that combines word embeddings with formal semantics and is based on the assumption that sentences represent events and situations. The semantic representation of a sentence is a formal structure derived from Discourse Representation Theory and containing distributional vectors. This structure is dynamically and incrementally built by integrating knowledge about events and their typical participants, as they are activated by lexical items. Event knowledge is modeled as a graph extracted from parsed corpora and encoding roles and relationships between participants that are represented as distributional vectors. SDM is grounded on extensive psycholinguistic research showing that generalized knowledge about events stored in semantic memory plays a key role in sentence comprehension. We evaluate SDM on two recently introduced compositionality datasets, and our results show that combining a simple compositional model with event knowledge consistently improves performance, even with different types of word embeddings.
2105.08878
Jeremy Chen
Jeremy Chen, Yuqing Huang, Mushi Wang, Semih Salihoglu, Ken Salem
Accurate Summary-based Cardinality Estimation Through the Lens of Cardinality Estimation Graphs
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study two classes of summary-based cardinality estimators that use statistics about input relations and small-size joins in the context of graph database management systems: (i) optimistic estimators that make uniformity and conditional independence assumptions; and (ii) the recent pessimistic estimators that use information theoretic linear programs. We begin by addressing the problem of how to make accurate estimates for optimistic estimators. We model these estimators as picking bottom-to-top paths in a cardinality estimation graph (CEG), which contains sub-queries as nodes and weighted edges between sub-queries that represent average degrees. We outline a space of heuristics to make an optimistic estimate in this framework and show that effective heuristics depend on the structure of the input queries. We observe that on acyclic queries and queries with small-size cycles, using the maximum-weight path is an effective technique to address the well-known underestimation problem for optimistic estimators. We show that, on a large suite of datasets and workloads, such estimates are up to three orders of magnitude more accurate in mean q-error than some heuristics proposed in prior work. In contrast, we show that on queries with larger cycles these estimators tend to overestimate, which can partially be addressed by using minimum weight paths and more effectively by using an alternative CEG. We then show that CEGs can also model the recent pessimistic estimators. This surprising result allows us to connect two disparate lines of work on optimistic and pessimistic estimators, adopt an optimization from pessimistic estimators to optimistic ones, and provide insights into the pessimistic estimators, such as showing that there are alternative combinatorial solutions to the linear programs that define them.
[ { "created": "Wed, 19 May 2021 01:52:38 GMT", "version": "v1" } ]
2021-05-20
[ [ "Chen", "Jeremy", "" ], [ "Huang", "Yuqing", "" ], [ "Wang", "Mushi", "" ], [ "Salihoglu", "Semih", "" ], [ "Salem", "Ken", "" ] ]
We study two classes of summary-based cardinality estimators that use statistics about input relations and small-size joins in the context of graph database management systems: (i) optimistic estimators that make uniformity and conditional independence assumptions; and (ii) the recent pessimistic estimators that use information theoretic linear programs. We begin by addressing the problem of how to make accurate estimates for optimistic estimators. We model these estimators as picking bottom-to-top paths in a cardinality estimation graph (CEG), which contains sub-queries as nodes and weighted edges between sub-queries that represent average degrees. We outline a space of heuristics to make an optimistic estimate in this framework and show that effective heuristics depend on the structure of the input queries. We observe that on acyclic queries and queries with small-size cycles, using the maximum-weight path is an effective technique to address the well-known underestimation problem for optimistic estimators. We show that, on a large suite of datasets and workloads, such estimates are up to three orders of magnitude more accurate in mean q-error than some heuristics proposed in prior work. In contrast, we show that on queries with larger cycles these estimators tend to overestimate, which can partially be addressed by using minimum weight paths and more effectively by using an alternative CEG. We then show that CEGs can also model the recent pessimistic estimators. This surprising result allows us to connect two disparate lines of work on optimistic and pessimistic estimators, adopt an optimization from pessimistic estimators to optimistic ones, and provide insights into the pessimistic estimators, such as showing that there are alternative combinatorial solutions to the linear programs that define them.
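A minimal illustrative sketch (not the authors' implementation) of the maximum-weight-path heuristic described above, assuming a toy CEG whose nodes are sub-queries and whose positive edge weights stand in for base-relation sizes and average degrees; the estimate of a sub-query is the product of the weights along the chosen bottom-to-top path.

```python
# Hypothetical CEG for a small query: "" is the empty sub-query, edge weights are
# illustrative base cardinalities and average degrees.
edges = {
    "": [("A", 1000.0), ("B", 500.0)],
    "A": [("AB", 2.0)],
    "B": [("AB", 4.0), ("BC", 3.0)],
    "AB": [("ABC", 5.0)],
    "BC": [("ABC", 6.0)],
}

def max_weight_estimates(edges, source=""):
    """For every reachable sub-query, keep the maximum-product path from source."""
    best = {source: (1.0, [source])}
    frontier = [source]
    while frontier:                      # label-correcting search; fine for small DAGs
        node = frontier.pop()
        est, path = best[node]
        for nxt, w in edges.get(node, []):
            cand = est * w
            if nxt not in best or cand > best[nxt][0]:
                best[nxt] = (cand, path + [nxt])
                frontier.append(nxt)
    return best

est, path = max_weight_estimates(edges)["ABC"]
print(est, path)   # 10000.0; ties between equally good routes are broken arbitrarily
```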
2312.04501
Derek Lim
Derek Lim, Haggai Maron, Marc T. Law, Jonathan Lorraine, James Lucas
Graph Metanetworks for Processing Diverse Neural Architectures
29 pages. v2 updated experimental results and details
null
null
null
cs.LG cs.AI stat.ML
http://creativecommons.org/licenses/by/4.0/
Neural networks efficiently encode learned information within their parameters. Consequently, many tasks can be unified by treating neural networks themselves as input data. When doing so, recent studies demonstrated the importance of accounting for the symmetries and geometry of parameter spaces. However, those works developed architectures tailored to specific networks such as MLPs and CNNs without normalization layers, and generalizing such architectures to other types of networks can be challenging. In this work, we overcome these challenges by building new metanetworks - neural networks that take weights from other neural networks as input. Put simply, we carefully build graphs representing the input neural networks and process the graphs using graph neural networks. Our approach, Graph Metanetworks (GMNs), generalizes to neural architectures where competing methods struggle, such as multi-head attention layers, normalization layers, convolutional layers, ResNet blocks, and group-equivariant linear layers. We prove that GMNs are expressive and equivariant to parameter permutation symmetries that leave the input neural network functions unchanged. We validate the effectiveness of our method on several metanetwork tasks over diverse neural network architectures.
[ { "created": "Thu, 7 Dec 2023 18:21:52 GMT", "version": "v1" }, { "created": "Fri, 29 Dec 2023 22:55:45 GMT", "version": "v2" } ]
2024-01-02
[ [ "Lim", "Derek", "" ], [ "Maron", "Haggai", "" ], [ "Law", "Marc T.", "" ], [ "Lorraine", "Jonathan", "" ], [ "Lucas", "James", "" ] ]
Neural networks efficiently encode learned information within their parameters. Consequently, many tasks can be unified by treating neural networks themselves as input data. When doing so, recent studies demonstrated the importance of accounting for the symmetries and geometry of parameter spaces. However, those works developed architectures tailored to specific networks such as MLPs and CNNs without normalization layers, and generalizing such architectures to other types of networks can be challenging. In this work, we overcome these challenges by building new metanetworks - neural networks that take weights from other neural networks as input. Put simply, we carefully build graphs representing the input neural networks and process the graphs using graph neural networks. Our approach, Graph Metanetworks (GMNs), generalizes to neural architectures where competing methods struggle, such as multi-head attention layers, normalization layers, convolutional layers, ResNet blocks, and group-equivariant linear layers. We prove that GMNs are expressive and equivariant to parameter permutation symmetries that leave the input neural network functions unchanged. We validate the effectiveness of our method on several metanetwork tasks over diverse neural network architectures.
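As a rough, hypothetical illustration of the first step described above, the snippet below turns a tiny MLP's weight matrices into a directed graph with one node per neuron and one edge per weight (the weight value stored as an edge attribute). The paper's actual construction additionally covers biases, normalization, attention, convolutions, and other layer types, and then processes the graph with a GNN; none of that is reproduced here.

```python
import networkx as nx
import numpy as np

layer_sizes = [3, 4, 2]                      # a tiny MLP: 3 -> 4 -> 2 (illustrative)
rng = np.random.default_rng(0)
weights = [rng.normal(size=(layer_sizes[k + 1], layer_sizes[k]))
           for k in range(len(layer_sizes) - 1)]

G = nx.DiGraph()
for layer, W in enumerate(weights):
    for j in range(W.shape[0]):              # output neuron j of this layer
        for i in range(W.shape[1]):          # input neuron i of this layer
            # Node (layer, i) feeds node (layer + 1, j); the weight is an edge feature.
            G.add_edge((layer, i), (layer + 1, j), weight=float(W[j, i]))

print(G.number_of_nodes(), G.number_of_edges())  # 9 nodes, 3*4 + 4*2 = 20 edges
```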
2405.11265
Yu Huang
Yu Huang, Liang Guo, Wanqian Guo, Zhe Tao, Yang Lv, Zhihao Sun, Dongfang Zhao
EnviroExam: Benchmarking Environmental Science Knowledge of Large Language Models
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the field of environmental science, it is crucial to have robust evaluation metrics for large language models to ensure their efficacy and accuracy. We propose EnviroExam, a comprehensive evaluation method designed to assess the knowledge of large language models in the field of environmental science. EnviroExam is based on the curricula of top international universities, covering undergraduate, master's, and doctoral courses, and includes 936 questions across 42 core courses. By conducting 0-shot and 5-shot tests on 31 open-source large language models, EnviroExam reveals the performance differences among these models in the domain of environmental science and provides detailed evaluation standards. The results show that 61.3% of the models passed the 5-shot tests, while 48.39% passed the 0-shot tests. By introducing the coefficient of variation as an indicator, we evaluate the performance of mainstream open-source large language models in environmental science from multiple perspectives, providing effective criteria for selecting and fine-tuning language models in this field. Future research will involve constructing more domain-specific test sets using specialized environmental science textbooks to further enhance the accuracy and specificity of the evaluation.
[ { "created": "Sat, 18 May 2024 11:31:03 GMT", "version": "v1" } ]
2024-05-21
[ [ "Huang", "Yu", "" ], [ "Guo", "Liang", "" ], [ "Guo", "Wanqian", "" ], [ "Tao", "Zhe", "" ], [ "Lv", "Yang", "" ], [ "Sun", "Zhihao", "" ], [ "Zhao", "Dongfang", "" ] ]
In the field of environmental science, it is crucial to have robust evaluation metrics for large language models to ensure their efficacy and accuracy. We propose EnviroExam, a comprehensive evaluation method designed to assess the knowledge of large language models in the field of environmental science. EnviroExam is based on the curricula of top international universities, covering undergraduate, master's, and doctoral courses, and includes 936 questions across 42 core courses. By conducting 0-shot and 5-shot tests on 31 open-source large language models, EnviroExam reveals the performance differences among these models in the domain of environmental science and provides detailed evaluation standards. The results show that 61.3% of the models passed the 5-shot tests, while 48.39% passed the 0-shot tests. By introducing the coefficient of variation as an indicator, we evaluate the performance of mainstream open-source large language models in environmental science from multiple perspectives, providing effective criteria for selecting and fine-tuning language models in this field. Future research will involve constructing more domain-specific test sets using specialized environmental science textbooks to further enhance the accuracy and specificity of the evaluation.
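A small sketch (with made-up scores) of the coefficient-of-variation indicator mentioned above: a model's per-course accuracies are summarized by their standard deviation relative to their mean, so a lower value suggests more uniform knowledge across courses.

```python
import numpy as np

per_course_accuracy = np.array([0.72, 0.65, 0.80, 0.58, 0.69])  # hypothetical 5-shot scores

def coefficient_of_variation(scores):
    """Standard deviation divided by the mean of the per-course scores."""
    return scores.std() / scores.mean()

print(f"mean accuracy: {per_course_accuracy.mean():.3f}")
print(f"coefficient of variation: {coefficient_of_variation(per_course_accuracy):.3f}")
```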
1402.6239
Andr\'e Nichterlein
Sepp Hartung and Clemens Hoffmann and Andr\'e Nichterlein
Improved Upper and Lower Bound Heuristics for Degree Anonymization in Social Networks
null
null
null
null
cs.SI cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by a strongly growing interest in anonymizing social network data, we investigate the NP-hard Degree Anonymization problem: given an undirected graph, the task is to add a minimum number of edges such that the graph becomes k-anonymous. That is, for each vertex there have to be at least k-1 other vertices of exactly the same degree. The model of degree anonymization has been introduced by Liu and Terzi [ACM SIGMOD'08], who also proposed and evaluated a two-phase heuristic. We present an enhancement of this heuristic, including new algorithms for each phase which significantly improve on the previously known theoretical and practical running times. Moreover, our algorithms are optimized for large-scale social networks and provide upper and lower bounds for the optimal solution. Notably, on about 26 % of the real-world data we provide (provably) optimal solutions; whereas in the other cases our upper bounds significantly improve on known heuristic solutions.
[ { "created": "Tue, 25 Feb 2014 16:53:32 GMT", "version": "v1" } ]
2014-02-26
[ [ "Hartung", "Sepp", "" ], [ "Hoffmann", "Clemens", "" ], [ "Nichterlein", "André", "" ] ]
Motivated by a strongly growing interest in anonymizing social network data, we investigate the NP-hard Degree Anonymization problem: given an undirected graph, the task is to add a minimum number of edges such that the graph becomes k-anonymous. That is, for each vertex there have to be at least k-1 other vertices of exactly the same degree. The model of degree anonymization has been introduced by Liu and Terzi [ACM SIGMOD'08], who also proposed and evaluated a two-phase heuristic. We present an enhancement of this heuristic, including new algorithms for each phase which significantly improve on the previously known theoretical and practical running times. Moreover, our algorithms are optimized for large-scale social networks and provide upper and lower bounds for the optimal solution. Notably, on about 26 % of the real-world data we provide (provably) optimal solutions; whereas in the other cases our upper bounds significantly improve on known heuristic solutions.
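A small helper (not the paper's heuristic) that makes the k-anonymity requirement on degrees concrete: a degree sequence is k-anonymous when every occurring degree value is shared by at least k vertices.

```python
from collections import Counter

def is_k_anonymous(degrees, k):
    """degrees: iterable of vertex degrees. True iff each degree value occurs >= k times."""
    counts = Counter(degrees)
    return all(c >= k for c in counts.values())

# Toy example: a path on four vertices has degree sequence [1, 2, 2, 1].
degrees = [1, 2, 2, 1]
print(is_k_anonymous(degrees, 2))  # True: both degree values occur twice
print(is_k_anonymous(degrees, 3))  # False: edges would have to be added first
```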
1312.0655
Quan Geng
Quan Geng and Pramod Viswanath
The Optimal Mechanism in Differential Privacy: Multidimensional Setting
18 pages, 2 figures. arXiv admin note: text overlap with arXiv:1212.1186
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We derive the optimal $\epsilon$-differentially private mechanism for a general two-dimensional real-valued (histogram-like) query function under a utility-maximization (or cost-minimization) framework for the $\ell^1$ cost function. We show that the optimal noise probability distribution has a correlated multidimensional staircase-shaped probability density function. Compared with the Laplacian mechanism, we show that in the high privacy regime (as $\epsilon \to 0$), the Laplacian mechanism is approximately optimal; and in the low privacy regime (as $\epsilon \to +\infty$), the optimal cost is $\Theta(e^{-\frac{\epsilon}{3}})$, while the cost of the Laplacian mechanism is $\frac{2\Delta}{\epsilon}$, where $\Delta$ is the sensitivity of the query function. We conclude that the gain is more pronounced in the low privacy regime. We conjecture that the optimality of the staircase mechanism holds for vector-valued (histogram-like) query functions with arbitrary dimension, and holds for many other classes of cost functions as well.
[ { "created": "Mon, 2 Dec 2013 22:57:29 GMT", "version": "v1" } ]
2013-12-04
[ [ "Geng", "Quan", "" ], [ "Viswanath", "Pramod", "" ] ]
We derive the optimal $\epsilon$-differentially private mechanism for a general two-dimensional real-valued (histogram-like) query function under a utility-maximization (or cost-minimization) framework for the $\ell^1$ cost function. We show that the optimal noise probability distribution has a correlated multidimensional staircase-shaped probability density function. Compared with the Laplacian mechanism, we show that in the high privacy regime (as $\epsilon \to 0$), the Laplacian mechanism is approximately optimal; and in the low privacy regime (as $\epsilon \to +\infty$), the optimal cost is $\Theta(e^{-\frac{\epsilon}{3}})$, while the cost of the Laplacian mechanism is $\frac{2\Delta}{\epsilon}$, where $\Delta$ is the sensitivity of the query function. We conclude that the gain is more pronounced in the low privacy regime. We conjecture that the optimality of the staircase mechanism holds for vector-valued (histogram-like) query functions with arbitrary dimension, and holds for many other classes of cost functions as well.
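A small numerical sketch of the baseline the abstract compares against: for a two-dimensional query with $\ell^1$-sensitivity $\Delta$, the Laplacian mechanism adds independent Laplace($\Delta/\epsilon$) noise per coordinate, and its expected $\ell^1$ cost $\frac{2\Delta}{\epsilon}$ can be checked empirically (the staircase mechanism itself is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)
Delta, eps = 1.0, 0.5

def laplace_mechanism(true_answer, Delta, eps, rng):
    """Perturb a 2-D query answer with independent per-coordinate Laplace(Delta/eps) noise."""
    return true_answer + rng.laplace(scale=Delta / eps, size=2)

noisy = laplace_mechanism(np.array([10.0, 3.0]), Delta, eps, rng)

# Empirical expected l1 cost of the noise vs. the closed form 2*Delta/eps from the abstract.
samples = rng.laplace(scale=Delta / eps, size=(100_000, 2))
print(np.abs(samples).sum(axis=1).mean(), 2 * Delta / eps)
```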
2309.07782
Matteo Golinelli
Matteo Golinelli, Francesco Bonomi, Bruno Crispo
The Nonce-nce of Web Security: an Investigation of CSP Nonces Reuse
Accepted at the WASP workshop (ESORICS 2023)
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Content Security Policy (CSP) is an effective security mechanism that prevents the exploitation of Cross-Site Scripting (XSS) vulnerabilities on websites by specifying the sources from which their web pages can load resources, such as scripts and styles. CSP nonces enable websites to allow the execution of specific inline scripts and styles without relying on a whitelist. In this study, we measure and analyze the use of CSP nonces in the wild, specifically looking for nonce reuse, short nonces, and invalid nonces. We find that, of the 2271 sites that deploy a nonce-based policy, 598 of them reuse the same nonce value in more than one response, potentially enabling attackers to bypass protection offered by the CSP against XSS attacks. We analyze the causes of the nonce reuses to identify whether they are introduced by the server-side code or if the nonces are being cached by web caches. Moreover, we investigate whether nonces are only reused within the same session or for different sessions, as this impacts the effectiveness of CSP in preventing XSS attacks. Finally, we discuss the possibilities for attackers to bypass the CSP and achieve XSS in different nonce reuse scenarios.
[ { "created": "Thu, 14 Sep 2023 15:15:44 GMT", "version": "v1" } ]
2023-09-15
[ [ "Golinelli", "Matteo", "" ], [ "Bonomi", "Francesco", "" ], [ "Crispo", "Bruno", "" ] ]
Content Security Policy (CSP) is an effective security mechanism that prevents the exploitation of Cross-Site Scripting (XSS) vulnerabilities on websites by specifying the sources from which their web pages can load resources, such as scripts and styles. CSP nonces enable websites to allow the execution of specific inline scripts and styles without relying on a whitelist. In this study, we measure and analyze the use of CSP nonces in the wild, specifically looking for nonce reuse, short nonces, and invalid nonces. We find that, of the 2271 sites that deploy a nonce-based policy, 598 of them reuse the same nonce value in more than one response, potentially enabling attackers to bypass protection offered by the CSP against XSS attacks. We analyze the causes of the nonce reuses to identify whether they are introduced by the server-side code or if the nonces are being cached by web caches. Moreover, we investigate whether nonces are only reused within the same session or for different sessions, as this impacts the effectiveness of CSP in preventing XSS attacks. Finally, we discuss the possibilities for attackers to bypass the CSP and achieve XSS in different nonce reuse scenarios.
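A minimal probe in the spirit of the measurement described above (the target URL is a placeholder): request a page twice and compare the nonce values advertised in its Content-Security-Policy header. Seeing the same nonce in both responses is the reuse pattern the study measures, although a real crawler would also inspect sessions, caches, and inline scripts.

```python
import re
import requests

NONCE_RE = re.compile(r"'nonce-([A-Za-z0-9+/_=-]+)'")

def csp_nonces(url):
    """Return the set of nonce values found in the response's CSP header."""
    resp = requests.get(url, timeout=10)
    csp = resp.headers.get("Content-Security-Policy", "")
    return set(NONCE_RE.findall(csp))

url = "https://example.com/"          # placeholder target
first, second = csp_nonces(url), csp_nonces(url)
reused = first & second
print(f"nonce reuse detected: {reused}" if reused else "no nonce reuse across the two responses")
```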
2206.08434
Jason Mars PhD
Jason Mars
The Case for a Wholistic Serverless Programming Paradigm and Full Stack Automation for AI and Beyond -- The Philosophy of Jaseci and Jac
null
null
null
null
cs.DC cs.AI cs.AR cs.PL
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this work, the case is made for a wholistic top-down re-envisioning of the system stack from the programming language level down through the system architecture to bridge this complexity gap. The key goal of our design is to address the critical need for the programmer to articulate solutions with higher-level abstractions at the problem level while having the runtime system stack subsume and hide a broad scope of diffuse sub-applications and inter-machine resources. This work also presents the design of a production-grade realization of such a system stack architecture, called Jaseci, and a corresponding programming language, Jac. Jac and Jaseci have been released as open source and have been leveraged by real product teams to accelerate the development and deployment of sophisticated AI products and other applications at scale. Jac has been utilized in commercial production environments to accelerate AI development timelines by ~10x, with the Jaseci runtime automating the decisions and optimizations that typically fall within the scope of manual engineering roles on a team, such as deciding what should and should not be a microservice and changing those decisions dynamically.
[ { "created": "Thu, 16 Jun 2022 20:28:37 GMT", "version": "v1" } ]
2022-06-20
[ [ "Mars", "Jason", "" ] ]
In this work, the case is made for a wholistic top-down re-envisioning of the system stack from the programming language level down through the system architecture to bridge this complexity gap. The key goal of our design is to address the critical need for the programmer to articulate solutions with higher-level abstractions at the problem level while having the runtime system stack subsume and hide a broad scope of diffuse sub-applications and inter-machine resources. This work also presents the design of a production-grade realization of such a system stack architecture, called Jaseci, and a corresponding programming language, Jac. Jac and Jaseci have been released as open source and have been leveraged by real product teams to accelerate the development and deployment of sophisticated AI products and other applications at scale. Jac has been utilized in commercial production environments to accelerate AI development timelines by ~10x, with the Jaseci runtime automating the decisions and optimizations that typically fall within the scope of manual engineering roles on a team, such as deciding what should and should not be a microservice and changing those decisions dynamically.
2110.09241
Wenhan Yang
Wenhan Yang, Haofeng Huang, Yueyu Hu, Ling-Yu Duan, Jiaying Liu
Video Coding for Machine: Compact Visual Representation Compression for Intelligent Collaborative Analytics
The first three authors had equal contribution. arXiv admin note: text overlap with arXiv:2106.08512
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video Coding for Machines (VCM) is committed to bridging the largely separate research tracks of video/image compression and feature compression, and attempts to optimize compactness and efficiency jointly from a unified perspective of high-accuracy machine vision and full-fidelity human vision. In this paper, we summarize VCM methodology and philosophy based on existing academic and industrial efforts. The development of VCM follows a general rate-distortion optimization, and a categorization of key modules and techniques is established. Previous works demonstrate that, although they attempt to reveal the nature of scalable representation in bits when dealing with machine and human vision tasks, the generality of low bit rate representations, and accordingly how to support a variety of visual analytic tasks, remains rarely studied. Therefore, we investigate a novel visual information compression scheme for the analytics taxonomy problem to strengthen the capability of compact visual representations extracted from multiple tasks for visual analytics. A new perspective on task relationships versus compression is revisited. By keeping in mind the transferability among different machine vision tasks (e.g., high-level semantic and mid-level geometry-related tasks), we aim to support multiple tasks jointly at low bit rates. In particular, to narrow the dimensionality gap between neural-network-generated features extracted from pixels and a variety of machine vision features/labels (e.g., scene class, segmentation labels), a codebook hyperprior is designed to compress the neural-network-generated features. As demonstrated in our experiments, this new hyperprior model improves feature compression efficiency by estimating the signal entropy more accurately, which enables further investigation of the granularity of abstracting compact features across different tasks.
[ { "created": "Mon, 18 Oct 2021 12:42:13 GMT", "version": "v1" } ]
2021-10-19
[ [ "Yang", "Wenhan", "" ], [ "Huang", "Haofeng", "" ], [ "Hu", "Yueyu", "" ], [ "Duan", "Ling-Yu", "" ], [ "Liu", "Jiaying", "" ] ]
Video Coding for Machines (VCM) is committed to bridging the largely separate research tracks of video/image compression and feature compression, and attempts to optimize compactness and efficiency jointly from a unified perspective of high-accuracy machine vision and full-fidelity human vision. In this paper, we summarize VCM methodology and philosophy based on existing academic and industrial efforts. The development of VCM follows a general rate-distortion optimization, and a categorization of key modules and techniques is established. Previous works demonstrate that, although they attempt to reveal the nature of scalable representation in bits when dealing with machine and human vision tasks, the generality of low bit rate representations, and accordingly how to support a variety of visual analytic tasks, remains rarely studied. Therefore, we investigate a novel visual information compression scheme for the analytics taxonomy problem to strengthen the capability of compact visual representations extracted from multiple tasks for visual analytics. A new perspective on task relationships versus compression is revisited. By keeping in mind the transferability among different machine vision tasks (e.g., high-level semantic and mid-level geometry-related tasks), we aim to support multiple tasks jointly at low bit rates. In particular, to narrow the dimensionality gap between neural-network-generated features extracted from pixels and a variety of machine vision features/labels (e.g., scene class, segmentation labels), a codebook hyperprior is designed to compress the neural-network-generated features. As demonstrated in our experiments, this new hyperprior model improves feature compression efficiency by estimating the signal entropy more accurately, which enables further investigation of the granularity of abstracting compact features across different tasks.
2302.13362
Quanyan Zhu
Quanyan Zhu
The Doctrine of Cyber Effect: An Ethics Framework for Defensive Cyber Deception
null
null
null
null
cs.CR cs.CY
http://creativecommons.org/licenses/by/4.0/
The lack of established rules and regulations in cyberspace is attributed to the absence of agreed-upon ethical principles, making it difficult to establish accountability, regulations, and laws. Addressing this challenge requires examining cyberspace from fundamental philosophical principles. This work focuses on the ethics of using defensive deception in cyberspace, proposing a doctrine of cyber effect that incorporates five ethical principles: goodwill, deontology, no-harm, transparency, and fairness. To guide the design of defensive cyber deception, we develop a reasoning framework, the game of ethical duplicity, which is consistent with the doctrine. While originally intended for cyber deception, this doctrine has broader applicability, including for ethical issues such as AI accountability and controversies related to YouTube recommendations. By establishing ethical principles, we can promote greater accountability, regulation, and protection in the digital realm.
[ { "created": "Sun, 26 Feb 2023 17:41:47 GMT", "version": "v1" } ]
2023-02-28
[ [ "Zhu", "Quanyan", "" ] ]
The lack of established rules and regulations in cyberspace is attributed to the absence of agreed-upon ethical principles, making it difficult to establish accountability, regulations, and laws. Addressing this challenge requires examining cyberspace from fundamental philosophical principles. This work focuses on the ethics of using defensive deception in cyberspace, proposing a doctrine of cyber effect that incorporates five ethical principles: goodwill, deontology, no-harm, transparency, and fairness. To guide the design of defensive cyber deception, we develop a reasoning framework, the game of ethical duplicity, which is consistent with the doctrine. While originally intended for cyber deception, this doctrine has broader applicability, including for ethical issues such as AI accountability and controversies related to YouTube recommendations. By establishing ethical principles, we can promote greater accountability, regulation, and protection in the digital realm.
2402.12635
Sinan Abdulhak
Sinan Abdulhak, Anthony Carvette, Kate Shen, Robert Goldman, Bill Tuck, Max Z. Li
User Feedback-Informed Interface Design for Flow Management Data and Services (FMDS)
8 pages, 8 figures
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The transition to a microservices-based Flow Management Data and Services (FMDS) architecture from the existing Traffic Flow Management System (TFMS) is a critical enabler of the vision for an Information-Centric National Airspace System (NAS). The need to design a user-centric interface for FMDS is a key technical gap, as this interface connects NAS data and services to the traffic management specialists within all stakeholder groups (e.g., FAA, airlines). We provide a research-driven approach towards designing such a graphical user interface (GUI) for FMDS. Major goals include unifying the more than 50 disparate traffic management services currently hosted on TFMS, as well as streamlining the process of evaluating, modeling, and monitoring Traffic Management Initiatives (TMIs). Motivated by this, we iteratively designed a GUI leveraging human factors engineering and user experience design principles, as well as user interviews. Through user testing and interviews, we identify workflow benefits of our GUI (e.g., reduction in task completion time), along with next steps for developing a live prototype.
[ { "created": "Tue, 20 Feb 2024 01:26:53 GMT", "version": "v1" } ]
2024-02-21
[ [ "Abdulhak", "Sinan", "" ], [ "Carvette", "Anthony", "" ], [ "Shen", "Kate", "" ], [ "Goldman", "Robert", "" ], [ "Tuck", "Bill", "" ], [ "Li", "Max Z.", "" ] ]
The transition to a microservices-based Flow Management Data and Services (FMDS) architecture from the existing Traffic Flow Management System (TFMS) is a critical enabler of the vision for an Information-Centric National Airspace System (NAS). The need to design a user-centric interface for FMDS is a key technical gap, as this interface connects NAS data and services to the traffic management specialists within all stakeholder groups (e.g., FAA, airlines). We provide a research-driven approach towards designing such a graphical user interface (GUI) for FMDS. Major goals include unifying the more than 50 disparate traffic management services currently hosted on TFMS, as well as streamlining the process of evaluating, modeling, and monitoring Traffic Management Initiatives (TMIs). Motivated by this, we iteratively designed a GUI leveraging human factors engineering and user experience design principles, as well as user interviews. Through user testing and interviews, we identify workflow benefits of our GUI (e.g., reduction in task completion time), along with next steps for developing a live prototype.
1604.01545
German Ros
German Ros, Simon Stent, Pablo F. Alcantarilla and Tomoki Watanabe
Training Constrained Deconvolutional Networks for Road Scene Semantic Segmentation
submitted as a conference paper
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we investigate the problem of road scene semantic segmentation using Deconvolutional Networks (DNs). Several constraints limit the practical performance of DNs in this context: firstly, the paucity of existing pixel-wise labelled training data, and secondly, the memory constraints of embedded hardware, which rule out the practical use of state-of-the-art DN architectures such as fully convolutional networks (FCN). To address the first constraint, we introduce a Multi-Domain Road Scene Semantic Segmentation (MDRS3) dataset, aggregating data from six existing densely and sparsely labelled datasets for training our models, and two existing, separate datasets for testing their generalisation performance. We show that, while MDRS3 offers a greater volume and variety of data, end-to-end training of a memory efficient DN does not yield satisfactory performance. We propose a new training strategy to overcome this, based on (i) the creation of a best-possible source network (S-Net) from the aggregated data, ignoring time and memory constraints; and (ii) the transfer of knowledge from S-Net to the memory-efficient target network (T-Net). We evaluate different techniques for S-Net creation and T-Net transferral, and demonstrate that training a constrained deconvolutional network in this manner can unlock better performance than existing training approaches. Specifically, we show that a target network can be trained to achieve improved accuracy versus an FCN despite using less than 1\% of the memory. We believe that our approach can be useful beyond automotive scenarios where labelled data is similarly scarce or fragmented and where practical constraints exist on the desired model size. We make available our network models and aggregated multi-domain dataset for reproducibility.
[ { "created": "Wed, 6 Apr 2016 09:02:50 GMT", "version": "v1" } ]
2016-04-07
[ [ "Ros", "German", "" ], [ "Stent", "Simon", "" ], [ "Alcantarilla", "Pablo F.", "" ], [ "Watanabe", "Tomoki", "" ] ]
In this work we investigate the problem of road scene semantic segmentation using Deconvolutional Networks (DNs). Several constraints limit the practical performance of DNs in this context: firstly, the paucity of existing pixel-wise labelled training data, and secondly, the memory constraints of embedded hardware, which rule out the practical use of state-of-the-art DN architectures such as fully convolutional networks (FCN). To address the first constraint, we introduce a Multi-Domain Road Scene Semantic Segmentation (MDRS3) dataset, aggregating data from six existing densely and sparsely labelled datasets for training our models, and two existing, separate datasets for testing their generalisation performance. We show that, while MDRS3 offers a greater volume and variety of data, end-to-end training of a memory efficient DN does not yield satisfactory performance. We propose a new training strategy to overcome this, based on (i) the creation of a best-possible source network (S-Net) from the aggregated data, ignoring time and memory constraints; and (ii) the transfer of knowledge from S-Net to the memory-efficient target network (T-Net). We evaluate different techniques for S-Net creation and T-Net transferral, and demonstrate that training a constrained deconvolutional network in this manner can unlock better performance than existing training approaches. Specifically, we show that a target network can be trained to achieve improved accuracy versus an FCN despite using less than 1\% of the memory. We believe that our approach can be useful beyond automotive scenarios where labelled data is similarly scarce or fragmented and where practical constraints exist on the desired model size. We make available our network models and aggregated multi-domain dataset for reproducibility.
2405.05583
Yuxia Wang
Yuxia Wang, Minghan Wang, Hasan Iqbal, Georgi Georgiev, Jiahui Geng, Preslav Nakov
OpenFactCheck: A Unified Framework for Factuality Evaluation of LLMs
19 pages, 8 tables, 8 figures
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The increased use of large language models (LLMs) across a variety of real-world applications calls for mechanisms to verify the factual accuracy of their outputs. Difficulties lie in assessing the factuality of free-form responses in open domains. Also, different papers use disparate evaluation benchmarks and measurements, which renders them hard to compare and hampers future progress. To mitigate these issues, we propose OpenFactCheck, a unified factuality evaluation framework for LLMs. OpenFactCheck consists of three modules: (i) CUSTCHECKER allows users to easily customize an automatic fact-checker and verify the factual correctness of documents and claims, (ii) LLMEVAL is a unified evaluation framework that fairly assesses an LLM's factuality from various perspectives, and (iii) CHECKEREVAL is an extensible solution for gauging the reliability of automatic fact-checkers' verification results using human-annotated datasets. OpenFactCheck is publicly released at https://github.com/yuxiaw/OpenFactCheck.
[ { "created": "Thu, 9 May 2024 07:15:19 GMT", "version": "v1" } ]
2024-05-10
[ [ "Wang", "Yuxia", "" ], [ "Wang", "Minghan", "" ], [ "Iqbal", "Hasan", "" ], [ "Georgiev", "Georgi", "" ], [ "Geng", "Jiahui", "" ], [ "Nakov", "Preslav", "" ] ]
The increased use of large language models (LLMs) across a variety of real-world applications calls for mechanisms to verify the factual accuracy of their outputs. Difficulties lie in assessing the factuality of free-form responses in open domains. Also, different papers use disparate evaluation benchmarks and measurements, which renders them hard to compare and hampers future progress. To mitigate these issues, we propose OpenFactCheck, a unified factuality evaluation framework for LLMs. OpenFactCheck consists of three modules: (i) CUSTCHECKER allows users to easily customize an automatic fact-checker and verify the factual correctness of documents and claims, (ii) LLMEVAL is a unified evaluation framework that fairly assesses an LLM's factuality from various perspectives, and (iii) CHECKEREVAL is an extensible solution for gauging the reliability of automatic fact-checkers' verification results using human-annotated datasets. OpenFactCheck is publicly released at https://github.com/yuxiaw/OpenFactCheck.
1601.07613
Andrew Giuliani
Andrew Giuliani and Lilia Krivodonova
Edge coloring in unstructured CFD codes
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a way of preventing race conditions in the evaluation of the surface integral contribution in discontinuous Galerkin and finite volume flow solvers by coloring the edges (or faces) of the computational mesh. In this work we use a partitioning algorithm that separates the edges of triangular elements into three groups and the faces of quadrangular and tetrahedral elements into four groups; we then extend this partitioning to adaptively refined, nonconforming meshes. We use the ascribed coloring to reduce code memory requirements and optimize accessing the elemental data in memory. This process reduces memory access latencies and speeds up computations on graphics processing units.
[ { "created": "Thu, 28 Jan 2016 01:24:40 GMT", "version": "v1" }, { "created": "Wed, 19 Apr 2017 15:50:24 GMT", "version": "v2" } ]
2017-04-20
[ [ "Giuliani", "Andrew", "" ], [ "Krivodonova", "Lilia", "" ] ]
We propose a way of preventing race conditions in the evaluation of the surface integral contribution in discontinuous Galerkin and finite volume flow solvers by coloring the edges (or faces) of the computational mesh. In this work we use a partitioning algorithm that separates the edges of triangular elements into three groups and the faces of quadrangular and tetrahedral elements into four groups; we then extend this partitioning to adaptively refined, nonconforming meshes. We use the ascribed coloring to reduce code memory requirements and optimize accessing the elemental data in memory. This process reduces memory access latencies and speeds up computations on graphics processing units.
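A simple greedy sketch of the constraint behind the partitioning described above: no two edges of the same element may receive the same color, so edge kernels launched one color at a time never update the same element concurrently. The greedy assignment below enforces that constraint but may use more colors than the three (triangles) or four (quadrangles, tetrahedra) groups achieved by the paper's algorithm.

```python
from itertools import count

def color_edges(elements):
    """elements: list of elements, each a tuple of edge ids. Returns a dict edge -> color
    such that no two edges belonging to a common element share a color."""
    color_of = {}
    for elem in elements:
        for edge in elem:
            if edge in color_of:
                continue
            # Colors already used by any edge sharing an element with this edge.
            used = set()
            for other in elements:
                if edge in other:
                    used |= {color_of[e] for e in other if e in color_of}
            color_of[edge] = next(c for c in count() if c not in used)
    return color_of

# Two triangles sharing edge 2: edge sets (0, 1, 2) and (2, 3, 4).
print(color_edges([(0, 1, 2), (2, 3, 4)]))   # e.g. {0: 0, 1: 1, 2: 2, 3: 0, 4: 1}
```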
1803.05101
Raef Bassily
Raef Bassily, Om Thakkar, Abhradeep Thakurta
Model-Agnostic Private Learning via Stability
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We design differentially private learning algorithms that are agnostic to the learning model. Our algorithms are interactive in nature, i.e., instead of outputting a model based on the training data, they provide predictions for a set of $m$ feature vectors that arrive online. We show that, for the feature vectors on which an ensemble of models (trained on random disjoint subsets of a dataset) makes consistent predictions, there is almost no cost of privacy in generating accurate predictions for those feature vectors. To that end, we provide a novel coupling of the distance to instability framework with the sparse vector technique. We provide algorithms with formal privacy and utility guarantees for both binary/multi-class classification, and soft-label classification. For binary classification in the standard (agnostic) PAC model, we show how to bootstrap from our privately generated predictions to construct a computationally efficient private learner that outputs a final accurate hypothesis. Our construction - to the best of our knowledge - is the first computationally efficient construction for a label-private learner. We prove sample complexity upper bounds for this setting. As in non-private sample complexity bounds, the only relevant property of the given concept class is its VC dimension. For soft-label classification, our techniques are based on exploiting the stability properties of traditional learning algorithms, like stochastic gradient descent (SGD). We provide a new technique to boost the average-case stability properties of learning algorithms to strong (worst-case) stability properties, and then exploit them to obtain private classification algorithms. In the process, we also show that a large class of SGD methods satisfy average-case stability properties, in contrast to a smaller class of SGD methods that are uniformly stable as shown in prior work.
[ { "created": "Wed, 14 Mar 2018 02:09:15 GMT", "version": "v1" } ]
2018-03-15
[ [ "Bassily", "Raef", "" ], [ "Thakkar", "Om", "" ], [ "Thakurta", "Abhradeep", "" ] ]
We design differentially private learning algorithms that are agnostic to the learning model. Our algorithms are interactive in nature, i.e., instead of outputting a model based on the training data, they provide predictions for a set of $m$ feature vectors that arrive online. We show that, for the feature vectors on which an ensemble of models (trained on random disjoint subsets of a dataset) makes consistent predictions, there is almost no cost of privacy in generating accurate predictions for those feature vectors. To that end, we provide a novel coupling of the distance to instability framework with the sparse vector technique. We provide algorithms with formal privacy and utility guarantees for both binary/multi-class classification, and soft-label classification. For binary classification in the standard (agnostic) PAC model, we show how to bootstrap from our privately generated predictions to construct a computationally efficient private learner that outputs a final accurate hypothesis. Our construction - to the best of our knowledge - is the first computationally efficient construction for a label-private learner. We prove sample complexity upper bounds for this setting. As in non-private sample complexity bounds, the only relevant property of the given concept class is its VC dimension. For soft-label classification, our techniques are based on exploiting the stability properties of traditional learning algorithms, like stochastic gradient descent (SGD). We provide a new technique to boost the average-case stability properties of learning algorithms to strong (worst-case) stability properties, and then exploit them to obtain private classification algorithms. In the process, we also show that a large class of SGD methods satisfy average-case stability properties, in contrast to a smaller class of SGD methods that are uniformly stable as shown in prior work.
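A toy sketch of the subsample-and-aggregate flavour of prediction described above: models trained on disjoint splits vote on a query point and the released label is taken from noise-perturbed vote counts, so unanimous points are barely affected by the noise. Only this aggregation step is shown, with an illustrative Laplace noise scale; the paper's actual algorithms couple distance-to-instability with the sparse vector technique and differ in their guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_label(votes, num_classes, eps, rng):
    """votes: per-model predicted labels for one query point; returns a label chosen
    from Laplace-perturbed vote counts (the noise scale here is illustrative only)."""
    counts = np.bincount(votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=2.0 / eps, size=num_classes)
    return int(np.argmax(counts))

ensemble_votes = np.array([1, 1, 1, 0, 1, 1, 1, 1])   # hypothetical predictions of 8 models
print(noisy_label(ensemble_votes, num_classes=2, eps=1.0, rng=rng))
```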
2405.04063
Partha Protim Paul
Partha P. Paul, Md Tonoy Akanda, M. Raihan Ullah, Dipto Mondal, Nazia S. Chowdhury, and Fazle M. Tawsif
xNose: A Test Smell Detector for C#
Full report of our ICSE'24 poster
null
10.1145/3639478.3643116
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Test smells, similar to code smells, can negatively impact both the test code and the production code being tested. Despite extensive research on test smells in languages like Java, Scala, and Python, automated tools for detecting test smells in C# are lacking. This paper aims to bridge this gap by extending the study of test smells to C#, and developing a tool (xNose) to identify test smells in this language and analyze their distribution across projects. We identified 16 test smells from prior studies that were language-independent and had equivalent features in C# and evaluated xNose, achieving a precision score of 96.97% and a recall score of 96.03%. In addition, we conducted an empirical study to determine the prevalence of test smells in xUnit-based C# projects. This analysis sheds light on the frequency and distribution of test smells, deepening our understanding of their impact on C# projects and test suites. The development of xNose and our analysis of test smells in C# code aim to assist developers in maintaining code quality by addressing potential issues early in the development process.
[ { "created": "Tue, 7 May 2024 07:10:42 GMT", "version": "v1" } ]
2024-05-08
[ [ "Paul", "Partha P.", "" ], [ "Akanda", "Md Tonoy", "" ], [ "Ullah", "M. Raihan", "" ], [ "Mondal", "Dipto", "" ], [ "Chowdhury", "Nazia S.", "" ], [ "Tawsif", "Fazle M.", "" ] ]
Test smells, similar to code smells, can negatively impact both the test code and the production code being tested. Despite extensive research on test smells in languages like Java, Scala, and Python, automated tools for detecting test smells in C# are lacking. This paper aims to bridge this gap by extending the study of test smells to C#, and developing a tool (xNose) to identify test smells in this language and analyze their distribution across projects. We identified 16 test smells from prior studies that were language-independent and had equivalent features in C# and evaluated xNose, achieving a precision score of 96.97% and a recall score of 96.03%. In addition, we conducted an empirical study to determine the prevalence of test smells in xUnit-based C# projects. This analysis sheds light on the frequency and distribution of test smells, deepening our understanding of their impact on C# projects and test suites. The development of xNose and our analysis of test smells in C# code aim to assist developers in maintaining code quality by addressing potential issues early in the development process.
2206.02704
Jicong Fan
Jinyu Cai, Jicong Fan
Perturbation Learning Based Anomaly Detection
null
NeurIPS 2022
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a simple yet effective method for anomaly detection. The main idea is to learn small perturbations to perturb normal data and learn a classifier to classify the normal data and the perturbed data into two different classes. The perturbator and classifier are jointly learned using deep neural networks. Importantly, the perturbations should be as small as possible but the classifier is still able to recognize the perturbed data from unperturbed data. Therefore, the perturbed data are regarded as abnormal data and the classifier provides a decision boundary between the normal data and abnormal data, although the training data do not include any abnormal data. Compared with the state-of-the-art of anomaly detection, our method does not require any assumption about the shape (e.g. hypersphere) of the decision boundary and has fewer hyper-parameters to determine. Empirical studies on benchmark datasets verify the effectiveness and superiority of our method.
[ { "created": "Mon, 6 Jun 2022 16:01:01 GMT", "version": "v1" } ]
2023-02-07
[ [ "Cai", "Jinyu", "" ], [ "Fan", "Jicong", "" ] ]
This paper presents a simple yet effective method for anomaly detection. The main idea is to learn small perturbations to perturb normal data and learn a classifier to classify the normal data and the perturbed data into two different classes. The perturbator and classifier are jointly learned using deep neural networks. Importantly, the perturbations should be as small as possible but the classifier is still able to recognize the perturbed data from unperturbed data. Therefore, the perturbed data are regarded as abnormal data and the classifier provides a decision boundary between the normal data and abnormal data, although the training data do not include any abnormal data. Compared with the state-of-the-art of anomaly detection, our method does not require any assumption about the shape (e.g. hypersphere) of the decision boundary and has fewer hyper-parameters to determine. Empirical studies on benchmark datasets verify the effectiveness and superiority of our method.
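A compact PyTorch sketch of the idea described above (not the authors' code): a perturbator network produces perturbations of normal data, and a classifier is trained to separate normal samples (label 0) from their perturbed versions (label 1), with a penalty keeping the perturbations small. Network sizes and the weight lam are illustrative choices.

```python
import torch
import torch.nn as nn

d = 16
perturbator = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, d))
classifier = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(list(perturbator.parameters()) + list(classifier.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0                                   # weight on the "keep perturbations small" term

x = torch.randn(256, d)                     # stand-in for a batch of normal training data
for _ in range(100):
    delta = perturbator(x)
    logits = classifier(torch.cat([x, x + delta])).squeeze(1)
    labels = torch.cat([torch.zeros(len(x)), torch.ones(len(x))])
    loss = bce(logits, labels) + lam * delta.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time, a higher classifier score suggests the input looks more like perturbed
# (i.e. abnormal) data than like the normal training distribution.
with torch.no_grad():
    scores = torch.sigmoid(classifier(x)).squeeze(1)
```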
1905.08790
Zirui Xu
Zirui Xu, Fuxun Yu, Xiang Chen
DoPa: A Comprehensive CNN Detection Methodology against Physical Adversarial Attacks
5 pages, 3 figures
null
null
null
cs.CR cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, Convolutional Neural Networks (CNNs) have demonstrated considerable vulnerability to adversarial attacks: they can be easily misled by adversarial perturbations. With more aggressive methods proposed, adversarial attacks can also be applied to the physical world, causing practical issues for various CNN-powered applications. To secure CNNs, adversarial attack detection is considered the most critical approach. However, most existing works focus on superficial patterns and merely search for a particular method to differentiate adversarial inputs from natural inputs, ignoring the analysis of the CNN's inner vulnerability. Therefore, they can only target specific physical adversarial attacks and lack the expected versatility across different attacks. To address this issue, we propose DoPa -- a comprehensive CNN detection methodology for various physical adversarial attacks. By interpreting the CNN's vulnerability, we find that non-semantic adversarial perturbations can activate a CNN with significantly abnormal activations and even overwhelm other semantic input patterns' activations. Therefore, we add a self-verification stage to analyze the semantics of distinguished activation patterns, which improves the CNN recognition process. We apply this detection methodology to both image and audio CNN recognition scenarios. Experiments show that DoPa can achieve an average success rate of 90% for image attack detection and 92% for audio attack detection. Announcement:[The original DoPa draft on arXiv was modified and submitted to a conference already, while this short abstract was submitted only for a presentation at the KDD 2019 AIoT Workshop.]
[ { "created": "Tue, 21 May 2019 19:53:38 GMT", "version": "v1" }, { "created": "Fri, 19 Jul 2019 18:56:50 GMT", "version": "v2" }, { "created": "Fri, 23 Aug 2019 20:38:44 GMT", "version": "v3" }, { "created": "Wed, 28 Aug 2019 15:07:07 GMT", "version": "v4" } ]
2019-08-29
[ [ "Xu", "Zirui", "" ], [ "Yu", "Fuxun", "" ], [ "Chen", "Xiang", "" ] ]
Recently, Convolutional Neural Networks (CNNs) have demonstrated considerable vulnerability to adversarial attacks: they can be easily misled by adversarial perturbations. With more aggressive methods proposed, adversarial attacks can also be applied to the physical world, causing practical issues for various CNN-powered applications. To secure CNNs, adversarial attack detection is considered the most critical approach. However, most existing works focus on superficial patterns and merely search for a particular method to differentiate adversarial inputs from natural inputs, ignoring the analysis of the CNN's inner vulnerability. Therefore, they can only target specific physical adversarial attacks and lack the expected versatility across different attacks. To address this issue, we propose DoPa -- a comprehensive CNN detection methodology for various physical adversarial attacks. By interpreting the CNN's vulnerability, we find that non-semantic adversarial perturbations can activate a CNN with significantly abnormal activations and even overwhelm other semantic input patterns' activations. Therefore, we add a self-verification stage to analyze the semantics of distinguished activation patterns, which improves the CNN recognition process. We apply this detection methodology to both image and audio CNN recognition scenarios. Experiments show that DoPa can achieve an average success rate of 90% for image attack detection and 92% for audio attack detection. Announcement:[The original DoPa draft on arXiv was modified and submitted to a conference already, while this short abstract was submitted only for a presentation at the KDD 2019 AIoT Workshop.]
1801.06349
Matei Mancas
Matei Mancas, Christian Frisson, Jo\"elle Tilmanne, Nicolas d'Alessandro, Petr Barborka, Furkan Bayansar, Francisco Bernard, Rebecca Fiebrink, Alexis Heloir, Edgar Hemery, Sohaib Laraba, Alexis Moinet, Fabrizio Nunnari, Thierry Ravet, Lo\"ic Reboursi\`ere, Alvaro Sarasua, Micka\"el Tits, No\'e Tits, Fran\c{c}ois Zaj\'ega, Paolo Alborno, Ksenia Kolykhalova, Emma Frid, Damiano Malafronte, Lisanne Huis in't Veld, H\"useyin Cakmak, Kevin El Haddad, Nicolas Riche, Julien Leroy, Pierre Marighetto, Bekir Berker T\"urker, Hossein Khaki, Roberto Pulisci, Emer Gilmartin, Fasih Haider, K\"ubra Cengiz, Martin Sulir, Ilaria Torre, Shabbir Marzban, Ramazan Yaz{\i}c{\i}, Furkan Burak B\^agc{\i}, Vedat Gazi K{\i}l{\i}, Hilal Sezer, Sena B\"usra Yenge, Charles-Alexandre Delestage, Sylvie Leleu-Merviel, Muriel Meyer-Chemenska, Daniel Schmitt, Willy Yvart, St\'ephane Dupont, Ozan Can Altiok, Ayseg\"ul Bumin, Ceren Dikmen, Ivan Giangreco, Silvan Heller, Emre K\"ulah, Gueorgui Pironkov, Luca Rossetto, Yusuf Sahillioglu, Heiko Schuldt, Omar Seddati, Yusuf Setinkaya, Metin Sezgin, Claudiu Tanase, Emre Toyan, Sean Wood, Doguhan Yeke, Fran\c{c}cois Rocca, Pierre-Henri De Deken, Alessandra Bandrabur, Fabien Grisard, Axel Jean-Caurant, Vincent Courboulay, Radhwan Ben Madhkour, Ambroise Moreau
Proceedings of eNTERFACE 2015 Workshop on Intelligent Interfaces
159 pages
null
null
null
cs.HC cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The 11th Summer Workshop on Multimodal Interfaces eNTERFACE 2015 was hosted by the Numediart Institute of Creative Technologies of the University of Mons from August 10th to September 2015. During the four weeks, students and researchers from all over the world came together in the Numediart Institute of the University of Mons to work on eight selected projects structured around intelligent interfaces. Eight projects were selected and their reports are shown here.
[ { "created": "Fri, 19 Jan 2018 10:03:35 GMT", "version": "v1" } ]
2018-01-22
[ [ "Mancas", "Matei", "" ], [ "Frisson", "Christian", "" ], [ "Tilmanne", "Joëlle", "" ], [ "d'Alessandro", "Nicolas", "" ], [ "Barborka", "Petr", "" ], [ "Bayansar", "Furkan", "" ], [ "Bernard", "Francisco", "" ], [ "Fiebrink", "Rebecca", "" ], [ "Heloir", "Alexis", "" ], [ "Hemery", "Edgar", "" ], [ "Laraba", "Sohaib", "" ], [ "Moinet", "Alexis", "" ], [ "Nunnari", "Fabrizio", "" ], [ "Ravet", "Thierry", "" ], [ "Reboursière", "Loïc", "" ], [ "Sarasua", "Alvaro", "" ], [ "Tits", "Mickaël", "" ], [ "Tits", "Noé", "" ], [ "Zajéga", "François", "" ], [ "Alborno", "Paolo", "" ], [ "Kolykhalova", "Ksenia", "" ], [ "Frid", "Emma", "" ], [ "Malafronte", "Damiano", "" ], [ "Veld", "Lisanne Huis in't", "" ], [ "Cakmak", "Hüseyin", "" ], [ "Haddad", "Kevin El", "" ], [ "Riche", "Nicolas", "" ], [ "Leroy", "Julien", "" ], [ "Marighetto", "Pierre", "" ], [ "Türker", "Bekir Berker", "" ], [ "Khaki", "Hossein", "" ], [ "Pulisci", "Roberto", "" ], [ "Gilmartin", "Emer", "" ], [ "Haider", "Fasih", "" ], [ "Cengiz", "Kübra", "" ], [ "Sulir", "Martin", "" ], [ "Torre", "Ilaria", "" ], [ "Marzban", "Shabbir", "" ], [ "Yazıcı", "Ramazan", "" ], [ "Bâgcı", "Furkan Burak", "" ], [ "Kılı", "Vedat Gazi", "" ], [ "Sezer", "Hilal", "" ], [ "Yenge", "Sena Büsra", "" ], [ "Delestage", "Charles-Alexandre", "" ], [ "Leleu-Merviel", "Sylvie", "" ], [ "Meyer-Chemenska", "Muriel", "" ], [ "Schmitt", "Daniel", "" ], [ "Yvart", "Willy", "" ], [ "Dupont", "Stéphane", "" ], [ "Altiok", "Ozan Can", "" ], [ "Bumin", "Aysegül", "" ], [ "Dikmen", "Ceren", "" ], [ "Giangreco", "Ivan", "" ], [ "Heller", "Silvan", "" ], [ "Külah", "Emre", "" ], [ "Pironkov", "Gueorgui", "" ], [ "Rossetto", "Luca", "" ], [ "Sahillioglu", "Yusuf", "" ], [ "Schuldt", "Heiko", "" ], [ "Seddati", "Omar", "" ], [ "Setinkaya", "Yusuf", "" ], [ "Sezgin", "Metin", "" ], [ "Tanase", "Claudiu", "" ], [ "Toyan", "Emre", "" ], [ "Wood", "Sean", "" ], [ "Yeke", "Doguhan", "" ], [ "Rocca", "Françcois", "" ], [ "De Deken", "Pierre-Henri", "" ], [ "Bandrabur", "Alessandra", "" ], [ "Grisard", "Fabien", "" ], [ "Jean-Caurant", "Axel", "" ], [ "Courboulay", "Vincent", "" ], [ "Madhkour", "Radhwan Ben", "" ], [ "Moreau", "Ambroise", "" ] ]
The 11th Summer Workshop on Multimodal Interfaces, eNTERFACE 2015, was hosted by the Numediart Institute of Creative Technologies of the University of Mons from August 10th to September 2015. During the four weeks, students and researchers from all over the world came together at the institute to work on eight selected projects structured around intelligent interfaces. The reports of these eight projects are collected here.
2010.02512
Lima Agnel Tony
Aashay Bhise, Shuvrangshu Jana, Lima Agnel Tony, Debasish Ghose
Target State Estimation and Prediction for High Speed Interception
arXiv admin note: substantial text overlap with arXiv:2009.00067
MBZIRC Symposium 2020, ADNEC, Abu Dhabi
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate estimation and prediction of the trajectory are essential for the interception of any high-speed target. In this paper, an extended Kalman filter is used to estimate the current location of the target from its visual information and then predict its future position using the observation sequence. The target motion model is developed considering the approximately known pattern of the target trajectory. In this work, we utilise visual information of the target to carry out the predictions. The proposed algorithm is developed in the ROS-Gazebo environment and is verified using a hardware implementation.
[ { "created": "Sun, 4 Oct 2020 14:46:47 GMT", "version": "v1" } ]
2020-10-07
[ [ "Bhise", "Aashay", "" ], [ "Jana", "Shuvrangshu", "" ], [ "Tony", "Lima Agnel", "" ], [ "Ghose", "Debasish", "" ] ]
Accurate estimation and prediction of the trajectory are essential for the interception of any high-speed target. In this paper, an extended Kalman filter is used to estimate the current location of the target from its visual information and then predict its future position using the observation sequence. The target motion model is developed considering the approximately known pattern of the target trajectory. In this work, we utilise visual information of the target to carry out the predictions. The proposed algorithm is developed in the ROS-Gazebo environment and is verified using a hardware implementation.
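The abstract above describes estimating and forecasting the target state with a Kalman-filter-style recursion. As a hedged illustration only (not the paper's actual motion model or ROS implementation), the following minimal constant-velocity Kalman filter shows how an observation sequence of 2D positions can be filtered and then rolled forward to predict future positions; all class and parameter names are hypothetical.

```python
import numpy as np

# Minimal constant-velocity Kalman filter sketch for tracking a target in a
# plane; predict() can be iterated to forecast future positions.
class ConstantVelocityKF:
    def __init__(self, dt=0.1, q=1e-2, r=1e-1):
        # State: [x, y, vx, vy]; measurement: [x, y]
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)   # process noise
        self.R = r * np.eye(2)   # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2].copy()

    def update(self, z):
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = ConstantVelocityKF()
for z in [np.array([0.0, 0.0]), np.array([0.11, 0.09]), np.array([0.21, 0.19])]:
    kf.predict()
    kf.update(z)
future = [kf.predict() for _ in range(5)]  # roll the model forward to forecast
```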
2210.01154
Sandipan Das
Sandipan Das, Navid Mahabadi, Maurice Fallon, Saikat Chatterjee
M-LIO: Multi-lidar, multi-IMU odometry with sensor dropout tolerance
For associated video check https://youtu.be/-xSbfaroEPs
2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 2023
10.1109/IV55152.2023.10186548
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
We present a robust system for state estimation that fuses measurements from multiple lidars and inertial sensors with GNSS data. To initiate the method, we use the prior GNSS pose information. We then perform incremental motion estimation in real time, which produces robust motion estimates in a global frame by fusing lidar and IMU signals with GNSS translation components using a factor graph framework. We also propose methods to account for signal loss with a novel synchronization and fusion mechanism. To validate our approach, extensive tests were carried out on data collected using Scania test vehicles (5 sequences for a total of ~7 km). From our evaluations, we show an average improvement of 61% in relative translation error and 42% in rotational error compared to a state-of-the-art estimator fusing a single lidar/inertial sensor pair.
[ { "created": "Mon, 3 Oct 2022 18:05:57 GMT", "version": "v1" }, { "created": "Sun, 9 Oct 2022 05:02:33 GMT", "version": "v2" } ]
2023-09-14
[ [ "Das", "Sandipan", "" ], [ "Mahabadi", "Navid", "" ], [ "Fallon", "Maurice", "" ], [ "Chatterjee", "Saikat", "" ] ]
We present a robust system for state estimation that fuses measurements from multiple lidars and inertial sensors with GNSS data. To initiate the method, we use the prior GNSS pose information. We then perform incremental motion estimation in real time, which produces robust motion estimates in a global frame by fusing lidar and IMU signals with GNSS translation components using a factor graph framework. We also propose methods to account for signal loss with a novel synchronization and fusion mechanism. To validate our approach, extensive tests were carried out on data collected using Scania test vehicles (5 sequences for a total of ~7 km). From our evaluations, we show an average improvement of 61% in relative translation error and 42% in rotational error compared to a state-of-the-art estimator fusing a single lidar/inertial sensor pair.
1910.10944
Adish Singla
Farnam Mansouri, Yuxin Chen, Ara Vartanian, Xiaojin Zhu, Adish Singla
Preference-Based Batch and Sequential Teaching: Towards a Unified View of Models
NeurIPS 2019
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Algorithmic machine teaching studies the interaction between a teacher and a learner where the teacher selects labeled examples aiming at teaching a target hypothesis. In a quest to lower teaching complexity and to achieve more natural teacher-learner interactions, several teaching models and complexity measures have been proposed for both the batch settings (e.g., worst-case, recursive, preference-based, and non-clashing models) as well as the sequential settings (e.g., local preference-based model). To better understand the connections between these different batch and sequential models, we develop a novel framework which captures the teaching process via preference functions $\Sigma$. In our framework, each function $\sigma \in \Sigma$ induces a teacher-learner pair with teaching complexity as $\TD(\sigma)$. We show that the above-mentioned teaching models are equivalent to specific types/families of preference functions in our framework. This equivalence, in turn, allows us to study the differences between two important teaching models, namely $\sigma$ functions inducing the strongest batch (i.e., non-clashing) model and $\sigma$ functions inducing a weak sequential (i.e., local preference-based) model. Finally, we identify preference functions inducing a novel family of sequential models with teaching complexity linear in the VC dimension of the hypothesis class: this is in contrast to the best known complexity result for the batch models which is quadratic in the VC dimension.
[ { "created": "Thu, 24 Oct 2019 07:03:55 GMT", "version": "v1" } ]
2019-10-25
[ [ "Mansouri", "Farnam", "" ], [ "Chen", "Yuxin", "" ], [ "Vartanian", "Ara", "" ], [ "Zhu", "Xiaojin", "" ], [ "Singla", "Adish", "" ] ]
Algorithmic machine teaching studies the interaction between a teacher and a learner where the teacher selects labeled examples aiming at teaching a target hypothesis. In a quest to lower teaching complexity and to achieve more natural teacher-learner interactions, several teaching models and complexity measures have been proposed for both the batch settings (e.g., worst-case, recursive, preference-based, and non-clashing models) as well as the sequential settings (e.g., local preference-based model). To better understand the connections between these different batch and sequential models, we develop a novel framework which captures the teaching process via preference functions $\Sigma$. In our framework, each function $\sigma \in \Sigma$ induces a teacher-learner pair with teaching complexity as $\TD(\sigma)$. We show that the above-mentioned teaching models are equivalent to specific types/families of preference functions in our framework. This equivalence, in turn, allows us to study the differences between two important teaching models, namely $\sigma$ functions inducing the strongest batch (i.e., non-clashing) model and $\sigma$ functions inducing a weak sequential (i.e., local preference-based) model. Finally, we identify preference functions inducing a novel family of sequential models with teaching complexity linear in the VC dimension of the hypothesis class: this is in contrast to the best known complexity result for the batch models which is quadratic in the VC dimension.
2311.09270
Saeed Khalilian
Saeed Khalilian, Vasileios Tsouvalas, Tanir Ozcelebi, Nirvana Meratnia
FedCode: Communication-Efficient Federated Learning via Transferring Codebooks
null
null
null
null
cs.LG cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated Learning (FL) is a distributed machine learning paradigm that enables learning models from decentralized local data. While FL offers appealing properties for clients' data privacy, it imposes high communication burdens for exchanging model weights between a server and the clients. Existing approaches rely on model compression techniques, such as pruning and weight clustering to tackle this. However, transmitting the entire set of weight updates at each federated round, even in a compressed format, limits the potential for a substantial reduction in communication volume. We propose FedCode where clients transmit only codebooks, i.e., the cluster centers of updated model weight values. To ensure a smooth learning curve and proper calibration of clusters between the server and the clients, FedCode periodically transfers model weights after multiple rounds of solely communicating codebooks. This results in a significant reduction in communication volume between clients and the server in both directions, without imposing significant computational overhead on the clients or leading to major performance degradation of the models. We evaluate the effectiveness of FedCode using various publicly available datasets with ResNet-20 and MobileNet backbone model architectures. Our evaluations demonstrate a 12.2-fold data transmission reduction on average while maintaining a comparable model performance with an average accuracy loss of 1.3% compared to FedAvg. Further validation of FedCode performance under non-IID data distributions showcased an average accuracy loss of 2.0% compared to FedAvg while achieving approximately a 12.7-fold data transmission reduction.
[ { "created": "Wed, 15 Nov 2023 12:06:32 GMT", "version": "v1" } ]
2023-11-17
[ [ "Khalilian", "Saeed", "" ], [ "Tsouvalas", "Vasileios", "" ], [ "Ozcelebi", "Tanir", "" ], [ "Meratnia", "Nirvana", "" ] ]
Federated Learning (FL) is a distributed machine learning paradigm that enables learning models from decentralized local data. While FL offers appealing properties for clients' data privacy, it imposes high communication burdens for exchanging model weights between a server and the clients. Existing approaches rely on model compression techniques, such as pruning and weight clustering to tackle this. However, transmitting the entire set of weight updates at each federated round, even in a compressed format, limits the potential for a substantial reduction in communication volume. We propose FedCode where clients transmit only codebooks, i.e., the cluster centers of updated model weight values. To ensure a smooth learning curve and proper calibration of clusters between the server and the clients, FedCode periodically transfers model weights after multiple rounds of solely communicating codebooks. This results in a significant reduction in communication volume between clients and the server in both directions, without imposing significant computational overhead on the clients or leading to major performance degradation of the models. We evaluate the effectiveness of FedCode using various publicly available datasets with ResNet-20 and MobileNet backbone model architectures. Our evaluations demonstrate a 12.2-fold data transmission reduction on average while maintaining a comparable model performance with an average accuracy loss of 1.3% compared to FedAvg. Further validation of FedCode performance under non-IID data distributions showcased an average accuracy loss of 2.0% compared to FedAvg while achieving approximately a 12.7-fold data transmission reduction.
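FedCode's central idea, as summarized above, is that clients and server exchange only the cluster centers (codebooks) of model weights. The sketch below is a hedged, numpy-only illustration of that building block: a naive k-means extracts a small codebook from a weight tensor, and the receiver snaps its weights to the nearest codebook entries. The function names, codebook size, and reconstruction rule are assumptions for illustration, not FedCode's exact protocol.

```python
import numpy as np

def extract_codebook(weights, k=16, iters=20, seed=0):
    """Naive 1-D k-means over the flattened weights; returns k cluster centers."""
    rng = np.random.default_rng(seed)
    w = weights.ravel()
    centers = rng.choice(w, size=k, replace=False)
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = w[assign == j].mean()
    return centers

def apply_codebook(weights, codebook):
    """Snap every weight to its nearest codebook entry (done by the receiver)."""
    w = weights.ravel()
    assign = np.argmin(np.abs(w[:, None] - codebook[None, :]), axis=1)
    return codebook[assign].reshape(weights.shape)

layer = np.random.randn(256, 128).astype(np.float32)
codebook = extract_codebook(layer, k=16)          # only these 16 floats are sent
reconstructed = apply_codebook(layer, codebook)   # receiver re-quantizes locally
```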
2205.05019
Antoine Yang
Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, Cordelia Schmid
Learning to Answer Visual Questions from Web Videos
Accepted at the TPAMI Special Issue on the Best Papers of ICCV 2021. Journal extension of the conference paper arXiv:2012.00451. 16 pages, 13 figures
null
null
null
cs.CV cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent methods for visual question answering rely on large-scale annotated datasets. Manual annotation of questions and answers for videos, however, is tedious, expensive and prevents scalability. In this work, we propose to avoid manual annotation and generate a large-scale training dataset for video question answering making use of automatic cross-modal supervision. We leverage a question generation transformer trained on text data and use it to generate question-answer pairs from transcribed video narrations. Given narrated videos, we then automatically generate the HowToVQA69M dataset with 69M video-question-answer triplets. To handle the open vocabulary of diverse answers in this dataset, we propose a training procedure based on a contrastive loss between a video-question multi-modal transformer and an answer transformer. We introduce the zero-shot VideoQA task and the VideoQA feature probe evaluation setting and show excellent results, in particular for rare answers. Furthermore, our method achieves competitive results on MSRVTT-QA, ActivityNet-QA, MSVD-QA and How2QA datasets. We also show that our VideoQA dataset generation approach generalizes to another source of web video and text data. We use our method to generate the WebVidVQA3M dataset from the WebVid dataset, i.e., videos with alt-text annotations, and show its benefits for training VideoQA models. Finally, for a detailed evaluation we introduce iVQA, a new VideoQA dataset with reduced language bias and high-quality manual annotations. Code, datasets and trained models are available at https://antoyang.github.io/just-ask.html
[ { "created": "Tue, 10 May 2022 16:34:26 GMT", "version": "v1" }, { "created": "Wed, 11 May 2022 05:31:08 GMT", "version": "v2" } ]
2022-05-12
[ [ "Yang", "Antoine", "" ], [ "Miech", "Antoine", "" ], [ "Sivic", "Josef", "" ], [ "Laptev", "Ivan", "" ], [ "Schmid", "Cordelia", "" ] ]
Recent methods for visual question answering rely on large-scale annotated datasets. Manual annotation of questions and answers for videos, however, is tedious, expensive and prevents scalability. In this work, we propose to avoid manual annotation and generate a large-scale training dataset for video question answering making use of automatic cross-modal supervision. We leverage a question generation transformer trained on text data and use it to generate question-answer pairs from transcribed video narrations. Given narrated videos, we then automatically generate the HowToVQA69M dataset with 69M video-question-answer triplets. To handle the open vocabulary of diverse answers in this dataset, we propose a training procedure based on a contrastive loss between a video-question multi-modal transformer and an answer transformer. We introduce the zero-shot VideoQA task and the VideoQA feature probe evaluation setting and show excellent results, in particular for rare answers. Furthermore, our method achieves competitive results on MSRVTT-QA, ActivityNet-QA, MSVD-QA and How2QA datasets. We also show that our VideoQA dataset generation approach generalizes to another source of web video and text data. We use our method to generate the WebVidVQA3M dataset from the WebVid dataset, i.e., videos with alt-text annotations, and show its benefits for training VideoQA models. Finally, for a detailed evaluation we introduce iVQA, a new VideoQA dataset with reduced language bias and high-quality manual annotations. Code, datasets and trained models are available at https://antoyang.github.io/just-ask.html
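The training procedure described above relies on a contrastive loss between a video-question transformer and an answer transformer. The snippet below is a generic InfoNCE-style, in-batch-negatives contrastive loss in PyTorch, shown as a hedged approximation of that idea; the paper's exact loss and sampling of negative answers may differ, and all names here are illustrative.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(vq_emb, ans_emb, temperature=0.07):
    """InfoNCE-style loss: each video-question embedding should score highest
    against its own answer embedding within the batch (in-batch negatives)."""
    vq = F.normalize(vq_emb, dim=-1)
    ans = F.normalize(ans_emb, dim=-1)
    logits = vq @ ans.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(vq.size(0), device=vq.device)
    # Symmetric: match questions to answers and answers to questions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

vq_emb = torch.randn(8, 512)    # video-question transformer outputs (batch of 8)
ans_emb = torch.randn(8, 512)   # answer transformer outputs for the paired answers
loss = contrastive_loss(vq_emb, ans_emb)
```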
2309.09256
Kazuto Nakashima
Kazuto Nakashima, Ryo Kurazume
LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models
ICRA 2024
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative modeling of 3D LiDAR data is an emerging task with promising applications for autonomous mobile robots, such as scalable simulation, scene manipulation, and sparse-to-dense completion of LiDAR point clouds. While existing approaches have demonstrated the feasibility of image-based LiDAR data generation using deep generative models, they still struggle with fidelity and training stability. In this work, we present R2DM, a novel generative model for LiDAR data that can generate diverse and high-fidelity 3D scene point clouds based on the image representation of range and reflectance intensity. Our method is built upon denoising diffusion probabilistic models (DDPMs), which have shown impressive results among generative model frameworks in recent years. To effectively train DDPMs in the LiDAR domain, we first conduct an in-depth analysis of data representation, loss functions, and spatial inductive biases. Leveraging our R2DM model, we also introduce a flexible LiDAR completion pipeline based on the powerful capabilities of DDPMs. We demonstrate that our method surpasses existing methods in generation tasks on the KITTI-360 and KITTI-Raw datasets, as well as in the completion task on the KITTI-360 dataset. Our project page can be found at https://kazuto1011.github.io/r2dm.
[ { "created": "Sun, 17 Sep 2023 12:26:57 GMT", "version": "v1" }, { "created": "Mon, 4 Mar 2024 07:37:55 GMT", "version": "v2" } ]
2024-03-05
[ [ "Nakashima", "Kazuto", "" ], [ "Kurazume", "Ryo", "" ] ]
Generative modeling of 3D LiDAR data is an emerging task with promising applications for autonomous mobile robots, such as scalable simulation, scene manipulation, and sparse-to-dense completion of LiDAR point clouds. While existing approaches have demonstrated the feasibility of image-based LiDAR data generation using deep generative models, they still struggle with fidelity and training stability. In this work, we present R2DM, a novel generative model for LiDAR data that can generate diverse and high-fidelity 3D scene point clouds based on the image representation of range and reflectance intensity. Our method is built upon denoising diffusion probabilistic models (DDPMs), which have shown impressive results among generative model frameworks in recent years. To effectively train DDPMs in the LiDAR domain, we first conduct an in-depth analysis of data representation, loss functions, and spatial inductive biases. Leveraging our R2DM model, we also introduce a flexible LiDAR completion pipeline based on the powerful capabilities of DDPMs. We demonstrate that our method surpasses existing methods in generation tasks on the KITTI-360 and KITTI-Raw datasets, as well as in the completion task on the KITTI-360 dataset. Our project page can be found at https://kazuto1011.github.io/r2dm.
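Since the method builds on denoising diffusion probabilistic models over range/intensity image representations, a minimal DDPM training-loss sketch may help fix ideas. The code below uses a linear noise schedule, a toy placeholder denoiser, and the standard epsilon-prediction objective; it is not R2DM's architecture or the schedules and losses analyzed in the paper, and every name is hypothetical.

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)               # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class TinyDenoiser(nn.Module):
    """Placeholder epsilon-predictor over 2-channel (range, intensity) images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 2, 3, padding=1))
    def forward(self, x, t):      # a real model would also embed the timestep t
        return self.net(x)

def ddpm_loss(model, x0):
    """Sample a timestep, noise the clean image, and regress the added noise."""
    b = x0.size(0)
    t = torch.randint(0, T, (b,))
    eps = torch.randn_like(x0)
    a_bar = alphas_bar[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps
    return ((model(x_t, t) - eps) ** 2).mean()

model = TinyDenoiser()
x0 = torch.randn(4, 2, 64, 1024)   # batch of range/intensity images (64 beams)
loss = ddpm_loss(model, x0)
loss.backward()
```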
2111.13613
Leon Bungert
Leon Bungert, Nicol\'as Garc\'ia Trillos, Ryan Murray
The Geometry of Adversarial Training in Binary Classification
null
Information and Inference: A Journal of the IMA, 2023
10.1093/imaiai/iaac029
null
cs.LG math.AP math.MG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We establish an equivalence between a family of adversarial training problems for non-parametric binary classification and a family of regularized risk minimization problems where the regularizer is a nonlocal perimeter functional. The resulting regularized risk minimization problems admit exact convex relaxations of the type $L^1+$ (nonlocal) $\operatorname{TV}$, a form frequently studied in image analysis and graph-based learning. A rich geometric structure is revealed by this reformulation which in turn allows us to establish a series of properties of optimal solutions of the original problem, including the existence of minimal and maximal solutions (interpreted in a suitable sense), and the existence of regular solutions (also interpreted in a suitable sense). In addition, we highlight how the connection between adversarial training and perimeter minimization problems provides a novel, directly interpretable, statistical motivation for a family of regularized risk minimization problems involving perimeter/total variation. The majority of our theoretical results are independent of the distance used to define adversarial attacks.
[ { "created": "Fri, 26 Nov 2021 17:19:50 GMT", "version": "v1" }, { "created": "Mon, 1 Aug 2022 08:16:49 GMT", "version": "v2" } ]
2023-02-13
[ [ "Bungert", "Leon", "" ], [ "Trillos", "Nicolás García", "" ], [ "Murray", "Ryan", "" ] ]
We establish an equivalence between a family of adversarial training problems for non-parametric binary classification and a family of regularized risk minimization problems where the regularizer is a nonlocal perimeter functional. The resulting regularized risk minimization problems admit exact convex relaxations of the type $L^1+$ (nonlocal) $\operatorname{TV}$, a form frequently studied in image analysis and graph-based learning. A rich geometric structure is revealed by this reformulation which in turn allows us to establish a series of properties of optimal solutions of the original problem, including the existence of minimal and maximal solutions (interpreted in a suitable sense), and the existence of regular solutions (also interpreted in a suitable sense). In addition, we highlight how the connection between adversarial training and perimeter minimization problems provides a novel, directly interpretable, statistical motivation for a family of regularized risk minimization problems involving perimeter/total variation. The majority of our theoretical results are independent of the distance used to define adversarial attacks.
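For orientation, the standard adversarial training objective for binary classification over a decision region, together with a schematic of the perimeter-regularized form the paper relates it to, can be written as follows; the notation is generic and not the paper's exact statement.

```latex
% Adversarial risk of a decision region A \subseteq \mathbb{R}^d under
% perturbations with budget \varepsilon (standard formulation):
\min_{A}\;
\mathbb{E}_{(x,y)\sim\mu}\!\left[
  \sup_{\|x'-x\|\le\varepsilon}
  \mathbf{1}\{\mathbf{1}_{A}(x')\neq y\}
\right]
% Schematically, this is related to a regularized risk of the form
%   (standard risk of A) \;+\; \varepsilon\,\mathrm{Per}_{\varepsilon}(A),
% where \mathrm{Per}_{\varepsilon} denotes a nonlocal perimeter functional.
```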
2209.10258
Dominik Braun
Dominik Braun, Timo M\"uller, Nada Sahlab, Nasser Jazdi, Wolfgang Schloegl and Michael Weyrich
A graph-based knowledge representation and pattern mining supporting the Digital Twin creation of existing manufacturing systems
4 pages, 3 figures. Accepted at IEEE ETFA 2022
null
10.1109/ETFA52439.2022.9921707
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The creation of a Digital Twin for existing manufacturing systems, so-called brownfield systems, is a challenging task due to the expert knowledge required about the structure of brownfield systems and the effort needed to realize the digital models. Several approaches and methods have already been proposed that at least partially digitalize the information about a brownfield manufacturing system. A Digital Twin requires linked information from multiple sources. This paper presents a graph-based approach to merge information from heterogeneous sources. Furthermore, the approach provides a way to automatically identify templates using graph structure analysis, facilitating further work with the resulting Digital Twin and its further enhancement.
[ { "created": "Wed, 21 Sep 2022 11:08:34 GMT", "version": "v1" } ]
2023-09-04
[ [ "Braun", "Dominik", "" ], [ "Müller", "Timo", "" ], [ "Sahlab", "Nada", "" ], [ "Jazdi", "Nasser", "" ], [ "Schloegl", "Wolfgang", "" ], [ "Weyrich", "Michael", "" ] ]
The creation of a Digital Twin for existing manufacturing systems, so-called brownfield systems, is a challenging task due to the expert knowledge required about the structure of brownfield systems and the effort needed to realize the digital models. Several approaches and methods have already been proposed that at least partially digitalize the information about a brownfield manufacturing system. A Digital Twin requires linked information from multiple sources. This paper presents a graph-based approach to merge information from heterogeneous sources. Furthermore, the approach provides a way to automatically identify templates using graph structure analysis, facilitating further work with the resulting Digital Twin and its further enhancement.
1910.10406
Tetsuo Yokoyama
Hiroki Masuda, Tetsuo Yokoyama
Analyzing Trade-offs in Reversible Linear and Binary Search Algorithms
Proceedings of the Third Workshop on Software Foundations for Data Interoperability (SFDI2019+), October 28, 2019, Fukuoka, Japan
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reversible algorithms are algorithms in which each step represents a partial injective function; they are useful for performance optimization in reversible systems. In this study, using Janus, a reversible imperative high-level programming language, we have developed reversible linear and binary search algorithms. We have analyzed the non-trivial space-time trade-offs between them, focusing on the memory usage disregarding original inputs and outputs, the size of the output garbage disregarding the original inputs, and the maximum amount of traversal of the input. The programs in this study can easily be adapted to other reversible programming languages. Our analysis reveals that the change of the output data and/or the data structure affects the design of efficient reversible algorithms. For example, the number of input data traversals depends on whether the search has succeeded or failed, while it expectedly never changes in corresponding irreversible linear and binary searches. Our observations indicate the importance of the selection of data structures and what is regarded as the output with the aim of the reversible algorithm design.
[ { "created": "Wed, 23 Oct 2019 08:24:15 GMT", "version": "v1" } ]
2019-10-24
[ [ "Masuda", "Hiroki", "" ], [ "Yokoyama", "Tetsuo", "" ] ]
Reversible algorithms are algorithms in which each step represents a partial injective function; they are useful for performance optimization in reversible systems. In this study, using Janus, a reversible imperative high-level programming language, we have developed reversible linear and binary search algorithms. We have analyzed the non-trivial space-time trade-offs between them, focusing on the memory usage disregarding original inputs and outputs, the size of the output garbage disregarding the original inputs, and the maximum amount of traversal of the input. The programs in this study can easily be adapted to other reversible programming languages. Our analysis reveals that the change of the output data and/or the data structure affects the design of efficient reversible algorithms. For example, the number of input data traversals depends on whether the search has succeeded or failed, while it expectedly never changes in corresponding irreversible linear and binary searches. Our observations indicate the importance of the selection of data structures and what is regarded as the output with the aim of the reversible algorithm design.
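The paper's programs are written in Janus; as a language-agnostic illustration of the trade-off it analyzes, the Python sketch below shows a binary search made invertible by also returning the "garbage" output needed to undo it. Note how the garbage differs between a successful and a failed search, which mirrors the observation in the abstract; this is an assumed toy example, not the paper's actual algorithms.

```python
def binary_search_reversible(a, x):
    """Forward direction: returns (position, garbage).  The garbage is the extra
    output needed to make the step injective: a 'found' flag, plus the key itself
    when the search fails (the position alone cannot determine a missing key)."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < x:
            lo = mid + 1
        else:
            hi = mid
    found = lo < len(a) and a[lo] == x
    garbage = (found, None if found else x)
    return lo, garbage

def binary_search_inverse(a, pos, garbage):
    """Backward direction: reconstructs the key from the position and garbage."""
    found, missing_key = garbage
    return a[pos] if found else missing_key

a = [1, 3, 5, 7, 9]
pos, g = binary_search_reversible(a, 7)
assert binary_search_inverse(a, pos, g) == 7
pos, g = binary_search_reversible(a, 4)     # not present: key kept as garbage
assert binary_search_inverse(a, pos, g) == 4
```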
2006.13503
Ali HeydariGorji
Ali HeydariGorji, Seyede Mahya Safavi, Cheng-Ting Lee, Pai H. Chou
Head-mouse: A simple cursor controller based on optical measurement of head tilt
null
2017 IEEE Signal Processing in Medicine and Biology Symposium (SPMB), Philadelphia, PA, 2017, pp. 1-5
10.1109/SPMB.2017.8257058
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a wearable wireless mouse-cursor controller that optically tracks the degree of tilt of the user's head and moves the mouse by relative distances determined by that tilt. The raw data can be processed locally on the wearable device before wirelessly transmitting the mouse-movement reports over the Bluetooth Low Energy (BLE) protocol to the host computer; but for exploration of algorithms, the raw data can also be processed on the host. The use of the standard Human-Interface Device (HID) profile enables plug-and-play of the proposed mouse device on modern computers without requiring separate driver installation. It can be used in two different modes to move the cursor: the joystick mode and the direct-mapped mode. Experimental results show this head-controlled mouse to be intuitive and effective in operating the mouse cursor, with fine-grained control even by untrained users.
[ { "created": "Wed, 24 Jun 2020 06:06:57 GMT", "version": "v1" } ]
2020-06-25
[ [ "HeydariGorji", "Ali", "" ], [ "Safavi", "Seyede Mahya", "" ], [ "Lee", "Cheng-Ting", "" ], [ "Chou", "Pai H.", "" ] ]
This paper describes a wearable wireless mouse-cursor controller that optically tracks the degree of tilt of the user's head and moves the mouse by relative distances determined by that tilt. The raw data can be processed locally on the wearable device before wirelessly transmitting the mouse-movement reports over the Bluetooth Low Energy (BLE) protocol to the host computer; but for exploration of algorithms, the raw data can also be processed on the host. The use of the standard Human-Interface Device (HID) profile enables plug-and-play of the proposed mouse device on modern computers without requiring separate driver installation. It can be used in two different modes to move the cursor: the joystick mode and the direct-mapped mode. Experimental results show this head-controlled mouse to be intuitive and effective in operating the mouse cursor, with fine-grained control even by untrained users.
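The two cursor-control modes mentioned above (joystick and direct-mapped) can be sketched as simple mappings from head-tilt angle to cursor displacement. The following hedged Python example illustrates one plausible mapping; the deadzone and gain values are assumptions, not the device's actual parameters.

```python
def cursor_delta(tilt_x_deg, tilt_y_deg, mode="joystick",
                 deadzone_deg=3.0, joystick_gain=0.8, direct_gain=12.0):
    """Map head tilt (degrees) to a per-update cursor displacement in pixels.

    joystick mode: the cursor keeps drifting while the head is held tilted,
                   with speed proportional to tilt beyond a small deadzone.
    direct mode:   cursor displacement is directly proportional to the change
                   in tilt (the caller passes tilt deltas instead of angles).
    """
    def one_axis(t):
        if mode == "joystick":
            if abs(t) < deadzone_deg:
                return 0.0
            return joystick_gain * (t - deadzone_deg * (1 if t > 0 else -1))
        return direct_gain * t   # direct-mapped mode

    return one_axis(tilt_x_deg), one_axis(tilt_y_deg)

# Held at a constant 8-degree tilt: the cursor keeps drifting each update.
print(cursor_delta(8.0, 0.0, mode="joystick"))
# A 2-degree tilt change in direct mode moves the cursor by 24 pixels once.
print(cursor_delta(2.0, 0.0, mode="direct"))
```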
2105.13774
Michele Tizzoni
Serena Giurgola, Simone Piaggesi, M\'arton Karsai, Yelena Mejova, Andr\'e Panisson, Michele Tizzoni
Mapping urban socioeconomic inequalities in developing countries through Facebook advertising data
null
null
null
null
cs.CY cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Ending poverty in all its forms everywhere is the number one Sustainable Development Goal of the UN 2030 Agenda. To monitor the progress towards such an ambitious target, reliable, up-to-date and fine-grained measurements of socioeconomic indicators are necessary. When it comes to socioeconomic development, novel digital traces can provide a complementary data source to overcome the limits of traditional data collection methods, which are often not regularly updated and lack adequate spatial resolution. In this study, we collect publicly available and anonymous advertising audience estimates from Facebook to predict socioeconomic conditions of urban residents, at a fine spatial granularity, in four large urban areas: Atlanta (USA), Bogot\'a (Colombia), Santiago (Chile), and Casablanca (Morocco). We find that behavioral attributes inferred from the Facebook marketing platform can accurately map the socioeconomic status of residential areas within cities, and that predictive performance is comparable in both high and low-resource settings. We also show that training a model on attributes of adult Facebook users, aged more than 25, leads to a more accurate mapping of socioeconomic conditions in all cities. Our work provides additional evidence of the value of social advertising media data to measure human development.
[ { "created": "Fri, 28 May 2021 12:28:35 GMT", "version": "v1" } ]
2021-05-31
[ [ "Giurgola", "Serena", "" ], [ "Piaggesi", "Simone", "" ], [ "Karsai", "Márton", "" ], [ "Mejova", "Yelena", "" ], [ "Panisson", "André", "" ], [ "Tizzoni", "Michele", "" ] ]
Ending poverty in all its forms everywhere is the number one Sustainable Development Goal of the UN 2030 Agenda. To monitor the progress towards such an ambitious target, reliable, up-to-date and fine-grained measurements of socioeconomic indicators are necessary. When it comes to socioeconomic development, novel digital traces can provide a complementary data source to overcome the limits of traditional data collection methods, which are often not regularly updated and lack adequate spatial resolution. In this study, we collect publicly available and anonymous advertising audience estimates from Facebook to predict socioeconomic conditions of urban residents, at a fine spatial granularity, in four large urban areas: Atlanta (USA), Bogot\'a (Colombia), Santiago (Chile), and Casablanca (Morocco). We find that behavioral attributes inferred from the Facebook marketing platform can accurately map the socioeconomic status of residential areas within cities, and that predictive performance is comparable in both high and low-resource settings. We also show that training a model on attributes of adult Facebook users, aged more than 25, leads to a more accurate mapping of socioeconomic conditions in all cities. Our work provides additional evidence of the value of social advertising media data to measure human development.
1803.09665
Tianjian Chen
Tianjian Chen, Maximilian Haas-Heger and Matei Ciocarlie
Underactuated Hand Design Using Mechanically Realizable Manifolds
7 pages, 6 figures, 2018 IEEE International Conference on Robotics and Automation (ICRA)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hand synergies, or joint coordination patterns, have become an effective tool for achieving versatile robotic grasping with simple hands or planning algorithms. Here we propose a method to determine the hand synergies such that they can be physically implemented in an underactuated fashion. Given a kinematic hand model and a set of desired grasps, our algorithm optimizes a Mechanically Realizable Manifold designed to be achievable by a physical underactuation mechanism, enabling the resulting hand to achieve the desired grasps with few actuators. Furthermore, in contrast to existing methods for determining synergies which are only concerned with hand posture, our method explicitly optimizes the stability of the target grasps. We implement this method in the design of a three-finger single-actuator hand as an example, and evaluate its effectiveness numerically and experimentally.
[ { "created": "Mon, 26 Mar 2018 15:32:24 GMT", "version": "v1" }, { "created": "Wed, 1 Aug 2018 04:36:44 GMT", "version": "v2" } ]
2018-08-02
[ [ "Chen", "Tianjian", "" ], [ "Haas-Heger", "Maximilian", "" ], [ "Ciocarlie", "Matei", "" ] ]
Hand synergies, or joint coordination patterns, have become an effective tool for achieving versatile robotic grasping with simple hands or planning algorithms. Here we propose a method to determine the hand synergies such that they can be physically implemented in an underactuated fashion. Given a kinematic hand model and a set of desired grasps, our algorithm optimizes a Mechanically Realizable Manifold designed to be achievable by a physical underactuation mechanism, enabling the resulting hand to achieve the desired grasps with few actuators. Furthermore, in contrast to existing methods for determining synergies which are only concerned with hand posture, our method explicitly optimizes the stability of the target grasps. We implement this method in the design of a three-finger single-actuator hand as an example, and evaluate its effectiveness numerically and experimentally.
2203.08426
Sridhar Iyer
Sridhar Iyer, Rahul Jashvantbhai Pandya, Rakhee Kallimani, Krishna Pai, Rajashri Khanai, Dattaprasad Torse, Swati Mavinkattimath
Survey on Internet of Things enabled by 6G Wireless Networks
null
null
null
null
cs.NI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
6G wireless technology is envisioned to revolutionize multiple customer services with the Internet of Things (IoT), thereby contributing to a ubiquitous intelligent society comprising autonomous systems. In this chapter, we conduct a detailed survey of IoT networks enabled by 6G wireless networks and investigate the emerging possibilities that 6G technology offers within IoT networks and their utilization. First, we detail the breakthrough IoT technologies and the technological drivers which are anticipated to strengthen IoT networks in the future. Next, we present the relevant use cases and discuss the role of 6G technology within a broad spectrum of potential IoT applications. Lastly, we highlight several open research challenges, list the potential research needs, and encourage further research within the thrust area of IoT enabled by 6G networks.
[ { "created": "Wed, 16 Mar 2022 07:00:57 GMT", "version": "v1" } ]
2022-04-12
[ [ "Iyer", "Sridhar", "" ], [ "Pandya", "Rahul Jashvantbhai", "" ], [ "Kallimani", "Rakhee", "" ], [ "Pai", "Krishna", "" ], [ "Khanai", "Rajashri", "" ], [ "Torse", "Dattaprasad", "" ], [ "Mavinkattimath", "Swati", "" ] ]
6G wireless technology is envisioned to revolutionize multiple customer services with the Internet of Things (IoT), thereby contributing to a ubiquitous intelligent society comprising autonomous systems. In this chapter, we conduct a detailed survey of IoT networks enabled by 6G wireless networks and investigate the emerging possibilities that 6G technology offers within IoT networks and their utilization. First, we detail the breakthrough IoT technologies and the technological drivers which are anticipated to strengthen IoT networks in the future. Next, we present the relevant use cases and discuss the role of 6G technology within a broad spectrum of potential IoT applications. Lastly, we highlight several open research challenges, list the potential research needs, and encourage further research within the thrust area of IoT enabled by 6G networks.
2103.14036
David Smith
David Smith, Frederik Geth, Elliott Vercoe, Andrew Feutrill, Ming Ding, Jonathan Chan, James Foster and Thierry Rakotoarivelo
Realistic Differentially-Private Transmission Power Flow Data Release
null
null
null
null
cs.CR cs.AI cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For the modeling, design and planning of future energy transmission networks, it is vital for stakeholders to access faithful and useful power flow data, while provably maintaining the privacy of service providers' confidential business data. This critical challenge has recently been somewhat addressed in [1]. This paper significantly extends this existing work. First, we reduce the potential information leakage by proposing a fundamentally different post-processing method, using public information of grid losses rather than power dispatch, which achieves a higher level of privacy protection. Second, we protect more sensitive parameters, i.e., branch shunt susceptance in addition to series impedance (complete pi-model). This protects power flow data for high-voltage transmission networks, using differentially private transformations that maintain the optimal power flow consistent with, and faithful to, expected model behaviour. Third, we tested our approach at a larger scale than previous work, using the PGLib-OPF test cases [10]. This resulted in the successful obfuscation of up to a 4700-bus system, which can be successfully solved with faithfulness of parameters and good utility to data analysts. Our approach addresses a more feasible and realistic scenario, and provides higher than state-of-the-art privacy guarantees, while maintaining solvability, fidelity and feasibility of the system.
[ { "created": "Thu, 25 Mar 2021 04:04:12 GMT", "version": "v1" } ]
2021-03-29
[ [ "Smith", "David", "" ], [ "Geth", "Frederik", "" ], [ "Vercoe", "Elliott", "" ], [ "Feutrill", "Andrew", "" ], [ "Ding", "Ming", "" ], [ "Chan", "Jonathan", "" ], [ "Foster", "James", "" ], [ "Rakotoarivelo", "Thierry", "" ] ]
For the modeling, design and planning of future energy transmission networks, it is vital for stakeholders to access faithful and useful power flow data, while provably maintaining the privacy of service providers' confidential business data. This critical challenge has recently been somewhat addressed in [1]. This paper significantly extends this existing work. First, we reduce the potential information leakage by proposing a fundamentally different post-processing method, using public information of grid losses rather than power dispatch, which achieves a higher level of privacy protection. Second, we protect more sensitive parameters, i.e., branch shunt susceptance in addition to series impedance (complete pi-model). This protects power flow data for high-voltage transmission networks, using differentially private transformations that maintain the optimal power flow consistent with, and faithful to, expected model behaviour. Third, we tested our approach at a larger scale than previous work, using the PGLib-OPF test cases [10]. This resulted in the successful obfuscation of up to a 4700-bus system, which can be successfully solved with faithfulness of parameters and good utility to data analysts. Our approach addresses a more feasible and realistic scenario, and provides higher than state-of-the-art privacy guarantees, while maintaining solvability, fidelity and feasibility of the system.
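The paper's release mechanism involves differentially private transformations plus post-processing to keep the optimal power flow faithful and feasible. As a much simpler, hedged building block, the snippet below applies the basic Laplace mechanism to a few branch parameters; the sensitivities, privacy budget, and parameter values are purely illustrative and this is not the paper's method.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Basic Laplace mechanism: add noise scaled to sensitivity / epsilon."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
branches = [  # hypothetical per-branch parameters (per-unit)
    {"r": 0.010, "x": 0.085, "b": 0.176},
    {"r": 0.017, "x": 0.092, "b": 0.158},
]
epsilon = 1.0                               # illustrative privacy budget
sens = {"r": 0.005, "x": 0.02, "b": 0.05}   # assumed sensitivities

private = [
    {k: laplace_mechanism(v, sens[k], epsilon, rng) for k, v in br.items()}
    for br in branches
]
# A realistic release would additionally post-process these values so that the
# resulting network still admits a feasible, faithful optimal power flow.
```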
2209.08539
Zhuozhu Jian
Zhuozhu Jian, Zihong Yan, Xuanang Lei, Zihong Lu, Bin Lan, Xueqian Wang, Bin Liang
Dynamic Control Barrier Function-based Model Predictive Control to Safety-Critical Obstacle-Avoidance of Mobile Robot
Submitted to IEEE International Conference on Robotics and Automation (ICRA) 2023
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an efficient and safe method to avoid static and dynamic obstacles based on LiDAR. First, the point cloud is used to generate a real-time local grid map for obstacle detection. Then, obstacles are clustered by the DBSCAN algorithm and enclosed with minimum bounding ellipses (MBEs). In addition, data association is conducted to match each MBE with the obstacle in the current frame. Considering the MBE as an observation, a Kalman filter (KF) is used to estimate and predict the motion state of the obstacle. In this way, the trajectory of each obstacle in the forward time domain can be parameterized as a set of ellipses. Due to the uncertainty of the MBE, the semi-major and semi-minor axes of the parameterized ellipse are extended to ensure safety. We extend the traditional Control Barrier Function (CBF) and propose the Dynamic Control Barrier Function (D-CBF). We combine the D-CBF with Model Predictive Control (MPC) to implement safety-critical dynamic obstacle avoidance. Experiments in simulated and real scenarios are conducted to verify the effectiveness of our algorithm. The source code is released for the reference of the community.
[ { "created": "Sun, 18 Sep 2022 11:37:10 GMT", "version": "v1" } ]
2022-09-20
[ [ "Jian", "Zhuozhu", "" ], [ "Yan", "Zihong", "" ], [ "Lei", "Xuanang", "" ], [ "Lu", "Zihong", "" ], [ "Lan", "Bin", "" ], [ "Wang", "Xueqian", "" ], [ "Liang", "Bin", "" ] ]
This paper presents an efficient and safe method to avoid static and dynamic obstacles based on LiDAR. First, the point cloud is used to generate a real-time local grid map for obstacle detection. Then, obstacles are clustered by the DBSCAN algorithm and enclosed with minimum bounding ellipses (MBEs). In addition, data association is conducted to match each MBE with the obstacle in the current frame. Considering the MBE as an observation, a Kalman filter (KF) is used to estimate and predict the motion state of the obstacle. In this way, the trajectory of each obstacle in the forward time domain can be parameterized as a set of ellipses. Due to the uncertainty of the MBE, the semi-major and semi-minor axes of the parameterized ellipse are extended to ensure safety. We extend the traditional Control Barrier Function (CBF) and propose the Dynamic Control Barrier Function (D-CBF). We combine the D-CBF with Model Predictive Control (MPC) to implement safety-critical dynamic obstacle avoidance. Experiments in simulated and real scenarios are conducted to verify the effectiveness of our algorithm. The source code is released for the reference of the community.
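A compact, hedged sketch of the geometric part of this pipeline is given below: an obstacle cluster is approximated by an ellipse from its sample covariance, the axes are inflated for safety, and an ellipse-based barrier value h(x) >= 0 is evaluated, of the kind an MPC constraint could enforce. This is not the paper's D-CBF formulation; the ellipse fit and margin are assumptions for illustration.

```python
import numpy as np

def bounding_ellipse(points, inflate=0.3):
    """Approximate an obstacle cluster by an ellipse from its sample covariance;
    the semi-axes are inflated by a safety margin to account for uncertainty."""
    center = points.mean(axis=0)
    cov = np.cov(points.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axes = 2.0 * np.sqrt(eigvals) + inflate    # crude semi-axis estimate + margin
    return center, axes, eigvecs

def barrier_value(robot_xy, center, axes, eigvecs):
    """Ellipse-based barrier h(x) = (p/a)^2 + (q/b)^2 - 1; h >= 0 means the
    robot is outside the (inflated) obstacle ellipse."""
    d = eigvecs.T @ (robot_xy - center)        # point in the ellipse frame
    return (d[0] / axes[0]) ** 2 + (d[1] / axes[1]) ** 2 - 1.0

cluster = np.array([[2.0, 1.0], [2.2, 1.1], [1.9, 0.9], [2.1, 1.2]])
center, axes, R = bounding_ellipse(cluster)
print(barrier_value(np.array([0.0, 0.0]), center, axes, R))   # > 0: safe
```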
2310.11398
Muhan Zhang
Muhan Zhang
Neural Attention: Enhancing QKV Calculation in Self-Attention Mechanism with Neural Networks
Updated the formulas in Section 3.2 "Detailed Methodology" and revised Section 2 "Background" for clarity and accuracy
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the realm of deep learning, the self-attention mechanism has substantiated its pivotal role across a myriad of tasks, encompassing natural language processing and computer vision. Despite achieving success across diverse applications, the traditional self-attention mechanism primarily leverages linear transformations for the computation of query, key, and value (QKV), which may not invariably be the optimal choice under specific circumstances. This paper probes into a novel methodology for QKV computation-implementing a specially-designed neural network structure for the calculation. Utilizing a modified Marian model, we conducted experiments on the IWSLT 2017 German-English translation task dataset and juxtaposed our method with the conventional approach. The experimental results unveil a significant enhancement in BLEU scores with our method. Furthermore, our approach also manifested superiority when training the Roberta model with the Wikitext-103 dataset, reflecting a notable reduction in model perplexity compared to its original counterpart. These experimental outcomes not only validate the efficacy of our method but also reveal the immense potential in optimizing the self-attention mechanism through neural network-based QKV computation, paving the way for future research and practical applications. The source code and implementation details for our proposed method can be accessed at https://github.com/ocislyjrti/NeuralAttention.
[ { "created": "Tue, 17 Oct 2023 17:06:26 GMT", "version": "v1" }, { "created": "Tue, 24 Oct 2023 17:12:49 GMT", "version": "v2" } ]
2023-10-25
[ [ "Zhang", "Muhan", "" ] ]
In the realm of deep learning, the self-attention mechanism has substantiated its pivotal role across a myriad of tasks, encompassing natural language processing and computer vision. Despite achieving success across diverse applications, the traditional self-attention mechanism primarily leverages linear transformations for the computation of query, key, and value (QKV), which may not invariably be the optimal choice under specific circumstances. This paper probes into a novel methodology for QKV computation-implementing a specially-designed neural network structure for the calculation. Utilizing a modified Marian model, we conducted experiments on the IWSLT 2017 German-English translation task dataset and juxtaposed our method with the conventional approach. The experimental results unveil a significant enhancement in BLEU scores with our method. Furthermore, our approach also manifested superiority when training the Roberta model with the Wikitext-103 dataset, reflecting a notable reduction in model perplexity compared to its original counterpart. These experimental outcomes not only validate the efficacy of our method but also reveal the immense potential in optimizing the self-attention mechanism through neural network-based QKV computation, paving the way for future research and practical applications. The source code and implementation details for our proposed method can be accessed at https://github.com/ocislyjrti/NeuralAttention.
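The core proposal above replaces the linear Q/K/V projections of self-attention with learned neural networks. The PyTorch sketch below shows a single-head self-attention layer whose query, key, and value are produced by small MLPs; the paper's exact architecture, depth, and integration into Marian or RoBERTa may differ, and this module is only an assumed illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralQKVAttention(nn.Module):
    """Single-head self-attention with MLP-based (rather than purely linear)
    projections for the query, key and value."""
    def __init__(self, d_model, d_hidden=256):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                                 nn.Linear(d_hidden, d_model))
        self.q_net, self.k_net, self.v_net = mlp(), mlp(), mlp()
        self.scale = d_model ** -0.5

    def forward(self, x, mask=None):            # x: (batch, seq, d_model)
        q, k, v = self.q_net(x), self.k_net(x), self.v_net(x)
        scores = q @ k.transpose(-2, -1) * self.scale
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        return F.softmax(scores, dim=-1) @ v

attn = NeuralQKVAttention(d_model=64)
out = attn(torch.randn(2, 10, 64))              # output shape: (2, 10, 64)
```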
1702.07703
Yury Polyanskiy
M. Dalai and Y. Polyanskiy
Bounds on the reliability of typewriter channels
null
null
null
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
New lower and upper bounds on the reliability function of typewriter channels are given. Our lower bounds improve upon the (multiletter) expurgated bound of Gallager, furnishing a new and simple counterexample to a conjecture made in 1967 by Shannon, Gallager and Berlekamp on its tightness. The only other known counterexample is due to Katsman, Tsfasman and Vl\u{a}du\c{t} who used algebraic-geometric codes on a $q$-ary symmetric channels, $q\geq 49$. Here we prove, by introducing dependence between codewords of a random ensemble, that the conjecture is false even for a typewriter channel with $q=4$ inputs. In the process, we also demonstrate that Lov\'asz's proof of the capacity of the pentagon was implicitly contained (but unnoticed!) in the works of Jelinek and Gallager on the expurgated bound done at least ten years before Lov\'asz. In the opposite direction, new upper bounds on the reliability function are derived for channels with an odd number of inputs by using an adaptation of Delsarte's linear programming bound. First we derive a bound based on the minimum distance, which combines Lov\'asz's construction for bounding the graph capacity with the McEliece-Rodemich-Rumsey-Welch construction for bounding the minimum distance of codes in the Hamming space. Then, for the particular case of cross-over probability $1/2$, we derive an improved bound by also using the method of Kalai and Linial to study the spectrum distribution of codes.
[ { "created": "Fri, 24 Feb 2017 18:47:29 GMT", "version": "v1" }, { "created": "Tue, 31 Oct 2017 23:56:59 GMT", "version": "v2" } ]
2017-11-02
[ [ "Dalai", "M.", "" ], [ "Polyanskiy", "Y.", "" ] ]
New lower and upper bounds on the reliability function of typewriter channels are given. Our lower bounds improve upon the (multiletter) expurgated bound of Gallager, furnishing a new and simple counterexample to a conjecture made in 1967 by Shannon, Gallager and Berlekamp on its tightness. The only other known counterexample is due to Katsman, Tsfasman and Vl\u{a}du\c{t} who used algebraic-geometric codes on a $q$-ary symmetric channels, $q\geq 49$. Here we prove, by introducing dependence between codewords of a random ensemble, that the conjecture is false even for a typewriter channel with $q=4$ inputs. In the process, we also demonstrate that Lov\'asz's proof of the capacity of the pentagon was implicitly contained (but unnoticed!) in the works of Jelinek and Gallager on the expurgated bound done at least ten years before Lov\'asz. In the opposite direction, new upper bounds on the reliability function are derived for channels with an odd number of inputs by using an adaptation of Delsarte's linear programming bound. First we derive a bound based on the minimum distance, which combines Lov\'asz's construction for bounding the graph capacity with the McEliece-Rodemich-Rumsey-Welch construction for bounding the minimum distance of codes in the Hamming space. Then, for the particular case of cross-over probability $1/2$, we derive an improved bound by also using the method of Kalai and Linial to study the spectrum distribution of codes.
2406.16722
Yuchen Zou
Yuchen Zou, Yineng Chen, Zuchao Li, Lefei Zhang, Hai Zhao
Venturing into Uncharted Waters: The Navigation Compass from Transformer to Mamba
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transformer, a deep neural network architecture, has long dominated the field of natural language processing and beyond. Nevertheless, the recent introduction of Mamba challenges its supremacy, sparks considerable interest among researchers, and gives rise to a series of Mamba-based models that have exhibited notable potential. This survey paper orchestrates a comprehensive discussion, diving into essential research dimensions, covering: (i) the functioning of the Mamba mechanism and its foundation on the principles of structured state space models; (ii) the proposed improvements and the integration of Mamba with various networks, exploring its potential as a substitute for Transformers; (iii) the combination of Transformers and Mamba to compensate for each other's shortcomings. We have also made efforts to interpret Mamba and Transformer in the framework of kernel functions, allowing for a comparison of their mathematical nature within a unified context. Our paper encompasses the vast majority of improvements related to Mamba to date.
[ { "created": "Mon, 24 Jun 2024 15:27:21 GMT", "version": "v1" } ]
2024-06-25
[ [ "Zou", "Yuchen", "" ], [ "Chen", "Yineng", "" ], [ "Li", "Zuchao", "" ], [ "Zhang", "Lefei", "" ], [ "Zhao", "Hai", "" ] ]
Transformer, a deep neural network architecture, has long dominated the field of natural language processing and beyond. Nevertheless, the recent introduction of Mamba challenges its supremacy, sparks considerable interest among researchers, and gives rise to a series of Mamba-based models that have exhibited notable potential. This survey paper orchestrates a comprehensive discussion, diving into essential research dimensions, covering: (i) the functioning of the Mamba mechanism and its foundation on the principles of structured state space models; (ii) the proposed improvements and the integration of Mamba with various networks, exploring its potential as a substitute for Transformers; (iii) the combination of Transformers and Mamba to compensate for each other's shortcomings. We have also made efforts to interpret Mamba and Transformer in the framework of kernel functions, allowing for a comparison of their mathematical nature within a unified context. Our paper encompasses the vast majority of improvements related to Mamba to date.
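Since the survey centers on structured state space models, a minimal discretized linear SSM recurrence may be a useful reference point. The numpy sketch below runs the scan h_t = A_bar h_{t-1} + B_bar x_t, y_t = C h_t for a single input channel; it is not Mamba's selective, input-dependent, hardware-aware formulation, and the discretization and parameter values are illustrative assumptions.

```python
import numpy as np

def ssm_scan(x, A_bar, B_bar, C):
    """h_t = A_bar h_{t-1} + B_bar x_t ;  y_t = C h_t  (single input channel)."""
    n = A_bar.shape[0]
    h = np.zeros(n)
    ys = []
    for x_t in x:
        h = A_bar @ h + B_bar * x_t
        ys.append(C @ h)
    return np.array(ys)

n_state = 4
dt = 0.1
A = -np.diag(np.arange(1.0, n_state + 1.0))       # stable continuous-time A
A_bar = np.eye(n_state) + dt * A                   # simple Euler discretization
B_bar = dt * np.ones(n_state)
C = np.ones(n_state) / n_state

x = np.sin(np.linspace(0.0, 6.28, 50))             # toy 1-D input sequence
y = ssm_scan(x, A_bar, B_bar, C)
```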
2203.13125
Prasang Gupta
Prasang Gupta, Shaz Hoda and Anand Rao
Intelligent Systematic Investment Agent: an ensemble of deep learning and evolutionary strategies
19 pages, 10 figures
null
null
null
cs.AI cs.CY cs.NE
http://creativecommons.org/licenses/by/4.0/
Machine learning driven trading strategies have garnered a lot of interest over the past few years. There is, however, limited consensus on the ideal approach for the development of such trading strategies. Further, most literature has focused on trading strategies for short-term trading, with little or no focus on strategies that attempt to build long-term wealth. Our paper proposes a new approach for developing long-term investment strategies using an ensemble of evolutionary algorithms and a deep learning model by taking a series of short-term purchase decisions. Our methodology focuses on building long-term wealth by improving systematic investment planning (SIP) decisions on Exchange Traded Funds (ETF) over a period of time. We provide empirical evidence of superior performance (around 1% higher returns) using our ensemble approach as compared to the traditional daily systematic investment practice on a given ETF. Our results are based on live trading decisions made by our algorithm and executed on the Robinhood trading platform.
[ { "created": "Thu, 24 Mar 2022 15:39:05 GMT", "version": "v1" } ]
2022-03-25
[ [ "Gupta", "Prasang", "" ], [ "Hoda", "Shaz", "" ], [ "Rao", "Anand", "" ] ]
Machine learning driven trading strategies have garnered a lot of interest over the past few years. There is, however, limited consensus on the ideal approach for the development of such trading strategies. Further, most literature has focused on trading strategies for short-term trading, with little or no focus on strategies that attempt to build long-term wealth. Our paper proposes a new approach for developing long-term investment strategies using an ensemble of evolutionary algorithms and a deep learning model by taking a series of short-term purchase decisions. Our methodology focuses on building long-term wealth by improving systematic investment planning (SIP) decisions on Exchange Traded Funds (ETF) over a period of time. We provide empirical evidence of superior performance (around 1% higher returns) using our ensemble approach as compared to the traditional daily systematic investment practice on a given ETF. Our results are based on live trading decisions made by our algorithm and executed on the Robinhood trading platform.
0905.0792
Jan Treibig
Jan Treibig, Georg Hager
Introducing a Performance Model for Bandwidth-Limited Loop Kernels
8 pages
null
null
null
cs.PF cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a performance model for bandwidth-limited loop kernels which is founded on the analysis of modern cache-based microarchitectures. This model allows accurate performance prediction and evaluation for existing instruction codes. It provides an in-depth understanding of how the performance at different memory hierarchy levels is made up. The performance of raw memory load, store and copy operations and of a stream vector triad is analyzed and benchmarked on three modern x86-type quad-core architectures in order to demonstrate the capabilities of the model.
[ { "created": "Wed, 6 May 2009 10:55:04 GMT", "version": "v1" } ]
2009-05-07
[ [ "Treibig", "Jan", "" ], [ "Hager", "Georg", "" ] ]
We present a performance model for bandwidth-limited loop kernels which is founded on the analysis of modern cache-based microarchitectures. This model allows accurate performance prediction and evaluation for existing instruction codes. It provides an in-depth understanding of how the performance at different memory hierarchy levels is made up. The performance of raw memory load, store and copy operations and of a stream vector triad is analyzed and benchmarked on three modern x86-type quad-core architectures in order to demonstrate the capabilities of the model.
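The stream vector triad mentioned above is the kernel a[i] = b[i] + c[i] * d[i]. The Python/numpy snippet below times this kernel and reports a rough sustained-bandwidth figure from the nominal kernel traffic; it is only an illustration at the numpy level, not the instruction-level analysis or benchmarks of the paper, and the array size is an arbitrary choice.

```python
import time
import numpy as np

def triad_bandwidth(n=2**24, repeats=5):
    """Times the vector triad a[i] = b[i] + c[i] * d[i] and reports the nominal
    kernel traffic (3 loads + 1 store) divided by the best runtime.  NumPy
    evaluates this in two passes, so the real memory traffic is higher and the
    figure is only a rough lower estimate."""
    b, c, d = (np.random.rand(n) for _ in range(3))
    a = np.empty(n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.multiply(c, d, out=a)   # a = c * d
        np.add(a, b, out=a)        # a = a + b
        best = min(best, time.perf_counter() - t0)
    bytes_moved = 4 * n * 8        # nominal: three loads + one store, 8 B each
    return bytes_moved / best / 1e9

print(f"sustained triad bandwidth: {triad_bandwidth():.1f} GB/s")
```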
2012.15006
Shimiao Li
Shimiao Li, Amritanshu Pandey, Bryan Hooi, Christos Faloutsos and Larry Pileggi
Dynamic Graph-Based Anomaly Detection in the Electrical Grid
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given sensor readings over time from a power grid, how can we accurately detect when an anomaly occurs? A key part of achieving this goal is to use the network of power grid sensors to quickly detect, in real-time, when any unusual events, whether natural faults or malicious, occur on the power grid. Existing bad-data detectors in the industry lack the sophistication to robustly detect broad types of anomalies, especially those due to emerging cyber-attacks, since they operate on a single measurement snapshot of the grid at a time. New ML methods are more widely applicable, but generally do not consider the impact of topology change on sensor measurements and thus cannot accommodate regular topology adjustments in historical data. Hence, we propose DYNWATCH, a domain knowledge based and topology-aware algorithm for anomaly detection using sensors placed on a dynamic grid. Our approach is accurate, outperforming existing approaches by 20% or more (F-measure) in experiments; and fast, running in less than 1.7ms on average per time tick per sensor on a 60K+ branch case using a laptop computer, and scaling linearly in the size of the graph.
[ { "created": "Wed, 30 Dec 2020 02:25:07 GMT", "version": "v1" }, { "created": "Fri, 1 Jan 2021 02:06:38 GMT", "version": "v2" }, { "created": "Wed, 29 Sep 2021 22:12:30 GMT", "version": "v3" }, { "created": "Thu, 2 Dec 2021 21:22:38 GMT", "version": "v4" } ]
2021-12-06
[ [ "Li", "Shimiao", "" ], [ "Pandey", "Amritanshu", "" ], [ "Hooi", "Bryan", "" ], [ "Faloutsos", "Christos", "" ], [ "Pileggi", "Larry", "" ] ]
Given sensor readings over time from a power grid, how can we accurately detect when an anomaly occurs? A key part of achieving this goal is to use the network of power grid sensors to quickly detect, in real-time, when any unusual events, whether natural faults or malicious attacks, occur on the power grid. Existing bad-data detectors in the industry lack the sophistication to robustly detect broad types of anomalies, especially those due to emerging cyber-attacks, since they operate on a single measurement snapshot of the grid at a time. New ML methods are more widely applicable, but generally do not consider the impact of topology change on sensor measurements and thus cannot accommodate regular topology adjustments in historical data. Hence, we propose DYNWATCH, a domain-knowledge-based and topology-aware algorithm for anomaly detection using sensors placed on a dynamic grid. Our approach is accurate, outperforming existing approaches by 20% or more (F-measure) in experiments; and fast, running in less than 1.7ms on average per time tick per sensor on a 60K+ branch case using a laptop computer, and scaling linearly in the size of the graph.
2206.12708
Syrine Belakaria
Syrine Belakaria, Janardhan Rao Doppa, Nicolo Fusi, Rishit Sheth
Bayesian Optimization Over Iterative Learners with Structured Responses: A Budget-aware Planning Approach
null
Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS) 2023
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rising growth of deep neural networks (DNNs) and datasets in size motivates the need for efficient solutions for simultaneous model selection and training. Many methods for hyperparameter optimization (HPO) of iterative learners, including DNNs, attempt to solve this problem by querying and learning a response surface while searching for the optimum of that surface. However, many of these methods make myopic queries, do not consider prior knowledge about the response structure, and/or perform a biased cost-aware search, all of which exacerbate identifying the best-performing model when a total cost budget is specified. This paper proposes a novel approach referred to as {\bf B}udget-{\bf A}ware {\bf P}lanning for {\bf I}terative Learners (BAPI) to solve HPO problems under a constrained cost budget. BAPI is an efficient non-myopic Bayesian optimization solution that accounts for the budget and leverages the prior knowledge about the objective function and cost function to select better configurations and to take more informed decisions during the evaluation (training). Experiments on diverse HPO benchmarks for iterative learners show that BAPI performs better than state-of-the-art baselines in most cases.
[ { "created": "Sat, 25 Jun 2022 18:44:06 GMT", "version": "v1" }, { "created": "Fri, 8 Jul 2022 02:20:00 GMT", "version": "v2" }, { "created": "Mon, 27 Feb 2023 17:30:01 GMT", "version": "v3" } ]
2023-02-28
[ [ "Belakaria", "Syrine", "" ], [ "Doppa", "Janardhan Rao", "" ], [ "Fusi", "Nicolo", "" ], [ "Sheth", "Rishit", "" ] ]
The rising growth of deep neural networks (DNNs) and datasets in size motivates the need for efficient solutions for simultaneous model selection and training. Many methods for hyperparameter optimization (HPO) of iterative learners, including DNNs, attempt to solve this problem by querying and learning a response surface while searching for the optimum of that surface. However, many of these methods make myopic queries, do not consider prior knowledge about the response structure, and/or perform a biased cost-aware search, all of which exacerbate identifying the best-performing model when a total cost budget is specified. This paper proposes a novel approach referred to as {\bf B}udget-{\bf A}ware {\bf P}lanning for {\bf I}terative Learners (BAPI) to solve HPO problems under a constrained cost budget. BAPI is an efficient non-myopic Bayesian optimization solution that accounts for the budget and leverages the prior knowledge about the objective function and cost function to select better configurations and to take more informed decisions during the evaluation (training). Experiments on diverse HPO benchmarks for iterative learners show that BAPI performs better than state-of-the-art baselines in most cases.
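The budget-constrained selection problem described in this abstract can be illustrated with a much simpler baseline: a greedy loop that scores candidate configurations by expected improvement per unit cost until the budget runs out. This is not the non-myopic BAPI planner; the objective, cost model, and one-dimensional search space below are toy assumptions.

```python
# Minimal cost-aware HPO loop: score candidates by expected improvement per unit cost
# and stop when the total budget is exhausted. A simple baseline sketch, not the
# non-myopic BAPI planner described above; objective, cost model, and the 1-D search
# space are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):      # assumed validation loss of configuration x (lower is better)
    return (x - 0.3) ** 2 + 0.05 * np.random.randn()

def cost(x):           # assumed training cost of evaluating configuration x
    return 1.0 + 2.0 * x

rng = np.random.default_rng(0)
X = list(rng.uniform(0, 1, 3))            # a few initial configurations
y = [objective(x) for x in X]
budget = 30.0 - sum(cost(x) for x in X)   # remaining evaluation budget

while budget > 0:
    gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X)[:, None], y)
    cand = rng.uniform(0, 1, 256)
    mu, sd = gp.predict(cand[:, None], return_std=True)
    best = min(y)
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)     # expected improvement
    score = ei / np.array([cost(c) for c in cand])        # cost-aware acquisition
    x_next = float(cand[np.argmax(score)])
    if cost(x_next) > budget:
        break
    X.append(x_next)
    y.append(objective(x_next))
    budget -= cost(x_next)

print(f"best loss found within budget: {min(y):.4f}")
```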
1702.00135
Ding Zhao
Xinpeng Wang, Ding Zhao, Huei Peng, David J. LeBlanc
Analysis of Unprotected Intersection Left-Turn Conflicts based on Naturalistic Driving Data
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analyzing and reconstructing driving scenarios is crucial for testing and evaluating automated vehicles. This research analyzed left turn / straight-driving conflicts at unprotected intersections by extracting actual vehicle motion data from a naturalistic driving database collected by the University of Michigan. Nearly 7,000 Left turn across path opposite direction (LTAP/OD) events involving heavy trucks and light vehicles were extracted and used to build a stochastic model of such LTAP/OD scenarios. Statistical analysis showed that vehicle type is a significant factor, whereas the change of season seems to have limited influence on the statistical nature of the conflict. The results can be used to build HAV testing environments to simulate the LTAP/OD crash cases in a stochastic manner, which is among the top NHTSA identified priority light-vehicle pre-crash scenarios.
[ { "created": "Wed, 1 Feb 2017 05:12:58 GMT", "version": "v1" }, { "created": "Mon, 3 Apr 2017 13:15:16 GMT", "version": "v2" } ]
2017-04-04
[ [ "Wang", "Xinpeng", "" ], [ "Zhao", "Ding", "" ], [ "Peng", "Huei", "" ], [ "LeBlanc", "David J.", "" ] ]
Analyzing and reconstructing driving scenarios is crucial for testing and evaluating automated vehicles. This research analyzed left turn / straight-driving conflicts at unprotected intersections by extracting actual vehicle motion data from a naturalistic driving database collected by the University of Michigan. Nearly 7,000 Left turn across path opposite direction (LTAP/OD) events involving heavy trucks and light vehicles were extracted and used to build a stochastic model of such LTAP/OD scenarios. Statistical analysis showed that vehicle type is a significant factor, whereas the change of season seems to have limited influence on the statistical nature of the conflict. The results can be used to build HAV testing environments to simulate the LTAP/OD crash cases in a stochastic manner, which is among the top NHTSA identified priority light-vehicle pre-crash scenarios.
1407.4335
Rehman Talukdar
Rehman Talukdar and Mridul Saikia
Evolution and Innovation in 5G Cellular Communication System and Beyond: A Study
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the last few years there has been phenomenal growth in the wireless industry. Widespread wireless technologies, an increasing variety of user-friendly and multimedia-enabled terminals, and the wider availability of open-source tools for content generation have encouraged user-centric networks, resulting in a need for efficient network design. The objective of this paper is a comprehensive study of 5G mobile communication technology. Existing research work in mobile communication related to 5G technology is reviewed. The major contribution of this study is an account of the key provisions of 5G (Fifth Generation) mobile communication technology, which is seen as consumer oriented. In 5G technology, the mobile consumer is given the utmost priority compared to other stakeholders. In this context, the existing and highly demanded technologies for 5G have been studied extensively. Open challenges are highlighted for researchers for further study of the emerging 5G systems.
[ { "created": "Wed, 16 Jul 2014 15:08:45 GMT", "version": "v1" } ]
2014-07-17
[ [ "Talukdar", "Rehman", "" ], [ "Saikia", "Mridul", "" ] ]
Over the last few years there has been phenomenal growth in the wireless industry. Widespread wireless technologies, an increasing variety of user-friendly and multimedia-enabled terminals, and the wider availability of open-source tools for content generation have encouraged user-centric networks, resulting in a need for efficient network design. The objective of this paper is a comprehensive study of 5G mobile communication technology. Existing research work in mobile communication related to 5G technology is reviewed. The major contribution of this study is an account of the key provisions of 5G (Fifth Generation) mobile communication technology, which is seen as consumer oriented. In 5G technology, the mobile consumer is given the utmost priority compared to other stakeholders. In this context, the existing and highly demanded technologies for 5G have been studied extensively. Open challenges are highlighted for researchers for further study of the emerging 5G systems.
1807.04518
Melanie Schmidt
Dan Feldman and Melanie Schmidt and Christian Sohler
Turning Big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering
The conference version of this work appeared at SODA 2013
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop and analyze a method to reduce the size of a very large set of data points in a high dimensional Euclidean space $\mathbb{R}^d$ to a small set of weighted points such that the result of a predetermined data analysis task on the reduced set is approximately the same as that for the original point set. For example, computing the first k principal components of the reduced set will return approximately the first k principal components of the original set or computing the centers of a k-means clustering on the reduced set will return an approximation for the original set. Such a reduced set is also known as a coreset. The main new feature of our construction is that the cardinality of the reduced set is independent of the dimension d of the input space and that the sets are mergeable. The latter property means that the union of two reduced sets is a reduced set for the union of the two original sets (this property has recently also been called composability, see Indyk et al., PODS 2014). It allows us to turn our methods into streaming or distributed algorithms using standard approaches. For problems such as k-means and subspace approximation the coreset sizes are also independent of the number of input points. Our method is based on projecting the points on a low dimensional subspace and reducing the cardinality of the points inside this subspace using known methods. The proposed approach works for a wide range of data analysis techniques including k-means clustering, principal component analysis and subspace clustering. The main conceptual contribution is a new coreset definition that allows us to charge costs that appear for every solution to an additive constant.
[ { "created": "Thu, 12 Jul 2018 10:25:12 GMT", "version": "v1" } ]
2018-07-13
[ [ "Feldman", "Dan", "" ], [ "Schmidt", "Melanie", "" ], [ "Sohler", "Christian", "" ] ]
We develop and analyze a method to reduce the size of a very large set of data points in a high dimensional Euclidean space $\mathbb{R}^d$ to a small set of weighted points such that the result of a predetermined data analysis task on the reduced set is approximately the same as that for the original point set. For example, computing the first k principal components of the reduced set will return approximately the first k principal components of the original set or computing the centers of a k-means clustering on the reduced set will return an approximation for the original set. Such a reduced set is also known as a coreset. The main new feature of our construction is that the cardinality of the reduced set is independent of the dimension d of the input space and that the sets are mergeable. The latter property means that the union of two reduced sets is a reduced set for the union of the two original sets (this property has recently also been called composability, see Indyk et al., PODS 2014). It allows us to turn our methods into streaming or distributed algorithms using standard approaches. For problems such as k-means and subspace approximation the coreset sizes are also independent of the number of input points. Our method is based on projecting the points on a low dimensional subspace and reducing the cardinality of the points inside this subspace using known methods. The proposed approach works for a wide range of data analysis techniques including k-means clustering, principal component analysis and subspace clustering. The main conceptual contribution is a new coreset definition that allows us to charge costs that appear for every solution to an additive constant.
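The projection step named in this abstract, mapping the input points onto a low-dimensional subspace before reducing their cardinality, can be sketched as follows. Only the projection is shown; the sampling that produces the actual weighted coreset is omitted, and the subspace dimension m and the synthetic data are assumptions.

```python
# Sketch of the projection step only: map the points onto the subspace spanned by
# their top-m right singular vectors. The sampling that yields the weighted coreset
# is omitted; m and the synthetic data are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10_000, 200, 10

P = rng.standard_normal((n, d))               # n points in R^d
mean = P.mean(axis=0)
_, _, Vt = np.linalg.svd(P - mean, full_matrices=False)
basis = Vt[:m]                                # top-m right singular vectors, shape (m, d)

P_proj = (P - mean) @ basis.T @ basis + mean  # points projected back into R^d

# The squared error removed by the projection is the tail of the singular spectrum;
# it is the additive term a coreset construction on the projected points must account for.
residual = np.linalg.norm(P - P_proj) ** 2
print(f"total squared projection error: {residual:.2f}")
```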
1704.02806
Zeinab Yazdanshenasan Shahraki
Zeinab Yazdanshenasan, Harpreet S. Dhillon, and Peter Han Joo Chong
Serving Distance and Coverage in a Closed Access PHP-Based Heterogeneous Cellular Network
Proc., Biennial Symposium on Communications (BSC), 2016
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heterogeneous cellular networks (HCNs) usually exhibit spatial separation amongst base stations (BSs) of different types (termed tiers in this paper). For instance, operators will usually not deploy a picocell in close proximity to a macrocell, thus inducing separation amongst the locations of pico and macrocells. This separation has recently been captured by modeling the small cell locations by a Poisson Hole Process (PHP) with the hole centers being the locations of the macrocells. Due to the presence of exclusion zones, the analysis of the resulting model is significantly more complex compared to the more popular Poisson Point Process (PPP) based models. In this paper, we derive a tight bound on the distribution of the distance of a typical user to the closest point of a PHP. Since the exact distribution of this distance is not known, it is often approximated in the literature. For this model, we then provide tight characterization of the downlink coverage probability for a typical user in a two-tier closed-access HCN under two cases: (i) typical user is served by the closest macrocell, and (ii) typical user is served by its closest small cell. The proposed approach can be extended to analyze other relevant cases of interest, e.g., coverage in a PHP-based open access HCN.
[ { "created": "Mon, 10 Apr 2017 11:18:32 GMT", "version": "v1" } ]
2017-04-11
[ [ "Yazdanshenasan", "Zeinab", "" ], [ "Dhillon", "Harpreet S.", "" ], [ "Chong", "Peter Han Joo", "" ] ]
Heterogeneous cellular networks (HCNs) usually exhibit spatial separation amongst base stations (BSs) of different types (termed tiers in this paper). For instance, operators will usually not deploy a picocell in close proximity to a macrocell, thus inducing separation amongst the locations of pico and macrocells. This separation has recently been captured by modeling the small cell locations by a Poisson Hole Process (PHP) with the hole centers being the locations of the macrocells. Due to the presence of exclusion zones, the analysis of the resulting model is significantly more complex compared to the more popular Poisson Point Process (PPP) based models. In this paper, we derive a tight bound on the distribution of the distance of a typical user to the closest point of a PHP. Since the exact distribution of this distance is not known, it is often approximated in the literature. For this model, we then provide tight characterization of the downlink coverage probability for a typical user in a two-tier closed-access HCN under two cases: (i) typical user is served by the closest macrocell, and (ii) typical user is served by its closest small cell. The proposed approach can be extended to analyze other relevant cases of interest, e.g., coverage in a PHP-based open access HCN.
1205.3655
Asia Furones
Asia Furones
P versus UP
Administratively withdrawn due to policy violations
null
null
null
cs.CC
http://creativecommons.org/licenses/by/3.0/
Admin note: withdrawn by arXiv admin because of the use of a pseudonym, in violation of arXiv policy.
[ { "created": "Wed, 16 May 2012 12:26:51 GMT", "version": "v1" }, { "created": "Mon, 4 Jun 2012 16:18:48 GMT", "version": "v2" }, { "created": "Mon, 23 Jul 2012 18:05:31 GMT", "version": "v3" }, { "created": "Mon, 13 Aug 2012 17:37:36 GMT", "version": "v4" }, { "created": "Mon, 19 Nov 2012 17:34:47 GMT", "version": "v5" } ]
2014-07-18
[ [ "Furones", "Asia", "" ] ]
Admin note: withdrawn by arXiv admin because of the use of a pseudonym, in violation of arXiv policy.
1306.3721
Huahua Wang
Huahua Wang and Arindam Banerjee
Online Alternating Direction Method (longer version)
Longer version of arXiv:1206.6448
null
null
null
cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online optimization has emerged as a powerful tool in large scale optimization. In this paper, we introduce efficient online optimization algorithms based on the alternating direction method (ADM), which can solve online convex optimization under linear constraints where the objective could be non-smooth. We introduce new proof techniques for ADM in the batch setting, which yields an O(1/T) convergence rate for ADM and forms the basis for regret analysis in the online setting. We consider two scenarios in the online setting, based on whether an additional Bregman divergence is needed or not. In both settings, we establish regret bounds for both the objective function as well as constraints violation for general and strongly convex functions. We also consider inexact ADM updates where certain terms are linearized to yield efficient updates and show the stochastic convergence rates. In addition, we briefly discuss that online ADM can be used as a projection-free online learning algorithm in some scenarios. Preliminary results are presented to illustrate the performance of the proposed algorithms.
[ { "created": "Mon, 17 Jun 2013 01:27:10 GMT", "version": "v1" }, { "created": "Wed, 10 Jul 2013 18:36:18 GMT", "version": "v2" } ]
2013-07-11
[ [ "Wang", "Huahua", "" ], [ "Banerjee", "Arindam", "" ] ]
Online optimization has emerged as a powerful tool in large scale optimization. In this paper, we introduce efficient online optimization algorithms based on the alternating direction method (ADM), which can solve online convex optimization under linear constraints where the objective could be non-smooth. We introduce new proof techniques for ADM in the batch setting, which yields an O(1/T) convergence rate for ADM and forms the basis for regret analysis in the online setting. We consider two scenarios in the online setting, based on whether an additional Bregman divergence is needed or not. In both settings, we establish regret bounds for both the objective function as well as constraints violation for general and strongly convex functions. We also consider inexact ADM updates where certain terms are linearized to yield efficient updates and show the stochastic convergence rates. In addition, we briefly discuss that online ADM can be used as a projection-free online learning algorithm in some scenarios. Preliminary results are presented to illustrate the performance of the proposed algorithms.
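For readers unfamiliar with the alternating direction method, the following sketch shows the standard batch ADMM updates for the lasso, which the online variants discussed in this abstract build on. It is not the online algorithm with regret guarantees; the problem sizes, penalty parameter, and data are assumptions.

```python
# Standard batch ADMM for the lasso: min_x 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z.
# Shown only to illustrate the alternating updates and dual step that online ADM
# builds on; it is not the online algorithm above. Sizes, lam, and rho are assumptions.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
m, n, lam, rho = 100, 50, 0.1, 1.0
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = np.zeros(n)
z = np.zeros(n)
u = np.zeros(n)                                   # scaled dual variable
AtA, Atb = A.T @ A, A.T @ b
inv = np.linalg.inv(AtA + rho * np.eye(n))        # cached factor for the x-update

for _ in range(200):
    x = inv @ (Atb + rho * (z - u))               # x-update: ridge-like least squares
    z = soft_threshold(x + u, lam / rho)          # z-update: prox of the l1 term
    u = u + x - z                                 # dual ascent on the constraint x = z

print(f"nonzero coefficients: {(np.abs(z) > 1e-6).sum()}")
```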
0705.0350
Ruslan Sharipov
Ruslan Sharipov
Algorithms for laying points optimally on a plane and a circle
AmSTeX, 6 pages, amsppt style
null
null
null
cs.CG math.OC
null
Two averaging algorithms are considered which are intended for choosing an optimal plane and an optimal circle approximating a group of points in three-dimensional Euclidean space.
[ { "created": "Wed, 2 May 2007 19:41:44 GMT", "version": "v1" } ]
2007-05-23
[ [ "Sharipov", "Ruslan", "" ] ]
Two averaging algorithms are considered which are intended for choosing an optimal plane and an optimal circle approximating a group of points in three-dimensional Euclidean space.
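One common way to choose an optimal plane for a group of 3D points, closely related to the averaging idea in this abstract, is a total-least-squares fit via the SVD of the centered coordinates: the plane passes through the centroid and its normal is the least-significant singular vector. The sketch below shows this construction only; it need not coincide with the paper's algorithms, and the optimal-circle step is omitted.

```python
# Total-least-squares plane through a group of 3D points: the plane passes through
# the centroid and its normal is the singular vector of least singular value.
# A common construction shown for illustration; it need not coincide with the
# paper's averaging algorithms, and the optimal-circle step is omitted.
import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the best-fit plane for an (n, 3) array."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    return centroid, Vt[-1]          # last right singular vector = least variance

rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(100, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 1.0 + 0.01 * rng.standard_normal(100)
pts = np.column_stack([xy, z])       # noisy samples of the plane z = 0.2x - 0.1y + 1

c, nrm = fit_plane(pts)
print("centroid:", np.round(c, 3), "normal:", np.round(nrm, 3))
```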
2206.11187
Muhammed Fatih Bulut
Abdulhamid Adebayo, Daby Sow, Muhammed Fatih Bulut
Automated Compliance Blueprint Optimization with Artificial Intelligence
5 pages
null
null
null
cs.AI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For highly regulated industries such as banking and healthcare, one of the major hindrances to the adoption of cloud computing is compliance with regulatory standards. This is a complex problem due to many regulatory and technical specification (techspec) documents that the companies need to comply with. The critical problem is to establish the mapping between techspecs and regulation controls so that from day one, companies can comply with regulations with minimal effort. We demonstrate the practicality of an approach to automatically analyze regulatory standards using Artificial Intelligence (AI) techniques. We present early results to identify the mapping between techspecs and regulation controls, and discuss challenges that must be overcome for this solution to be fully practical.
[ { "created": "Wed, 22 Jun 2022 15:59:16 GMT", "version": "v1" } ]
2022-06-23
[ [ "Adebayo", "Abdulhamid", "" ], [ "Sow", "Daby", "" ], [ "Bulut", "Muhammed Fatih", "" ] ]
For highly regulated industries such as banking and healthcare, one of the major hindrances to the adoption of cloud computing is compliance with regulatory standards. This is a complex problem due to many regulatory and technical specification (techspec) documents that the companies need to comply with. The critical problem is to establish the mapping between techspecs and regulation controls so that from day one, companies can comply with regulations with minimal effort. We demonstrate the practicality of an approach to automatically analyze regulatory standards using Artificial Intelligence (AI) techniques. We present early results to identify the mapping between techspecs and regulation controls, and discuss challenges that must be overcome for this solution to be fully practical.
2403.04460
Minju Kim
Minjin Kim, Minju Kim, Hana Kim, Beong-woo Kwak, Soyeon Chun, Hyunseo Kim, SeongKu Kang, Youngjae Yu, Jinyoung Yeo, Dongha Lee
Pearl: A Review-driven Persona-Knowledge Grounded Conversational Recommendation Dataset
Published at ACL 2024 Findings
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Conversational recommender system is an emerging area that has garnered an increasing interest in the community, especially with the advancements in large language models (LLMs) that enable diverse reasoning over conversational input. Despite the progress, the field has many aspects left to explore. The currently available public datasets for conversational recommendation lack specific user preferences and explanations for recommendations, hindering high-quality recommendations. To address such challenges, we present a novel conversational recommendation dataset named PEARL, synthesized with persona- and knowledge-augmented LLM simulators. We obtain detailed persona and knowledge from real-world reviews and construct a large-scale dataset with over 57k dialogues. Our experimental results demonstrate that utterances in PEARL include more specific user preferences, show expertise in the target domain, and provide recommendations more relevant to the dialogue context than those in prior datasets.
[ { "created": "Thu, 7 Mar 2024 12:57:16 GMT", "version": "v1" }, { "created": "Fri, 8 Mar 2024 04:54:31 GMT", "version": "v2" }, { "created": "Fri, 5 Apr 2024 11:11:01 GMT", "version": "v3" }, { "created": "Sat, 8 Jun 2024 17:40:14 GMT", "version": "v4" } ]
2024-06-11
[ [ "Kim", "Minjin", "" ], [ "Kim", "Minju", "" ], [ "Kim", "Hana", "" ], [ "Kwak", "Beong-woo", "" ], [ "Chun", "Soyeon", "" ], [ "Kim", "Hyunseo", "" ], [ "Kang", "SeongKu", "" ], [ "Yu", "Youngjae", "" ], [ "Yeo", "Jinyoung", "" ], [ "Lee", "Dongha", "" ] ]
Conversational recommender system is an emerging area that has garnered an increasing interest in the community, especially with the advancements in large language models (LLMs) that enable diverse reasoning over conversational input. Despite the progress, the field has many aspects left to explore. The currently available public datasets for conversational recommendation lack specific user preferences and explanations for recommendations, hindering high-quality recommendations. To address such challenges, we present a novel conversational recommendation dataset named PEARL, synthesized with persona- and knowledge-augmented LLM simulators. We obtain detailed persona and knowledge from real-world reviews and construct a large-scale dataset with over 57k dialogues. Our experimental results demonstrate that utterances in PEARL include more specific user preferences, show expertise in the target domain, and provide recommendations more relevant to the dialogue context than those in prior datasets.
2111.10188
Seyed Jalaleddin Mousavirad
Seyed Jalaleddin Mousavirad, Gerald Schaefer, Iakov Korovin, Diego Oliva, Mahshid Helali Moghadam, Mehrdad Saadatmand
HMS-OS: Improving the Human Mental Search Optimisation Algorithm by Grouping in both Search and Objective Space
7 pages, IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2021), Orlando, USA
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human mental search (HMS) algorithm is a relatively recent population-based metaheuristic algorithm, which has shown competitive performance in solving complex optimisation problems. It is based on three main operators: mental search, grouping, and movement. In the original HMS algorithm, a clustering algorithm is used to group the current population in order to identify a promising region in search space, while candidate solutions then move towards the best candidate solution in the promising region. In this paper, we propose a novel HMS algorithm, HMS-OS, which is based on clustering in both objective and search space, where clustering in objective space finds a set of best candidate solutions whose centroid is then also used in updating the population. For further improvement, HMS-OS benefits from an adaptive selection of the number of mental processes in the mental search operator. Experimental results on CEC-2017 benchmark functions with dimensionalities of 50 and 100, and in comparison to other optimisation algorithms, indicate that HMS-OS yields excellent performance, superior to that of the other methods.
[ { "created": "Fri, 19 Nov 2021 12:56:33 GMT", "version": "v1" }, { "created": "Fri, 3 Dec 2021 16:19:06 GMT", "version": "v2" } ]
2021-12-06
[ [ "Mousavirad", "Seyed Jalaleddin", "" ], [ "Schaefer", "Gerald", "" ], [ "Korovin", "Iakov", "" ], [ "Oliva", "Diego", "" ], [ "Moghadam", "Mahshid Helali", "" ], [ "Saadatmand", "Mehrdad", "" ] ]
The human mental search (HMS) algorithm is a relatively recent population-based metaheuristic algorithm, which has shown competitive performance in solving complex optimisation problems. It is based on three main operators: mental search, grouping, and movement. In the original HMS algorithm, a clustering algorithm is used to group the current population in order to identify a promising region in search space, while candidate solutions then move towards the best candidate solution in the promising region. In this paper, we propose a novel HMS algorithm, HMS-OS, which is based on clustering in both objective and search space, where clustering in objective space finds a set of best candidate solutions whose centroid is then also used in updating the population. For further improvement, HMS-OS benefits from an adaptive selection of the number of mental processes in the mental search operator. Experimental results on CEC-2017 benchmark functions with dimensionalities of 50 and 100, and in comparison to other optimisation algorithms, indicate that HMS-OS yields excellent performance, superior to that of the other methods.
2305.15594
Haonan Duan
Haonan Duan, Adam Dziedzic, Nicolas Papernot, Franziska Boenisch
Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models
null
null
null
null
cs.LG cs.CL cs.CR
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) are excellent in-context learners. However, the sensitivity of data contained in prompts raises privacy concerns. Our work first shows that these concerns are valid: we instantiate a simple but highly effective membership inference attack against the data used to prompt LLMs. To address this vulnerability, one could forego prompting and resort to fine-tuning LLMs with known algorithms for private gradient descent. However, this comes at the expense of the practicality and efficiency offered by prompting. Therefore, we propose to privately learn to prompt. We first show that soft prompts can be obtained privately through gradient descent on downstream data. However, this is not the case for discrete prompts. Thus, we orchestrate a noisy vote among an ensemble of LLMs presented with different prompts, i.e., a flock of stochastic parrots. The vote privately transfers the flock's knowledge into a single public prompt. We show that LLMs prompted with our private algorithms closely match the non-private baselines. For example, using GPT3 as the base model, we achieve a downstream accuracy of 92.7% on the sst2 dataset with ($\epsilon=0.147, \delta=10^{-6}$)-differential privacy vs. 95.2% for the non-private baseline. Through our experiments, we also show that our prompt-based approach is easily deployed with existing commercial APIs.
[ { "created": "Wed, 24 May 2023 22:06:08 GMT", "version": "v1" } ]
2023-05-26
[ [ "Duan", "Haonan", "" ], [ "Dziedzic", "Adam", "" ], [ "Papernot", "Nicolas", "" ], [ "Boenisch", "Franziska", "" ] ]
Large language models (LLMs) are excellent in-context learners. However, the sensitivity of data contained in prompts raises privacy concerns. Our work first shows that these concerns are valid: we instantiate a simple but highly effective membership inference attack against the data used to prompt LLMs. To address this vulnerability, one could forego prompting and resort to fine-tuning LLMs with known algorithms for private gradient descent. However, this comes at the expense of the practicality and efficiency offered by prompting. Therefore, we propose to privately learn to prompt. We first show that soft prompts can be obtained privately through gradient descent on downstream data. However, this is not the case for discrete prompts. Thus, we orchestrate a noisy vote among an ensemble of LLMs presented with different prompts, i.e., a flock of stochastic parrots. The vote privately transfers the flock's knowledge into a single public prompt. We show that LLMs prompted with our private algorithms closely match the non-private baselines. For example, using GPT3 as the base model, we achieve a downstream accuracy of 92.7% on the sst2 dataset with ($\epsilon=0.147, \delta=10^{-6}$)-differential privacy vs. 95.2% for the non-private baseline. Through our experiments, we also show that our prompt-based approach is easily deployed with existing commercial APIs.
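The "noisy vote among an ensemble" step in this abstract can be illustrated with a generic noisy-argmax aggregation: add calibrated Laplace noise to the per-class vote counts produced by the differently prompted models and release the argmax. The sketch below shows that generic mechanism, not the exact aggregation or privacy accounting of the paper; the noise scale and simulated votes are assumptions.

```python
# Generic noisy-argmax aggregation of an ensemble's votes: add Laplace noise to the
# per-class counts and release the argmax. Not the exact mechanism or privacy
# accounting of the paper; the noise scale and simulated teacher votes are assumptions.
import numpy as np

def noisy_vote(predictions, num_classes, scale, rng):
    """predictions: one predicted label per ensemble member (each sees a different prompt)."""
    counts = np.bincount(predictions, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=scale, size=num_classes)   # perturb each vote count
    return int(np.argmax(counts))

rng = np.random.default_rng(0)
teacher_votes = rng.choice([0, 1], size=50, p=[0.3, 0.7])  # simulated flock outputs
label = noisy_vote(teacher_votes, num_classes=2, scale=2.0, rng=rng)
print("released label:", label)
```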
2303.13024
Hamid Ghaderi
Hamid Ghaderi, Brandon Foreman, Amin Nayebi, Sindhu Tipirneni, Chandan K. Reddy, Vignesh Subbian
Identifying TBI Physiological States by Clustering Multivariate Clinical Time-Series Data
10 pages, 7 figures, 2 tables
AMIA Annu Symp Proc. 2024 Jan 11;2023:379-388
null
null
cs.LG cs.AI eess.SP
http://creativecommons.org/licenses/by-nc-nd/4.0/
Determining clinically relevant physiological states from multivariate time series data with missing values is essential for providing appropriate treatment for acute conditions such as Traumatic Brain Injury (TBI), respiratory failure, and heart failure. Utilizing non-temporal clustering or data imputation and aggregation techniques may lead to loss of valuable information and biased analyses. In our study, we apply the SLAC-Time algorithm, an innovative self-supervision-based approach that maintains data integrity by avoiding imputation or aggregation, offering a more useful representation of acute patient states. By using SLAC-Time to cluster data in a large research dataset, we identified three distinct TBI physiological states and their specific feature profiles. We employed various clustering evaluation metrics and incorporated input from a clinical domain expert to validate and interpret the identified physiological states. Further, we discovered how specific clinical events and interventions can influence patient states and state transitions.
[ { "created": "Thu, 23 Mar 2023 04:16:00 GMT", "version": "v1" }, { "created": "Thu, 30 Mar 2023 10:50:11 GMT", "version": "v2" }, { "created": "Tue, 18 Jul 2023 03:06:42 GMT", "version": "v3" } ]
2024-03-29
[ [ "Ghaderi", "Hamid", "" ], [ "Foreman", "Brandon", "" ], [ "Nayebi", "Amin", "" ], [ "Tipirneni", "Sindhu", "" ], [ "Reddy", "Chandan K.", "" ], [ "Subbian", "Vignesh", "" ] ]
Determining clinically relevant physiological states from multivariate time series data with missing values is essential for providing appropriate treatment for acute conditions such as Traumatic Brain Injury (TBI), respiratory failure, and heart failure. Utilizing non-temporal clustering or data imputation and aggregation techniques may lead to loss of valuable information and biased analyses. In our study, we apply the SLAC-Time algorithm, an innovative self-supervision-based approach that maintains data integrity by avoiding imputation or aggregation, offering a more useful representation of acute patient states. By using SLAC-Time to cluster data in a large research dataset, we identified three distinct TBI physiological states and their specific feature profiles. We employed various clustering evaluation metrics and incorporated input from a clinical domain expert to validate and interpret the identified physiological states. Further, we discovered how specific clinical events and interventions can influence patient states and state transitions.
2212.13974
Hichem Sahbi
Hichem Sahbi and Sebastien Deschamps
Adversarial Virtual Exemplar Learning for Label-Frugal Satellite Image Change Detection
arXiv admin note: substantial text overlap with arXiv:2203.11559
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Satellite image change detection aims at finding occurrences of targeted changes in a given scene taken at different instants. This task is highly challenging due to the acquisition conditions and also to the subjectivity of changes. In this paper, we investigate satellite image change detection using active learning. Our method is interactive and relies on a question-and-answer model which asks the oracle (user) questions about the most informative display (dubbed as virtual exemplars), and according to the user's responses, updates change detections. The main contribution of our method consists in a novel adversarial model that allows frugally probing the oracle with only the most representative, diverse and uncertain virtual exemplars. The latter are learned to maximally challenge the trained change decision criteria, which ultimately leads to a better re-estimate of these criteria in the following iterations of active learning. Conducted experiments show that our proposed adversarial display model outperforms other display strategies as well as the related work.
[ { "created": "Wed, 28 Dec 2022 17:46:20 GMT", "version": "v1" } ]
2022-12-29
[ [ "Sahbi", "Hichem", "" ], [ "Deschamps", "Sebastien", "" ] ]
Satellite image change detection aims at finding occurrences of targeted changes in a given scene taken at different instants. This task is highly challenging due to the acquisition conditions and also to the subjectivity of changes. In this paper, we investigate satellite image change detection using active learning. Our method is interactive and relies on a question-and-answer model which asks the oracle (user) questions about the most informative display (dubbed as virtual exemplars), and according to the user's responses, updates change detections. The main contribution of our method consists in a novel adversarial model that allows frugally probing the oracle with only the most representative, diverse and uncertain virtual exemplars. The latter are learned to maximally challenge the trained change decision criteria, which ultimately leads to a better re-estimate of these criteria in the following iterations of active learning. Conducted experiments show that our proposed adversarial display model outperforms other display strategies as well as the related work.
1911.00516
Martin Barrere
Mart\'in Barr\`ere, Chris Hankin, Nicolas Nicolau, Demetrios G. Eliades, Thomas Parisini
MaxSAT Evaluation 2019 -- Benchmark: Identifying Security-Critical Cyber-Physical Components in Weighted AND/OR Graphs
arXiv admin note: substantial text overlap with arXiv:1905.04796
null
null
null
cs.CR cs.NI cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a MaxSAT benchmark focused on identifying critical nodes in AND/OR graphs. We use AND/OR graphs to model Industrial Control Systems (ICS) as they are able to semantically grasp intricate logical interdependencies among ICS components. However, identifying critical nodes in AND/OR graphs is an NP-complete problem. We address this problem by efficiently transforming the input AND/OR graph-based model into a weighted logical formula that is then used to build and solve a Weighted Partial MaxSAT problem. The benchmark includes 80 cases with AND/OR graphs of different size and composition as well as the optimal cost and solution for each case.
[ { "created": "Fri, 1 Nov 2019 17:24:16 GMT", "version": "v1" } ]
2019-11-05
[ [ "Barrère", "Martín", "" ], [ "Hankin", "Chris", "" ], [ "Nicolau", "Nicolas", "" ], [ "Eliades", "Demetrios G.", "" ], [ "Parisini", "Thomas", "" ] ]
This paper presents a MaxSAT benchmark focused on identifying critical nodes in AND/OR graphs. We use AND/OR graphs to model Industrial Control Systems (ICS) as they are able to semantically grasp intricate logical interdependencies among ICS components. However, identifying critical nodes in AND/OR graphs is an NP-complete problem. We address this problem by efficiently transforming the input AND/OR graph-based model into a weighted logical formula that is then used to build and solve a Weighted Partial MaxSAT problem. The benchmark includes 80 cases with AND/OR graphs of different size and composition as well as the optimal cost and solution for each case.
1411.6714
Ahmed Elgammal
Emily L. Spratt and Ahmed Elgammal
The Digital Humanities Unveiled: Perceptions Held by Art Historians and Computer Scientists about Computer Vision Technology
arXiv admin note: substantial text overlap with arXiv:1410.2488
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although computer scientists are generally familiar with the achievements of computer vision technology in art history, these accomplishments are little known and often misunderstood by scholars in the humanities. To clarify the parameters of this seeming disjuncture, we have addressed the concerns that one example of the digitization of the humanities poses on social, philosophical, and practical levels. In support of our assessment of the perceptions held by computer scientists and art historians about the use of computer vision technology to examine art, we based our interpretations on two surveys that were distributed in August 2014. In this paper, the development of these surveys and their results are discussed in the context of the major philosophical conclusions of our research in this area to date.
[ { "created": "Tue, 25 Nov 2014 03:16:02 GMT", "version": "v1" } ]
2014-11-26
[ [ "Spratt", "Emily L.", "" ], [ "Elgammal", "Ahmed", "" ] ]
Although computer scientists are generally familiar with the achievements of computer vision technology in art history, these accomplishments are little known and often misunderstood by scholars in the humanities. To clarify the parameters of this seeming disjuncture, we have addressed the concerns that one example of the digitization of the humanities poses on social, philosophical, and practical levels. In support of our assessment of the perceptions held by computer scientists and art historians about the use of computer vision technology to examine art, we based our interpretations on two surveys that were distributed in August 2014. In this paper, the development of these surveys and their results are discussed in the context of the major philosophical conclusions of our research in this area to date.
1403.1591
Jinchun Zhan
Jinchun Zhan and Namrata Vaswani
Robust PCA with Partial Subspace Knowledge
19 pages, 9 figures, submitted to IEEE Transaction on Signal Processing
null
10.1109/TSP.2015.2421485
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent work, robust Principal Components Analysis (PCA) has been posed as a problem of recovering a low-rank matrix $\mathbf{L}$ and a sparse matrix $\mathbf{S}$ from their sum, $\mathbf{M}:= \mathbf{L} + \mathbf{S}$, and a provably exact convex optimization solution called PCP has been proposed. This work studies the following problem. Suppose that we have partial knowledge about the column space of the low-rank matrix $\mathbf{L}$. Can we use this information to improve the PCP solution, i.e. allow recovery under weaker assumptions? We propose here a simple but useful modification of the PCP idea, called modified-PCP, that allows us to use this knowledge. We derive its correctness result which shows that, when the available subspace knowledge is accurate, modified-PCP indeed requires significantly weaker incoherence assumptions than PCP. Extensive simulations are also used to illustrate this. Comparisons with PCP and other existing work are shown for a stylized real application as well. Finally, we explain how this problem naturally occurs in many applications involving time series data, i.e. in what is called the online or recursive robust PCA problem. A corollary for this case is also given.
[ { "created": "Thu, 6 Mar 2014 21:10:15 GMT", "version": "v1" }, { "created": "Tue, 26 Aug 2014 16:36:57 GMT", "version": "v2" }, { "created": "Thu, 28 Aug 2014 19:40:59 GMT", "version": "v3" }, { "created": "Fri, 26 Dec 2014 17:49:57 GMT", "version": "v4" } ]
2023-07-19
[ [ "Zhan", "Jinchun", "" ], [ "Vaswani", "Namrata", "" ] ]
In recent work, robust Principal Components Analysis (PCA) has been posed as a problem of recovering a low-rank matrix $\mathbf{L}$ and a sparse matrix $\mathbf{S}$ from their sum, $\mathbf{M}:= \mathbf{L} + \mathbf{S}$, and a provably exact convex optimization solution called PCP has been proposed. This work studies the following problem. Suppose that we have partial knowledge about the column space of the low-rank matrix $\mathbf{L}$. Can we use this information to improve the PCP solution, i.e. allow recovery under weaker assumptions? We propose here a simple but useful modification of the PCP idea, called modified-PCP, that allows us to use this knowledge. We derive its correctness result which shows that, when the available subspace knowledge is accurate, modified-PCP indeed requires significantly weaker incoherence assumptions than PCP. Extensive simulations are also used to illustrate this. Comparisons with PCP and other existing work are shown for a stylized real application as well. Finally, we explain how this problem naturally occurs in many applications involving time series data, i.e. in what is called the online or recursive robust PCA problem. A corollary for this case is also given.
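For context, the baseline PCP problem referenced in this abstract is commonly solved with an inexact augmented Lagrangian scheme that alternates singular-value thresholding for $\mathbf{L}$ with entrywise soft thresholding for $\mathbf{S}$. The sketch below implements that baseline, not the modified-PCP that exploits partial subspace knowledge; the parameter choices and toy data are assumptions.

```python
# Baseline PCP via an inexact augmented Lagrangian scheme: alternate singular-value
# thresholding for L with entrywise soft thresholding for S. This is the standard
# PCP referenced above, not the modified-PCP using partial subspace knowledge;
# lam, mu, and the toy data are assumptions.
import numpy as np

def soft(X, t):
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def sv_threshold(X, t):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft(s, t)) @ Vt

def pcp(M, iters=200):
    n1, n2 = M.shape
    lam = 1.0 / np.sqrt(max(n1, n2))           # standard sparsity weight
    mu = n1 * n2 / (4.0 * np.abs(M).sum())     # common step-size heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        L = sv_threshold(M - S + Y / mu, 1.0 / mu)
        S = soft(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 60))
sparse = np.where(rng.random((60, 60)) < 0.05, 10.0, 0.0)
L_hat, S_hat = pcp(low_rank + sparse)
print("recovered rank:", np.linalg.matrix_rank(L_hat, tol=1e-3))
```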
2310.12551
Jingwei Song
Jingwei Song, Keke Yang, Zheng Zhang, Meng Li, Tuoyu Cao, Maani Ghaffari
Iterative PnP and its application in 3D-2D vascular image registration for robot navigation
Submitted to ICRA 2024. Errors in Eq. 4 and Eq. 6 have been corrected. Updates include some minor improvements in Section II
null
null
null
cs.RO eess.IV
http://creativecommons.org/licenses/by/4.0/
This paper reports on a new real-time robot-centered 3D-2D vascular image alignment algorithm, which is robust to outliers and can align nonrigid shapes. Few works have managed to achieve both real-time and accurate performance for vascular intervention robots. This work bridges high-accuracy 3D-2D registration techniques and computational efficiency requirements in intervention robot applications. We categorize centerline-based vascular 3D-2D image registration problems as an iterative Perspective-n-Point (PnP) problem and propose to use the Levenberg-Marquardt solver on the Lie manifold. Then, the recently developed Reproducing Kernel Hilbert Space (RKHS) algorithm is introduced to overcome the ``big-to-small'' problem in typical robotic scenarios. Finally, an iterative reweighted least squares is applied to solve RKHS-based formulation efficiently. Experiments indicate that the proposed algorithm processes registration over 50 Hz (rigid) and 20 Hz (nonrigid) and obtains competing registration accuracy similar to other works. Results indicate that our Iterative PnP is suitable for future vascular intervention robot applications.
[ { "created": "Thu, 19 Oct 2023 07:59:26 GMT", "version": "v1" }, { "created": "Thu, 11 Jan 2024 09:07:51 GMT", "version": "v2" } ]
2024-01-12
[ [ "Song", "Jingwei", "" ], [ "Yang", "Keke", "" ], [ "Zhang", "Zheng", "" ], [ "Li", "Meng", "" ], [ "Cao", "Tuoyu", "" ], [ "Ghaffari", "Maani", "" ] ]
This paper reports on a new real-time robot-centered 3D-2D vascular image alignment algorithm, which is robust to outliers and can align nonrigid shapes. Few works have managed to achieve both real-time and accurate performance for vascular intervention robots. This work bridges high-accuracy 3D-2D registration techniques and computational efficiency requirements in intervention robot applications. We categorize centerline-based vascular 3D-2D image registration problems as an iterative Perspective-n-Point (PnP) problem and propose to use the Levenberg-Marquardt solver on the Lie manifold. Then, the recently developed Reproducing Kernel Hilbert Space (RKHS) algorithm is introduced to overcome the ``big-to-small'' problem in typical robotic scenarios. Finally, an iterative reweighted least squares is applied to solve RKHS-based formulation efficiently. Experiments indicate that the proposed algorithm processes registration over 50 Hz (rigid) and 20 Hz (nonrigid) and obtains competing registration accuracy similar to other works. Results indicate that our Iterative PnP is suitable for future vascular intervention robot applications.
2109.13116
Ekta Sood
Ekta Sood, Fabian K\"ogel, Florian Strohm, Prajit Dhar, Andreas Bulling
VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual Question Answering
CoNLL 2021
null
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present VQA-MHUG - a novel 49-participant dataset of multimodal human gaze on both images and questions during visual question answering (VQA) collected using a high-speed eye tracker. We use our dataset to analyze the similarity between human and neural attentive strategies learned by five state-of-the-art VQA models: Modular Co-Attention Network (MCAN) with either grid or region features, Pythia, Bilinear Attention Network (BAN), and the Multimodal Factorized Bilinear Pooling Network (MFB). While prior work has focused on studying the image modality, our analyses show - for the first time - that for all models, higher correlation with human attention on text is a significant predictor of VQA performance. This finding points at a potential for improving VQA performance and, at the same time, calls for further research on neural text attention mechanisms and their integration into architectures for vision and language tasks, including but potentially also beyond VQA.
[ { "created": "Mon, 27 Sep 2021 15:06:10 GMT", "version": "v1" } ]
2021-09-28
[ [ "Sood", "Ekta", "" ], [ "Kögel", "Fabian", "" ], [ "Strohm", "Florian", "" ], [ "Dhar", "Prajit", "" ], [ "Bulling", "Andreas", "" ] ]
We present VQA-MHUG - a novel 49-participant dataset of multimodal human gaze on both images and questions during visual question answering (VQA) collected using a high-speed eye tracker. We use our dataset to analyze the similarity between human and neural attentive strategies learned by five state-of-the-art VQA models: Modular Co-Attention Network (MCAN) with either grid or region features, Pythia, Bilinear Attention Network (BAN), and the Multimodal Factorized Bilinear Pooling Network (MFB). While prior work has focused on studying the image modality, our analyses show - for the first time - that for all models, higher correlation with human attention on text is a significant predictor of VQA performance. This finding points at a potential for improving VQA performance and, at the same time, calls for further research on neural text attention mechanisms and their integration into architectures for vision and language tasks, including but potentially also beyond VQA.
2104.07969
Jason Wang
Jason Wang and Robert E. Weiss
Hierarchical Topic Presence Models
null
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Topic models analyze text from a set of documents. Documents are modeled as a mixture of topics, with topics defined as probability distributions on words. Inferences of interest include the most probable topics and characterization of a topic by inspecting the topic's highest probability words. Motivated by a data set of web pages (documents) nested in web sites, we extend the Poisson factor analysis topic model to hierarchical topic presence models for analyzing text from documents nested in known groups. We incorporate an unknown binary topic presence parameter for each topic at the web site and/or the web page level to allow web sites and/or web pages to be sparse mixtures of topics and we propose logistic regression modeling of topic presence conditional on web site covariates. We introduce local topics into the Poisson factor analysis framework, where each web site has a local topic not found in other web sites. Two data augmentation methods, the Chinese table distribution and P\'{o}lya-Gamma augmentation, aid in constructing our sampler. We analyze text from web pages nested in United States local public health department web sites to abstract topical information and understand national patterns in topic presence.
[ { "created": "Fri, 16 Apr 2021 08:41:07 GMT", "version": "v1" } ]
2021-04-19
[ [ "Wang", "Jason", "" ], [ "Weiss", "Robert E.", "" ] ]
Topic models analyze text from a set of documents. Documents are modeled as a mixture of topics, with topics defined as probability distributions on words. Inferences of interest include the most probable topics and characterization of a topic by inspecting the topic's highest probability words. Motivated by a data set of web pages (documents) nested in web sites, we extend the Poisson factor analysis topic model to hierarchical topic presence models for analyzing text from documents nested in known groups. We incorporate an unknown binary topic presence parameter for each topic at the web site and/or the web page level to allow web sites and/or web pages to be sparse mixtures of topics and we propose logistic regression modeling of topic presence conditional on web site covariates. We introduce local topics into the Poisson factor analysis framework, where each web site has a local topic not found in other web sites. Two data augmentation methods, the Chinese table distribution and P\'{o}lya-Gamma augmentation, aid in constructing our sampler. We analyze text from web pages nested in United States local public health department web sites to abstract topical information and understand national patterns in topic presence.
1408.0262
Girish Varma
Girish Varma
Reducing uniformity in Khot-Saket hypergraph coloring hardness reductions
null
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a recent result, Khot and Saket [FOCS 2014] proved the quasi-NP-hardness of coloring a 2-colorable 12-uniform hypergraph with $2^{(\log n)^{\Omega(1)}}$ colors. This result was proved using a novel outer PCP verifier which had a strong soundness guarantee. In this note, we show that we can reduce the arity of their result by modifying their 12-query inner verifier to an 8-query inner verifier based on the hypergraph coloring hardness reductions of Guruswami et al. [STOC 2014]. More precisely, we prove quasi-NP-hardness of the following problems on n-vertex hypergraphs. - Coloring a 2-colorable 8-uniform hypergraph with $2^{(\log n)^{\Omega(1)}}$ colors. - Coloring a 4-colorable 4-uniform hypergraph with $2^{(\log n)^{\Omega(1)}}$ colors.
[ { "created": "Fri, 1 Aug 2014 18:49:47 GMT", "version": "v1" }, { "created": "Mon, 4 Aug 2014 06:56:38 GMT", "version": "v2" }, { "created": "Wed, 3 Dec 2014 17:20:45 GMT", "version": "v3" }, { "created": "Thu, 11 Dec 2014 06:18:40 GMT", "version": "v4" } ]
2014-12-12
[ [ "Varma", "Girish", "" ] ]
In a recent result, Khot and Saket [FOCS 2014] proved the quasi-NP-hardness of coloring a 2-colorable 12-uniform hypergraph with $2^{(\log n)^{\Omega(1)}}$ colors. This result was proved using a novel outer PCP verifier which had a strong soundness guarantee. In this note, we show that we can reduce the arity of their result by modifying their 12-query inner verifier to an 8-query inner verifier based on the hypergraph coloring hardness reductions of Guruswami et al. [STOC 2014]. More precisely, we prove quasi-NP-hardness of the following problems on n-vertex hypergraphs. - Coloring a 2-colorable 8-uniform hypergraph with $2^{(\log n)^{\Omega(1)}}$ colors. - Coloring a 4-colorable 4-uniform hypergraph with $2^{(\log n)^{\Omega(1)}}$ colors.
1908.06752
Aakanksha Rana
Aakanksha Rana, Cagri Ozcinar, Aljoscha Smolic
Towards Generating Ambisonics Using Audio-Visual Cue for Virtual Reality
ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
null
10.1109/ICASSP.2019.8683318
null
cs.SD cs.CV cs.LG cs.MM eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ambisonics, i.e., full-sphere surround sound, is essential alongside 360-degree visual content for providing a realistic virtual reality (VR) experience. While 360-degree visual content capture gained a tremendous boost recently, the estimation of corresponding spatial sound is still challenging due to the required sound-field microphones or information about the sound-source locations. In this paper, we introduce a novel problem of generating Ambisonics in 360-degree videos using the audio-visual cue. With this aim, firstly, a novel 360-degree audio-visual video dataset of 265 videos is introduced with annotated sound-source locations. Secondly, a pipeline is designed for an automatic Ambisonic estimation problem. Benefiting from the deep learning-based audio-visual feature-embedding and prediction modules, our pipeline estimates the 3D sound-source locations and further uses these locations to encode the B-format. To benchmark our dataset and pipeline, we additionally propose evaluation criteria to investigate the performance using different 360-degree input representations. Our results demonstrate the efficacy of the proposed pipeline and open up a new area of research in 360-degree audio-visual analysis for future investigations.
[ { "created": "Fri, 16 Aug 2019 14:49:30 GMT", "version": "v1" } ]
2019-08-20
[ [ "Rana", "Aakanksha", "" ], [ "Ozcinar", "Cagri", "" ], [ "Smolic", "Aljoscha", "" ] ]
Ambisonics, i.e., full-sphere surround sound, is essential alongside 360-degree visual content for providing a realistic virtual reality (VR) experience. While 360-degree visual content capture gained a tremendous boost recently, the estimation of corresponding spatial sound is still challenging due to the required sound-field microphones or information about the sound-source locations. In this paper, we introduce a novel problem of generating Ambisonics in 360-degree videos using the audio-visual cue. With this aim, firstly, a novel 360-degree audio-visual video dataset of 265 videos is introduced with annotated sound-source locations. Secondly, a pipeline is designed for an automatic Ambisonic estimation problem. Benefiting from the deep learning-based audio-visual feature-embedding and prediction modules, our pipeline estimates the 3D sound-source locations and further uses these locations to encode the B-format. To benchmark our dataset and pipeline, we additionally propose evaluation criteria to investigate the performance using different 360-degree input representations. Our results demonstrate the efficacy of the proposed pipeline and open up a new area of research in 360-degree audio-visual analysis for future investigations.