Schema (column: type, value range):
id: stringlengths (9–10)
submitter: stringlengths (1–64)
authors: stringlengths (4–20.7k)
title: stringlengths (4–246)
comments: stringlengths (1–523)
journal-ref: stringlengths (4–404)
doi: stringlengths (11–153)
report-no: stringlengths (2–254)
categories: stringlengths (5–98)
license: stringclasses (9 values)
orig_abstract: stringlengths (14–3.35k)
versions: listlengths (1–60)
update_date: stringlengths (10–10)
authors_parsed: listlengths (1–1.35k)
abstract: stringlengths (11–3.34k)
1705.06891
Mohammad Hadi
Mohammad Hadi, Mohammad Reza Pakravan
Energy-Efficient Resource Allocation for Elastic Optical Networks using Convex Optimization
28 pages, 11 figures, 4 tables
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a two-stage algorithm for energy-efficient resource allocation constrained to QoS and physical requirements in OFDM-based EONs. The first stage deals with routing, grooming and traffic ordering and aims at minimizing amplifier power consumption and number of active transponders. We provide a heuristic procedure which yields an acceptable solution for the complex ILP formulation of the routing and grooming. In the second stage, we optimize transponder configuration including spectrum and transmit power parameters to minimize transponder power consumption. We show how QoS and transponder power consumption are represented by convex expressions and use the results to formulate a convex problem for configuring transponders in which transmit optical power is an optimization variable. Simulation results demonstrate that the power consumption is reduced by 9% when the proposed routing and grooming algorithm is applied to European Cost239 network with aggregate traffic 60 Tbps. It is shown that our convex formulation for transponder parameter assignment is considerably faster than its MINLP counterpart and its ability to optimize transmit optical power improves transponder power consumption by 8% for aggregate traffic 60 Tbps. Furthermore, we investigate the effect of adaptive modulation assignment and transponder capacity on inherent tradeoff between network CAPEX and OPEX.
[ { "created": "Fri, 19 May 2017 08:34:25 GMT", "version": "v1" } ]
2017-05-22
[ [ "Hadi", "Mohammad", "" ], [ "Pakravan", "Mohammad Reza", "" ] ]
We propose a two-stage algorithm for energy-efficient resource allocation constrained to QoS and physical requirements in OFDM-based EONs. The first stage deals with routing, grooming and traffic ordering and aims at minimizing amplifier power consumption and number of active transponders. We provide a heuristic procedure which yields an acceptable solution for the complex ILP formulation of the routing and grooming. In the second stage, we optimize transponder configuration including spectrum and transmit power parameters to minimize transponder power consumption. We show how QoS and transponder power consumption are represented by convex expressions and use the results to formulate a convex problem for configuring transponders in which transmit optical power is an optimization variable. Simulation results demonstrate that the power consumption is reduced by 9% when the proposed routing and grooming algorithm is applied to European Cost239 network with aggregate traffic 60 Tbps. It is shown that our convex formulation for transponder parameter assignment is considerably faster than its MINLP counterpart and its ability to optimize transmit optical power improves transponder power consumption by 8% for aggregate traffic 60 Tbps. Furthermore, we investigate the effect of adaptive modulation assignment and transponder capacity on inherent tradeoff between network CAPEX and OPEX.
1602.03337
Godphrey Kyambille Mr
Godphrey G Kyambille and Khamisi Kalegele
Enhancing Patient Appointments Scheduling that Uses Mobile Technology
7 pages
International Journal of Computer Science and Information Security, 13(11), 21 (2015)
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Appointment scheduling systems are utilized mainly by specialty care clinics to manage access to service providers as well as by hospitals to schedule patient appointments. When attending hospitals in Tanzania, patients experience challenges to see an appropriate specialist doctor because of service interval inconsistency. Timely availability of doctors is critical whenever a patient needs to see a specialist doctor for treatment and a serious bottleneck lies in the application of appropriate technology techniques to enhance appointment scheduling. In this paper, we present a mobile based application scheduling system for managing patient appointments. Furthermore, forthcoming opportunities for the innovative use of the mobile based application scheduling system are identified.
[ { "created": "Wed, 10 Feb 2016 11:56:48 GMT", "version": "v1" } ]
2016-02-11
[ [ "Kyambille", "Godphrey G", "" ], [ "Kalegele", "Khamisi", "" ] ]
Appointment scheduling systems are utilized mainly by specialty care clinics to manage access to service providers as well as by hospitals to schedule patient appointments. When attending hospitals in Tanzania, patients experience challenges to see an appropriate specialist doctor because of service interval inconsistency. Timely availability of doctors is critical whenever a patient needs to see a specialist doctor for treatment and a serious bottleneck lies in the application of appropriate technology techniques to enhance appointment scheduling. In this paper, we present a mobile based application scheduling system for managing patient appointments. Furthermore, forthcoming opportunities for the innovative use of the mobile based application scheduling system are identified.
2211.16611
Zhijie Qiao
Zhijie Qiao, Gedaliah Knizhnik, and Mark Yim
Holonomic Control of Arbitrary Configurations of Docked Modboats
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
The Modboat is a low-cost, underactuated, modular robot capable of surface swimming, docking to other modules, and undocking from them using only a single motor and two passive flippers. Undocking is achieved by causing intentional self-collision between the tails of neighboring modules in certain configurations; this becomes a challenge, however, when collective swimming as one connected component is desirable. Prior work has developed controllers that turn arbitrary configurations of docked Modboats into steerable vehicles, but they cannot counteract lateral forces and disturbances. In this work we present a centralized control strategy to create holonomic vehicles out of arbitrary configurations of docked Modboats using an iterative potential-field based search. We experimentally demonstrate that our controller performs well and can control surge and sway velocities and yaw angle simultaneously.
[ { "created": "Tue, 29 Nov 2022 22:14:46 GMT", "version": "v1" } ]
2022-12-01
[ [ "Qiao", "Zhijie", "" ], [ "Knizhnik", "Gedaliah", "" ], [ "Yim", "Mark", "" ] ]
The Modboat is a low-cost, underactuated, modular robot capable of surface swimming, docking to other modules, and undocking from them using only a single motor and two passive flippers. Undocking is achieved by causing intentional self-collision between the tails of neighboring modules in certain configurations; this becomes a challenge, however, when collective swimming as one connected component is desirable. Prior work has developed controllers that turn arbitrary configurations of docked Modboats into steerable vehicles, but they cannot counteract lateral forces and disturbances. In this work we present a centralized control strategy to create holonomic vehicles out of arbitrary configurations of docked Modboats using an iterative potential-field based search. We experimentally demonstrate that our controller performs well and can control surge and sway velocities and yaw angle simultaneously.
1805.10521
Lutz Bornmann Dr.
Lutz Bornmann, Antonio Osorio
The value and credits of n-authors publications
null
null
null
null
cs.DL physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collaboration among researchers is becoming increasingly common, which raises a large number of scientometrics questions for which there is not a clear and generally accepted answer. For instance, what value should be given to a two-author or three-author publication with respect to a single-author publication? This paper uses axiomatic analysis and proposes a practical method to compute the expected value of an n-authors publication that takes into consideration the added value induced by collaboration in contexts in which there is no prior or ex-ante information about the publication's potential merits or scientific impact. The only information required is the number of authors. We compared the obtained theoretical values with the empirical values based on a large dataset from the Web of Science database. We found that the theoretical values are very close to the empirical values for some disciplines, but not for all. This observation provides support in favor of the method proposed in this paper. We expect that our findings can help researchers and decision-makers to choose more effective and fair counting methods that take into account the benefits of collaboration.
[ { "created": "Sat, 26 May 2018 18:57:01 GMT", "version": "v1" }, { "created": "Wed, 4 Jul 2018 06:39:17 GMT", "version": "v2" } ]
2018-07-05
[ [ "Bornmann", "Lutz", "" ], [ "Osorio", "Antonio", "" ] ]
Collaboration among researchers is becoming increasingly common, which raises a large number of scientometrics questions for which there is not a clear and generally accepted answer. For instance, what value should be given to a two-author or three-author publication with respect to a single-author publication? This paper uses axiomatic analysis and proposes a practical method to compute the expected value of an n-authors publication that takes into consideration the added value induced by collaboration in contexts in which there is no prior or ex-ante information about the publication's potential merits or scientific impact. The only information required is the number of authors. We compared the obtained theoretical values with the empirical values based on a large dataset from the Web of Science database. We found that the theoretical values are very close to the empirical values for some disciplines, but not for all. This observation provides support in favor of the method proposed in this paper. We expect that our findings can help researchers and decision-makers to choose more effective and fair counting methods that take into account the benefits of collaboration.
1907.04235
Philip Schniter
Philip Schniter
A Simple Derivation of AMP and its State Evolution via First-Order Cancellation
null
null
10.1109/TSP.2020.3005545
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the linear regression problem, where the goal is to recover the vector $\boldsymbol{x}\in\mathbb{R}^n$ from measurements $\boldsymbol{y}=\boldsymbol{A}\boldsymbol{x}+\boldsymbol{w}\in\mathbb{R}^m$ under known matrix $\boldsymbol{A}$ and unknown noise $\boldsymbol{w}$. For large i.i.d. sub-Gaussian $\boldsymbol{A}$, the approximate message passing (AMP) algorithm is precisely analyzable through a state-evolution (SE) formalism, which furthermore shows that AMP is Bayes optimal in certain regimes. The rigorous SE proof, however, is long and complicated. And, although the AMP algorithm can be derived as an approximation of loop belief propagation (LBP), this viewpoint provides little insight into why large i.i.d. $\boldsymbol{A}$ matrices are important for AMP, and why AMP has a state evolution. In this work, we provide a heuristic derivation of AMP and its state evolution, based on the idea of "first-order cancellation," that provides insights missing from the LBP derivation while being much shorter than the rigorous SE proof.
[ { "created": "Tue, 9 Jul 2019 15:19:19 GMT", "version": "v1" }, { "created": "Thu, 18 Jul 2019 19:24:46 GMT", "version": "v2" }, { "created": "Mon, 10 Feb 2020 20:47:01 GMT", "version": "v3" } ]
2023-07-19
[ [ "Schniter", "Philip", "" ] ]
We consider the linear regression problem, where the goal is to recover the vector $\boldsymbol{x}\in\mathbb{R}^n$ from measurements $\boldsymbol{y}=\boldsymbol{A}\boldsymbol{x}+\boldsymbol{w}\in\mathbb{R}^m$ under known matrix $\boldsymbol{A}$ and unknown noise $\boldsymbol{w}$. For large i.i.d. sub-Gaussian $\boldsymbol{A}$, the approximate message passing (AMP) algorithm is precisely analyzable through a state-evolution (SE) formalism, which furthermore shows that AMP is Bayes optimal in certain regimes. The rigorous SE proof, however, is long and complicated. And, although the AMP algorithm can be derived as an approximation of loop belief propagation (LBP), this viewpoint provides little insight into why large i.i.d. $\boldsymbol{A}$ matrices are important for AMP, and why AMP has a state evolution. In this work, we provide a heuristic derivation of AMP and its state evolution, based on the idea of "first-order cancellation," that provides insights missing from the LBP derivation while being much shorter than the rigorous SE proof.
2308.15244
Meng Yuan
Meng Yuan, Fuzhen Zhuang, Zhao Zhang, Deqing Wang and Jin Dong
Knowledge-based Multiple Adaptive Spaces Fusion for Recommendation
null
null
null
null
cs.IR cs.AI
http://creativecommons.org/licenses/by/4.0/
Since Knowledge Graphs (KGs) contain rich semantic information, recently there has been an influx of KG-enhanced recommendation methods. Most of existing methods are entirely designed based on euclidean space without considering curvature. However, recent studies have revealed that a tremendous graph-structured data exhibits highly non-euclidean properties. Motivated by these observations, in this work, we propose a knowledge-based multiple adaptive spaces fusion method for recommendation, namely MCKG. Unlike existing methods that solely adopt a specific manifold, we introduce the unified space that is compatible with hyperbolic, euclidean and spherical spaces. Furthermore, we fuse the multiple unified spaces in an attention manner to obtain the high-quality embeddings for better knowledge propagation. In addition, we propose a geometry-aware optimization strategy which enables the pull and push processes benefited from both hyperbolic and spherical spaces. Specifically, in hyperbolic space, we set smaller margins in the area near to the origin, which is conducive to distinguishing between highly similar positive items and negative ones. At the same time, we set larger margins in the area far from the origin to ensure the model has sufficient error tolerance. The similar manner also applies to spherical spaces. Extensive experiments on three real-world datasets demonstrate that the MCKG has a significant improvement over state-of-the-art recommendation methods. Further ablation experiments verify the importance of multi-space fusion and geometry-aware optimization strategy, justifying the rationality and effectiveness of MCKG.
[ { "created": "Tue, 29 Aug 2023 12:11:16 GMT", "version": "v1" } ]
2023-08-30
[ [ "Yuan", "Meng", "" ], [ "Zhuang", "Fuzhen", "" ], [ "Zhang", "Zhao", "" ], [ "Wang", "Deqing", "" ], [ "Dong", "Jin", "" ] ]
Since Knowledge Graphs (KGs) contain rich semantic information, recently there has been an influx of KG-enhanced recommendation methods. Most of existing methods are entirely designed based on euclidean space without considering curvature. However, recent studies have revealed that a tremendous graph-structured data exhibits highly non-euclidean properties. Motivated by these observations, in this work, we propose a knowledge-based multiple adaptive spaces fusion method for recommendation, namely MCKG. Unlike existing methods that solely adopt a specific manifold, we introduce the unified space that is compatible with hyperbolic, euclidean and spherical spaces. Furthermore, we fuse the multiple unified spaces in an attention manner to obtain the high-quality embeddings for better knowledge propagation. In addition, we propose a geometry-aware optimization strategy which enables the pull and push processes benefited from both hyperbolic and spherical spaces. Specifically, in hyperbolic space, we set smaller margins in the area near to the origin, which is conducive to distinguishing between highly similar positive items and negative ones. At the same time, we set larger margins in the area far from the origin to ensure the model has sufficient error tolerance. The similar manner also applies to spherical spaces. Extensive experiments on three real-world datasets demonstrate that the MCKG has a significant improvement over state-of-the-art recommendation methods. Further ablation experiments verify the importance of multi-space fusion and geometry-aware optimization strategy, justifying the rationality and effectiveness of MCKG.
1309.1630
Frederic Suter
Henri Casanova and Arnaud Giersch and Arnaud Legrand and Martin Quinson and Fr\'ed\'eric Suter
SimGrid: a Sustained Effort for the Versatile Simulation of Large Scale Distributed Systems
4 pages, submission to WSSSPE'13
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present Simgrid, a toolkit for the versatile simulation of large scale distributed systems, whose development effort has been sustained for the last fifteen years. Over this time period SimGrid has evolved from a one-laboratory project in the U.S. into a scientific instrument developed by an international collaboration. The keys to making this evolution possible have been securing of funding, improving the quality of the software, and increasing the user base. In this paper we describe how we have been able to make advances on all three fronts, on which we plan to intensify our efforts over the upcoming years.
[ { "created": "Fri, 6 Sep 2013 13:16:20 GMT", "version": "v1" } ]
2013-09-09
[ [ "Casanova", "Henri", "" ], [ "Giersch", "Arnaud", "" ], [ "Legrand", "Arnaud", "" ], [ "Quinson", "Martin", "" ], [ "Suter", "Frédéric", "" ] ]
In this paper we present Simgrid, a toolkit for the versatile simulation of large scale distributed systems, whose development effort has been sustained for the last fifteen years. Over this time period SimGrid has evolved from a one-laboratory project in the U.S. into a scientific instrument developed by an international collaboration. The keys to making this evolution possible have been securing of funding, improving the quality of the software, and increasing the user base. In this paper we describe how we have been able to make advances on all three fronts, on which we plan to intensify our efforts over the upcoming years.
2002.00069
Philokypros Ioulianou
Ryan Smith, Daniel Palin, Philokypros P. Ioulianou, Vassilios G. Vassilakis, Siamak F. Shahandashti
Battery draining attacks against edge computing nodes in IoT networks
19 pages
Cyber-Physical Systems (2020), pp.1-21
10.1080/23335777.2020.1716268
null
cs.CR cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many IoT devices, especially those deployed at the network edge have limited power resources. A number of attacks aim to exhaust these resources and drain the batteries of such edge nodes. In this work, we study the effects of a variety of battery draining attacks against edge nodes. Through simulation, we clarify the extent to which such attacks are able to increase the usage and hence waste the power resources of edge nodes. Specifically, we implement hello flooding, packet flooding, selective forwarding, rank attack, and versioning attack in ContikiOS and simulate them in the Cooja simulator, and measure and report a number of time and power resource usage metrics including CPU time, low power mode time, TX/RX time, and battery consumption. Besides, we test the stretch attack with three different batteries as an extreme scenario. Our extensive measurements enable us to compare the effectiveness of these attacks. Our results show that Versioning attack is the most severe attack in terms of draining the power resources of the network, followed by Packet Flooding and Hello Flood attacks. Furthermore, we confirm that Selective Forwarding and Rank attacks are not able to considerably increase the power resource usage in our scenarios. By quantifying the effects of these attacks, we demonstrate that under specific scenarios, Versioning attack can be three to four times as effective as Packet Flooding and Hello Flood attacks in wasting network resources, while Packet Flooding is generally comparable to Hello Flood in CPU and TX time usage increase but twice as powerful in draining device batteries.
[ { "created": "Fri, 31 Jan 2020 21:44:21 GMT", "version": "v1" }, { "created": "Tue, 4 Feb 2020 17:03:48 GMT", "version": "v2" } ]
2020-02-05
[ [ "Smith", "Ryan", "" ], [ "Palin", "Daniel", "" ], [ "Ioulianou", "Philokypros P.", "" ], [ "Vassilakis", "Vassilios G.", "" ], [ "Shahandashti", "Siamak F.", "" ] ]
Many IoT devices, especially those deployed at the network edge have limited power resources. A number of attacks aim to exhaust these resources and drain the batteries of such edge nodes. In this work, we study the effects of a variety of battery draining attacks against edge nodes. Through simulation, we clarify the extent to which such attacks are able to increase the usage and hence waste the power resources of edge nodes. Specifically, we implement hello flooding, packet flooding, selective forwarding, rank attack, and versioning attack in ContikiOS and simulate them in the Cooja simulator, and measure and report a number of time and power resource usage metrics including CPU time, low power mode time, TX/RX time, and battery consumption. Besides, we test the stretch attack with three different batteries as an extreme scenario. Our extensive measurements enable us to compare the effectiveness of these attacks. Our results show that Versioning attack is the most severe attack in terms of draining the power resources of the network, followed by Packet Flooding and Hello Flood attacks. Furthermore, we confirm that Selective Forwarding and Rank attacks are not able to considerably increase the power resource usage in our scenarios. By quantifying the effects of these attacks, we demonstrate that under specific scenarios, Versioning attack can be three to four times as effective as Packet Flooding and Hello Flood attacks in wasting network resources, while Packet Flooding is generally comparable to Hello Flood in CPU and TX time usage increase but twice as powerful in draining device batteries.
2208.04537
Yingtong Dou
Ruitong Zhang, Hao Peng, Yingtong Dou, Jia Wu, Qingyun Sun, Jingyi Zhang, Philip S. Yu
Automating DBSCAN via Deep Reinforcement Learning
Accepted by CIKM 2022. The code is available at https://github.com/RingBDStack/DRL-DBSCAN
null
null
null
cs.LG cs.IR
http://creativecommons.org/licenses/by/4.0/
DBSCAN is widely used in many scientific and engineering fields because of its simplicity and practicality. However, due to its high sensitivity parameters, the accuracy of the clustering result depends heavily on practical experience. In this paper, we first propose a novel Deep Reinforcement Learning guided automatic DBSCAN parameters search framework, namely DRL-DBSCAN. The framework models the process of adjusting the parameter search direction by perceiving the clustering environment as a Markov decision process, which aims to find the best clustering parameters without manual assistance. DRL-DBSCAN learns the optimal clustering parameter search policy for different feature distributions via interacting with the clusters, using a weakly-supervised reward training policy network. In addition, we also present a recursive search mechanism driven by the scale of the data to efficiently and controllably process large parameter spaces. Extensive experiments are conducted on five artificial and real-world datasets based on the proposed four working modes. The results of offline and online tasks show that the DRL-DBSCAN not only consistently improves DBSCAN clustering accuracy by up to 26% and 25% respectively, but also can stably find the dominant parameters with high computational efficiency. The code is available at https://github.com/RingBDStack/DRL-DBSCAN.
[ { "created": "Tue, 9 Aug 2022 04:40:11 GMT", "version": "v1" } ]
2022-08-10
[ [ "Zhang", "Ruitong", "" ], [ "Peng", "Hao", "" ], [ "Dou", "Yingtong", "" ], [ "Wu", "Jia", "" ], [ "Sun", "Qingyun", "" ], [ "Zhang", "Jingyi", "" ], [ "Yu", "Philip S.", "" ] ]
DBSCAN is widely used in many scientific and engineering fields because of its simplicity and practicality. However, due to its high sensitivity parameters, the accuracy of the clustering result depends heavily on practical experience. In this paper, we first propose a novel Deep Reinforcement Learning guided automatic DBSCAN parameters search framework, namely DRL-DBSCAN. The framework models the process of adjusting the parameter search direction by perceiving the clustering environment as a Markov decision process, which aims to find the best clustering parameters without manual assistance. DRL-DBSCAN learns the optimal clustering parameter search policy for different feature distributions via interacting with the clusters, using a weakly-supervised reward training policy network. In addition, we also present a recursive search mechanism driven by the scale of the data to efficiently and controllably process large parameter spaces. Extensive experiments are conducted on five artificial and real-world datasets based on the proposed four working modes. The results of offline and online tasks show that the DRL-DBSCAN not only consistently improves DBSCAN clustering accuracy by up to 26% and 25% respectively, but also can stably find the dominant parameters with high computational efficiency. The code is available at https://github.com/RingBDStack/DRL-DBSCAN.
2308.02951
Sebastian Martschat
Yueling Li, Sebastian Martschat, Simone Paolo Ponzetto
Multi-Source (Pre-)Training for Cross-Domain Measurement, Unit and Context Extraction
Published as a workshop paper at BioNLP 2023
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a cross-domain approach for automated measurement and context extraction based on pre-trained language models. We construct a multi-source, multi-domain corpus and train an end-to-end extraction pipeline. We then apply multi-source task-adaptive pre-training and fine-tuning to benchmark the cross-domain generalization capability of our model. Further, we conceptualize and apply a task-specific error analysis and derive insights for future work. Our results suggest that multi-source training leads to the best overall results, while single-source training yields the best results for the respective individual domain. While our setup is successful at extracting quantity values and units, more research is needed to improve the extraction of contextual entities. We make the cross-domain corpus used in this work available online.
[ { "created": "Sat, 5 Aug 2023 20:33:39 GMT", "version": "v1" } ]
2023-08-08
[ [ "Li", "Yueling", "" ], [ "Martschat", "Sebastian", "" ], [ "Ponzetto", "Simone Paolo", "" ] ]
We present a cross-domain approach for automated measurement and context extraction based on pre-trained language models. We construct a multi-source, multi-domain corpus and train an end-to-end extraction pipeline. We then apply multi-source task-adaptive pre-training and fine-tuning to benchmark the cross-domain generalization capability of our model. Further, we conceptualize and apply a task-specific error analysis and derive insights for future work. Our results suggest that multi-source training leads to the best overall results, while single-source training yields the best results for the respective individual domain. While our setup is successful at extracting quantity values and units, more research is needed to improve the extraction of contextual entities. We make the cross-domain corpus used in this work available online.
1911.10927
Denys Rozumnyi
Denys Rozumnyi, Jan Kotera, Filip Sroubek, Jiri Matas
Sub-frame Appearance and 6D Pose Estimation of Fast Moving Objects
null
2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
10.1109/CVPR42600.2020.00681
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel method that tracks fast moving objects, mainly non-uniform spherical, in full 6 degrees of freedom, estimating simultaneously their 3D motion trajectory, 3D pose and object appearance changes with a time step that is a fraction of the video frame exposure time. The sub-frame object localization and appearance estimation allows realistic temporal super-resolution and precise shape estimation. The method, called TbD-3D (Tracking by Deblatting in 3D) relies on a novel reconstruction algorithm which solves a piece-wise deblurring and matting problem. The 3D rotation is estimated by minimizing the reprojection error. As a second contribution, we present a new challenging dataset with fast moving objects that change their appearance and distance to the camera. High speed camera recordings with zero lag between frame exposures were used to generate videos with different frame rates annotated with ground-truth trajectory and pose.
[ { "created": "Mon, 25 Nov 2019 14:13:25 GMT", "version": "v1" } ]
2020-10-30
[ [ "Rozumnyi", "Denys", "" ], [ "Kotera", "Jan", "" ], [ "Sroubek", "Filip", "" ], [ "Matas", "Jiri", "" ] ]
We propose a novel method that tracks fast moving objects, mainly non-uniform spherical, in full 6 degrees of freedom, estimating simultaneously their 3D motion trajectory, 3D pose and object appearance changes with a time step that is a fraction of the video frame exposure time. The sub-frame object localization and appearance estimation allows realistic temporal super-resolution and precise shape estimation. The method, called TbD-3D (Tracking by Deblatting in 3D) relies on a novel reconstruction algorithm which solves a piece-wise deblurring and matting problem. The 3D rotation is estimated by minimizing the reprojection error. As a second contribution, we present a new challenging dataset with fast moving objects that change their appearance and distance to the camera. High speed camera recordings with zero lag between frame exposures were used to generate videos with different frame rates annotated with ground-truth trajectory and pose.
1908.11262
Dogancan Temel
Dogancan Temel and Min-Hung Chen and Ghassan AlRegib
Traffic Sign Detection under Challenging Conditions: A Deeper Look Into Performance Variations and Spectral Characteristics
13 pages, 9 figures, 4 tables. IEEE Transactions on Intelligent Transportation Systems, 2019
null
10.1109/TITS.2019.2931429
null
cs.CV cs.LG eess.IV eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traffic signs are critical for maintaining the safety and efficiency of our roads. Therefore, we need to carefully assess the capabilities and limitations of automated traffic sign detection systems. Existing traffic sign datasets are limited in terms of type and severity of challenging conditions. Metadata corresponding to these conditions are unavailable and it is not possible to investigate the effect of a single factor because of simultaneous changes in numerous conditions. To overcome the shortcomings in existing datasets, we introduced the CURE-TSD-Real dataset, which is based on simulated challenging conditions that correspond to adversaries that can occur in real-world environments and systems. We test the performance of two benchmark algorithms and show that severe conditions can result in an average performance degradation of 29% in precision and 68% in recall. We investigate the effect of challenging conditions through spectral analysis and show that challenging conditions can lead to distinct magnitude spectrum characteristics. Moreover, we show that mean magnitude spectrum of changes in video sequences under challenging conditions can be an indicator of detection performance. CURE-TSD-Real dataset is available online at https://github.com/olivesgatech/CURE-TSD.
[ { "created": "Thu, 29 Aug 2019 14:37:40 GMT", "version": "v1" } ]
2019-08-30
[ [ "Temel", "Dogancan", "" ], [ "Chen", "Min-Hung", "" ], [ "AlRegib", "Ghassan", "" ] ]
Traffic signs are critical for maintaining the safety and efficiency of our roads. Therefore, we need to carefully assess the capabilities and limitations of automated traffic sign detection systems. Existing traffic sign datasets are limited in terms of type and severity of challenging conditions. Metadata corresponding to these conditions are unavailable and it is not possible to investigate the effect of a single factor because of simultaneous changes in numerous conditions. To overcome the shortcomings in existing datasets, we introduced the CURE-TSD-Real dataset, which is based on simulated challenging conditions that correspond to adversaries that can occur in real-world environments and systems. We test the performance of two benchmark algorithms and show that severe conditions can result in an average performance degradation of 29% in precision and 68% in recall. We investigate the effect of challenging conditions through spectral analysis and show that challenging conditions can lead to distinct magnitude spectrum characteristics. Moreover, we show that mean magnitude spectrum of changes in video sequences under challenging conditions can be an indicator of detection performance. CURE-TSD-Real dataset is available online at https://github.com/olivesgatech/CURE-TSD.
2005.11895
Maxime Bouton
Maxime Bouton, Alireza Nakhaei, David Isele, Kikuo Fujimura, and Mykel J. Kochenderfer
Reinforcement Learning with Iterative Reasoning for Merging in Dense Traffic
6 pages, 5 figures
IEEE Intelligent Transportation Systems Conference (ITSC) 2020
null
null
cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Maneuvering in dense traffic is a challenging task for autonomous vehicles because it requires reasoning about the stochastic behaviors of many other participants. In addition, the agent must achieve the maneuver within a limited time and distance. In this work, we propose a combination of reinforcement learning and game theory to learn merging behaviors. We design a training curriculum for a reinforcement learning agent using the concept of level-$k$ behavior. This approach exposes the agent to a broad variety of behaviors during training, which promotes learning policies that are robust to model discrepancies. We show that our approach learns more efficient policies than traditional training methods.
[ { "created": "Mon, 25 May 2020 02:57:19 GMT", "version": "v1" } ]
2020-05-26
[ [ "Bouton", "Maxime", "" ], [ "Nakhaei", "Alireza", "" ], [ "Isele", "David", "" ], [ "Fujimura", "Kikuo", "" ], [ "Kochenderfer", "Mykel J.", "" ] ]
Maneuvering in dense traffic is a challenging task for autonomous vehicles because it requires reasoning about the stochastic behaviors of many other participants. In addition, the agent must achieve the maneuver within a limited time and distance. In this work, we propose a combination of reinforcement learning and game theory to learn merging behaviors. We design a training curriculum for a reinforcement learning agent using the concept of level-$k$ behavior. This approach exposes the agent to a broad variety of behaviors during training, which promotes learning policies that are robust to model discrepancies. We show that our approach learns more efficient policies than traditional training methods.
0906.2582
Shun Watanabe
Shun Watanabe, Ryutaroh Matsumoto, and Tomohiko Uyematsu
Strongly Secure Privacy Amplification Cannot Be Obtained by Encoder of Slepian-Wolf Code
10 pages, no figure, A part of this paper will be presented at 2009 IEEE International Symposium on Information Theory in Seoul, Korea. Version 2 is a published version. The results are not changed from version 1. Explanations are polished and some references are added. In version 3, only style and DOI are edited
IEICE Trans. Fundamentals, vol. 93, no. 9, pp. 1650-1659, September 2010
10.1587/transfun.E93.A.1650
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Privacy amplification is a technique to distill a secret key from a random variable by a function so that the distilled key and the eavesdropper's random variable are statistically independent. There are three kinds of security criteria for the key distilled by privacy amplification: the normalized divergence criterion, which is also known as the weak security criterion; the variational distance criterion; and the divergence criterion, which is also known as the strong security criterion. As a technique to distill a secret key, it is known that the encoder of a Slepian-Wolf code (source coding with full side-information at the decoder) can be used as a function for privacy amplification if we employ the weak security criterion. In this paper, we show that the encoder of a Slepian-Wolf code cannot be used as a function for privacy amplification if we employ criteria other than the weak one.
[ { "created": "Mon, 15 Jun 2009 00:26:59 GMT", "version": "v1" }, { "created": "Sat, 5 Mar 2011 03:05:54 GMT", "version": "v2" }, { "created": "Tue, 8 Mar 2011 02:15:02 GMT", "version": "v3" } ]
2011-03-09
[ [ "Watanabe", "Shun", "" ], [ "Matsumoto", "Ryutaroh", "" ], [ "Uyematsu", "Tomohiko", "" ] ]
Privacy amplification is a technique to distill a secret key from a random variable by a function so that the distilled key and the eavesdropper's random variable are statistically independent. There are three kinds of security criteria for the key distilled by privacy amplification: the normalized divergence criterion, which is also known as the weak security criterion; the variational distance criterion; and the divergence criterion, which is also known as the strong security criterion. As a technique to distill a secret key, it is known that the encoder of a Slepian-Wolf code (source coding with full side-information at the decoder) can be used as a function for privacy amplification if we employ the weak security criterion. In this paper, we show that the encoder of a Slepian-Wolf code cannot be used as a function for privacy amplification if we employ criteria other than the weak one.
2304.10074
Xiyuan Wang
Xiyuan Wang, Pan Li, Muhan Zhang
Improving Graph Neural Networks on Multi-node Tasks with Labeling Tricks
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
In this paper, we provide a theory of using graph neural networks (GNNs) for \textit{multi-node representation learning}, where we are interested in learning a representation for a set of more than one node such as a link. Existing GNNs are mainly designed to learn single-node representations. When we want to learn a node-set representation involving multiple nodes, a common practice in previous works is to directly aggregate the single-node representations obtained by a GNN. In this paper, we show a fundamental limitation of such an approach, namely the inability to capture the dependence among multiple nodes in a node set, and argue that directly aggregating individual node representations fails to produce an effective joint representation for multiple nodes. A straightforward solution is to distinguish target nodes from others. Formalizing this idea, we propose \text{labeling trick}, which first labels nodes in the graph according to their relationships with the target node set before applying a GNN and then aggregates node representations obtained in the labeled graph for multi-node representations. The labeling trick also unifies a few previous successful works for multi-node representation learning, including SEAL, Distance Encoding, ID-GNN, and NBFNet. Besides node sets in graphs, we also extend labeling tricks to posets, subsets and hypergraphs. Experiments verify that the labeling trick technique can boost GNNs on various tasks, including undirected link prediction, directed link prediction, hyperedge prediction, and subgraph prediction. Our work explains the superior performance of previous node-labeling-based methods and establishes a theoretical foundation for using GNNs for multi-node representation learning.
[ { "created": "Thu, 20 Apr 2023 04:03:40 GMT", "version": "v1" } ]
2023-04-21
[ [ "Wang", "Xiyuan", "" ], [ "Li", "Pan", "" ], [ "Zhang", "Muhan", "" ] ]
In this paper, we provide a theory of using graph neural networks (GNNs) for \textit{multi-node representation learning}, where we are interested in learning a representation for a set of more than one node such as a link. Existing GNNs are mainly designed to learn single-node representations. When we want to learn a node-set representation involving multiple nodes, a common practice in previous works is to directly aggregate the single-node representations obtained by a GNN. In this paper, we show a fundamental limitation of such an approach, namely the inability to capture the dependence among multiple nodes in a node set, and argue that directly aggregating individual node representations fails to produce an effective joint representation for multiple nodes. A straightforward solution is to distinguish target nodes from others. Formalizing this idea, we propose \text{labeling trick}, which first labels nodes in the graph according to their relationships with the target node set before applying a GNN and then aggregates node representations obtained in the labeled graph for multi-node representations. The labeling trick also unifies a few previous successful works for multi-node representation learning, including SEAL, Distance Encoding, ID-GNN, and NBFNet. Besides node sets in graphs, we also extend labeling tricks to posets, subsets and hypergraphs. Experiments verify that the labeling trick technique can boost GNNs on various tasks, including undirected link prediction, directed link prediction, hyperedge prediction, and subgraph prediction. Our work explains the superior performance of previous node-labeling-based methods and establishes a theoretical foundation for using GNNs for multi-node representation learning.
0902.1258
Baptiste Jeudy
Baptiste Jeudy (LAHC), Fran\c{c}ois Rioult (GREYC)
Extraction de concepts sous contraintes dans des donn\'ees d'expression de g\`enes
null
Conf\'erence sur l'apprentissage automatique, Nice : France (2005)
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a technique to extract constrained formal concepts.
[ { "created": "Sat, 7 Feb 2009 18:01:09 GMT", "version": "v1" } ]
2009-02-10
[ [ "Jeudy", "Baptiste", "", "LAHC" ], [ "Rioult", "François", "", "GREYC" ] ]
In this paper, we propose a technique to extract constrained formal concepts.
2302.04185
Anthony Yazdani
Anthony Yazdani, Dimitrios Proios, Hossein Rouhizadeh, Douglas Teodoro
Efficient Joint Learning for Clinical Named Entity Recognition and Relation Extraction Using Fourier Networks: A Use Case in Adverse Drug Events
International Conference on Natural Language Processing (ICON 2022)
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Current approaches for clinical information extraction are inefficient in terms of computational costs and memory consumption, hindering their application to process large-scale electronic health records (EHRs). We propose an efficient end-to-end model, the Joint-NER-RE-Fourier (JNRF), to jointly learn the tasks of named entity recognition and relation extraction for documents of variable length. The architecture uses positional encoding and unitary batch sizes to process variable-length documents and uses a weight-shared Fourier network layer for low-complexity token mixing. Finally, we reach the theoretical computational complexity lower bound for relation extraction using a selective pooling strategy and distance-aware attention weights with trainable polynomial distance functions. We evaluated the JNRF architecture using the 2018 N2C2 ADE benchmark to jointly extract medication-related entities and relations in variable-length EHR summaries. JNRF outperforms rolling window BERT with selective pooling by 0.42%, while being twice as fast to train. Compared to state-of-the-art BiLSTM-CRF architectures on the N2C2 ADE benchmark, results show that the proposed approach trains 22 times faster and reduces GPU memory consumption by 1.75-fold, with a reasonable performance tradeoff of 90%, without the use of external tools, hand-crafted rules or post-processing. Given the significant carbon footprint of deep learning models and the current energy crisis, these methods could support efficient and cleaner information extraction in EHRs and other types of large-scale document databases.
[ { "created": "Wed, 8 Feb 2023 16:44:27 GMT", "version": "v1" } ]
2023-02-09
[ [ "Yazdani", "Anthony", "" ], [ "Proios", "Dimitrios", "" ], [ "Rouhizadeh", "Hossein", "" ], [ "Teodoro", "Douglas", "" ] ]
Current approaches for clinical information extraction are inefficient in terms of computational costs and memory consumption, hindering their application to process large-scale electronic health records (EHRs). We propose an efficient end-to-end model, the Joint-NER-RE-Fourier (JNRF), to jointly learn the tasks of named entity recognition and relation extraction for documents of variable length. The architecture uses positional encoding and unitary batch sizes to process variable-length documents and uses a weight-shared Fourier network layer for low-complexity token mixing. Finally, we reach the theoretical computational complexity lower bound for relation extraction using a selective pooling strategy and distance-aware attention weights with trainable polynomial distance functions. We evaluated the JNRF architecture using the 2018 N2C2 ADE benchmark to jointly extract medication-related entities and relations in variable-length EHR summaries. JNRF outperforms rolling window BERT with selective pooling by 0.42%, while being twice as fast to train. Compared to state-of-the-art BiLSTM-CRF architectures on the N2C2 ADE benchmark, results show that the proposed approach trains 22 times faster and reduces GPU memory consumption by 1.75-fold, with a reasonable performance tradeoff of 90%, without the use of external tools, hand-crafted rules or post-processing. Given the significant carbon footprint of deep learning models and the current energy crisis, these methods could support efficient and cleaner information extraction in EHRs and other types of large-scale document databases.
1712.03481
Xiao Lu
Xiao Lu, Hai Jiang, Dusit Niyato, Dong In Kim, and Zhu Han
Wireless-Powered Device-to-Device Communications with Ambient Backscattering: Performance Modeling and Analysis
24 Pages, 13 figures
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in wireless energy harvesting technology have enabled wireless-powered communications to accommodate wireless data services in a self-sustainable manner. However, wireless-powered communications rely on active RF signals to communicate, and result in high power consumption. On the other hand, ambient backscatter technology that passively reflects existing RF signal sources in the air to communicate has the potential to facilitate an implementation with ultra-low power consumption. In this paper, we introduce a hybrid D2D communication paradigm by integrating ambient backscattering with wireless-powered communications. The hybrid D2D communications are self-sustainable, as no dedicated external power supply is required. However, since the radio signals for energy harvesting and for backscattering come from the ambient, the performance of the hybrid D2D communications depends largely on environmental factors, e.g., distribution, spatial density, and transmission load of ambient energy sources. Therefore, we design two mode selection protocols for the hybrid D2D transmitter, allowing a more flexible adaptation to the environment. We then introduce analytical models to characterize the impacts of the considered environmental factors on the hybrid D2D communication performance. Together with extensive simulations, our analysis shows that the communication performance benefits from larger repulsion, transmission load and density of ambient energy sources. Further, we investigate how different mode selection mechanisms affect the communication performance.
[ { "created": "Sun, 10 Dec 2017 07:51:34 GMT", "version": "v1" } ]
2017-12-12
[ [ "Lu", "Xiao", "" ], [ "Jiang", "Hai", "" ], [ "Niyato", "Dusit", "" ], [ "Kim", "Dong In", "" ], [ "Han", "Zhu", "" ] ]
Recent advances in wireless energy harvesting technology have enabled wireless-powered communications to accommodate wireless data services in a self-sustainable manner. However, wireless-powered communications rely on active RF signals to communicate, and result in high power consumption. On the other hand, ambient backscatter technology that passively reflects existing RF signal sources in the air to communicate has the potential to facilitate an implementation with ultra-low power consumption. In this paper, we introduce a hybrid D2D communication paradigm by integrating ambient backscattering with wireless-powered communications. The hybrid D2D communications are self-sustainable, as no dedicated external power supply is required. However, since the radio signals for energy harvesting and for backscattering come from the ambient, the performance of the hybrid D2D communications depends largely on environmental factors, e.g., distribution, spatial density, and transmission load of ambient energy sources. Therefore, we design two mode selection protocols for the hybrid D2D transmitter, allowing a more flexible adaptation to the environment. We then introduce analytical models to characterize the impacts of the considered environmental factors on the hybrid D2D communication performance. Together with extensive simulations, our analysis shows that the communication performance benefits from larger repulsion, transmission load and density of ambient energy sources. Further, we investigate how different mode selection mechanisms affect the communication performance.
2006.03817
Benoit Guillard
Benoit Guillard, Edoardo Remelli, Pascal Fua
UCLID-Net: Single View Reconstruction in Object Space
Added supplementary material
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most state-of-the-art deep geometric learning single-view reconstruction approaches rely on encoder-decoder architectures that output either shape parametrizations or implicit representations. However, these representations rarely preserve the Euclidean structure of the 3D space in which objects exist. In this paper, we show that building a geometry-preserving 3-dimensional latent space helps the network concurrently learn global shape regularities and local reasoning in the object coordinate space and, as a result, boosts performance. We demonstrate, both on ShapeNet synthetic images, which are often used for benchmarking purposes, and on real-world images, that our approach outperforms state-of-the-art ones. Furthermore, the single-view pipeline naturally extends to multi-view reconstruction, which we also show.
[ { "created": "Sat, 6 Jun 2020 09:15:56 GMT", "version": "v1" }, { "created": "Tue, 16 Jun 2020 12:11:18 GMT", "version": "v2" } ]
2020-06-17
[ [ "Guillard", "Benoit", "" ], [ "Remelli", "Edoardo", "" ], [ "Fua", "Pascal", "" ] ]
Most state-of-the-art deep geometric learning single-view reconstruction approaches rely on encoder-decoder architectures that output either shape parametrizations or implicit representations. However, these representations rarely preserve the Euclidean structure of the 3D space in which objects exist. In this paper, we show that building a geometry-preserving 3-dimensional latent space helps the network concurrently learn global shape regularities and local reasoning in the object coordinate space and, as a result, boosts performance. We demonstrate, both on ShapeNet synthetic images, which are often used for benchmarking purposes, and on real-world images, that our approach outperforms state-of-the-art ones. Furthermore, the single-view pipeline naturally extends to multi-view reconstruction, which we also show.
1110.6886
Yevgeny Seldin
Yevgeny Seldin, Fran\c{c}ois Laviolette, Nicol\`o Cesa-Bianchi, John Shawe-Taylor, Peter Auer
PAC-Bayesian Inequalities for Martingales
null
null
null
null
cs.LG cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a set of high-probability inequalities that control the concentration of weighted averages of multiple (possibly uncountably many) simultaneously evolving and interdependent martingales. Our results extend the PAC-Bayesian analysis in learning theory from the i.i.d. setting to martingales opening the way for its application to importance weighted sampling, reinforcement learning, and other interactive learning domains, as well as many other domains in probability theory and statistics, where martingales are encountered. We also present a comparison inequality that bounds the expectation of a convex function of a martingale difference sequence shifted to the [0,1] interval by the expectation of the same function of independent Bernoulli variables. This inequality is applied to derive a tighter analog of Hoeffding-Azuma's inequality.
[ { "created": "Mon, 31 Oct 2011 18:22:24 GMT", "version": "v1" }, { "created": "Mon, 25 Jun 2012 11:56:07 GMT", "version": "v2" }, { "created": "Mon, 30 Jul 2012 14:02:53 GMT", "version": "v3" } ]
2012-07-31
[ [ "Seldin", "Yevgeny", "" ], [ "Laviolette", "François", "" ], [ "Cesa-Bianchi", "Nicolò", "" ], [ "Shawe-Taylor", "John", "" ], [ "Auer", "Peter", "" ] ]
We present a set of high-probability inequalities that control the concentration of weighted averages of multiple (possibly uncountably many) simultaneously evolving and interdependent martingales. Our results extend the PAC-Bayesian analysis in learning theory from the i.i.d. setting to martingales opening the way for its application to importance weighted sampling, reinforcement learning, and other interactive learning domains, as well as many other domains in probability theory and statistics, where martingales are encountered. We also present a comparison inequality that bounds the expectation of a convex function of a martingale difference sequence shifted to the [0,1] interval by the expectation of the same function of independent Bernoulli variables. This inequality is applied to derive a tighter analog of Hoeffding-Azuma's inequality.
2402.00977
Jae-Sang Hyun
Won-Hoe Kim, Bongjoong Kim, Hyung-Gun Chi, Jae-Sang Hyun
Enhanced fringe-to-phase framework using deep learning
35 pages, 13 figures, 6 tables
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
In Fringe Projection Profilometry (FPP), achieving robust and accurate 3D reconstruction with a limited number of fringe patterns remains a challenge in structured light 3D imaging. Conventional methods require a set of fringe images, but using only one or two patterns complicates phase recovery and unwrapping. In this study, we introduce SFNet, a symmetric fusion network that transforms two fringe images into an absolute phase. To enhance output reliability, our framework predicts refined phases by incorporating information from fringe images of a different frequency than those used as input. This allows us to achieve high accuracy with just two images. Comparative experiments and ablation studies validate the effectiveness of our proposed method. The dataset and code are publicly accessible on our project page https://wonhoe-kim.github.io/SFNet.
[ { "created": "Thu, 1 Feb 2024 19:47:34 GMT", "version": "v1" } ]
2024-02-05
[ [ "Kim", "Won-Hoe", "" ], [ "Kim", "Bongjoong", "" ], [ "Chi", "Hyung-Gun", "" ], [ "Hyun", "Jae-Sang", "" ] ]
In Fringe Projection Profilometry (FPP), achieving robust and accurate 3D reconstruction with a limited number of fringe patterns remains a challenge in structured light 3D imaging. Conventional methods require a set of fringe images, but using only one or two patterns complicates phase recovery and unwrapping. In this study, we introduce SFNet, a symmetric fusion network that transforms two fringe images into an absolute phase. To enhance output reliability, our framework predicts refined phases by incorporating information from fringe images of a different frequency than those used as input. This allows us to achieve high accuracy with just two images. Comparative experiments and ablation studies validate the effectiveness of our proposed method. The dataset and code are publicly accessible on our project page https://wonhoe-kim.github.io/SFNet.
2203.01717
Felix Morsbach
Niklas Hasebrook, Felix Morsbach, Niclas Kannengie{\ss}er, Marc Z\"oller, J\"org Franke, Marius Lindauer, Frank Hutter, Ali Sunyaev
Practitioner Motives to Select Hyperparameter Optimization Methods
submitted to JMLR; currently under review
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Advanced programmatic hyperparameter optimization (HPO) methods, such as Bayesian optimization, have high sample efficiency in reproducibly finding optimal hyperparameter values of machine learning (ML) models. Yet, ML practitioners often apply less sample-efficient HPO methods, such as grid search, which often results in under-optimized ML models. As a reason for this behavior, we suspect practitioners choose HPO methods based on individual motives, consisting of contextual factors and individual goals. However, practitioners' motives still need to be clarified, hindering the evaluation of HPO methods for achieving specific goals and the user-centered development of HPO tools. To understand practitioners' motives for using specific HPO methods, we used a mixed-methods approach involving 20 semi-structured interviews and a survey study with 71 ML experts to gather evidence of the external validity of the interview results. By presenting six main goals (e.g., improving model understanding) and 14 contextual factors affecting practitioners' selection of HPO methods (e.g., available computer resources), our study explains why practitioners use HPO methods that seem inappropriate at first glance. This study lays a foundation for designing user-centered and context-adaptive HPO tools and, thus, linking social and technical research on HPO.
[ { "created": "Thu, 3 Mar 2022 13:55:38 GMT", "version": "v1" }, { "created": "Mon, 26 Jun 2023 08:50:40 GMT", "version": "v2" } ]
2023-06-27
[ [ "Hasebrook", "Niklas", "" ], [ "Morsbach", "Felix", "" ], [ "Kannengießer", "Niclas", "" ], [ "Zöller", "Marc", "" ], [ "Franke", "Jörg", "" ], [ "Lindauer", "Marius", "" ], [ "Hutter", "Frank", "" ], [ "Sunyaev", "Ali", "" ] ]
Advanced programmatic hyperparameter optimization (HPO) methods, such as Bayesian optimization, have high sample efficiency in reproducibly finding optimal hyperparameter values of machine learning (ML) models. Yet, ML practitioners often apply less sample-efficient HPO methods, such as grid search, which often results in under-optimized ML models. As a reason for this behavior, we suspect practitioners choose HPO methods based on individual motives, consisting of contextual factors and individual goals. However, practitioners' motives still need to be clarified, hindering the evaluation of HPO methods for achieving specific goals and the user-centered development of HPO tools. To understand practitioners' motives for using specific HPO methods, we used a mixed-methods approach involving 20 semi-structured interviews and a survey study with 71 ML experts to gather evidence of the external validity of the interview results. By presenting six main goals (e.g., improving model understanding) and 14 contextual factors affecting practitioners' selection of HPO methods (e.g., available computer resources), our study explains why practitioners use HPO methods that seem inappropriate at first glance. This study lays a foundation for designing user-centered and context-adaptive HPO tools and, thus, linking social and technical research on HPO.
2104.06815
Guo-Wang Xie
Guo-Wang Xie, Fei Yin, Xu-Yao Zhang, and Cheng-Lin Liu
Dewarping Document Image By Displacement Flow Estimation with Fully Convolutional Network
null
International Workshop on Document Analysis Systems. Springer, Cham, 2020: 131-144
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As camera-based documents are increasingly used, the rectification of distorted document images becomes necessary to improve recognition performance. In this paper, we propose a novel framework for both rectifying distorted document images and finely removing backgrounds, by estimating pixel-wise displacements using a fully convolutional network (FCN). The document image is rectified by transformation according to the displacements of pixels. The FCN is trained by regressing displacements of synthesized distorted documents, and to control the smoothness of displacements, we propose a Local Smooth Constraint (LSC) for regularization. Our approach is easy to implement and consumes moderate computing resources. Experiments show that our approach can dewarp document images effectively under various geometric distortions, and has achieved state-of-the-art performance in terms of local details and overall effect.
[ { "created": "Wed, 14 Apr 2021 12:32:36 GMT", "version": "v1" } ]
2021-04-15
[ [ "Xie", "Guo-Wang", "" ], [ "Yin", "Fei", "" ], [ "Zhang", "Xu-Yao", "" ], [ "Liu", "Cheng-Lin", "" ] ]
As camera-based documents are increasingly used, the rectification of distorted document images becomes necessary to improve recognition performance. In this paper, we propose a novel framework for both rectifying distorted document images and finely removing backgrounds, by estimating pixel-wise displacements using a fully convolutional network (FCN). The document image is rectified by transformation according to the displacements of pixels. The FCN is trained by regressing displacements of synthesized distorted documents, and to control the smoothness of displacements, we propose a Local Smooth Constraint (LSC) for regularization. Our approach is easy to implement and consumes moderate computing resources. Experiments show that our approach can dewarp document images effectively under various geometric distortions, and has achieved state-of-the-art performance in terms of local details and overall effect.
2101.01132
Michel Breyer
Michel Breyer, Jen Jen Chung, Lionel Ott, Roland Siegwart, Juan Nieto
Volumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter
Conference on Robot Learning (CoRL), 2020
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
General robot grasping in clutter requires the ability to synthesize grasps that work for previously unseen objects and that are also robust to physical interactions, such as collisions with other objects in the scene. In this work, we design and train a network that predicts 6 DOF grasps from 3D scene information gathered from an on-board sensor such as a wrist-mounted depth camera. Our proposed Volumetric Grasping Network (VGN) accepts a Truncated Signed Distance Function (TSDF) representation of the scene and directly outputs the predicted grasp quality and the associated gripper orientation and opening width for each voxel in the queried 3D volume. We show that our approach can plan grasps in only 10 ms and is able to clear 92% of the objects in real-world clutter removal experiments without the need for explicit collision checking. The real-time capability opens up the possibility for closed-loop grasp planning, allowing robots to handle disturbances, recover from errors and provide increased robustness. Code is available at https://github.com/ethz-asl/vgn.
[ { "created": "Mon, 4 Jan 2021 17:55:01 GMT", "version": "v1" } ]
2021-01-05
[ [ "Breyer", "Michel", "" ], [ "Chung", "Jen Jen", "" ], [ "Ott", "Lionel", "" ], [ "Siegwart", "Roland", "" ], [ "Nieto", "Juan", "" ] ]
General robot grasping in clutter requires the ability to synthesize grasps that work for previously unseen objects and that are also robust to physical interactions, such as collisions with other objects in the scene. In this work, we design and train a network that predicts 6 DOF grasps from 3D scene information gathered from an on-board sensor such as a wrist-mounted depth camera. Our proposed Volumetric Grasping Network (VGN) accepts a Truncated Signed Distance Function (TSDF) representation of the scene and directly outputs the predicted grasp quality and the associated gripper orientation and opening width for each voxel in the queried 3D volume. We show that our approach can plan grasps in only 10 ms and is able to clear 92% of the objects in real-world clutter removal experiments without the need for explicit collision checking. The real-time capability opens up the possibility for closed-loop grasp planning, allowing robots to handle disturbances, recover from errors and provide increased robustness. Code is available at https://github.com/ethz-asl/vgn.
1811.02536
Ross Horne
Ross Horne
A Bisimilarity Congruence for the Applied pi-Calculus Sufficiently Coarse to Verify Privacy Properties
null
null
null
null
cs.CR cs.LO
http://creativecommons.org/licenses/by/4.0/
This paper is the first thorough investigation into the coarsest notion of bisimilarity for the applied pi-calculus that is a congruence relation: open barbed bisimilarity. An open variant of labelled bisimilarity (quasi-open bisimilarity), better suited to constructing bisimulations, is proven to coincide with open barbed bisimilarity. These bisimilarity congruences are shown to be characterised by an intuitionistic modal logic that can be used, for example, to describe an attack on privacy whenever a privacy property is violated. Open barbed bisimilarity provides a compositional approach to verifying cryptographic protocols, since properties proven can be reused in any context, including under input prefix. Furthermore, open barbed bisimilarity is sufficiently coarse for reasoning about security and privacy properties of cryptographic protocols; in contrast to the finer bisimilarity congruence, open bisimilarity, which cannot verify certain privacy properties.
[ { "created": "Tue, 6 Nov 2018 18:12:02 GMT", "version": "v1" } ]
2018-11-07
[ [ "Horne", "Ross", "" ] ]
This paper is the first thorough investigation into the coarsest notion of bisimilarity for the applied pi-calculus that is a congruence relation: open barbed bisimilarity. An open variant of labelled bisimilarity (quasi-open bisimilarity), better suited to constructing bisimulations, is proven to coincide with open barbed bisimilarity. These bisimilarity congruences are shown to be characterised by an intuitionistic modal logic that can be used, for example, to describe an attack on privacy whenever a privacy property is violated. Open barbed bisimilarity provides a compositional approach to verifying cryptographic protocols, since properties proven can be reused in any context, including under input prefix. Furthermore, open barbed bisimilarity is sufficiently coarse for reasoning about security and privacy properties of cryptographic protocols; in contrast to the finer bisimilarity congruence, open bisimilarity, which cannot verify certain privacy properties.
2212.06801
Jennifer Hu
Jennifer Hu, Sammy Floyd, Olessia Jouravlev, Evelina Fedorenko, Edward Gibson
A fine-grained comparison of pragmatic language understanding in humans and language models
ACL 2023 camera-ready version
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Pragmatics and non-literal language understanding are essential to human communication, and present a long-standing challenge for artificial language models. We perform a fine-grained comparison of language models and humans on seven pragmatic phenomena, using zero-shot prompting on an expert-curated set of English materials. We ask whether models (1) select pragmatic interpretations of speaker utterances, (2) exhibit error patterns similar to those of humans, and (3) use similar linguistic cues as humans to solve the tasks. We find that the largest models achieve high accuracy and match human error patterns: within incorrect responses, models favor literal interpretations over heuristic-based distractors. We also find preliminary evidence that models and humans are sensitive to similar linguistic cues. Our results suggest that pragmatic behaviors can emerge in models without explicitly constructed representations of mental states. However, models tend to struggle with phenomena relying on social expectation violations.
[ { "created": "Tue, 13 Dec 2022 18:34:59 GMT", "version": "v1" }, { "created": "Tue, 23 May 2023 18:35:34 GMT", "version": "v2" } ]
2023-05-25
[ [ "Hu", "Jennifer", "" ], [ "Floyd", "Sammy", "" ], [ "Jouravlev", "Olessia", "" ], [ "Fedorenko", "Evelina", "" ], [ "Gibson", "Edward", "" ] ]
Pragmatics and non-literal language understanding are essential to human communication, and present a long-standing challenge for artificial language models. We perform a fine-grained comparison of language models and humans on seven pragmatic phenomena, using zero-shot prompting on an expert-curated set of English materials. We ask whether models (1) select pragmatic interpretations of speaker utterances, (2) exhibit error patterns similar to those of humans, and (3) use similar linguistic cues as humans to solve the tasks. We find that the largest models achieve high accuracy and match human error patterns: within incorrect responses, models favor literal interpretations over heuristic-based distractors. We also find preliminary evidence that models and humans are sensitive to similar linguistic cues. Our results suggest that pragmatic behaviors can emerge in models without explicitly constructed representations of mental states. However, models tend to struggle with phenomena relying on social expectation violations.
2308.03119
Piotr Sowinski
Piotr Sowinski, Ignacio Lacalle, Rafael Vano, Carlos E. Palau
Autonomous Choreography of WebAssembly Workloads in the Federated Cloud-Edge-IoT Continuum
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
The concept of the federated Cloud-Edge-IoT continuum promises to alleviate many woes of current systems, improving resource use, energy efficiency, quality of service, and more. However, this continuum is still far from being realized in practice, with no comprehensive solutions for developing, deploying, and managing continuum-native applications. Breakthrough innovations and novel system architectures are needed to cope with the ever-increasing heterogeneity and the multi-stakeholder nature of computing resources. This work proposes a novel architecture for choreographing workloads in the continuum, attempting to address these challenges. The architecture tackles this issue comprehensively, spanning from the workloads themselves, through networking and data exchange, up to the orchestration and choreography mechanisms. The concept emphasizes the use of varied AI techniques, enabling autonomous and intelligent management of resources and workloads. Open standards are also a key part of the proposition, making it possible to fully engage third parties in multi-stakeholder scenarios. Although the presented architecture is promising, much work is required to realize it in practice. To this end, the key directions for future research are outlined.
[ { "created": "Sun, 6 Aug 2023 13:57:01 GMT", "version": "v1" } ]
2023-08-08
[ [ "Sowinski", "Piotr", "" ], [ "Lacalle", "Ignacio", "" ], [ "Vano", "Rafael", "" ], [ "Palau", "Carlos E.", "" ] ]
The concept of the federated Cloud-Edge-IoT continuum promises to alleviate many woes of current systems, improving resource use, energy efficiency, quality of service, and more. However, this continuum is still far from being realized in practice, with no comprehensive solutions for developing, deploying, and managing continuum-native applications. Breakthrough innovations and novel system architectures are needed to cope with the ever-increasing heterogeneity and the multi-stakeholder nature of computing resources. This work proposes a novel architecture for choreographing workloads in the continuum, attempting to address these challenges. The architecture tackles this issue comprehensively, spanning from the workloads themselves, through networking and data exchange, up to the orchestration and choreography mechanisms. The concept emphasizes the use of varied AI techniques, enabling autonomous and intelligent management of resources and workloads. Open standards are also a key part of the proposition, making it possible to fully engage third parties in multi-stakeholder scenarios. Although the presented architecture is promising, much work is required to realize it in practice. To this end, the key directions for future research are outlined.
1911.12777
Peeter Laud
Peeter Laud and Alisa Pankova
Interpreting Epsilon of Differential Privacy in Terms of Advantage in Guessing or Approximating Sensitive Attributes
null
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are numerous methods of achieving $\epsilon$-differential privacy (DP). The question is what is the appropriate value of $\epsilon$, since there is no common agreement on a "sufficiently small" $\epsilon$, and its goodness depends on the query as well as the data. In this paper, we show how to compute $\epsilon$ that corresponds to $\delta$, defined as the adversary's advantage in probability of guessing some specific property of the output. The attacker's goal can be stated as a Boolean expression over guessing particular attributes, possibly within some precision. The attributes combined in this way should be independent. We assume that both the input and the output distributions have corresponding probability density functions, or probability mass functions.
[ { "created": "Thu, 28 Nov 2019 16:24:51 GMT", "version": "v1" } ]
2019-12-02
[ [ "Laud", "Peeter", "" ], [ "Pankova", "Alisa", "" ] ]
There are numerous methods of achieving $\epsilon$-differential privacy (DP). The question is what is the appropriate value of $\epsilon$, since there is no common agreement on a "sufficiently small" $\epsilon$, and its goodness depends on the query as well as the data. In this paper, we show how to compute $\epsilon$ that corresponds to $\delta$, defined as the adversary's advantage in probability of guessing some specific property of the output. The attacker's goal can be stated as a Boolean expression over guessing particular attributes, possibly within some precision. The attributes combined in this way should be independent. We assume that both the input and the output distributions have corresponding probability density functions, or probability mass functions.
2405.19758
Muzhi Han
Muzhi Han, Yifeng Zhu, Song-Chun Zhu, Ying Nian Wu, Yuke Zhu
InterPreT: Interactive Predicate Learning from Language Feedback for Generalizable Task Planning
RSS 2024; https://interpret-robot.github.io
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning abstract state representations and knowledge is crucial for long-horizon robot planning. We present InterPreT, an LLM-powered framework for robots to learn symbolic predicates from language feedback of human non-experts during embodied interaction. The learned predicates provide relational abstractions of the environment state, facilitating the learning of symbolic operators that capture action preconditions and effects. By compiling the learned predicates and operators into a PDDL domain on-the-fly, InterPreT allows effective planning toward arbitrary in-domain goals using a PDDL planner. In both simulated and real-world robot manipulation domains, we demonstrate that InterPreT reliably uncovers the key predicates and operators governing the environment dynamics. Although learned from simple training tasks, these predicates and operators exhibit strong generalization to novel tasks with significantly higher complexity. In the most challenging generalization setting, InterPreT attains success rates of 73% in simulation and 40% in the real world, substantially outperforming baseline methods.
[ { "created": "Thu, 30 May 2024 07:08:40 GMT", "version": "v1" } ]
2024-05-31
[ [ "Han", "Muzhi", "" ], [ "Zhu", "Yifeng", "" ], [ "Zhu", "Song-Chun", "" ], [ "Wu", "Ying Nian", "" ], [ "Zhu", "Yuke", "" ] ]
Learning abstract state representations and knowledge is crucial for long-horizon robot planning. We present InterPreT, an LLM-powered framework for robots to learn symbolic predicates from language feedback of human non-experts during embodied interaction. The learned predicates provide relational abstractions of the environment state, facilitating the learning of symbolic operators that capture action preconditions and effects. By compiling the learned predicates and operators into a PDDL domain on-the-fly, InterPreT allows effective planning toward arbitrary in-domain goals using a PDDL planner. In both simulated and real-world robot manipulation domains, we demonstrate that InterPreT reliably uncovers the key predicates and operators governing the environment dynamics. Although learned from simple training tasks, these predicates and operators exhibit strong generalization to novel tasks with significantly higher complexity. In the most challenging generalization setting, InterPreT attains success rates of 73% in simulation and 40% in the real world, substantially outperforming baseline methods.
1604.02303
Pavel Semukhin
Igor Potapov and Pavel Semukhin
Decidability of the Membership Problem for $2\times 2$ integer matrices
null
null
null
null
cs.DM cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The main result of this paper is the decidability of the membership problem for $2\times 2$ nonsingular integer matrices. Namely, we will construct the first algorithm that for any nonsingular $2\times 2$ integer matrices $M_1,\dots,M_n$ and $M$ decides whether $M$ belongs to the semigroup generated by $\{M_1,\dots,M_n\}$. Our algorithm relies on a translation of the numerical problem on matrices into combinatorial problems on words. It also makes use of some algebraic properties of well-known subgroups of $\mathrm{GL}(2,\mathbb{Z})$ and various new techniques and constructions that help to limit an infinite number of possibilities by reducing them to the membership problem for regular languages.
[ { "created": "Fri, 8 Apr 2016 10:53:55 GMT", "version": "v1" } ]
2016-04-11
[ [ "Potapov", "Igor", "" ], [ "Semukhin", "Pavel", "" ] ]
The main result of this paper is the decidability of the membership problem for $2\times 2$ nonsingular integer matrices. Namely, we will construct the first algorithm that for any nonsingular $2\times 2$ integer matrices $M_1,\dots,M_n$ and $M$ decides whether $M$ belongs to the semigroup generated by $\{M_1,\dots,M_n\}$. Our algorithm relies on a translation of the numerical problem on matrices into combinatorial problems on words. It also makes use of some algebraic properties of well-known subgroups of $\mathrm{GL}(2,\mathbb{Z})$ and various new techniques and constructions that help to limit an infinite number of possibilities by reducing them to the membership problem for regular languages.
2310.18807
Rim Assouel
Rim Assouel, Pau Rodriguez, Perouz Taslakian, David Vazquez, Yoshua Bengio
OC-NMN: Object-centric Compositional Neural Module Network for Generative Visual Analogical Reasoning
null
null
null
null
cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A key aspect of human intelligence is the ability to imagine -- composing learned concepts in novel ways -- to make sense of new scenarios. Such capacity is not yet attained for machine learning systems. In this work, in the context of visual reasoning, we show how modularity can be leveraged to derive a compositional data augmentation framework inspired by imagination. Our method, denoted Object-centric Compositional Neural Module Network (OC-NMN), decomposes visual generative reasoning tasks into a series of primitives applied to objects without using a domain-specific language. We show that our modular architectural choices can be used to generate new training tasks that lead to better out-of-distribution generalization. We compare our model to existing and new baselines in a proposed visual reasoning benchmark that consists of applying arithmetic operations to MNIST digits.
[ { "created": "Sat, 28 Oct 2023 20:12:58 GMT", "version": "v1" } ]
2023-10-31
[ [ "Assouel", "Rim", "" ], [ "Rodriguez", "Pau", "" ], [ "Taslakian", "Perouz", "" ], [ "Vazquez", "David", "" ], [ "Bengio", "Yoshua", "" ] ]
A key aspect of human intelligence is the ability to imagine -- composing learned concepts in novel ways -- to make sense of new scenarios. Such capacity is not yet attained for machine learning systems. In this work, in the context of visual reasoning, we show how modularity can be leveraged to derive a compositional data augmentation framework inspired by imagination. Our method, denoted Object-centric Compositional Neural Module Network (OC-NMN), decomposes visual generative reasoning tasks into a series of primitives applied to objects without using a domain-specific language. We show that our modular architectural choices can be used to generate new training tasks that lead to better out-of-distribution generalization. We compare our model to existing and new baselines in a proposed visual reasoning benchmark that consists of applying arithmetic operations to MNIST digits.
2104.12050
Chih-Yi Chiu
Munlika Rattaphun, Wen-Chieh Fang, and Chih-Yi Chiu
Attention on Global-Local Representation Spaces in Recommender Systems
This paper was accepted by IEEE Transactions on Computational Social Systems (TCSS) in November 2021
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, we present a novel clustering-based collaborative filtering (CF) method for recommender systems. Clustering-based CF methods can effectively deal with data sparsity and scalability problems. However, most of them are applied to a single representation space, which might not characterize complex user-item interactions well. We argue that the user-item interactions should be observed from multiple views and characterized in an adaptive way. To address this issue, we leveraged the global and local properties to construct multiple representation spaces by learning various training datasets and loss functions. An attention network was built to generate a blended representation according to the relative importance of the representation spaces for each user-item pair, providing a flexible way to characterize diverse user-item interactions. Substantial experiments were conducted on four popular benchmark datasets. The results show that the proposed method is superior to several CF methods where only one representation space is considered.
[ { "created": "Sun, 25 Apr 2021 02:21:10 GMT", "version": "v1" }, { "created": "Tue, 16 Nov 2021 13:31:28 GMT", "version": "v2" } ]
2021-11-17
[ [ "Rattaphun", "Munlika", "" ], [ "Fang", "Wen-Chieh", "" ], [ "Chiu", "Chih-Yi", "" ] ]
In this study, we present a novel clustering-based collaborative filtering (CF) method for recommender systems. Clustering-based CF methods can effectively deal with data sparsity and scalability problems. However, most of them are applied to a single representation space, which might not characterize complex user-item interactions well. We argue that the user-item interactions should be observed from multiple views and characterized in an adaptive way. To address this issue, we leveraged the global and local properties to construct multiple representation spaces by learning various training datasets and loss functions. An attention network was built to generate a blended representation according to the relative importance of the representation spaces for each user-item pair, providing a flexible way to characterize diverse user-item interactions. Substantial experiments were conducted on four popular benchmark datasets. The results show that the proposed method is superior to several CF methods where only one representation space is considered.
1503.02086
Hemant Purohit
Hemant Purohit (1,2), Tanvi Banerjee (1,2), Andrew Hampton (1,3), Valerie L. Shalin (1,3), Nayanesh Bhandutia (4) and Amit P. Sheth (1,2) ((1) Ohio Center of Excellence in Knowledge-enabled Computing (Kno.e.sis), Wright State University, USA, (2) Department of Computer Science and Engineering, (3) Department of Psychology, (4) United Nations Population Fund Headquarters NYC, USA)
Gender-Based Violence in 140 Characters or Fewer: A #BigData Case Study of Twitter
null
null
null
null
cs.SI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Public institutions are increasingly reliant on data from social media sites to measure public attitudes and provide timely public engagement. Such reliance includes the exploration of public views on important social issues such as gender-based violence (GBV). In this study, we examine big (social) data consisting of nearly fourteen million tweets collected from Twitter over a period of ten months to analyze public opinion regarding GBV, highlighting the nature of tweeting practices by geographical location and gender. We demonstrate the utility of Computational Social Science to mine insight from the corpus while accounting for the influence of both transient events and sociocultural factors. We reveal public awareness regarding GBV tolerance and suggest opportunities for intervention and the measurement of intervention effectiveness, assisting both governmental and non-governmental organizations in policy development.
[ { "created": "Fri, 6 Mar 2015 21:02:59 GMT", "version": "v1" }, { "created": "Mon, 29 Jun 2015 19:08:52 GMT", "version": "v2" } ]
2015-06-30
[ [ "Purohit", "Hemant", "" ], [ "Banerjee", "Tanvi", "" ], [ "Hampton", "Andrew", "" ], [ "Shalin", "Valerie L.", "" ], [ "Bhandutia", "Nayanesh", "" ], [ "Sheth", "Amit P.", "" ] ]
Public institutions are increasingly reliant on data from social media sites to measure public attitudes and provide timely public engagement. Such reliance includes the exploration of public views on important social issues such as gender-based violence (GBV). In this study, we examine big (social) data consisting of nearly fourteen million tweets collected from Twitter over a period of ten months to analyze public opinion regarding GBV, highlighting the nature of tweeting practices by geographical location and gender. We demonstrate the utility of Computational Social Science to mine insight from the corpus while accounting for the influence of both transient events and sociocultural factors. We reveal public awareness regarding GBV tolerance and suggest opportunities for intervention and the measurement of intervention effectiveness, assisting both governmental and non-governmental organizations in policy development.
2203.15970
Jonathan Warrell
Jonathan Warrell, Alexey Potapov, Adam Vandervorst, Ben Goertzel
A meta-probabilistic-programming language for bisimulation of probabilistic and non-well-founded type systems
18 pages, 3 figures
null
null
null
cs.AI cs.FL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a formal meta-language for probabilistic programming, capable of expressing both programs and the type systems in which they are embedded. We are motivated here by the desire to allow an AGI to learn not only relevant knowledge (programs/proofs), but also appropriate ways of reasoning (logics/type systems). We draw on the frameworks of cubical type theory and dependent typed metagraphs to formalize our approach. In doing so, we show that specific constructions within the meta-language can be related via bisimulation (implying path equivalence) to the type systems to which they correspond. This allows our approach to provide a convenient means of deriving synthetic denotational semantics for various type systems. Particularly, we derive bisimulations for pure type systems (PTS), and probabilistic dependent type systems (PDTS). We discuss further the relationship of PTS to non-well-founded set theory, and demonstrate the feasibility of our approach with an implementation of a bisimulation proof in a Guarded Cubical Type Theory type checker.
[ { "created": "Wed, 30 Mar 2022 01:07:37 GMT", "version": "v1" }, { "created": "Sun, 1 May 2022 01:29:48 GMT", "version": "v2" }, { "created": "Tue, 16 Aug 2022 06:07:57 GMT", "version": "v3" } ]
2022-08-17
[ [ "Warrell", "Jonathan", "" ], [ "Potapov", "Alexey", "" ], [ "Vandervorst", "Adam", "" ], [ "Goertzel", "Ben", "" ] ]
We introduce a formal meta-language for probabilistic programming, capable of expressing both programs and the type systems in which they are embedded. We are motivated here by the desire to allow an AGI to learn not only relevant knowledge (programs/proofs), but also appropriate ways of reasoning (logics/type systems). We draw on the frameworks of cubical type theory and dependent typed metagraphs to formalize our approach. In doing so, we show that specific constructions within the meta-language can be related via bisimulation (implying path equivalence) to the type systems to which they correspond. This allows our approach to provide a convenient means of deriving synthetic denotational semantics for various type systems. Particularly, we derive bisimulations for pure type systems (PTS), and probabilistic dependent type systems (PDTS). We discuss further the relationship of PTS to non-well-founded set theory, and demonstrate the feasibility of our approach with an implementation of a bisimulation proof in a Guarded Cubical Type Theory type checker.
2405.18786
Hongduan Tian
Hongduan Tian, Feng Liu, Tongliang Liu, Bo Du, Yiu-ming Cheung, Bo Han
MOKD: Cross-domain Finetuning for Few-shot Classification via Maximizing Optimized Kernel Dependence
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In cross-domain few-shot classification, \emph{nearest centroid classifier} (NCC) aims to learn representations to construct a metric space where few-shot classification can be performed by measuring the similarities between samples and the prototype of each class. An intuition behind NCC is that each sample is pulled closer to the class centroid it belongs to while pushed away from those of other classes. However, in this paper, we find that there exist high similarities between NCC-learned representations of two samples from different classes. In order to address this problem, we propose a bi-level optimization framework, \emph{maximizing optimized kernel dependence} (MOKD) to learn a set of class-specific representations that match the cluster structures indicated by labeled data of the given task. Specifically, MOKD first optimizes the kernel adopted in \emph{Hilbert-Schmidt independence criterion} (HSIC) to obtain the optimized kernel HSIC (opt-HSIC) that can capture the dependence more precisely. Then, an optimization problem regarding the opt-HSIC is addressed to simultaneously maximize the dependence between representations and labels and minimize the dependence among all samples. Extensive experiments on Meta-Dataset demonstrate that MOKD can not only achieve better generalization performance on unseen domains in most cases but also learn better data representation clusters. The project repository of MOKD is available at: \href{https://github.com/tmlr-group/MOKD}{https://github.com/tmlr-group/MOKD}.
[ { "created": "Wed, 29 May 2024 05:59:52 GMT", "version": "v1" } ]
2024-05-30
[ [ "Tian", "Hongduan", "" ], [ "Liu", "Feng", "" ], [ "Liu", "Tongliang", "" ], [ "Du", "Bo", "" ], [ "Cheung", "Yiu-ming", "" ], [ "Han", "Bo", "" ] ]
In cross-domain few-shot classification, \emph{nearest centroid classifier} (NCC) aims to learn representations to construct a metric space where few-shot classification can be performed by measuring the similarities between samples and the prototype of each class. An intuition behind NCC is that each sample is pulled closer to the class centroid it belongs to while pushed away from those of other classes. However, in this paper, we find that there exist high similarities between NCC-learned representations of two samples from different classes. In order to address this problem, we propose a bi-level optimization framework, \emph{maximizing optimized kernel dependence} (MOKD) to learn a set of class-specific representations that match the cluster structures indicated by labeled data of the given task. Specifically, MOKD first optimizes the kernel adopted in \emph{Hilbert-Schmidt independence criterion} (HSIC) to obtain the optimized kernel HSIC (opt-HSIC) that can capture the dependence more precisely. Then, an optimization problem regarding the opt-HSIC is addressed to simultaneously maximize the dependence between representations and labels and minimize the dependence among all samples. Extensive experiments on Meta-Dataset demonstrate that MOKD can not only achieve better generalization performance on unseen domains in most cases but also learn better data representation clusters. The project repository of MOKD is available at: \href{https://github.com/tmlr-group/MOKD}{https://github.com/tmlr-group/MOKD}.
2104.13135
Fotios Logothetis Dr
Roberto Mecca, Fotios Logothetis, Ignas Budvytis, Roberto Cipolla
LUCES: A Dataset for Near-Field Point Light Source Photometric Stereo
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Three-dimensional reconstruction of objects from shading information is a challenging task in computer vision. As most of the approaches facing the Photometric Stereo problem use simplified far-field assumptions, real-world scenarios have essentially more complex physical effects that need to be handled for accurately reconstructing the 3D shape. An increasing number of methods have been proposed to address the problem when point light sources are assumed to be nearby the target object. The proximity of the light sources complicates the modeling of the image formation as the light behaviour requires non-linear parameterisation to describe its propagation and attenuation. To understand the capability of the approaches dealing with this near-field scenario, the literature until now has used synthetically rendered photometric images or minimal and very customised real-world data. In order to fill the gap in evaluating near-field photometric stereo methods, we introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo', comprising 14 objects of varying materials. A device with 52 LEDs has been designed to light each object positioned 10 to 30 centimeters away from the camera. Together with the raw images, in order to evaluate the 3D reconstructions, the dataset includes both normal and depth maps for comparing different features of the retrieved 3D geometry. Furthermore, we evaluate the performance of the latest near-field Photometric Stereo algorithms on the proposed dataset to assess the SOTA methods with respect to actual close range effects and object materials.
[ { "created": "Tue, 27 Apr 2021 12:30:42 GMT", "version": "v1" }, { "created": "Tue, 12 Oct 2021 16:32:29 GMT", "version": "v2" } ]
2021-10-13
[ [ "Mecca", "Roberto", "" ], [ "Logothetis", "Fotios", "" ], [ "Budvytis", "Ignas", "" ], [ "Cipolla", "Roberto", "" ] ]
Three-dimensional reconstruction of objects from shading information is a challenging task in computer vision. As most of the approaches facing the Photometric Stereo problem use simplified far-field assumptions, real-world scenarios have essentially more complex physical effects that need to be handled for accurately reconstructing the 3D shape. An increasing number of methods have been proposed to address the problem when point light sources are assumed to be nearby the target object. The proximity of the light sources complicates the modeling of the image formation as the light behaviour requires non-linear parameterisation to describe its propagation and attenuation. To understand the capability of the approaches dealing with this near-field scenario, the literature until now has used synthetically rendered photometric images or minimal and very customised real-world data. In order to fill the gap in evaluating near-field photometric stereo methods, we introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo', comprising 14 objects of varying materials. A device with 52 LEDs has been designed to light each object positioned 10 to 30 centimeters away from the camera. Together with the raw images, in order to evaluate the 3D reconstructions, the dataset includes both normal and depth maps for comparing different features of the retrieved 3D geometry. Furthermore, we evaluate the performance of the latest near-field Photometric Stereo algorithms on the proposed dataset to assess the SOTA methods with respect to actual close range effects and object materials.
1912.08785
Piotr S. Maci\k{a}g
Piotr S. Maci\k{a}g (1), Marzena Kryszkiewicz (1), Robert Bembenik (1), Jesus L. Lobo (2), Javier Del Ser (2 and 3) ((1) Institute of Computer Science, Warsaw University of Technology, Warsaw, Poland, (2) TECNALIA Parque Tecnol\'ogico de Bizkaia, Derio, Spain, (3) University of the Basque Country UPV/EHU, Bilbao, Spain)
Unsupervised Anomaly Detection in Stream Data with Online Evolving Spiking Neural Networks
52 pages
Neural Networks, Volume 139, 2021, Pages 118-139
10.1016/j.neunet.2021.02.017
null
cs.NE cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Unsupervised anomaly discovery in stream data is a research topic with many practical applications. However, in many cases, it is not easy to collect enough training data with labeled anomalies for supervised learning of an anomaly detector in order to deploy it later for identification of real anomalies in streaming data. It is thus important to design anomaly detectors that can correctly detect anomalies without access to labeled training data. Our idea is to adapt the Online evolving Spiking Neural Network (OeSNN) classifier to the anomaly detection task. As a result, we offer an Online evolving Spiking Neural Network for Unsupervised Anomaly Detection algorithm (OeSNN-UAD), which, unlike OeSNN, works in an unsupervised way and does not separate output neurons into disjoint decision classes. OeSNN-UAD uses our proposed new two-step anomaly detection method. Also, we derive new theoretical properties of the neuronal model and input layer encoding of OeSNN, which enable more effective and efficient detection of anomalies in our OeSNN-UAD approach. The proposed OeSNN-UAD detector was experimentally compared with state-of-the-art unsupervised and semi-supervised detectors of anomalies in stream data from the Numenta Anomaly Benchmark and Yahoo Anomaly Datasets repositories. Our approach outperforms the other solutions provided in the literature in the case of data streams from the Numenta Anomaly Benchmark repository. Also, in the case of real data files of the Yahoo Anomaly Benchmark repository, OeSNN-UAD outperforms other selected algorithms, whereas in the case of Yahoo Anomaly Benchmark synthetic data files, it provides results competitive with those recently reported in the literature.
[ { "created": "Wed, 18 Dec 2019 18:36:01 GMT", "version": "v1" }, { "created": "Mon, 8 Mar 2021 20:17:42 GMT", "version": "v2" } ]
2021-03-10
[ [ "Maciąg", "Piotr S.", "", "2 and 3" ], [ "Kryszkiewicz", "Marzena", "", "2 and 3" ], [ "Bembenik", "Robert", "", "2 and 3" ], [ "Lobo", "Jesus L.", "", "2 and 3" ], [ "Del Ser", "Javier", "", "2 and 3" ] ]
Unsupervised anomaly discovery in stream data is a research topic with many practical applications. However, in many cases, it is not easy to collect enough training data with labeled anomalies for supervised learning of an anomaly detector in order to deploy it later for identification of real anomalies in streaming data. It is thus important to design anomaly detectors that can correctly detect anomalies without access to labeled training data. Our idea is to adapt the Online evolving Spiking Neural Network (OeSNN) classifier to the anomaly detection task. As a result, we offer an Online evolving Spiking Neural Network for Unsupervised Anomaly Detection algorithm (OeSNN-UAD), which, unlike OeSNN, works in an unsupervised way and does not separate output neurons into disjoint decision classes. OeSNN-UAD uses our proposed new two-step anomaly detection method. Also, we derive new theoretical properties of the neuronal model and input layer encoding of OeSNN, which enable more effective and efficient detection of anomalies in our OeSNN-UAD approach. The proposed OeSNN-UAD detector was experimentally compared with state-of-the-art unsupervised and semi-supervised detectors of anomalies in stream data from the Numenta Anomaly Benchmark and Yahoo Anomaly Datasets repositories. Our approach outperforms the other solutions provided in the literature in the case of data streams from the Numenta Anomaly Benchmark repository. Also, in the case of real data files of the Yahoo Anomaly Benchmark repository, OeSNN-UAD outperforms other selected algorithms, whereas in the case of Yahoo Anomaly Benchmark synthetic data files, it provides results competitive with those recently reported in the literature.
2211.09106
Weiqiang Yuan
Xinrui Jia, Ola Svensson, Weiqiang Yuan
The Exact Bipartite Matching Polytope Has Exponential Extension Complexity
SODA 2023
null
null
null
cs.CC cs.DM
http://creativecommons.org/licenses/by/4.0/
Given a graph with edges colored red or blue and an integer $k$, the exact perfect matching problem asks if there exists a perfect matching with exactly $k$ red edges. There exists a randomized polylogarithmic-time parallel algorithm to solve this problem, dating back to the eighties, but no deterministic polynomial-time algorithm is known, even for bipartite graphs. In this paper we show that there is no sub-exponential sized linear program that can describe the convex hull of exact matchings in bipartite graphs. In fact, we prove something stronger, that there is no sub-exponential sized linear program to describe the convex hull of perfect matchings with an odd number of red edges.
[ { "created": "Wed, 16 Nov 2022 18:47:39 GMT", "version": "v1" } ]
2022-11-17
[ [ "Jia", "Xinrui", "" ], [ "Svensson", "Ola", "" ], [ "Yuan", "Weiqiang", "" ] ]
Given a graph with edges colored red or blue and an integer $k$, the exact perfect matching problem asks if there exists a perfect matching with exactly $k$ red edges. There exists a randomized polylogarithmic-time parallel algorithm to solve this problem, dating back to the eighties, but no deterministic polynomial-time algorithm is known, even for bipartite graphs. In this paper we show that there is no sub-exponential sized linear program that can describe the convex hull of exact matchings in bipartite graphs. In fact, we prove something stronger, that there is no sub-exponential sized linear program to describe the convex hull of perfect matchings with an odd number of red edges.
1611.06587
Mithileysh Sathiyanarayanan Mr
Mithileysh Sathiyanarayanan and Tobias Mulling
Wellformedness Properties in Euler Diagrams: An Eye Tracking Study for Visualisation Evaluation
4 pages, 2 figures, the Brazilian Computing Society, the XIV Brazilian Symposium on Human Factors in Computer Systems (IHC 2015)
null
null
null
cs.HC cs.CY cs.SI
http://creativecommons.org/publicdomain/zero/1.0/
In the field of information visualisation, Euler diagrams are an important tool used in various application areas such as engineering, medicine and social analysis. To use Euler diagrams effectively, some of the wellformedness properties need to be avoided, as they are considered to reduce user comprehension. From previous empirical studies, we know some properties are swappable, but there is no clear justification for which property would be the best to use. In this paper, we considered two main wellformedness properties (duplicated curve labels and disconnected zones) to test which of the two affects user comprehension the most, based on the task performance (accuracy and response time), preference and eye movements of the users. Twelve participants performed three different types of tasks with nine diagrams of each property (eighteen diagrams in total), and the results showed that the duplicated curve labels property slows users down and triggers extra eye movements, causing delays for the tasks. Although there is no significant difference in accuracy, the insights obtained from the response time, preference and eye movements will be useful for software developers regarding the optimal way to visualise Euler diagrams in real-world application areas.
[ { "created": "Sun, 20 Nov 2016 20:33:11 GMT", "version": "v1" } ]
2016-11-22
[ [ "Sathiyanarayanan", "Mithileysh", "" ], [ "Mulling", "Tobias", "" ] ]
In the field of information visualisation, Euler diagrams are an important tool used in various application areas such as engineering, medicine and social analysis. To use Euler diagrams effectively, some of the wellformedness properties need to be avoided, as they are considered to reduce user comprehension. From previous empirical studies, we know some properties are swappable, but there is no clear justification for which property would be the best to use. In this paper, we considered two main wellformedness properties (duplicated curve labels and disconnected zones) to test which of the two affects user comprehension the most, based on the task performance (accuracy and response time), preference and eye movements of the users. Twelve participants performed three different types of tasks with nine diagrams of each property (eighteen diagrams in total), and the results showed that the duplicated curve labels property slows users down and triggers extra eye movements, causing delays for the tasks. Although there is no significant difference in accuracy, the insights obtained from the response time, preference and eye movements will be useful for software developers regarding the optimal way to visualise Euler diagrams in real-world application areas.
2108.06167
Haoming Li
Haoming Li, Feiyang Pan, Xiang Ao, Zhao Yang, Min Lu, Junwei Pan, Dapeng Liu, Lei Xiao, Qing He
Follow the Prophet: Accurate Online Conversion Rate Prediction in the Face of Delayed Feedback
In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '21), July 11--15, 2021, Virtual Event, Canada. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3404835.3463045
null
10.1145/3404835.3463045
null
cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
The delayed feedback problem is one of the imperative challenges in online advertising, which is caused by the highly diversified feedback delay of a conversion, varying from a few minutes to several days. It is hard to design an appropriate online learning system under these non-identical delays for different types of ads and users. In this paper, we propose to tackle the delayed feedback problem in online advertising by "Following the Prophet" (FTP for short). The key insight is that, if the feedback came instantly for all the logged samples, we could get a model without delayed feedback, namely the "prophet". Although the prophet cannot be obtained during online learning, we show that we could predict the prophet's predictions by an aggregation policy on top of a set of multi-task predictions, where each task captures the feedback patterns of different periods. We propose the objective and optimization approach for the policy, and use the logged data to imitate the prophet. Extensive experiments on three real-world advertising datasets show that our method outperforms the previous state-of-the-art baselines.
[ { "created": "Fri, 13 Aug 2021 10:51:09 GMT", "version": "v1" } ]
2021-08-16
[ [ "Li", "Haoming", "" ], [ "Pan", "Feiyang", "" ], [ "Ao", "Xiang", "" ], [ "Yang", "Zhao", "" ], [ "Lu", "Min", "" ], [ "Pan", "Junwei", "" ], [ "Liu", "Dapeng", "" ], [ "Xiao", "Lei", "" ], [ "He", "Qing", "" ] ]
The delayed feedback problem is one of the imperative challenges in online advertising, which is caused by the highly diversified feedback delay of a conversion, varying from a few minutes to several days. It is hard to design an appropriate online learning system under these non-identical delays for different types of ads and users. In this paper, we propose to tackle the delayed feedback problem in online advertising by "Following the Prophet" (FTP for short). The key insight is that, if the feedback came instantly for all the logged samples, we could get a model without delayed feedback, namely the "prophet". Although the prophet cannot be obtained during online learning, we show that we could predict the prophet's predictions by an aggregation policy on top of a set of multi-task predictions, where each task captures the feedback patterns of different periods. We propose the objective and optimization approach for the policy, and use the logged data to imitate the prophet. Extensive experiments on three real-world advertising datasets show that our method outperforms the previous state-of-the-art baselines.
0911.3641
Loet Leydesdorff
Rob Goldstone, Loet Leydesdorff
The Import and Export of Cognitive Science
null
Cognitive Science 30(6) (2006), 983-993
null
null
cs.DL physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
From its inception, a large part of the motivation for Cognitive Science has been the need for an interdisciplinary journal for the study of minds and intelligent systems. One threat to the interdisciplinarity of Cognitive Science, both the field and journal, is that it may become, or already be, too dominated by psychologists. In 2005, psychology was a keyword for 51% of submissions, followed distantly by linguistics (17%), artificial intelligence (13%), neuroscience (10%), computer science (9%), and philosophy (8%). The Institute for Scientific Information (ISI) gathers data not only on how individual articles cite one another, but also on macroscopic citation patterns among journals. Journals or sets of journals can be considered as proxies for fields. As fields become established, they often create journals. By studying the patterns of citations among journals that cite and are cited by Cognitive Science, we can better: 1) appreciate the scholarly ecology surrounding the journal and the journal's role within this ecology, 2) establish competitor and alternate journals, and 3) determine the natural clustering of fields related to cognitive science.
[ { "created": "Wed, 18 Nov 2009 20:00:02 GMT", "version": "v1" } ]
2009-11-19
[ [ "Goldstone", "Rob", "" ], [ "Leydesdorff", "Loet", "" ] ]
From its inception, a large part of the motivation for Cognitive Science has been the need for an interdisciplinary journal for the study of minds and intelligent systems. One threat to the interdisciplinarity of Cognitive Science, both the field and journal, is that it may become, or already be, too dominated by psychologists. In 2005, psychology was a keyword for 51% of submissions, followed distantly by linguistics (17%), artificial intelligence (13%), neuroscience (10%), computer science (9%), and philosophy (8%). The Institute for Scientific Information (ISI) gathers data not only on how individual articles cite one another, but also on macroscopic citation patterns among journals. Journals or sets of journals can be considered as proxies for fields. As fields become established, they often create journals. By studying the patterns of citations among journals that cite and are cited by Cognitive Science, we can better: 1) appreciate the scholarly ecology surrounding the journal and the journal's role within this ecology, 2) establish competitor and alternate journals, and 3) determine the natural clustering of fields related to cognitive science.
2310.12666
Segev Wasserkrug
Segev Wasserkrug, Takayuki Osogami
Who Benefits from a Multi-Cloud Market? A Trading Networks Based Analysis
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In enterprise cloud computing, there is a large and increasing investment to move to multi-cloud computing, which allows enterprises to seamlessly utilize IT resources from multiple cloud providers, so as to take advantage of different cloud providers' capabilities and costs. This investment raises several key questions: Will multi-cloud always be more beneficial to the cloud users? How will this impact the cloud providers? Is it possible to create a multi-cloud market that is beneficial to all participants? In this work, we begin addressing these questions by using the game-theoretic model of trading networks and formally compare the single and multi-cloud markets. This comparison a) provides a sufficient condition under which the multi-cloud network can be considered more efficient than the single cloud one, in the sense that a centralized coordinator having full information can impose an outcome that is strongly Pareto-dominant for all players, and b) shows a surprising result that without centralized coordination, settings are possible in which even the cloud buyers' utilities may decrease when moving from a single cloud to a multi-cloud network. As these two results emphasize the need for centralized coordination to ensure a Pareto-dominant outcome, and as the aforementioned Pareto-dominant result requires truthful revelation of participants' private information, we provide an automated mechanism design (AMD) approach, which, in the Bayesian setting, finds mechanisms that result in expectation in such Pareto-dominant outcomes, and in which truthful revelation of the parties' private information is the dominant strategy. We also provide empirical analysis to show the validity of our AMD approach.
[ { "created": "Thu, 19 Oct 2023 11:49:34 GMT", "version": "v1" } ]
2023-10-20
[ [ "Wasserkrug", "Segev", "" ], [ "Osogami", "Takayuki", "" ] ]
In enterprise cloud computing, there is a large and increasing investment to move to multi-cloud computing, which allows enterprises to seamlessly utilize IT resources from multiple cloud providers, so as to take advantage of different cloud providers' capabilities and costs. This investment raises several key questions: Will multi-cloud always be more beneficial to the cloud users? How will this impact the cloud providers? Is it possible to create a multi-cloud market that is beneficial to all participants? In this work, we begin addressing these questions by using the game-theoretic model of trading networks and formally compare the single and multi-cloud markets. This comparison a) provides a sufficient condition under which the multi-cloud network can be considered more efficient than the single cloud one, in the sense that a centralized coordinator having full information can impose an outcome that is strongly Pareto-dominant for all players, and b) shows a surprising result that without centralized coordination, settings are possible in which even the cloud buyers' utilities may decrease when moving from a single cloud to a multi-cloud network. As these two results emphasize the need for centralized coordination to ensure a Pareto-dominant outcome, and as the aforementioned Pareto-dominant result requires truthful revelation of participants' private information, we provide an automated mechanism design (AMD) approach, which, in the Bayesian setting, finds mechanisms that result in expectation in such Pareto-dominant outcomes, and in which truthful revelation of the parties' private information is the dominant strategy. We also provide empirical analysis to show the validity of our AMD approach.
2102.01409
B{\l}a\.zej Leporowski Mr
B{\l}a\.zej Leporowski, Daniella Tola, Casper Hansen and Alexandros Iosifidis
AURSAD: Universal Robot Screwdriving Anomaly Detection Dataset
null
null
null
null
cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Screwdriving is one of the most popular industrial processes. As such, it is increasingly common to automate that procedure by using various robots. Even though the automation increases the efficiency of the screwdriving process, if the process is not monitored correctly, faults may occur during operation, which can impact the effectiveness and quality of assembly. Machine Learning (ML) has the potential to detect those undesirable events and limit their impact. In order to do so, first a dataset that fully describes the operation of an industrial robot performing automated screwdriving must be available. This report describes a dataset created using a UR3e series robot and OnRobot Screwdriver. We create different scenarios and introduce 4 types of anomalies to the process while all available robot and screwdriver sensors are continuously recorded. The resulting data contains 2042 samples of normal and anomalous robot operation. Brief ML benchmarks using this data are also provided, showcasing the data's suitability and potential for further analysis and experimentation.
[ { "created": "Tue, 2 Feb 2021 09:59:23 GMT", "version": "v1" }, { "created": "Sat, 6 Feb 2021 09:17:21 GMT", "version": "v2" } ]
2021-02-09
[ [ "Leporowski", "Błażej", "" ], [ "Tola", "Daniella", "" ], [ "Hansen", "Casper", "" ], [ "Iosifidis", "Alexandros", "" ] ]
Screwdriving is one of the most popular industrial processes. As such, it is increasingly common to automate that procedure by using various robots. Even though the automation increases the efficiency of the screwdriving process, if the process is not monitored correctly, faults may occur during operation, which can impact the effectiveness and quality of assembly. Machine Learning (ML) has the potential to detect those undesirable events and limit their impact. In order to do so, first a dataset that fully describes the operation of an industrial robot performing automated screwdriving must be available. This report describes a dataset created using a UR3e series robot and OnRobot Screwdriver. We create different scenarios and introduce 4 types of anomalies to the process while all available robot and screwdriver sensors are continuously recorded. The resulting data contains 2042 samples of normal and anomalous robot operation. Brief ML benchmarks using this data are also provided, showcasing the data's suitability and potential for further analysis and experimentation.
1706.02247
Andre Braga Reis
Andre B. Reis, Susana Sargento, Ozan K. Tonguz
Smarter Cities with Parked Cars as Roadside Units
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-time monitoring of traffic density, road congestion, public transportation, and parking availability are key to realizing the vision of a smarter city and, with the advent of vehicular networking technologies such as IEEE 802.11p and WAVE, this information can now be gathered directly from the vehicles in an urban area. To act as a backbone to the network of moving vehicles, collecting, aggregating, and disseminating their information, the use of parked cars has been proposed as an alternative to costly deployments of fixed Roadside Units. In this paper, we introduce novel mechanisms for parking vehicles to self-organize and form efficient vehicular support networks that provide widespread coverage to a city. These mechanisms are innovative in their ability to keep the network of parked cars under continuous optimization, in their multi-criteria decision process that can be focused on key network performance metrics, and in their ability to manage the battery usage of each car, rotating roadside unit roles between vehicles as required. We also present the first comprehensive study of the performance of such an approach, via realistic modeling of mobility, parking, and communication, thorough simulations, and an experimental verification of concepts that are key to self-organization. Our analysis brings strong evidence that parked cars can serve as an alternative to fixed roadside units, and organize to form networks that can support smarter transportation and mobility.
[ { "created": "Wed, 7 Jun 2017 16:40:16 GMT", "version": "v1" } ]
2017-06-08
[ [ "Reis", "Andre B.", "" ], [ "Sargento", "Susana", "" ], [ "Tonguz", "Ozan K.", "" ] ]
Real-time monitoring of traffic density, road congestion, public transportation, and parking availability are key to realizing the vision of a smarter city and, with the advent of vehicular networking technologies such as IEEE 802.11p and WAVE, this information can now be gathered directly from the vehicles in an urban area. To act as a backbone to the network of moving vehicles, collecting, aggregating, and disseminating their information, the use of parked cars has been proposed as an alternative to costly deployments of fixed Roadside Units. In this paper, we introduce novel mechanisms for parking vehicles to self-organize and form efficient vehicular support networks that provide widespread coverage to a city. These mechanisms are innovative in their ability to keep the network of parked cars under continuous optimization, in their multi-criteria decision process that can be focused on key network performance metrics, and in their ability to manage the battery usage of each car, rotating roadside unit roles between vehicles as required. We also present the first comprehensive study of the performance of such an approach, via realistic modeling of mobility, parking, and communication, thorough simulations, and an experimental verification of concepts that are key to self-organization. Our analysis brings strong evidence that parked cars can serve as an alternative to fixed roadside units, and organize to form networks that can support smarter transportation and mobility.
2311.15221
Zihao Wang
Kaizhao Liu, Zihao Wang, Lei Wu
The Local Landscape of Phase Retrieval Under Limited Samples
41 pages
null
null
null
cs.IT cs.LG eess.SP math.IT math.OC math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we provide a fine-grained analysis of the local landscape of phase retrieval under the regime with limited samples. Our aim is to ascertain the minimal sample size necessary to guarantee a benign local landscape surrounding global minima in high dimensions. Let $n$ and $d$ denote the sample size and input dimension, respectively. We first explore the local convexity and establish that when $n=o(d\log d)$, for almost every fixed point in the local ball, the Hessian matrix must have negative eigenvalues as long as $d$ is sufficiently large. Consequently, the local landscape is highly non-convex. We next consider the one-point strong convexity and show that as long as $n=\omega(d)$, with high probability, the landscape is one-point strongly convex in the local annulus: $\{w\in\mathbb{R}^d: o_d(1)\leqslant \|w-w^*\|\leqslant c\}$, where $w^*$ is the ground truth and $c$ is an absolute constant. This implies that gradient descent initialized from any point in this domain can converge to an $o_d(1)$-loss solution exponentially fast. Furthermore, we show that when $n=o(d\log d)$, there is a radius of $\widetilde\Theta\left(\sqrt{1/d}\right)$ such that one-point convexity breaks in the corresponding smaller local ball. This indicates an impossibility to establish a convergence to exact $w^*$ for gradient descent under limited samples by relying solely on one-point convexity.
[ { "created": "Sun, 26 Nov 2023 07:22:35 GMT", "version": "v1" } ]
2023-11-28
[ [ "Liu", "Kaizhao", "" ], [ "Wang", "Zihao", "" ], [ "Wu", "Lei", "" ] ]
In this paper, we provide a fine-grained analysis of the local landscape of phase retrieval under the regime with limited samples. Our aim is to ascertain the minimal sample size necessary to guarantee a benign local landscape surrounding global minima in high dimensions. Let $n$ and $d$ denote the sample size and input dimension, respectively. We first explore the local convexity and establish that when $n=o(d\log d)$, for almost every fixed point in the local ball, the Hessian matrix must have negative eigenvalues as long as $d$ is sufficiently large. Consequently, the local landscape is highly non-convex. We next consider the one-point strong convexity and show that as long as $n=\omega(d)$, with high probability, the landscape is one-point strongly convex in the local annulus: $\{w\in\mathbb{R}^d: o_d(1)\leqslant \|w-w^*\|\leqslant c\}$, where $w^*$ is the ground truth and $c$ is an absolute constant. This implies that gradient descent initialized from any point in this domain can converge to an $o_d(1)$-loss solution exponentially fast. Furthermore, we show that when $n=o(d\log d)$, there is a radius of $\widetilde\Theta\left(\sqrt{1/d}\right)$ such that one-point convexity breaks in the corresponding smaller local ball. This indicates an impossibility to establish a convergence to exact $w^*$ for gradient descent under limited samples by relying solely on one-point convexity.
1606.02738
Matthieu Schaller
Matthieu Schaller (1), Pedro Gonnet (2,3), Aidan B. G. Chalk (2), Peter W. Draper (1) ((1) ICC, Durham University, (2) ECS, Durham University, (3) Google Switzerland GmbH)
SWIFT: Using task-based parallelism, fully asynchronous communication, and graph partition-based domain decomposition for strong scaling on more than 100,000 cores
9 pages, 7 figures. Code, scripts and examples available at http://icc.dur.ac.uk/swift/
null
10.1145/2929908.2929916
null
cs.DC astro-ph.IM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new open-source cosmological code, called SWIFT, designed to solve the equations of hydrodynamics using a particle-based approach (Smooth Particle Hydrodynamics) on hybrid shared/distributed-memory architectures. SWIFT was designed from the bottom up to provide excellent strong scaling on both commodity clusters (Tier-2 systems) and Top100-supercomputers (Tier-0 systems), without relying on architecture-specific features or specialized accelerator hardware. This performance is due to three main computational approaches: (1) Task-based parallelism for shared-memory parallelism, which provides fine-grained load balancing and thus strong scaling on large numbers of cores. (2) Graph-based domain decomposition, which uses the task graph to decompose the simulation domain such that the work, as opposed to just the data, as is the case with most partitioning schemes, is equally distributed across all nodes. (3) Fully dynamic and asynchronous communication, in which communication is modelled as just another task in the task-based scheme, sending data whenever it is ready and deferring on tasks that rely on data from other nodes until it arrives. In order to use these approaches, the code had to be re-written from scratch, and the algorithms therein adapted to the task-based paradigm. As a result, we can show upwards of 60% parallel efficiency for moderate-sized problems when increasing the number of cores 512-fold, on both x86-based and Power8-based architectures.
[ { "created": "Wed, 8 Jun 2016 20:22:15 GMT", "version": "v1" } ]
2022-08-03
[ [ "Schaller", "Matthieu", "" ], [ "Gonnet", "Pedro", "" ], [ "Chalk", "Aidan B. G.", "" ], [ "Draper", "Peter W.", "" ] ]
We present a new open-source cosmological code, called SWIFT, designed to solve the equations of hydrodynamics using a particle-based approach (Smooth Particle Hydrodynamics) on hybrid shared/distributed-memory architectures. SWIFT was designed from the bottom up to provide excellent strong scaling on both commodity clusters (Tier-2 systems) and Top100-supercomputers (Tier-0 systems), without relying on architecture-specific features or specialized accelerator hardware. This performance is due to three main computational approaches: (1) Task-based parallelism for shared-memory parallelism, which provides fine-grained load balancing and thus strong scaling on large numbers of cores. (2) Graph-based domain decomposition, which uses the task graph to decompose the simulation domain such that the work, as opposed to just the data, as is the case with most partitioning schemes, is equally distributed across all nodes. (3) Fully dynamic and asynchronous communication, in which communication is modelled as just another task in the task-based scheme, sending data whenever it is ready and deferring on tasks that rely on data from other nodes until it arrives. In order to use these approaches, the code had to be re-written from scratch, and the algorithms therein adapted to the task-based paradigm. As a result, we can show upwards of 60% parallel efficiency for moderate-sized problems when increasing the number of cores 512-fold, on both x86-based and Power8-based architectures.
1701.03854
Xiaowang Zhang
Qiong Li and Xiaowang Zhang and Zhiyong Feng
PRSP: A Plugin-based Framework for RDF Stream Processing
2 pages and 1 figure
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a plugin-based framework for RDF stream processing named PRSP. Within this framework, we can employ SPARQL query engines to process C-SPARQL queries with maintaining the high performance of those engines in a simple way. Taking advantage of PRSP, we can process large-scale RDF streams in a distributed context via distributed SPARQL engines. Besides, we can evaluate the performance and correctness of existing SPARQL query engines in handling RDF streams in a united way, which amends the evaluation of them ranging from static RDF (i.e., RDF graph) to dynamic RDF (i.e., RDF stream). Finally, within PRSP, we experimently evaluate the correctness and the performance on YABench. The experiments show that PRSP can still maintain the high performance of those engines in RDF stream processing although there are some slight differences among them.
[ { "created": "Sat, 14 Jan 2017 00:17:53 GMT", "version": "v1" } ]
2017-01-17
[ [ "Li", "Qiong", "" ], [ "Zhang", "Xiaowang", "" ], [ "Feng", "Zhiyong", "" ] ]
In this paper, we propose a plugin-based framework for RDF stream processing named PRSP. Within this framework, we can employ SPARQL query engines to process C-SPARQL queries while maintaining the high performance of those engines in a simple way. Taking advantage of PRSP, we can process large-scale RDF streams in a distributed context via distributed SPARQL engines. Besides, we can evaluate the performance and correctness of existing SPARQL query engines in handling RDF streams in a unified way, which extends their evaluation from static RDF (i.e., RDF graphs) to dynamic RDF (i.e., RDF streams). Finally, within PRSP, we experimentally evaluate the correctness and the performance on YABench. The experiments show that PRSP can still maintain the high performance of those engines in RDF stream processing, although there are some slight differences among them.
1001.3689
Mohsen Sardari
Mohsen Sardari, Faramarz Hendessi, Faramarz Fekri
Infocast: A New Paradigm for Collaborative Content Distribution from Roadside Units to Vehicular Networks Using Rateless Codes
null
Proc. 6th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks SECON '09, 2009, 1-9
10.1109/SAHCN.2009.5168939
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we address the problem of distributing a large amount of bulk data to a sparse vehicular network from roadside infostations, using efficient vehicle-to-vehicle collaboration. Due to the highly dynamic nature of the underlying vehicular network topology, we depart from architectures requiring centralized coordination, reliable MAC scheduling, or global network state knowledge, and instead adopt a distributed paradigm with simple protocols. In other words, we investigate the problem of reliable dissemination from multiple sources when each node in the network shares a limited amount of its resources for cooperating with others. By using \emph{rateless} coding at the Road Side Unit (RSU) and using vehicles as data carriers, we describe an efficient way to achieve reliable dissemination to all nodes (even disconnected clusters in the network). In the nutshell, we explore vehicles as mobile storage devices. We then develop a method to keep the density of the rateless codes packets as a function of distance from the RSU at the desired level set for the target decoding distance. We investigate various tradeoffs involving buffer size, maximum capacity, and the mobility parameter of the vehicles.
[ { "created": "Wed, 20 Jan 2010 22:24:10 GMT", "version": "v1" } ]
2010-01-22
[ [ "Sardari", "Mohsen", "" ], [ "Hendessi", "Faramarz", "" ], [ "Fekri", "Faramarz", "" ] ]
In this paper, we address the problem of distributing a large amount of bulk data to a sparse vehicular network from roadside infostations, using efficient vehicle-to-vehicle collaboration. Due to the highly dynamic nature of the underlying vehicular network topology, we depart from architectures requiring centralized coordination, reliable MAC scheduling, or global network state knowledge, and instead adopt a distributed paradigm with simple protocols. In other words, we investigate the problem of reliable dissemination from multiple sources when each node in the network shares a limited amount of its resources for cooperating with others. By using \emph{rateless} coding at the Road Side Unit (RSU) and using vehicles as data carriers, we describe an efficient way to achieve reliable dissemination to all nodes (even disconnected clusters in the network). In a nutshell, we explore vehicles as mobile storage devices. We then develop a method to keep the density of the rateless-code packets, as a function of distance from the RSU, at the level required for the target decoding distance. We investigate various tradeoffs involving buffer size, maximum capacity, and the mobility parameter of the vehicles.
1711.08319
Mark Burgin
Mark Burgin
Systems, Actors and Agents: Operation in a multicomponent environment
null
null
null
null
cs.MA cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-agent approach has become popular in computer science and technology. However, the conventional models of multi-agent and multicomponent systems implicitly or explicitly assume existence of absolute time or even do not include time in the set of defining parameters. At the same time, it is proved theoretically and validated experimentally that there are different times and time scales in a variety of real systems - physical, chemical, biological, social, informational, etc. Thus, the goal of this work is construction of a multi-agent multicomponent system models with concurrency of processes and diversity of actions. To achieve this goal, a mathematical system actor model is elaborated and its properties are studied.
[ { "created": "Thu, 16 Nov 2017 01:31:36 GMT", "version": "v1" } ]
2017-11-23
[ [ "Burgin", "Mark", "" ] ]
The multi-agent approach has become popular in computer science and technology. However, the conventional models of multi-agent and multicomponent systems implicitly or explicitly assume the existence of absolute time, or do not include time in the set of defining parameters at all. At the same time, it has been proved theoretically and validated experimentally that there are different times and time scales in a variety of real systems - physical, chemical, biological, social, informational, etc. Thus, the goal of this work is the construction of multi-agent multicomponent system models with concurrency of processes and diversity of actions. To achieve this goal, a mathematical system actor model is elaborated and its properties are studied.
2304.09728
Songhua Liu
Songhua Liu, Jingwen Ye, Xinchao Wang
Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate
Work in progress
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Style transfer aims to render the style of a given image for style reference to another given image for content reference, and has been widely adopted in artistic generation and image editing. Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way. In either case, only one result can be generated for a specific pair of content and style images, which therefore lacks flexibility and is hard to satisfy different users with different preferences. We propose here a novel strategy termed Any-to-Any Style Transfer to address this drawback, which enables users to interactively select styles of regions in the style image and apply them to the prescribed content regions. In this way, personalizable style transfer is achieved through human-computer interaction. At the heart of our approach lies in (1) a region segmentation module based on Segment Anything, which supports region selection with only some clicks or drawing on images and thus takes user inputs conveniently and flexibly; (2) and an attention fusion module, which converts inputs from users to controlling signals for the style transfer model. Experiments demonstrate the effectiveness for personalizable style transfer. Notably, our approach performs in a plug-and-play manner portable to any style transfer method and enhance the controllablity. Our code is available \href{https://github.com/Huage001/Transfer-Any-Style}{here}.
[ { "created": "Wed, 19 Apr 2023 15:15:36 GMT", "version": "v1" }, { "created": "Thu, 20 Apr 2023 04:17:31 GMT", "version": "v2" } ]
2023-04-21
[ [ "Liu", "Songhua", "" ], [ "Ye", "Jingwen", "" ], [ "Wang", "Xinchao", "" ] ]
Style transfer aims to render the style of a given style-reference image onto another given content-reference image, and has been widely adopted in artistic generation and image editing. Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way. In either case, only one result can be generated for a specific pair of content and style images, which therefore lacks flexibility and is hard to satisfy different users with different preferences. We propose here a novel strategy termed Any-to-Any Style Transfer to address this drawback, which enables users to interactively select styles of regions in the style image and apply them to the prescribed content regions. In this way, personalizable style transfer is achieved through human-computer interaction. At the heart of our approach are (1) a region segmentation module based on Segment Anything, which supports region selection with only a few clicks or strokes on images and thus takes user inputs conveniently and flexibly; and (2) an attention fusion module, which converts user inputs into controlling signals for the style transfer model. Experiments demonstrate the effectiveness of our approach for personalizable style transfer. Notably, our approach works in a plug-and-play manner, is portable to any style transfer method, and enhances controllability. Our code is available \href{https://github.com/Huage001/Transfer-Any-Style}{here}.
2106.11297
Michael S. Ryoo
Michael S. Ryoo, AJ Piergiovanni, Anurag Arnab, Mostafa Dehghani, Anelia Angelova
TokenLearner: What Can 8 Learned Tokens Do for Images and Videos?
This is the full version of the paper, extending its conference paper at NeurIPS 2021. Version 1.1 of the code is released
NeurIPS 2021
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce a novel visual representation learning which relies on a handful of adaptively learned tokens, and which is applicable to both image and video understanding tasks. Instead of relying on hand-designed splitting strategies to obtain visual tokens and processing a large number of densely sampled patches for attention, our approach learns to mine important tokens in visual data. This results in efficiently and effectively finding a few important visual tokens and enables modeling of pairwise attention between such tokens, over a longer temporal horizon for videos, or the spatial content in images. Our experiments demonstrate strong performance on several challenging benchmarks for both image and video recognition tasks. Importantly, due to our tokens being adaptive, we accomplish competitive results at significantly reduced compute amount. We obtain comparable results to the state-of-the-arts on ImageNet while being computationally more efficient. We also confirm the effectiveness of the approach on multiple video datasets, including Kinetics-400, Kinetics-600, Charades, and AViD. The code is available at: https://github.com/google-research/scenic/tree/main/scenic/projects/token_learner
[ { "created": "Mon, 21 Jun 2021 17:55:59 GMT", "version": "v1" }, { "created": "Tue, 5 Oct 2021 17:52:45 GMT", "version": "v2" }, { "created": "Tue, 7 Dec 2021 18:11:22 GMT", "version": "v3" }, { "created": "Sun, 3 Apr 2022 15:42:57 GMT", "version": "v4" } ]
2022-04-05
[ [ "Ryoo", "Michael S.", "" ], [ "Piergiovanni", "AJ", "" ], [ "Arnab", "Anurag", "" ], [ "Dehghani", "Mostafa", "" ], [ "Angelova", "Anelia", "" ] ]
In this paper, we introduce a novel visual representation learning approach that relies on a handful of adaptively learned tokens and is applicable to both image and video understanding tasks. Instead of relying on hand-designed splitting strategies to obtain visual tokens and processing a large number of densely sampled patches for attention, our approach learns to mine important tokens in visual data. This results in efficiently and effectively finding a few important visual tokens and enables modeling of pairwise attention between such tokens, over a longer temporal horizon for videos, or over the spatial content in images. Our experiments demonstrate strong performance on several challenging benchmarks for both image and video recognition tasks. Importantly, because our tokens are adaptive, we accomplish competitive results at a significantly reduced compute cost. We obtain results comparable to the state of the art on ImageNet while being computationally more efficient. We also confirm the effectiveness of the approach on multiple video datasets, including Kinetics-400, Kinetics-600, Charades, and AViD. The code is available at: https://github.com/google-research/scenic/tree/main/scenic/projects/token_learner
2310.16836
Shih-Yang Liu
Shih-yang Liu, Zechun Liu, Xijie Huang, Pingcheng Dong, Kwang-Ting Cheng
LLM-FP4: 4-Bit Floating-Point Quantized Transformers
EMNLP 2023 Main Conference
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
10.18653/v1/2023.emnlp-main.39
null
cs.CL cs.AI cs.AR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose LLM-FP4 for quantizing both weights and activations in large language models (LLMs) down to 4-bit floating-point values, in a post-training manner. Existing post-training quantization (PTQ) solutions are primarily integer-based and struggle with bit widths below 8 bits. Compared to integer quantization, floating-point (FP) quantization is more flexible and can better handle long-tail or bell-shaped distributions, and it has emerged as a default choice in many hardware platforms. One characteristic of FP quantization is that its performance largely depends on the choice of exponent bits and clipping range. In this regard, we construct a strong FP-PTQ baseline by searching for the optimal quantization parameters. Furthermore, we observe a high inter-channel variance and low intra-channel variance pattern in activation distributions, which adds activation quantization difficulty. We recognize this pattern to be consistent across a spectrum of transformer models designed for diverse tasks, such as LLMs, BERT, and Vision Transformer models. To tackle this, we propose per-channel activation quantization and show that these additional scaling factors can be reparameterized as exponential biases of weights, incurring a negligible cost. Our method, for the first time, can quantize both weights and activations in the LLaMA-13B to only 4-bit and achieves an average score of 63.1 on the common sense zero-shot reasoning tasks, which is only 5.8 lower than the full-precision model, significantly outperforming the previous state-of-the-art by 12.7 points. Code is available at: https://github.com/nbasyl/LLM-FP4.
[ { "created": "Wed, 25 Oct 2023 17:59:32 GMT", "version": "v1" } ]
2024-04-30
[ [ "Liu", "Shih-yang", "" ], [ "Liu", "Zechun", "" ], [ "Huang", "Xijie", "" ], [ "Dong", "Pingcheng", "" ], [ "Cheng", "Kwang-Ting", "" ] ]
We propose LLM-FP4 for quantizing both weights and activations in large language models (LLMs) down to 4-bit floating-point values, in a post-training manner. Existing post-training quantization (PTQ) solutions are primarily integer-based and struggle with bit widths below 8 bits. Compared to integer quantization, floating-point (FP) quantization is more flexible and can better handle long-tail or bell-shaped distributions, and it has emerged as a default choice in many hardware platforms. One characteristic of FP quantization is that its performance largely depends on the choice of exponent bits and clipping range. In this regard, we construct a strong FP-PTQ baseline by searching for the optimal quantization parameters. Furthermore, we observe a high inter-channel variance and low intra-channel variance pattern in activation distributions, which adds activation quantization difficulty. We recognize this pattern to be consistent across a spectrum of transformer models designed for diverse tasks, such as LLMs, BERT, and Vision Transformer models. To tackle this, we propose per-channel activation quantization and show that these additional scaling factors can be reparameterized as exponential biases of weights, incurring a negligible cost. Our method, for the first time, can quantize both weights and activations in the LLaMA-13B to only 4-bit and achieves an average score of 63.1 on the common sense zero-shot reasoning tasks, which is only 5.8 lower than the full-precision model, significantly outperforming the previous state-of-the-art by 12.7 points. Code is available at: https://github.com/nbasyl/LLM-FP4.
cs/0111011
Giovambattista Ianni
Giovambattista Ianni
Sintesi di algoritmi con SKY
In italian
null
null
Unical Math. Dept. TR 11-2001
cs.LO
null
This paper describes the semantics and ideas about SKY, a logic programming language intended in order to specify algorithmic strategies for the evaluation of problems.
[ { "created": "Tue, 6 Nov 2001 17:02:25 GMT", "version": "v1" } ]
2007-05-23
[ [ "Ianni", "Giovambattista", "" ] ]
This paper describes the semantics of, and the ideas behind, SKY, a logic programming language intended for specifying algorithmic strategies for the evaluation of problems.
2109.14696
Sihao Zhao
Yoga Suhas Kuruba Manjunath, Sihao Zhao, Xiao-Ping Zhang
Time-Distributed Feature Learning in Network Traffic Classification for Internet of Things
null
null
null
null
cs.NI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
The plethora of Internet of Things (IoT) devices leads to explosive network traffic. The network traffic classification (NTC) is an essential tool to explore behaviours of network flows, and NTC is required for Internet service providers (ISPs) to manage the performance of the IoT network. We propose a novel network data representation, treating the traffic data as a series of images. Thus, the network data is realized as a video stream to employ time-distributed (TD) feature learning. The intra-temporal information within the network statistical data is learned using convolutional neural networks (CNN) and long short-term memory (LSTM), and the inter pseudo-temporal feature among the flows is learned by TD multi-layer perceptron (MLP). We conduct experiments using a large data-set with more number of classes. The experimental result shows that the TD feature learning elevates the network classification performance by 10%.
[ { "created": "Wed, 29 Sep 2021 20:01:40 GMT", "version": "v1" } ]
2021-10-01
[ [ "Manjunath", "Yoga Suhas Kuruba", "" ], [ "Zhao", "Sihao", "" ], [ "Zhang", "Xiao-Ping", "" ] ]
The plethora of Internet of Things (IoT) devices leads to explosive network traffic. Network traffic classification (NTC) is an essential tool for exploring the behaviour of network flows, and NTC is required by Internet service providers (ISPs) to manage the performance of the IoT network. We propose a novel network data representation, treating the traffic data as a series of images. Thus, the network data is realized as a video stream to employ time-distributed (TD) feature learning. The intra-temporal information within the network statistical data is learned using convolutional neural networks (CNN) and long short-term memory (LSTM), and the inter pseudo-temporal feature among the flows is learned by a TD multi-layer perceptron (MLP). We conduct experiments using a large dataset with a larger number of classes. The experimental results show that TD feature learning elevates network classification performance by 10%.
2005.06311
Zhihao Gavin Tang
Zhiyi Huang, Zhihao Gavin Tang, Xiaowei Wu and Yuhao Zhang
Fully Online Matching II: Beating Ranking and Water-filling
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Karp, Vazirani, and Vazirani (STOC 1990) initiated the study of online bipartite matching, which has held a central role in online algorithms ever since. Of particular importance are the Ranking algorithm for integral matching and the Water-filling algorithm for fractional matching. Most algorithms in the literature can be viewed as adaptations of these two in the corresponding models. Recently, Huang et al.~(STOC 2018, SODA 2019) introduced a more general model called \emph{fully online matching}, which considers general graphs and allows all vertices to arrive online. They also generalized Ranking and Water-filling to fully online matching and gave some tight analysis: Ranking is $\Omega \approx 0.567$-competitive on bipartite graphs where the $\Omega$-constant satisfies $\Omega e^\Omega = 1$, and Water-filling is $2-\sqrt{2} \approx 0.585$-competitive on general graphs. We propose fully online matching algorithms strictly better than Ranking and Water-filling. For integral matching on bipartite graphs, we build on the online primal dual analysis of Ranking and Water-filling to design a $0.569$-competitive hybrid algorithm called Balanced Ranking. To our knowledge, it is the first integral algorithm in the online matching literature that successfully integrates ideas from Water-filling. For fractional matching on general graphs, we give a $0.592$-competitive algorithm called Eager Water-filling, which may match a vertex on its arrival. By contrast, the original Water-filling algorithm always matches vertices at their deadlines. Our result for fractional matching further shows a separation between fully online matching and the general vertex arrival model by Wang and Wong (ICALP 2015), due to an upper bound of $0.5914$ in the latter model by Buchbinder, Segev, and Tkach (ESA 2017).
[ { "created": "Wed, 13 May 2020 13:33:21 GMT", "version": "v1" } ]
2020-05-14
[ [ "Huang", "Zhiyi", "" ], [ "Tang", "Zhihao Gavin", "" ], [ "Wu", "Xiaowei", "" ], [ "Zhang", "Yuhao", "" ] ]
Karp, Vazirani, and Vazirani (STOC 1990) initiated the study of online bipartite matching, which has held a central role in online algorithms ever since. Of particular importance are the Ranking algorithm for integral matching and the Water-filling algorithm for fractional matching. Most algorithms in the literature can be viewed as adaptations of these two in the corresponding models. Recently, Huang et al.~(STOC 2018, SODA 2019) introduced a more general model called \emph{fully online matching}, which considers general graphs and allows all vertices to arrive online. They also generalized Ranking and Water-filling to fully online matching and gave some tight analysis: Ranking is $\Omega \approx 0.567$-competitive on bipartite graphs where the $\Omega$-constant satisfies $\Omega e^\Omega = 1$, and Water-filling is $2-\sqrt{2} \approx 0.585$-competitive on general graphs. We propose fully online matching algorithms strictly better than Ranking and Water-filling. For integral matching on bipartite graphs, we build on the online primal dual analysis of Ranking and Water-filling to design a $0.569$-competitive hybrid algorithm called Balanced Ranking. To our knowledge, it is the first integral algorithm in the online matching literature that successfully integrates ideas from Water-filling. For fractional matching on general graphs, we give a $0.592$-competitive algorithm called Eager Water-filling, which may match a vertex on its arrival. By contrast, the original Water-filling algorithm always matches vertices at their deadlines. Our result for fractional matching further shows a separation between fully online matching and the general vertex arrival model by Wang and Wong (ICALP 2015), due to an upper bound of $0.5914$ in the latter model by Buchbinder, Segev, and Tkach (ESA 2017).
cs/0305033
Johan Schubert
Ulla Bergsten, Johan Schubert, Per Svensson
Beslutst\"odssystemet Dezzy - en \"oversikt
18 pages, 8 figures, in Swedish, with appendix in English
in Dokumentation 7 juni av Seminarium och fackutst\"allning om samband, sensorer och datorer f\"or ledningssystem till f\"orsvaret (MILINF'89), pp. 07B2:19-31, Enk\"oping, June 1989, Telub AB, V\"axj\"o, 1989
null
FOA Report B 20078-2.7
cs.AI cs.DB
null
Within the scope of the three-year ANTI-SUBMARINE WARFARE project of the National Defence Research Establishment, the INFORMATION SYSTEMS subproject has developed the demonstration prototype Dezzy for handling and analysis of intelligence reports concerning foreign underwater activities. ----- Inom ramen f\"or FOA:s tre{\aa}riga huvudprojekt UB{\AA}TSSKYDD har delprojekt INFORMATIONSSYSTEM utvecklat demonstrationsprototypen Dezzy till ett beslutsst\"odsystem f\"or hantering och analys av underr\"attelser om fr\"ammande undervattensverksamhet.
[ { "created": "Fri, 16 May 2003 18:26:22 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bergsten", "Ulla", "" ], [ "Schubert", "Johan", "" ], [ "Svensson", "Per", "" ] ]
Within the scope of the three-year ANTI-SUBMARINE WARFARE project of the National Defence Research Establishment, the INFORMATION SYSTEMS subproject has developed the demonstration prototype Dezzy for handling and analysis of intelligence reports concerning foreign underwater activities. ----- Inom ramen f\"or FOA:s tre{\aa}riga huvudprojekt UB{\AA}TSSKYDD har delprojekt INFORMATIONSSYSTEM utvecklat demonstrationsprototypen Dezzy till ett beslutsst\"odsystem f\"or hantering och analys av underr\"attelser om fr\"ammande undervattensverksamhet.
1509.01659
Armen Aghajanyan
Armen Aghajanyan
Gravitational Clustering
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The downfall of many supervised learning algorithms, such as neural networks, is the inherent need for a large amount of training data. Although there is a lot of buzz about big data, there is still the problem of doing classification from a small dataset. Other methods such as support vector machines, although capable of dealing with few samples, are inherently binary classifiers, and are in need of learning strategies such as One vs All in the case of multi-classification. In the presence of a large number of classes this can become problematic. In this paper we present, a novel approach to supervised learning through the method of clustering. Unlike traditional methods such as K-Means, Gravitational Clustering does not require the initial number of clusters, and automatically builds the clusters, individual samples can be arbitrarily weighted and it requires only few samples while staying resilient to over-fitting.
[ { "created": "Sat, 5 Sep 2015 03:37:50 GMT", "version": "v1" } ]
2015-09-08
[ [ "Aghajanyan", "Armen", "" ] ]
The downfall of many supervised learning algorithms, such as neural networks, is the inherent need for a large amount of training data. Although there is a lot of buzz about big data, there is still the problem of doing classification from a small dataset. Other methods, such as support vector machines, although capable of dealing with few samples, are inherently binary classifiers and require learning strategies such as One-vs-All for multi-class classification. In the presence of a large number of classes this can become problematic. In this paper we present a novel approach to supervised learning through the method of clustering. Unlike traditional methods such as K-Means, Gravitational Clustering does not require the initial number of clusters and builds the clusters automatically; individual samples can be arbitrarily weighted; and it requires only a few samples while staying resilient to over-fitting.
2403.04600
Reza Dastbasteh
Reza Dastbasteh, Farzad Padashnick, Pedro M. Crespo, Markus Grassl, and Javad Sharafi
Equivalence of constacyclic codes with shift constants of different orders
15 pages, 4 figures, 2 tables
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Let $a$ and $b$ be two non-zero elements of a finite field $\mathbb{F}_q$, where $q>2$. It has been shown that if $a$ and $b$ have the same multiplicative order in $\mathbb{F}_q$, then the families of $a$-constacyclic and $b$-constacyclic codes over $\mathbb{F}_q$ are monomially equivalent. In this paper, we investigate the monomial equivalence of $a$-constacyclic and $b$-constacyclic codes when $a$ and $b$ have distinct multiplicative orders. We present novel conditions for establishing monomial equivalence in such constacyclic codes, surpassing previous methods of determining monomially equivalent constacyclic and cyclic codes. As an application, we use these results to search for new linear codes more systematically. In particular, we present more than $70$ new record-breaking linear codes over various finite fields, as well as new binary quantum codes.
[ { "created": "Thu, 7 Mar 2024 15:48:19 GMT", "version": "v1" } ]
2024-03-08
[ [ "Dastbasteh", "Reza", "" ], [ "Padashnick", "Farzad", "" ], [ "Crespo", "Pedro M.", "" ], [ "Grassl", "Markus", "" ], [ "Sharafi", "Javad", "" ] ]
Let $a$ and $b$ be two non-zero elements of a finite field $\mathbb{F}_q$, where $q>2$. It has been shown that if $a$ and $b$ have the same multiplicative order in $\mathbb{F}_q$, then the families of $a$-constacyclic and $b$-constacyclic codes over $\mathbb{F}_q$ are monomially equivalent. In this paper, we investigate the monomial equivalence of $a$-constacyclic and $b$-constacyclic codes when $a$ and $b$ have distinct multiplicative orders. We present novel conditions for establishing monomial equivalence in such constacyclic codes, surpassing previous methods of determining monomially equivalent constacyclic and cyclic codes. As an application, we use these results to search for new linear codes more systematically. In particular, we present more than $70$ new record-breaking linear codes over various finite fields, as well as new binary quantum codes.
2201.05819
Yuefei Lyu
Yuefei Lyu, Xiaoyu Yang, Jiaxin Liu, Philip S. Yu, Sihong Xie, Xi Zhang
Interpretable and Effective Reinforcement Learning for Attacking against Graph-based Rumor Detection
null
null
null
null
cs.LG cs.CR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social networks are frequently polluted by rumors, which can be detected by advanced models such as graph neural networks. However, the models are vulnerable to attacks and understanding the vulnerabilities is critical to rumor detection in practice. To discover subtle vulnerabilities, we design a powerful attacking algorithm to camouflage rumors in social networks based on reinforcement learning that can interact with and attack any black-box detectors. The environment has exponentially large state spaces, high-order graph dependencies, and delayed noisy rewards, making the state-of-the-art end-to-end approaches difficult to learn features as large learning costs and expressive limitation of graph deep models. Instead, we design domain-specific features to avoid learning features and produce interpretable attack policies. To further speed up policy optimization, we devise: (i) a credit assignment method that decomposes delayed rewards to atomic attacking actions proportional to the their camouflage effects on target rumors; (ii) a time-dependent control variate to reduce reward variance due to large graphs and many attacking steps, supported by the reward variance analysis and a Bayesian analysis of the prediction distribution. On three real world datasets of rumor detection tasks, we demonstrate: (i) the effectiveness of the learned attacking policy compared to rule-based attacks and current end-to-end approaches; (ii) the usefulness of the proposed credit assignment strategy and variance reduction components; (iii) the interpretability of the policy when generating strong attacks via the case study.
[ { "created": "Sat, 15 Jan 2022 10:06:29 GMT", "version": "v1" }, { "created": "Fri, 14 Oct 2022 14:02:13 GMT", "version": "v2" } ]
2022-10-17
[ [ "Lyu", "Yuefei", "" ], [ "Yang", "Xiaoyu", "" ], [ "Liu", "Jiaxin", "" ], [ "Yu", "Philip S.", "" ], [ "Xie", "Sihong", "" ], [ "Zhang", "Xi", "" ] ]
Social networks are frequently polluted by rumors, which can be detected by advanced models such as graph neural networks. However, these models are vulnerable to attacks, and understanding their vulnerabilities is critical for rumor detection in practice. To discover subtle vulnerabilities, we design a powerful attacking algorithm, based on reinforcement learning, that camouflages rumors in social networks and can interact with and attack any black-box detector. The environment has exponentially large state spaces, high-order graph dependencies, and delayed noisy rewards, making it difficult for state-of-the-art end-to-end approaches to learn features, owing to the large learning costs and the limited expressiveness of deep graph models. Instead, we design domain-specific features to avoid feature learning and produce interpretable attack policies. To further speed up policy optimization, we devise: (i) a credit assignment method that decomposes delayed rewards into atomic attacking actions in proportion to their camouflage effects on target rumors; (ii) a time-dependent control variate to reduce the reward variance caused by large graphs and many attacking steps, supported by a reward variance analysis and a Bayesian analysis of the prediction distribution. On three real-world rumor detection datasets, we demonstrate: (i) the effectiveness of the learned attacking policy compared to rule-based attacks and current end-to-end approaches; (ii) the usefulness of the proposed credit assignment strategy and variance reduction components; (iii) the interpretability of the policy when generating strong attacks, via a case study.
1905.09345
Yuke Wang
Yuke Wang, Zhaorui Zeng, Boyuan Feng, Lei Deng, Yufei Ding
KPynq: A Work-Efficient Triangle-Inequality based K-means on FPGA
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
K-means is a popular but computation-intensive algorithm for unsupervised learning. To address this issue, we present KPynq, a work-efficient, triangle-inequality-based K-means implementation on FPGA for handling large, high-dimensional datasets. KPynq leverages an algorithm-level optimization to balance performance and computational irregularity, and a hardware architecture design to fully exploit the pipelining and parallel processing capability of various FPGAs. In experiments, KPynq consistently outperforms the CPU-based standard K-means in terms of speedup (up to 4.2x) and energy efficiency (up to 218x).
[ { "created": "Wed, 22 May 2019 19:41:17 GMT", "version": "v1" } ]
2019-05-24
[ [ "Wang", "Yuke", "" ], [ "Zeng", "Zhaorui", "" ], [ "Feng", "Boyuan", "" ], [ "Deng", "Lei", "" ], [ "Ding", "Yufei", "" ] ]
K-means is a popular but computation-intensive algorithm for unsupervised learning. To address this issue, we present KPynq, a work-efficient, triangle-inequality-based K-means implementation on FPGA for handling large, high-dimensional datasets. KPynq leverages an algorithm-level optimization to balance performance and computational irregularity, and a hardware architecture design to fully exploit the pipelining and parallel processing capability of various FPGAs. In experiments, KPynq consistently outperforms the CPU-based standard K-means in terms of speedup (up to 4.2x) and energy efficiency (up to 218x).
2310.11442
Ludwig Stage
Ludwig Stage and Dimka Karastoyanova
Trusted Provenance of Automated, Collaborative and Adaptive Data Processing Pipelines
22 pages, 6 figures
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
To benefit from the abundance of data and the insights it brings, data processing pipelines are used in many areas of research and development in both industry and academia. One approach to automating data processing pipelines is workflow technology, as it also supports collaborative, trial-and-error experimentation with the pipeline architecture in different application domains. In addition to the flexibility that such pipelines must possess, cross-organisational interactions in collaborative settings are plagued by a lack of trust. While capturing provenance information about pipeline execution and the processed data is a first step towards enabling trusted collaborations, current solutions do not support provenance of changes to the processing pipelines, where changes can be made to any aspect of the workflow implementing the pipeline and to the data it uses while the pipeline is being executed. In this work we therefore provide a solution architecture and a proof-of-concept implementation of a service, called Provenance Holder, which enables provenance of collaborative, adaptive data processing pipelines in a trusted manner. We also contribute a definition of a set of properties of such a service and identify future research directions.
[ { "created": "Tue, 17 Oct 2023 17:52:27 GMT", "version": "v1" } ]
2023-10-18
[ [ "Stage", "Ludwig", "" ], [ "Karastoyanova", "Dimka", "" ] ]
To benefit from the abundance of data and the insights it brings, data processing pipelines are used in many areas of research and development in both industry and academia. One approach to automating data processing pipelines is workflow technology, as it also supports collaborative, trial-and-error experimentation with the pipeline architecture in different application domains. In addition to the flexibility that such pipelines must possess, cross-organisational interactions in collaborative settings are plagued by a lack of trust. While capturing provenance information about pipeline execution and the processed data is a first step towards enabling trusted collaborations, current solutions do not support provenance of changes to the processing pipelines, where changes can be made to any aspect of the workflow implementing the pipeline and to the data it uses while the pipeline is being executed. In this work we therefore provide a solution architecture and a proof-of-concept implementation of a service, called Provenance Holder, which enables provenance of collaborative, adaptive data processing pipelines in a trusted manner. We also contribute a definition of a set of properties of such a service and identify future research directions.
1606.03418
Lili Su
Lili Su, Nitin H. Vaidya
Asynchronous Distributed Hypothesis Testing in the Presence of Crash Failures
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the problem of distributed hypothesis testing in multi-agent networks, where agents repeatedly collect local observations about an unknown state of the world and try to collaboratively detect the true state through information exchange. We focus on the impact of failures and asynchrony (two fundamental factors in distributed systems) on the performance of consensus-based non-Bayesian learning. In particular, we consider the scenario where the networked agents may suffer crash faults, and message delays can be arbitrarily long but finite. We identify the minimal global detectability of the network needed for the non-Bayesian rule to succeed. In addition, we obtain a generalization of a celebrated result by Wolfowitz and Hajnal to submatrices, which may be of independent interest.
[ { "created": "Thu, 4 Feb 2016 21:12:53 GMT", "version": "v1" } ]
2016-06-13
[ [ "Su", "Lili", "" ], [ "Vaidya", "Nitin H.", "" ] ]
This paper addresses the problem of distributed hypothesis testing in multi-agent networks, where agents repeatedly collect local observations about an unknown state of the world and try to collaboratively detect the true state through information exchange. We focus on the impact of failures and asynchrony (two fundamental factors in distributed systems) on the performance of consensus-based non-Bayesian learning. In particular, we consider the scenario where the networked agents may suffer crash faults, and message delays can be arbitrarily long but finite. We identify the minimal global detectability of the network needed for the non-Bayesian rule to succeed. In addition, we obtain a generalization of a celebrated result by Wolfowitz and Hajnal to submatrices, which may be of independent interest.
2110.12567
Shujian Zhang
Shujian Zhang, Xinjie Fan, Huangjie Zheng, Korawat Tanwisuth, Mingyuan Zhou
Alignment Attention by Matching Key and Query Distributions
NeurIPS 2021; Our code is publicly available at https://github.com/szhang42/alignment_attention
null
null
null
cs.LG cs.CL stat.ML
http://creativecommons.org/licenses/by/4.0/
The neural attention mechanism has been incorporated into deep neural networks to achieve state-of-the-art performance in various domains. Most such models use multi-head self-attention which is appealing for the ability to attend to information from different perspectives. This paper introduces alignment attention that explicitly encourages self-attention to match the distributions of the key and query within each head. The resulting alignment attention networks can be optimized as an unsupervised regularization in the existing attention framework. It is simple to convert any models with self-attention, including pre-trained ones, to the proposed alignment attention. On a variety of language understanding tasks, we show the effectiveness of our method in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks. We further demonstrate the general applicability of our approach on graph attention and visual question answering, showing the great potential of incorporating our alignment method into various attention-related tasks.
[ { "created": "Mon, 25 Oct 2021 00:54:57 GMT", "version": "v1" } ]
2021-10-26
[ [ "Zhang", "Shujian", "" ], [ "Fan", "Xinjie", "" ], [ "Zheng", "Huangjie", "" ], [ "Tanwisuth", "Korawat", "" ], [ "Zhou", "Mingyuan", "" ] ]
The neural attention mechanism has been incorporated into deep neural networks to achieve state-of-the-art performance in various domains. Most such models use multi-head self-attention which is appealing for the ability to attend to information from different perspectives. This paper introduces alignment attention that explicitly encourages self-attention to match the distributions of the key and query within each head. The resulting alignment attention networks can be optimized as an unsupervised regularization in the existing attention framework. It is simple to convert any models with self-attention, including pre-trained ones, to the proposed alignment attention. On a variety of language understanding tasks, we show the effectiveness of our method in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks. We further demonstrate the general applicability of our approach on graph attention and visual question answering, showing the great potential of incorporating our alignment method into various attention-related tasks.
2112.10944
Yonatan Ashenafi
Yonatan Ashenafi, Piyush Pandita, Sayan Ghosh
Reinforcement Learning based Sequential Batch-sampling for Bayesian Optimal Experimental Design
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Engineering problems that are modeled using sophisticated mathematical methods, or that are characterized by expensive-to-conduct tests or experiments, are encumbered by a limited budget or finite computational resources. Moreover, practical scenarios in industry impose restrictions, based on logistics and preference, on the manner in which experiments can be conducted. For example, material supply may enable only a handful of experiments in a single shot, or, in the case of computational models, one may face significant wait times due to shared computational resources. In such scenarios, one usually resorts to performing experiments in a manner that maximizes one's state of knowledge while satisfying the above-mentioned practical constraints. Sequential design of experiments (SDOE) is a popular suite of methods that has yielded promising results in recent years across different engineering and practical problems. A common strategy that leverages the Bayesian formalism is Bayesian SDOE, which usually works best in the one-step-ahead, or myopic, scenario of selecting a single experiment at each step of a sequence of experiments. In this work, we aim to extend the SDOE strategy to query the experiment or computer code at a batch of inputs. To this end, we leverage deep reinforcement learning (RL) based policy gradient methods to propose batches of queries that are selected taking into account the entire budget in hand. The algorithm retains the sequential nature inherent in SDOE while incorporating task-based reward elements from the domain of deep RL. A unique capability of the proposed methodology is its ability to be applied to multiple tasks, for example the optimization of a function, once it is trained. We demonstrate the performance of the proposed algorithm on a synthetic problem and a challenging high-dimensional engineering problem.
[ { "created": "Tue, 21 Dec 2021 02:25:23 GMT", "version": "v1" }, { "created": "Thu, 23 Dec 2021 07:15:09 GMT", "version": "v2" } ]
2021-12-24
[ [ "Ashenafi", "Yonatan", "" ], [ "Pandita", "Piyush", "" ], [ "Ghosh", "Sayan", "" ] ]
Engineering problems that are modeled using sophisticated mathematical methods, or that are characterized by expensive-to-conduct tests or experiments, are encumbered by a limited budget or finite computational resources. Moreover, practical scenarios in industry impose restrictions, based on logistics and preference, on the manner in which experiments can be conducted. For example, material supply may enable only a handful of experiments in a single shot, or, in the case of computational models, one may face significant wait times due to shared computational resources. In such scenarios, one usually resorts to performing experiments in a manner that maximizes one's state of knowledge while satisfying the above-mentioned practical constraints. Sequential design of experiments (SDOE) is a popular suite of methods that has yielded promising results in recent years across different engineering and practical problems. A common strategy that leverages the Bayesian formalism is Bayesian SDOE, which usually works best in the one-step-ahead, or myopic, scenario of selecting a single experiment at each step of a sequence of experiments. In this work, we aim to extend the SDOE strategy to query the experiment or computer code at a batch of inputs. To this end, we leverage deep reinforcement learning (RL) based policy gradient methods to propose batches of queries that are selected taking into account the entire budget in hand. The algorithm retains the sequential nature inherent in SDOE while incorporating task-based reward elements from the domain of deep RL. A unique capability of the proposed methodology is its ability to be applied to multiple tasks, for example the optimization of a function, once it is trained. We demonstrate the performance of the proposed algorithm on a synthetic problem and a challenging high-dimensional engineering problem.
2405.05496
Jie Zhou
Xuanwen Ding, Jie Zhou, Liang Dou, Qin Chen, Yuanbin Wu, Chengcai Chen, Liang He
Boosting Large Language Models with Continual Learning for Aspect-based Sentiment Analysis
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Aspect-based sentiment analysis (ABSA) is an important subtask of sentiment analysis, which aims to extract aspects and predict their sentiments. Most existing studies focus on improving the performance on the target domain by fine-tuning domain-specific models (trained on source domains) on the target domain dataset. Few works propose continual learning tasks for ABSA, which aim to acquire the target domain's ability while maintaining the abilities learned on historical domains. In this paper, we propose a Large Language Model-based Continual Learning (\texttt{LLM-CL}) model for ABSA. First, we design a domain knowledge decoupling module to learn a domain-invariant adapter and separate domain-variant adapters independently with an orthogonal constraint. Then, we introduce a domain knowledge warmup strategy to align the representations of domain-invariant and domain-variant knowledge. In the test phase, we index the corresponding domain-variant knowledge via domain positioning, so that each sample's domain ID is not required. Extensive experiments over 19 datasets indicate that our \texttt{LLM-CL} model obtains new state-of-the-art performance.
[ { "created": "Thu, 9 May 2024 02:00:07 GMT", "version": "v1" } ]
2024-05-10
[ [ "Ding", "Xuanwen", "" ], [ "Zhou", "Jie", "" ], [ "Dou", "Liang", "" ], [ "Chen", "Qin", "" ], [ "Wu", "Yuanbin", "" ], [ "Chen", "Chengcai", "" ], [ "He", "Liang", "" ] ]
Aspect-based sentiment analysis (ABSA) is an important subtask of sentiment analysis, which aims to extract aspects and predict their sentiments. Most existing studies focus on improving the performance on the target domain by fine-tuning domain-specific models (trained on source domains) on the target domain dataset. Few works propose continual learning tasks for ABSA, which aim to acquire the target domain's ability while maintaining the abilities learned on historical domains. In this paper, we propose a Large Language Model-based Continual Learning (\texttt{LLM-CL}) model for ABSA. First, we design a domain knowledge decoupling module to learn a domain-invariant adapter and separate domain-variant adapters independently with an orthogonal constraint. Then, we introduce a domain knowledge warmup strategy to align the representations of domain-invariant and domain-variant knowledge. In the test phase, we index the corresponding domain-variant knowledge via domain positioning, so that each sample's domain ID is not required. Extensive experiments over 19 datasets indicate that our \texttt{LLM-CL} model obtains new state-of-the-art performance.
2207.05714
Riccardo Barbano
Riccardo Barbano, Johannes Leuschner, Javier Antor\'an, Bangti Jin, Jos\'e Miguel Hern\'andez-Lobato
Bayesian Experimental Design for Computed Tomography with the Linearised Deep Image Prior
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate adaptive design based on a single sparse pilot scan for generating effective scanning strategies for computed tomography reconstruction. We propose a novel approach using the linearised deep image prior. It allows incorporating information from the pilot measurements into the angle selection criteria, while maintaining the tractability of a conjugate Gaussian-linear model. On a synthetically generated dataset with preferential directions, linearised DIP design allows reducing the number of scans by up to 30% relative to an equidistant angle baseline.
[ { "created": "Mon, 11 Jul 2022 12:45:31 GMT", "version": "v1" } ]
2022-07-13
[ [ "Barbano", "Riccardo", "" ], [ "Leuschner", "Johannes", "" ], [ "Antorán", "Javier", "" ], [ "Jin", "Bangti", "" ], [ "Hernández-Lobato", "José Miguel", "" ] ]
We investigate adaptive design based on a single sparse pilot scan for generating effective scanning strategies for computed tomography reconstruction. We propose a novel approach using the linearised deep image prior. It allows incorporating information from the pilot measurements into the angle selection criteria, while maintaining the tractability of a conjugate Gaussian-linear model. On a synthetically generated dataset with preferential directions, linearised DIP design allows reducing the number of scans by up to 30% relative to an equidistant angle baseline.
1808.00616
Daniel L. Pimentel-Alarc\'on
Daniel L. Pimentel-Alarc\'on
Mixture Matrix Completion
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Completing a data matrix X has become a ubiquitous problem in modern data science, with applications in recommender systems, computer vision, and network inference, to name a few. One typical assumption is that X is low-rank. A more general model assumes that each column of X corresponds to one of several low-rank matrices. This paper generalizes these models to what we call mixture matrix completion (MMC): the case where each entry of X corresponds to one of several low-rank matrices. MMC is a more accurate model for recommender systems, and brings more flexibility to other completion and clustering problems. We make four fundamental contributions about this new model. First, we show that MMC is theoretically possible (well-posed). Second, we give its precise information-theoretic identifiability conditions. Third, we derive the sample complexity of MMC. Finally, we give a practical algorithm for MMC with performance comparable to the state-of-the-art for simpler related problems, both on synthetic and real data.
[ { "created": "Thu, 2 Aug 2018 01:09:24 GMT", "version": "v1" } ]
2018-08-03
[ [ "Pimentel-Alarcón", "Daniel L.", "" ] ]
Completing a data matrix X has become a ubiquitous problem in modern data science, with applications in recommender systems, computer vision, and network inference, to name a few. One typical assumption is that X is low-rank. A more general model assumes that each column of X corresponds to one of several low-rank matrices. This paper generalizes these models to what we call mixture matrix completion (MMC): the case where each entry of X corresponds to one of several low-rank matrices. MMC is a more accurate model for recommender systems, and brings more flexibility to other completion and clustering problems. We make four fundamental contributions about this new model. First, we show that MMC is theoretically possible (well-posed). Second, we give its precise information-theoretic identifiability conditions. Third, we derive the sample complexity of MMC. Finally, we give a practical algorithm for MMC with performance comparable to the state-of-the-art for simpler related problems, both on synthetic and real data.
2306.13418
Jongwook Si
Jongwook Si and Sungyoung Kim
PP-GAN : Style Transfer from Korean Portraits to ID Photos Using Landmark Extractor with GAN
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
The objective of style transfer is to maintain the content of an image while transferring the style of another image. However, conventional research on style transfer has a significant limitation in preserving facial landmarks, such as the eyes, nose, and mouth, which are crucial for maintaining the identity of the image. In Korean portraits, the majority of individuals wear a "Gat", a type of headdress exclusively worn by men. Owing to its characteristics, which are distinct from the hair in ID photos, transferring the "Gat" is challenging. To address this issue, this study proposes a deep learning network that can perform style transfer, including the "Gat", while preserving the identity of the face. Unlike existing style transfer approaches, the proposed method aims to preserve the texture, costume, and "Gat" of the style image. The Generative Adversarial Network forms the backbone of the proposed network. The color, texture, and intensity were extracted differently based on the characteristics of each block and layer of the pre-trained VGG-16, and only the elements necessary during training were preserved using a facial landmark mask. The head area was represented using the eyebrow area to transfer the "Gat". Furthermore, the identity of the face was retained, and style correlation was considered based on the Gram matrix. The proposed approach demonstrated superior transfer and preservation performance compared to previous studies.
[ { "created": "Fri, 23 Jun 2023 10:10:16 GMT", "version": "v1" } ]
2023-06-26
[ [ "Si", "Jongwook", "" ], [ "Kim", "Sungyoung", "" ] ]
The objective of style transfer is to maintain the content of an image while transferring the style of another image. However, conventional research on style transfer has a significant limitation in preserving facial landmarks, such as the eyes, nose, and mouth, which are crucial for maintaining the identity of the image. In Korean portraits, the majority of individuals wear a "Gat", a type of headdress exclusively worn by men. Owing to its characteristics, which are distinct from the hair in ID photos, transferring the "Gat" is challenging. To address this issue, this study proposes a deep learning network that can perform style transfer, including the "Gat", while preserving the identity of the face. Unlike existing style transfer approaches, the proposed method aims to preserve the texture, costume, and "Gat" of the style image. The Generative Adversarial Network forms the backbone of the proposed network. The color, texture, and intensity were extracted differently based on the characteristics of each block and layer of the pre-trained VGG-16, and only the elements necessary during training were preserved using a facial landmark mask. The head area was represented using the eyebrow area to transfer the "Gat". Furthermore, the identity of the face was retained, and style correlation was considered based on the Gram matrix. The proposed approach demonstrated superior transfer and preservation performance compared to previous studies.
2407.19904
Rub\'en Ruiz-Torrubiano
Rub\'en Ruiz-Torrubiano
Modeling Local Search Metaheuristics Using Markov Decision Processes
null
null
null
null
cs.NE
http://creativecommons.org/licenses/by/4.0/
Local search metaheuristics like tabu search or simulated annealing are popular heuristic optimization algorithms for finding near-optimal solutions for combinatorial optimization problems. However, it is still challenging for researchers and practitioners to analyze their behaviour and systematically choose one over a vast set of possible metaheuristics for the particular problem at hand. In this paper, we introduce a theoretical framework based on Markov Decision Processes (MDP) for analyzing local search metaheuristics. This framework not only helps in providing convergence results for individual algorithms, but also provides an explicit characterization of the exploration-exploitation tradeoff and theory-grounded guidance for practitioners in choosing an appropriate metaheuristic for the problem at hand. We present this framework in detail and show how to apply it in the cases of hill climbing and the simulated annealing algorithm.
[ { "created": "Mon, 29 Jul 2024 11:28:30 GMT", "version": "v1" } ]
2024-07-30
[ [ "Ruiz-Torrubiano", "Rubén", "" ] ]
Local search metaheuristics like tabu search or simulated annealing are popular heuristic optimization algorithms for finding near-optimal solutions for combinatorial optimization problems. However, it is still challenging for researchers and practitioners to analyze their behaviour and systematically choose one over a vast set of possible metaheuristics for the particular problem at hand. In this paper, we introduce a theoretical framework based on Markov Decision Processes (MDP) for analyzing local search metaheuristics. This framework not only helps in providing convergence results for individual algorithms, but also provides an explicit characterization of the exploration-exploitation tradeoff and theory-grounded guidance for practitioners in choosing an appropriate metaheuristic for the problem at hand. We present this framework in detail and show how to apply it in the cases of hill climbing and the simulated annealing algorithm.
2307.15433
Dimitri Korsch
Dimitri Korsch, Paul Bodesheim, Gunnar Brehm, Joachim Denzler
Automated Visual Monitoring of Nocturnal Insects with Light-based Camera Traps
Presented at the FGVC workshop at the CVPR2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic camera-assisted monitoring of insects for abundance estimation is crucial for understanding and counteracting the ongoing insect decline. In this paper, we present two datasets of nocturnal insects, especially moths as a subset of Lepidoptera, photographed in Central Europe. One of the datasets, the EU-Moths dataset, was captured manually by citizen scientists and contains species annotations for 200 different species, as well as bounding box annotations for them. We used this dataset to develop and evaluate a two-stage pipeline for insect detection and moth species classification in previous work. We further introduce a prototype for an automated visual monitoring system. This prototype produced the second dataset, consisting of more than 27,000 images captured on 95 nights. For evaluation and bootstrapping purposes, we annotated a subset of the images with bounding boxes enclosing nocturnal insects. Finally, we present first detection and classification baselines for these datasets and encourage other scientists to use this publicly available data.
[ { "created": "Fri, 28 Jul 2023 09:31:36 GMT", "version": "v1" } ]
2023-07-31
[ [ "Korsch", "Dimitri", "" ], [ "Bodesheim", "Paul", "" ], [ "Brehm", "Gunnar", "" ], [ "Denzler", "Joachim", "" ] ]
Automatic camera-assisted monitoring of insects for abundance estimation is crucial for understanding and counteracting the ongoing insect decline. In this paper, we present two datasets of nocturnal insects, especially moths as a subset of Lepidoptera, photographed in Central Europe. One of the datasets, the EU-Moths dataset, was captured manually by citizen scientists and contains species annotations for 200 different species, as well as bounding box annotations for them. We used this dataset to develop and evaluate a two-stage pipeline for insect detection and moth species classification in previous work. We further introduce a prototype for an automated visual monitoring system. This prototype produced the second dataset, consisting of more than 27,000 images captured on 95 nights. For evaluation and bootstrapping purposes, we annotated a subset of the images with bounding boxes enclosing nocturnal insects. Finally, we present first detection and classification baselines for these datasets and encourage other scientists to use this publicly available data.
2407.20648
JongWoo Kim
JongWoo Kim, SeongYeub Chu, HyeongMin Park, Bryan Wong, MunYong Yi
Leveraging Multi-facet Paths for Heterogeneous Graph Representation Learning
9pages
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advancements in graph neural networks (GNNs) and heterogeneous GNNs (HGNNs) have advanced node embeddings and relationship learning for various tasks. However, existing methods often rely on domain-specific predefined meta-paths, which are coarse-grained and focus solely on aspects like node type, limiting their ability to capture complex interactions. We introduce MF2Vec, a model that uses multi-faceted (fine-grained) paths instead of predefined meta-paths. MF2Vec extracts paths via random walks and generates multi-faceted vectors, ignoring predefined schemas. This method learns diverse aspects of nodes and their relationships, constructs a homogeneous network, and creates node embeddings for classification, link prediction, and clustering. Extensive experiments show that MF2Vec outperforms existing methods, offering a more flexible and comprehensive framework for analyzing complex networks. The code is available at https://anonymous.4open.science/r/MF2Vec-6ABC.
[ { "created": "Tue, 30 Jul 2024 08:45:32 GMT", "version": "v1" } ]
2024-07-31
[ [ "Kim", "JongWoo", "" ], [ "Chu", "SeongYeub", "" ], [ "Park", "HyeongMin", "" ], [ "Wong", "Bryan", "" ], [ "Yi", "MunYong", "" ] ]
Recent advancements in graph neural networks (GNNs) and heterogeneous GNNs (HGNNs) have advanced node embeddings and relationship learning for various tasks. However, existing methods often rely on domain-specific predefined meta-paths, which are coarse-grained and focus solely on aspects like node type, limiting their ability to capture complex interactions. We introduce MF2Vec, a model that uses multi-faceted (fine-grained) paths instead of predefined meta-paths. MF2Vec extracts paths via random walks and generates multi-faceted vectors, ignoring predefined schemas. This method learns diverse aspects of nodes and their relationships, constructs a homogeneous network, and creates node embeddings for classification, link prediction, and clustering. Extensive experiments show that MF2Vec outperforms existing methods, offering a more flexible and comprehensive framework for analyzing complex networks. The code is available at https://anonymous.4open.science/r/MF2Vec-6ABC.
2110.02707
Majid Rafiei
Andrew Pery, Majid Rafiei, Michael Simon, Wil M.P. van der Aalst
Trustworthy Artificial Intelligence and Process Mining: Challenges and Opportunities
null
null
null
null
cs.SE cs.AI
http://creativecommons.org/licenses/by/4.0/
The premise of this paper is that compliance with Trustworthy AI governance best practices and regulatory frameworks is an inherently fragmented process spanning diverse organizational units, external stakeholders, and systems of record, resulting in process uncertainties and in compliance gaps that may expose organizations to reputational and regulatory risks. Moreover, there are complexities associated with meeting the specific dimensions of Trustworthy AI best practices such as data governance, conformance testing, quality assurance of AI model behaviors, transparency, accountability, and confidentiality requirements. These processes involve multiple steps, hand-offs, re-works, and human-in-the-loop oversight. In this paper, we demonstrate that process mining can provide a useful framework for gaining fact-based visibility into AI compliance process execution, surfacing compliance bottlenecks, and providing an automated approach to analyzing, remediating, and monitoring uncertainty in AI regulatory compliance processes.
[ { "created": "Wed, 6 Oct 2021 12:50:47 GMT", "version": "v1" } ]
2021-10-07
[ [ "Pery", "Andrew", "" ], [ "Rafiei", "Majid", "" ], [ "Simon", "Michael", "" ], [ "van der Aalst", "Wil M. P.", "" ] ]
The premise of this paper is that compliance with Trustworthy AI governance best practices and regulatory frameworks is an inherently fragmented process spanning diverse organizational units, external stakeholders, and systems of record, resulting in process uncertainties and in compliance gaps that may expose organizations to reputational and regulatory risks. Moreover, there are complexities associated with meeting the specific dimensions of Trustworthy AI best practices such as data governance, conformance testing, quality assurance of AI model behaviors, transparency, accountability, and confidentiality requirements. These processes involve multiple steps, hand-offs, re-works, and human-in-the-loop oversight. In this paper, we demonstrate that process mining can provide a useful framework for gaining fact-based visibility into AI compliance process execution, surfacing compliance bottlenecks, and providing an automated approach to analyzing, remediating, and monitoring uncertainty in AI regulatory compliance processes.
2304.08630
Anran Hu
Xin Guo, Anran Hu, Matteo Santamaria, Mahan Tajrobehkar, Junzi Zhang
MFGLib: A Library for Mean-Field Games
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mean-field games (MFGs) are limiting models to approximate $N$-player games, with a number of applications. Despite the ever-growing numerical literature on computation of MFGs, there is no library that allows researchers and practitioners to easily create and solve their own MFG problems. The purpose of this document is to introduce MFGLib, an open-source Python library for solving general MFGs with a user-friendly and customizable interface. It serves as a handy tool for creating and analyzing generic MFG environments, along with embedded auto-tuners for all implemented algorithms. The package is distributed under the MIT license and the source code and documentation can be found at https://github.com/radar-research-lab/MFGLib/.
[ { "created": "Mon, 17 Apr 2023 21:54:22 GMT", "version": "v1" } ]
2023-04-19
[ [ "Guo", "Xin", "" ], [ "Hu", "Anran", "" ], [ "Santamaria", "Matteo", "" ], [ "Tajrobehkar", "Mahan", "" ], [ "Zhang", "Junzi", "" ] ]
Mean-field games (MFGs) are limiting models to approximate $N$-player games, with a number of applications. Despite the ever-growing numerical literature on computation of MFGs, there is no library that allows researchers and practitioners to easily create and solve their own MFG problems. The purpose of this document is to introduce MFGLib, an open-source Python library for solving general MFGs with a user-friendly and customizable interface. It serves as a handy tool for creating and analyzing generic MFG environments, along with embedded auto-tuners for all implemented algorithms. The package is distributed under the MIT license and the source code and documentation can be found at https://github.com/radar-research-lab/MFGLib/.
1102.1408
Zhenghao Zhang
Zhenghao Zhang, Shuping Gong, Husheng Li, Changxing Pei
Time Stamp Attack on Wide Area Monitoring System in Smart Grid
This paper has been withdrawn by the author due to a crucial sign error in derivation
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Security is an extremely important issue in the smart grid. To maintain steady operation of the smart power grid, a massive number of measurement devices must be deployed widely across the power grid. Previous studies have focused on false data injection attacks on the smart grid system. In practice, a false data injection attack is not easy to implement, since it is not easy to hack the power grid's data communication system. In this paper, we demonstrate that a novel time stamp attack is a practical and dangerous attack scheme for the smart grid. Since most measurement devices are equipped with global positioning system (GPS) receivers to provide the time information of measurements, it is highly probable that the measurement system can be attacked by spoofing the GPS. Employing real measurement data from the North American Power Grid, simulation results demonstrate the effectiveness of the time stamp attack on the smart grid.
[ { "created": "Mon, 7 Feb 2011 20:27:56 GMT", "version": "v1" }, { "created": "Wed, 7 Sep 2011 23:38:31 GMT", "version": "v2" } ]
2015-03-18
[ [ "Zhang", "Zhenghao", "" ], [ "Gong", "Shuping", "" ], [ "Li", "Husheng", "" ], [ "Pei", "Changxing", "" ] ]
Security is an extremely important issue in the smart grid. To maintain steady operation of the smart power grid, a massive number of measurement devices must be deployed widely across the power grid. Previous studies have focused on false data injection attacks on the smart grid system. In practice, a false data injection attack is not easy to implement, since it is not easy to hack the power grid's data communication system. In this paper, we demonstrate that a novel time stamp attack is a practical and dangerous attack scheme for the smart grid. Since most measurement devices are equipped with global positioning system (GPS) receivers to provide the time information of measurements, it is highly probable that the measurement system can be attacked by spoofing the GPS. Employing real measurement data from the North American Power Grid, simulation results demonstrate the effectiveness of the time stamp attack on the smart grid.
0905.2636
Lada A. Adamic
Xiaolin Shi, Belle Tseng, Lada A. Adamic
Information Diffusion in Computer Science Citation Networks
long version of poster published at ICWSM 2009
null
null
null
cs.DL cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper citation network is a traditional social medium for the exchange of ideas and knowledge. In this paper we view citation networks from the perspective of information diffusion. We study the structural features of the information paths through the citation networks of publications in computer science, and analyze the impact of various citation choices on the subsequent impact of the article. We find that citing recent papers and papers within the same scholarly community garners a slightly larger number of citations on average. However, this correlation is weaker among well-cited papers implying that for high impact work citing within one's field is of lesser importance. We also study differences in information flow for specific subsets of citation networks: books versus conference and journal articles, different areas of computer science, and different time periods.
[ { "created": "Fri, 15 May 2009 22:41:39 GMT", "version": "v1" } ]
2009-05-19
[ [ "Shi", "Xiaolin", "" ], [ "Tseng", "Belle", "" ], [ "Adamic", "Lada A.", "" ] ]
The paper citation network is a traditional social medium for the exchange of ideas and knowledge. In this paper we view citation networks from the perspective of information diffusion. We study the structural features of the information paths through the citation networks of publications in computer science, and analyze the impact of various citation choices on the subsequent impact of the article. We find that citing recent papers and papers within the same scholarly community garners a slightly larger number of citations on average. However, this correlation is weaker among well-cited papers implying that for high impact work citing within one's field is of lesser importance. We also study differences in information flow for specific subsets of citation networks: books versus conference and journal articles, different areas of computer science, and different time periods.
2111.15263
Byeonghu Na
Byeonghu Na, Yoonsik Kim, Sungrae Park
Multi-modal Text Recognition Networks: Interactive Enhancements between Visual and Semantic Features
Accepted for publication at ECCV 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Linguistic knowledge has brought great benefits to scene text recognition by providing semantics to refine character sequences. However, since linguistic knowledge has been applied individually on the output sequence, previous methods have not fully utilized the semantics to understand visual clues for text recognition. This paper introduces a novel method, called Multi-modAl Text Recognition Network (MATRN), that enables interactions between visual and semantic features for better recognition performance. Specifically, MATRN identifies visual and semantic feature pairs and encodes spatial information into semantic features. Based on the spatial encoding, visual and semantic features are enhanced by referring to related features in the other modality. Furthermore, MATRN stimulates combining semantic features into visual features by hiding visual clues related to the character in the training phase. Our experiments demonstrate that MATRN achieves state-of-the-art performance on seven benchmarks by large margins, while naive combinations of the two modalities show less effective improvements. Further ablative studies prove the effectiveness of our proposed components. Our implementation is available at https://github.com/wp03052/MATRN.
[ { "created": "Tue, 30 Nov 2021 10:22:11 GMT", "version": "v1" }, { "created": "Sat, 22 Jan 2022 13:01:48 GMT", "version": "v2" }, { "created": "Sat, 13 Aug 2022 17:50:20 GMT", "version": "v3" } ]
2022-08-16
[ [ "Na", "Byeonghu", "" ], [ "Kim", "Yoonsik", "" ], [ "Park", "Sungrae", "" ] ]
Linguistic knowledge has brought great benefits to scene text recognition by providing semantics to refine character sequences. However, since linguistic knowledge has been applied individually on the output sequence, previous methods have not fully utilized the semantics to understand visual clues for text recognition. This paper introduces a novel method, called Multi-modAl Text Recognition Network (MATRN), that enables interactions between visual and semantic features for better recognition performance. Specifically, MATRN identifies visual and semantic feature pairs and encodes spatial information into semantic features. Based on the spatial encoding, visual and semantic features are enhanced by referring to related features in the other modality. Furthermore, MATRN stimulates combining semantic features into visual features by hiding visual clues related to the character in the training phase. Our experiments demonstrate that MATRN achieves state-of-the-art performance on seven benchmarks by large margins, while naive combinations of the two modalities show less effective improvements. Further ablative studies prove the effectiveness of our proposed components. Our implementation is available at https://github.com/wp03052/MATRN.
1701.03515
Fariborz Salehi
Fariborz Salehi, Kishore Jaganathan, Babak Hassibi
Multiple Illumination Phaseless Super-Resolution (MIPS) with Applications To Phaseless DOA Estimation and Diffraction Imaging
To appear in ICASSP 2017
null
null
null
cs.IT math.IT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phaseless super-resolution is the problem of recovering an unknown signal from measurements of the magnitudes of the low frequency Fourier transform of the signal. This problem arises in applications where measuring the phase, and making high-frequency measurements, are either too costly or altogether infeasible. The problem is especially challenging because it combines the difficult problems of phase retrieval and classical super-resolution.
[ { "created": "Thu, 12 Jan 2017 21:44:12 GMT", "version": "v1" } ]
2017-01-16
[ [ "Salehi", "Fariborz", "" ], [ "Jaganathan", "Kishore", "" ], [ "Hassibi", "Babak", "" ] ]
Phaseless super-resolution is the problem of recovering an unknown signal from measurements of the magnitudes of the low frequency Fourier transform of the signal. This problem arises in applications where measuring the phase, and making high-frequency measurements, are either too costly or altogether infeasible. The problem is especially challenging because it combines the difficult problems of phase retrieval and classical super-resolution.
2404.10534
Nadezda Kirillova
Nadezda Kirillova, M. Jehanzeb Mirza, Horst Possegger, Horst Bischof
Into the Fog: Evaluating Multiple Object Tracking Robustness
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
State-of-the-art (SOTA) trackers have shown remarkable Multiple Object Tracking (MOT) performance when trained and evaluated on current benchmarks. However, these benchmarks primarily consist of clear scenarios, overlooking adverse atmospheric conditions such as fog, haze, smoke and dust. As a result, the robustness of SOTA trackers remains underexplored. To address these limitations, we propose a pipeline for physics-based volumetric fog simulation in arbitrary real-world MOT datasets, utilizing frame-by-frame monocular depth estimation and a fog formation optical model. Moreover, we enhance our simulation by rendering both homogeneous and heterogeneous fog effects. We propose to use the dark channel prior method to estimate fog (smoke) color, which shows promising results even in night and indoor scenes. We present the leading tracking benchmark MOTChallenge (MOT17 dataset) overlaid by fog (smoke for indoor scenes) of various intensity levels and conduct a comprehensive evaluation of SOTA MOT methods, revealing their limitations under fog and fog-similar challenges.
[ { "created": "Fri, 12 Apr 2024 21:41:50 GMT", "version": "v1" } ]
2024-04-17
[ [ "Kirillova", "Nadezda", "" ], [ "Mirza", "M. Jehanzeb", "" ], [ "Possegger", "Horst", "" ], [ "Bischof", "Horst", "" ] ]
State-of-the-art (SOTA) trackers have shown remarkable Multiple Object Tracking (MOT) performance when trained and evaluated on current benchmarks. However, these benchmarks primarily consist of clear scenarios, overlooking adverse atmospheric conditions such as fog, haze, smoke and dust. As a result, the robustness of SOTA trackers remains underexplored. To address these limitations, we propose a pipeline for physics-based volumetric fog simulation in arbitrary real-world MOT datasets, utilizing frame-by-frame monocular depth estimation and a fog formation optical model. Moreover, we enhance our simulation by rendering both homogeneous and heterogeneous fog effects. We propose to use the dark channel prior method to estimate fog (smoke) color, which shows promising results even in night and indoor scenes. We present the leading tracking benchmark MOTChallenge (MOT17 dataset) overlaid by fog (smoke for indoor scenes) of various intensity levels and conduct a comprehensive evaluation of SOTA MOT methods, revealing their limitations under fog and fog-similar challenges.
1904.13079
Qi Wang
Yuan Yuan and Dong Wang and Qi Wang
Anomaly Detection in Traffic Scenes via Spatial-aware Motion Reconstruction
IEEE Transactions on Intelligent Transportation Systems
null
10.1109/TITS.2016.2601655
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anomaly detection from a driver's perspective when driving is important to autonomous vehicles. As a part of Advanced Driver Assistance Systems (ADAS), it can remind the driver about dangers in a timely manner. Compared with traditionally studied scenes such as university campuses and market surveillance videos, it is difficult to detect abnormal events from a driver's perspective due to camera shake, a constantly moving background, drastic changes in vehicle velocity, etc. To tackle these specific problems, this paper proposes a spatial localization constrained sparse coding approach for anomaly detection in traffic scenes, which first measures the abnormality of motion orientation and magnitude respectively and then fuses these two aspects to obtain a robust detection result. The main contributions are threefold: 1) This work describes the motion orientation and magnitude of the object respectively in a new way, which is demonstrated to be better than traditional motion descriptors. 2) The spatial localization of the object is taken into account in the sparse reconstruction framework, which utilizes the scene's structural information and outperforms conventional sparse coding methods. 3) Results of motion orientation and magnitude are adaptively weighted and fused by a Bayesian model, which makes the proposed method more robust and able to handle more kinds of abnormal events. The efficiency and effectiveness of the proposed method are validated by testing on nine difficult video sequences captured by ourselves. As observed from the experimental results, the proposed method is more effective and efficient than the popular competitors, and yields higher performance.
[ { "created": "Tue, 30 Apr 2019 07:14:03 GMT", "version": "v1" } ]
2019-05-01
[ [ "Yuan", "Yuan", "" ], [ "Wang", "Dong", "" ], [ "Wang", "Qi", "" ] ]
Anomaly detection from a driver's perspective when driving is important to autonomous vehicles. As a part of Advanced Driver Assistance Systems (ADAS), it can remind the driver about dangers in a timely manner. Compared with traditionally studied scenes such as university campuses and market surveillance videos, it is difficult to detect abnormal events from a driver's perspective due to camera shake, a constantly moving background, drastic changes in vehicle velocity, etc. To tackle these specific problems, this paper proposes a spatial localization constrained sparse coding approach for anomaly detection in traffic scenes, which first measures the abnormality of motion orientation and magnitude respectively and then fuses these two aspects to obtain a robust detection result. The main contributions are threefold: 1) This work describes the motion orientation and magnitude of the object respectively in a new way, which is demonstrated to be better than traditional motion descriptors. 2) The spatial localization of the object is taken into account in the sparse reconstruction framework, which utilizes the scene's structural information and outperforms conventional sparse coding methods. 3) Results of motion orientation and magnitude are adaptively weighted and fused by a Bayesian model, which makes the proposed method more robust and able to handle more kinds of abnormal events. The efficiency and effectiveness of the proposed method are validated by testing on nine difficult video sequences captured by ourselves. As observed from the experimental results, the proposed method is more effective and efficient than the popular competitors, and yields higher performance.
1704.00801
Luiz Capretz Dr.
Pradeep Waychal and Luiz Fernando Capretz
Need for a Soft Dimension
3rd International Conference on Software Engineering, Geneva, Switzerland, March 2017
null
10.5121/csit.2017.70414
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is impossible to separate the human factors from software engineering expertise during software development, because software is developed by people and for people. The intangible nature of software has made it a difficult product to successfully create, and an examination of the many reasons for major software system failures shows that the reasons for failure eventually come down to human issues. Software developers, immersed as they are in the technological aspect of the product, can quickly learn lessons from technological failures and readily come up with solutions to avoid them in the future, yet they do not learn lessons from human aspects in software engineering. Dealing with human errors is much more difficult for developers, and often this aspect is overlooked in the evaluation process as developers move on to issues that they are more comfortable solving. A major reason for this oversight is that software psychology (the softer side) has not developed as extensively.
[ { "created": "Mon, 3 Apr 2017 20:38:25 GMT", "version": "v1" } ]
2017-04-05
[ [ "Waychal", "Pradeep", "" ], [ "Capretz", "Luiz Fernando", "" ] ]
It is impossible to separate the human factors from software engineering expertise during software development, because software is developed by people and for people. The intangible nature of software has made it a difficult product to successfully create, and an examination of the many reasons for major software system failures shows that the reasons for failure eventually come down to human issues. Software developers, immersed as they are in the technological aspect of the product, can quickly learn lessons from technological failures and readily come up with solutions to avoid them in the future, yet they do not learn lessons from human aspects in software engineering. Dealing with human errors is much more difficult for developers, and often this aspect is overlooked in the evaluation process as developers move on to issues that they are more comfortable solving. A major reason for this oversight is that software psychology (the softer side) has not developed as extensively.
1910.12172
Dhruv Rohatgi
Dhruv Rohatgi
Near-Optimal Bounds for Online Caching with Machine Learned Advice
18 pages; accepted to SODA'20; added references and acknowledgments
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
In the model of online caching with machine learned advice, introduced by Lykouris and Vassilvitskii, the goal is to solve the caching problem with an online algorithm that has access to next-arrival predictions: when each input element arrives, the algorithm is given a prediction of the next time when the element will reappear. The traditional model for online caching suffers from an $\Omega(\log k)$ competitive ratio lower bound (on a cache of size $k$). In contrast, the augmented model admits algorithms which beat this lower bound when the predictions have low error, and asymptotically match the lower bound when the predictions have high error, even if the algorithms are oblivious to the prediction error. In particular, Lykouris and Vassilvitskii showed that there is a prediction-augmented caching algorithm with a competitive ratio of $O(1+\min(\sqrt{\eta/OPT}, \log k))$ when the overall $\ell_1$ prediction error is bounded by $\eta$, and $OPT$ is the cost of the optimal offline algorithm. The dependence on $k$ in the competitive ratio is optimal, but the dependence on $\eta/OPT$ may be far from optimal. In this work, we make progress towards closing this gap. Our contributions are twofold. First, we provide an improved algorithm with a competitive ratio of $O(1 + \min((\eta/OPT)/k, 1) \log k)$. Second, we provide a lower bound of $\Omega(\log \min((\eta/OPT)/(k \log k), k))$.
[ { "created": "Sun, 27 Oct 2019 03:38:16 GMT", "version": "v1" }, { "created": "Tue, 29 Oct 2019 20:11:34 GMT", "version": "v2" } ]
2019-10-31
[ [ "Rohatgi", "Dhruv", "" ] ]
In the model of online caching with machine learned advice, introduced by Lykouris and Vassilvitskii, the goal is to solve the caching problem with an online algorithm that has access to next-arrival predictions: when each input element arrives, the algorithm is given a prediction of the next time when the element will reappear. The traditional model for online caching suffers from an $\Omega(\log k)$ competitive ratio lower bound (on a cache of size $k$). In contrast, the augmented model admits algorithms which beat this lower bound when the predictions have low error, and asymptotically match the lower bound when the predictions have high error, even if the algorithms are oblivious to the prediction error. In particular, Lykouris and Vassilvitskii showed that there is a prediction-augmented caching algorithm with a competitive ratio of $O(1+\min(\sqrt{\eta/OPT}, \log k))$ when the overall $\ell_1$ prediction error is bounded by $\eta$, and $OPT$ is the cost of the optimal offline algorithm. The dependence on $k$ in the competitive ratio is optimal, but the dependence on $\eta/OPT$ may be far from optimal. In this work, we make progress towards closing this gap. Our contributions are twofold. First, we provide an improved algorithm with a competitive ratio of $O(1 + \min((\eta/OPT)/k, 1) \log k)$. Second, we provide a lower bound of $\Omega(\log \min((\eta/OPT)/(k \log k), k))$.
2401.08634
Xueyuan Wang
Xueyuan Wang and M. Cenk Gursoy
Resilient Path Planning for UAVs in Data Collection under Adversarial Attacks
The final version of this paper has been accepted in IEEE Transactions on Information Forensics and Security
vol. 18, pp. 2766-2779, 2023
10.1109/TIFS.2023.3266699
null
cs.NI eess.SP
http://creativecommons.org/licenses/by/4.0/
In this paper, we investigate jamming-resilient UAV path planning strategies for data collection in Internet of Things (IoT) networks, in which the typical UAV can learn the optimal trajectory to elude such jamming attacks. Specifically, the typical UAV is required to collect data from multiple distributed IoT nodes under collision avoidance, mission completion deadline, and kinematic constraints in the presence of jamming attacks. We first design a fixed ground jammer with continuous jamming attack and periodical jamming attack strategies to jam the link between the typical UAV and IoT nodes. Defensive strategies involving a reinforcement learning (RL) based virtual jammer and the adoption of higher SINR thresholds are proposed to counteract such attacks. Secondly, we design an intelligent UAV jammer, which utilizes the RL algorithm to choose actions based on its observation. Then, an intelligent UAV anti-jamming strategy is constructed to deal with such attacks, and the optimal trajectory of the typical UAV is obtained via dueling double deep Q-network (D3QN). Simulation results show that both non-intelligent and intelligent jamming attacks have a significant influence on the UAV's performance, and the proposed defense strategies can recover the performance close to that in no-jammer scenarios.
[ { "created": "Mon, 11 Dec 2023 09:28:28 GMT", "version": "v1" } ]
2024-01-18
[ [ "Wang", "Xueyuan", "" ], [ "Gursoy", "M. Cenk", "" ] ]
In this paper, we investigate jamming-resilient UAV path planning strategies for data collection in Internet of Things (IoT) networks, in which the typical UAV can learn the optimal trajectory to elude such jamming attacks. Specifically, the typical UAV is required to collect data from multiple distributed IoT nodes under collision avoidance, mission completion deadline, and kinematic constraints in the presence of jamming attacks. We first design a fixed ground jammer with continuous jamming attack and periodical jamming attack strategies to jam the link between the typical UAV and IoT nodes. Defensive strategies involving a reinforcement learning (RL) based virtual jammer and the adoption of higher SINR thresholds are proposed to counteract such attacks. Secondly, we design an intelligent UAV jammer, which utilizes the RL algorithm to choose actions based on its observation. Then, an intelligent UAV anti-jamming strategy is constructed to deal with such attacks, and the optimal trajectory of the typical UAV is obtained via dueling double deep Q-network (D3QN). Simulation results show that both non-intelligent and intelligent jamming attacks have a significant influence on the UAV's performance, and the proposed defense strategies can recover the performance close to that in no-jammer scenarios.
2010.11352
Alessandro Lameiras Koerich
Mohammad Esmaeilpour, Patrick Cardinal, Alessandro Lameiras Koerich
Class-Conditional Defense GAN Against End-to-End Speech Attacks
5 pages
46th IEEE International Conference on Acoustics, Speech, & Signal Processing (ICASSP), 2021
null
null
cs.SD cs.CR cs.CV cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a novel defense approach against end-to-end adversarial attacks developed to fool advanced speech-to-text systems such as DeepSpeech and Lingvo. Unlike conventional defense approaches, the proposed approach does not directly employ low-level transformations such as autoencoding a given input signal aiming at removing potential adversarial perturbation. Instead, we find an optimal input vector for a class conditional generative adversarial network through minimizing the relative chordal distance adjustment between a given test input and the generator network. Then, we reconstruct the 1D signal from the synthesized spectrogram and the original phase information derived from the given input signal. Hence, this reconstruction does not add any extra noise to the signal, and according to our experimental results, our defense-GAN considerably outperforms conventional defense algorithms in terms of both word error rate and sentence-level recognition accuracy.
[ { "created": "Thu, 22 Oct 2020 00:02:02 GMT", "version": "v1" }, { "created": "Sat, 20 Feb 2021 02:51:55 GMT", "version": "v2" } ]
2021-02-23
[ [ "Esmaeilpour", "Mohammad", "" ], [ "Cardinal", "Patrick", "" ], [ "Koerich", "Alessandro Lameiras", "" ] ]
In this paper we propose a novel defense approach against end-to-end adversarial attacks developed to fool advanced speech-to-text systems such as DeepSpeech and Lingvo. Unlike conventional defense approaches, the proposed approach does not directly employ low-level transformations such as autoencoding a given input signal aiming at removing potential adversarial perturbation. Instead, we find an optimal input vector for a class conditional generative adversarial network through minimizing the relative chordal distance adjustment between a given test input and the generator network. Then, we reconstruct the 1D signal from the synthesized spectrogram and the original phase information derived from the given input signal. Hence, this reconstruction does not add any extra noise to the signal, and according to our experimental results, our defense-GAN considerably outperforms conventional defense algorithms in terms of both word error rate and sentence-level recognition accuracy.
2312.11785
Zhangdie Yuan
Zhangdie Yuan and Andreas Vlachos
Zero-Shot Fact-Checking with Semantic Triples and Knowledge Graphs
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite progress in automated fact-checking, most systems require a significant amount of labeled training data, which is expensive. In this paper, we propose a novel zero-shot method, which instead of operating directly on the claim and evidence sentences, decomposes them into semantic triples augmented using external knowledge graphs, and uses large language models trained for natural language inference. This allows it to generalize to adversarial datasets and domains that supervised models require specific training data for. Our empirical results show that our approach outperforms previous zero-shot approaches on FEVER, FEVER-Symmetric, FEVER 2.0, and Climate-FEVER, while being comparable or better than supervised models on the adversarial and the out-of-domain datasets.
[ { "created": "Tue, 19 Dec 2023 01:48:31 GMT", "version": "v1" } ]
2023-12-20
[ [ "Yuan", "Zhangdie", "" ], [ "Vlachos", "Andreas", "" ] ]
Despite progress in automated fact-checking, most systems require a significant amount of labeled training data, which is expensive. In this paper, we propose a novel zero-shot method, which instead of operating directly on the claim and evidence sentences, decomposes them into semantic triples augmented using external knowledge graphs, and uses large language models trained for natural language inference. This allows it to generalize to adversarial datasets and domains that supervised models require specific training data for. Our empirical results show that our approach outperforms previous zero-shot approaches on FEVER, FEVER-Symmetric, FEVER 2.0, and Climate-FEVER, while being comparable or better than supervised models on the adversarial and the out-of-domain datasets.
2101.06802
Liu Yang
Liu Yang, Tingwei Meng, George Em Karniadakis
Measure-conditional Discriminator with Stationary Optimum for GANs and Statistical Distance Surrogates
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
We propose a simple but effective modification of the discriminators, namely measure-conditional discriminators, as a plug-and-play module for different GANs. By taking the generated distributions as part of input so that the target optimum for the discriminator is stationary, the proposed discriminator is more robust than the vanilla one. A variant of the measure-conditional discriminator can also handle multiple target distributions, or act as a surrogate model of statistical distances such as KL divergence with applications to transfer learning.
[ { "created": "Sun, 17 Jan 2021 23:18:10 GMT", "version": "v1" } ]
2021-01-19
[ [ "Yang", "Liu", "" ], [ "Meng", "Tingwei", "" ], [ "Karniadakis", "George Em", "" ] ]
We propose a simple but effective modification of the discriminators, namely measure-conditional discriminators, as a plug-and-play module for different GANs. By taking the generated distributions as part of input so that the target optimum for the discriminator is stationary, the proposed discriminator is more robust than the vanilla one. A variant of the measure-conditional discriminator can also handle multiple target distributions, or act as a surrogate model of statistical distances such as KL divergence with applications to transfer learning.
1906.04980
Patrick Lewis
Patrick Lewis, Ludovic Denoyer, Sebastian Riedel
Unsupervised Question Answering by Cloze Translation
To appear in ACL 2019
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019
10.18653/v1/P19-1484
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Obtaining training data for Question Answering (QA) is time-consuming and resource-intensive, and existing QA datasets are only available for limited domains and languages. In this work, we explore to what extent high quality training data is actually required for Extractive QA, and investigate the possibility of unsupervised Extractive QA. We approach this problem by first learning to generate context, question and answer triples in an unsupervised manner, which we then use to synthesize Extractive QA training data automatically. To generate such triples, we first sample random context paragraphs from a large corpus of documents and then random noun phrases or named entity mentions from these paragraphs as answers. Next we convert answers in context to "fill-in-the-blank" cloze questions and finally translate them into natural questions. We propose and compare various unsupervised ways to perform cloze-to-natural question translation, including training an unsupervised NMT model using non-aligned corpora of natural questions and cloze questions as well as a rule-based approach. We find that modern QA models can learn to answer human questions surprisingly well using only synthetic training data. We demonstrate that, without using the SQuAD training data at all, our approach achieves 56.4 F1 on SQuAD v1 (64.5 F1 when the answer is a Named entity mention), outperforming early supervised models.
[ { "created": "Wed, 12 Jun 2019 07:30:32 GMT", "version": "v1" }, { "created": "Thu, 27 Jun 2019 09:43:46 GMT", "version": "v2" } ]
2020-05-05
[ [ "Lewis", "Patrick", "" ], [ "Denoyer", "Ludovic", "" ], [ "Riedel", "Sebastian", "" ] ]
Obtaining training data for Question Answering (QA) is time-consuming and resource-intensive, and existing QA datasets are only available for limited domains and languages. In this work, we explore to what extent high quality training data is actually required for Extractive QA, and investigate the possibility of unsupervised Extractive QA. We approach this problem by first learning to generate context, question and answer triples in an unsupervised manner, which we then use to synthesize Extractive QA training data automatically. To generate such triples, we first sample random context paragraphs from a large corpus of documents and then random noun phrases or named entity mentions from these paragraphs as answers. Next we convert answers in context to "fill-in-the-blank" cloze questions and finally translate them into natural questions. We propose and compare various unsupervised ways to perform cloze-to-natural question translation, including training an unsupervised NMT model using non-aligned corpora of natural questions and cloze questions as well as a rule-based approach. We find that modern QA models can learn to answer human questions surprisingly well using only synthetic training data. We demonstrate that, without using the SQuAD training data at all, our approach achieves 56.4 F1 on SQuAD v1 (64.5 F1 when the answer is a Named entity mention), outperforming early supervised models.
0904.2027
Jelani Nelson
Jelani Nelson, David P. Woodruff
A Near-Optimal Algorithm for L1-Difference
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give the first L_1-sketching algorithm for integer vectors which produces nearly optimal sized sketches in nearly linear time. This answers the first open problem in the list of open problems from the 2006 IITK Workshop on Algorithms for Data Streams. Specifically, suppose Alice receives a vector x in {-M,...,M}^n and Bob receives y in {-M,...,M}^n, and the two parties share randomness. Each party must output a short sketch of their vector such that a third party can later quickly recover a (1 +/- eps)-approximation to ||x-y||_1 with 2/3 probability given only the sketches. We give a sketching algorithm which produces O(eps^{-2}log(1/eps)log(nM))-bit sketches in O(n*log^2(nM)) time, independent of eps. The previous best known sketching algorithm for L_1 is due to [Feigenbaum et al., SICOMP 2002], which achieved the optimal sketch length of O(eps^{-2}log(nM)) bits but had a running time of O(n*log(nM)/eps^2). Notice that our running time is near-linear for every eps, whereas for sufficiently small values of eps, the running time of the previous algorithm can be as large as quadratic. Like their algorithm, our sketching procedure also yields a small-space, one-pass streaming algorithm which works even if the entries of x,y are given in arbitrary order.
[ { "created": "Mon, 13 Apr 2009 22:54:26 GMT", "version": "v1" } ]
2009-04-15
[ [ "Nelson", "Jelani", "" ], [ "Woodruff", "David P.", "" ] ]
We give the first L_1-sketching algorithm for integer vectors which produces nearly optimal sized sketches in nearly linear time. This answers the first open problem in the list of open problems from the 2006 IITK Workshop on Algorithms for Data Streams. Specifically, suppose Alice receives a vector x in {-M,...,M}^n and Bob receives y in {-M,...,M}^n, and the two parties share randomness. Each party must output a short sketch of their vector such that a third party can later quickly recover a (1 +/- eps)-approximation to ||x-y||_1 with 2/3 probability given only the sketches. We give a sketching algorithm which produces O(eps^{-2}log(1/eps)log(nM))-bit sketches in O(n*log^2(nM)) time, independent of eps. The previous best known sketching algorithm for L_1 is due to [Feigenbaum et al., SICOMP 2002], which achieved the optimal sketch length of O(eps^{-2}log(nM)) bits but had a running time of O(n*log(nM)/eps^2). Notice that our running time is near-linear for every eps, whereas for sufficiently small values of eps, the running time of the previous algorithm can be as large as quadratic. Like their algorithm, our sketching procedure also yields a small-space, one-pass streaming algorithm which works even if the entries of x,y are given in arbitrary order.
1801.07546
John Warwicker
Andrei Lissovoi, Pietro S. Oliveto, John Alasdair Warwicker
Simple Hyper-heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes
This work is accepted in Evolutionary Computation Journal. Abstract shortened for ArXiv
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this paper we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the simplest HHs from the literature and rigorously analyse their performance for the LeadingOnes function. Our analysis shows that the standard Simple Random, Permutation, Greedy and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the simple Random Gradient HH so that success can be measured over a fixed period of time tau, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search (RLS) to optimality during the run. We prove it has the best possible performance achievable with the low-level heuristics. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. Finally, we show that the advantages of GRG over RLS and evolutionary algorithms using standard bit mutation increase if the anytime performance is considered. Experimental analyses confirm these results for different problem sizes.
[ { "created": "Tue, 23 Jan 2018 14:13:53 GMT", "version": "v1" }, { "created": "Thu, 15 Nov 2018 17:49:26 GMT", "version": "v2" }, { "created": "Thu, 6 Dec 2018 11:13:13 GMT", "version": "v3" }, { "created": "Fri, 3 May 2019 12:45:51 GMT", "version": "v4" }, { "created": "Tue, 14 May 2019 14:01:33 GMT", "version": "v5" }, { "created": "Wed, 15 May 2019 10:43:08 GMT", "version": "v6" } ]
2019-05-16
[ [ "Lissovoi", "Andrei", "" ], [ "Oliveto", "Pietro S.", "" ], [ "Warwicker", "John Alasdair", "" ] ]
Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this paper we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the simplest HHs from the literature and rigorously analyse their performance for the LeadingOnes function. Our analysis shows that the standard Simple Random, Permutation, Greedy and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the simple Random Gradient HH so that success can be measured over a fixed period of time tau, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search (RLS) to optimality during the run. We prove it has the best possible performance achievable with the low-level heuristics. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. Finally, we show that the advantages of GRG over RLS and evolutionary algorithms using standard bit mutation increase if the anytime performance is considered. Experimental analyses confirm these results for different problem sizes.
2107.07268
Jing Yi
Jing Yi and Yaochen Zhu and Jiayi Xie and Zhenzhong Chen
Cross-modal Variational Auto-encoder for Content-based Micro-video Background Music Recommendation
null
null
10.1109/TMM.2021.3128254
null
cs.MM cs.IR
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose a cross-modal variational auto-encoder (CMVAE) for content-based micro-video background music recommendation. CMVAE is a hierarchical Bayesian generative model that matches relevant background music to a micro-video by projecting these two multimodal inputs into a shared low-dimensional latent space, where the alignment of the two corresponding embeddings of a matched video-music pair is achieved by cross-generation. Moreover, the multimodal information is fused by the product-of-experts (PoE) principle, where the semantic information in the visual and textual modalities of the micro-video is weighted according to its variance estimation such that the modality with a lower noise level is given more weight. Therefore, the micro-video latent variables contain less irrelevant information, which results in more robust model generalization. Furthermore, we establish a large-scale content-based micro-video background music recommendation dataset, TT-150k, composed of approximately 3,000 different background music clips associated with 150,000 micro-videos from different users. Extensive experiments on the established TT-150k dataset demonstrate the effectiveness of the proposed method. A qualitative assessment of CMVAE by visualizing some recommendation results is also included.
[ { "created": "Thu, 15 Jul 2021 11:47:43 GMT", "version": "v1" }, { "created": "Sun, 11 Dec 2022 15:07:42 GMT", "version": "v2" } ]
2022-12-13
[ [ "Yi", "Jing", "" ], [ "Zhu", "Yaochen", "" ], [ "Xie", "Jiayi", "" ], [ "Chen", "Zhenzhong", "" ] ]
In this paper, we propose a cross-modal variational auto-encoder (CMVAE) for content-based micro-video background music recommendation. CMVAE is a hierarchical Bayesian generative model that matches relevant background music to a micro-video by projecting these two multimodal inputs into a shared low-dimensional latent space, where the alignment of the two corresponding embeddings of a matched video-music pair is achieved by cross-generation. Moreover, the multimodal information is fused by the product-of-experts (PoE) principle, where the semantic information in the visual and textual modalities of the micro-video is weighted according to its variance estimation such that the modality with a lower noise level is given more weight. Therefore, the micro-video latent variables contain less irrelevant information, which results in more robust model generalization. Furthermore, we establish a large-scale content-based micro-video background music recommendation dataset, TT-150k, composed of approximately 3,000 different background music clips associated with 150,000 micro-videos from different users. Extensive experiments on the established TT-150k dataset demonstrate the effectiveness of the proposed method. A qualitative assessment of CMVAE by visualizing some recommendation results is also included.
2402.01166
Jian Liu
Jian Liu, Xiaoshui Huang, Tianyu Huang, Lu Chen, Yuenan Hou, Shixiang Tang, Ziwei Liu, Wanli Ouyang, Wangmeng Zuo, Junjun Jiang, Xianming Liu
A Comprehensive Survey on 3D Content Generation
under review
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have witnessed remarkable advances in artificial intelligence generated content (AIGC), with diverse input modalities, e.g., text, image, video, audio and 3D. 3D is the visual modality closest to the real-world 3D environment and carries enormous knowledge. 3D content generation has both academic and practical value while also presenting formidable technical challenges. This review aims to consolidate developments within the burgeoning domain of 3D content generation. Specifically, a new taxonomy is proposed that categorizes existing approaches into three types: 3D native generative methods, 2D prior-based 3D generative methods, and hybrid 3D generative methods. The survey covers approximately 60 papers spanning the major techniques. Besides, we discuss the limitations of current 3D content generation techniques and point out open challenges as well as promising directions for future work. Accompanying this survey, we have established a project website where resources on 3D content generation research are provided. The project page is available at https://github.com/hitcslj/Awesome-AIGC-3D.
[ { "created": "Fri, 2 Feb 2024 06:20:44 GMT", "version": "v1" }, { "created": "Tue, 19 Mar 2024 08:22:42 GMT", "version": "v2" } ]
2024-03-20
[ [ "Liu", "Jian", "" ], [ "Huang", "Xiaoshui", "" ], [ "Huang", "Tianyu", "" ], [ "Chen", "Lu", "" ], [ "Hou", "Yuenan", "" ], [ "Tang", "Shixiang", "" ], [ "Liu", "Ziwei", "" ], [ "Ouyang", "Wanli", "" ], [ "Zuo", "Wangmeng", "" ], [ "Jiang", "Junjun", "" ], [ "Liu", "Xianming", "" ] ]
Recent years have witnessed remarkable advances in artificial intelligence generated content (AIGC), with diverse input modalities, e.g., text, image, video, audio and 3D. 3D is the visual modality closest to the real-world 3D environment and carries enormous knowledge. 3D content generation has both academic and practical value while also presenting formidable technical challenges. This review aims to consolidate developments within the burgeoning domain of 3D content generation. Specifically, a new taxonomy is proposed that categorizes existing approaches into three types: 3D native generative methods, 2D prior-based 3D generative methods, and hybrid 3D generative methods. The survey covers approximately 60 papers spanning the major techniques. Besides, we discuss the limitations of current 3D content generation techniques and point out open challenges as well as promising directions for future work. Accompanying this survey, we have established a project website where resources on 3D content generation research are provided. The project page is available at https://github.com/hitcslj/Awesome-AIGC-3D.
1701.07208
Lars Rohwedder
Klaus Jansen and Lars Rohwedder
A Quasi-Polynomial Approximation for the Restricted Assignment Problem
This article is an extended joint version of conference articles "On the configuration-LP of the restricted assignment problem" [Jansen, Rohwedder SODA'17] and "A quasi-polynomial approximation for the restricted assignment problem" [Jansen, Rohwedder IPCO'17]
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Restricted Assignment Problem is a prominent special case of Scheduling on Parallel Unrelated Machines. For the strongest known linear programming relaxation, the configuration LP, we improve the non-constructive bound on its integrality gap from 1.9142 to 1.8334 and significantly simplify the proof. Then we give a constructive variant, yielding a 1.8334-approximation in quasi-polynomial time. This is the first quasi-polynomial algorithm for this problem improving on the long-standing approximation rate of 2.
[ { "created": "Wed, 25 Jan 2017 08:55:14 GMT", "version": "v1" }, { "created": "Tue, 20 Aug 2019 11:04:42 GMT", "version": "v2" } ]
2019-08-21
[ [ "Jansen", "Klaus", "" ], [ "Rohwedder", "Lars", "" ] ]
The Restricted Assignment Problem is a prominent special case of Scheduling on Parallel Unrelated Machines. For the strongest known linear programming relaxation, the configuration LP, we improve the non-constructive bound on its integrality gap from 1.9142 to 1.8334 and significantly simplify the proof. Then we give a constructive variant, yielding a 1.8334-approximation in quasi-polynomial time. This is the first quasi-polynomial algorithm for this problem improving on the long-standing approximation rate of 2.
1810.09951
Yujie Zhong
Yujie Zhong, Relja Arandjelovi\'c, Andrew Zisserman
GhostVLAD for set-based face recognition
Accepted by ACCV 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The objective of this paper is to learn a compact representation of image sets for template-based face recognition. We make the following contributions: first, we propose a network architecture which aggregates and embeds the face descriptors produced by deep convolutional neural networks into a compact fixed-length representation. This compact representation requires minimal memory storage and enables efficient similarity computation. Second, we propose a novel GhostVLAD layer that includes {\em ghost clusters}, that do not contribute to the aggregation. We show that a quality weighting on the input faces emerges automatically such that informative images contribute more than those with low quality, and that the ghost clusters enhance the network's ability to deal with poor quality images. Third, we explore how input feature dimension, number of clusters and different training techniques affect the recognition performance. Given this analysis, we train a network that far exceeds the state-of-the-art on the IJB-B face recognition dataset. This is currently one of the most challenging public benchmarks, and we surpass the state-of-the-art on both the identification and verification protocols.
[ { "created": "Tue, 23 Oct 2018 16:31:10 GMT", "version": "v1" } ]
2018-10-24
[ [ "Zhong", "Yujie", "" ], [ "Arandjelović", "Relja", "" ], [ "Zisserman", "Andrew", "" ] ]
The objective of this paper is to learn a compact representation of image sets for template-based face recognition. We make the following contributions: first, we propose a network architecture which aggregates and embeds the face descriptors produced by deep convolutional neural networks into a compact fixed-length representation. This compact representation requires minimal memory storage and enables efficient similarity computation. Second, we propose a novel GhostVLAD layer that includes {\em ghost clusters}, that do not contribute to the aggregation. We show that a quality weighting on the input faces emerges automatically such that informative images contribute more than those with low quality, and that the ghost clusters enhance the network's ability to deal with poor quality images. Third, we explore how input feature dimension, number of clusters and different training techniques affect the recognition performance. Given this analysis, we train a network that far exceeds the state-of-the-art on the IJB-B face recognition dataset. This is currently one of the most challenging public benchmarks, and we surpass the state-of-the-art on both the identification and verification protocols.
2109.09180
Annie Xie
Annie Xie, Chelsea Finn
Lifelong Robotic Reinforcement Learning by Retaining Experiences
Supplementary website at https://sites.google.com/view/retain-experience/
null
null
null
cs.LG cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-task learning ideally allows robots to acquire a diverse repertoire of useful skills. However, many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times. In reality, the tasks that the robot learns arrive sequentially, depending on the user and the robot's current environment. In this work, we study a practical sequential multi-task RL problem that is motivated by the practical constraints of physical robotic systems, and derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set. In a series of simulated robotic manipulation experiments, our approach requires less than half the samples than learning each task from scratch, while avoiding impractical round-robin data collection. On a Franka Emika Panda robot arm, our approach incrementally learns ten challenging tasks, including bottle capping and block insertion.
[ { "created": "Sun, 19 Sep 2021 18:00:51 GMT", "version": "v1" }, { "created": "Wed, 6 Apr 2022 05:42:16 GMT", "version": "v2" } ]
2022-04-07
[ [ "Xie", "Annie", "" ], [ "Finn", "Chelsea", "" ] ]
Multi-task learning ideally allows robots to acquire a diverse repertoire of useful skills. However, many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times. In reality, the tasks that the robot learns arrive sequentially, depending on the user and the robot's current environment. In this work, we study a practical sequential multi-task RL problem that is motivated by the practical constraints of physical robotic systems, and derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set. In a series of simulated robotic manipulation experiments, our approach requires less than half the samples than learning each task from scratch, while avoiding impractical round-robin data collection. On a Franka Emika Panda robot arm, our approach incrementally learns ten challenging tasks, including bottle capping and block insertion.
1610.02091
Dmitri Strukov B
F. Merrikh Bayat, X. Guo, M. Klachko, M. Prezioso, K. K. Likharev, and D. B. Strukov
Sub-1-us, Sub-20-nJ Pattern Classification in a Mixed-Signal Circuit Based on Embedded 180-nm Floating-Gate Memory Cell Arrays
4 pages, 10 figures
null
null
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have designed, fabricated, and successfully tested a prototype mixed-signal, 28x28-binary-input, 10-output, 3-layer neuromorphic network ("MLP perceptron"). It is based on embedded nonvolatile floating-gate cell arrays redesigned from a commercial 180-nm NOR flash memory. The arrays allow precise (~1%) individual tuning of all memory cells, having long-term analog-level retention and low noise. Each array performs a very fast and energy-efficient analog vector-by-matrix multiplication, which is the bottleneck for signal propagation in most neuromorphic networks. All functional components of the prototype circuit, including 2 synaptic arrays with 101,780 floating-gate synaptic cells, 74 analog neurons, and the peripheral circuitry for weight adjustment and I/O operations, have a total area below 1 mm^2. Its testing on the common MNIST benchmark set (at this stage, with a relatively low weight import precision) has shown a classification fidelity of 94.65%, close to the 96.2% obtained in simulation. The classification of one pattern takes less than 1 us time and ~20 nJ energy - both numbers much better than for digital implementations of the same task. Estimates show that this performance may be further improved using a better neuron design and a more advanced memory technology, leading to a >10^2 advantage in speed and a >10^4 advantage in energy efficiency over the state-of-the-art purely digital (GPU and custom) circuits, at classification of large, complex patterns.
[ { "created": "Thu, 6 Oct 2016 22:50:47 GMT", "version": "v1" }, { "created": "Mon, 10 Oct 2016 23:27:06 GMT", "version": "v2" } ]
2016-10-12
[ [ "Bayat", "F. Merrikh", "" ], [ "Guo", "X.", "" ], [ "Klachko", "M.", "" ], [ "Prezioso", "M.", "" ], [ "Likharev", "K. K.", "" ], [ "Strukov", "D. B.", "" ] ]
We have designed, fabricated, and successfully tested a prototype mixed-signal, 28x28-binary-input, 10-output, 3-layer neuromorphic network ("MLP perceptron"). It is based on embedded nonvolatile floating-gate cell arrays redesigned from a commercial 180-nm NOR flash memory. The arrays allow precise (~1%) individual tuning of all memory cells, having long-term analog-level retention and low noise. Each array performs a very fast and energy-efficient analog vector-by-matrix multiplication, which is the bottleneck for signal propagation in most neuromorphic networks. All functional components of the prototype circuit, including 2 synaptic arrays with 101,780 floating-gate synaptic cells, 74 analog neurons, and the peripheral circuitry for weight adjustment and I/O operations, have a total area below 1 mm^2. Its testing on the common MNIST benchmark set (at this stage, with a relatively low weight import precision) has shown a classification fidelity of 94.65%, close to the 96.2% obtained in simulation. The classification of one pattern takes less than 1 us time and ~20 nJ energy - both numbers much better than for digital implementations of the same task. Estimates show that this performance may be further improved using a better neuron design and a more advanced memory technology, leading to a >10^2 advantage in speed and a >10^4 advantage in energy efficiency over the state-of-the-art purely digital (GPU and custom) circuits, at classification of large, complex patterns.
1305.6012
Yi Gong
Sheng-Ming Cai, Yi Gong
Cognitive Beamforming for Multiple Secondary Data Streams With Individual SNR Constraints
This is the longer version of a paper to appear in the IEEE Transactions on Signal Processing
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider cognitive beamforming for multiple secondary data streams subject to individual signal-to-noise ratio (SNR) requirements for each secondary data stream. In such a cognitive radio system, the secondary user is permitted to use the spectrum allocated to the primary user as long as the caused interference at the primary receiver is tolerable. With both secondary SNR constraint and primary interference power constraint, we aim to minimize the secondary transmit power consumption. By exploiting the individual SNR requirements, we formulate this cognitive beamforming problem as an optimization problem on the Stiefel manifold. Both zero forcing beamforming (ZFB) and nonzero forcing beamforming (NFB) are considered. For the ZFB case, we derive a closed form beamforming solution. For the NFB case, we prove that the strong duality holds for the nonconvex primal problem and thus the optimal solution can be easily obtained by solving the dual problem. Finally, numerical results are presented to illustrate the performance of the proposed cognitive beamforming solutions.
[ { "created": "Sun, 26 May 2013 10:15:00 GMT", "version": "v1" } ]
2013-05-28
[ [ "Cai", "Sheng-Ming", "" ], [ "Gong", "Yi", "" ] ]
In this paper, we consider cognitive beamforming for multiple secondary data streams subject to individual signal-to-noise ratio (SNR) requirements for each secondary data stream. In such a cognitive radio system, the secondary user is permitted to use the spectrum allocated to the primary user as long as the caused interference at the primary receiver is tolerable. With both secondary SNR constraint and primary interference power constraint, we aim to minimize the secondary transmit power consumption. By exploiting the individual SNR requirements, we formulate this cognitive beamforming problem as an optimization problem on the Stiefel manifold. Both zero forcing beamforming (ZFB) and nonzero forcing beamforming (NFB) are considered. For the ZFB case, we derive a closed form beamforming solution. For the NFB case, we prove that the strong duality holds for the nonconvex primal problem and thus the optimal solution can be easily obtained by solving the dual problem. Finally, numerical results are presented to illustrate the performance of the proposed cognitive beamforming solutions.
1809.05745
Guohui Lin
Longcheng Liu, Yong Chen, Jianming Dong, Randy Goebel, Guohui Lin, Yue Luo, Guanqun Ni, Bing Su, and An Zhang
Approximation algorithms for the three-machine proportionate mixed shop scheduling
An extended abstract containing a subset of results has been accepted by AAIM 2018. This is the full version with 20 pages, 14 figures
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A mixed shop is a manufacturing infrastructure designed to process a mixture of a set of flow-shop jobs and a set of open-shop jobs. Mixed shops are in general much more complex to schedule than flow-shops and open-shops, and have been studied since the 1980s. We consider the three-machine proportionate mixed shop problem, denoted $M3 \mid prpt \mid C_{\max}$, in which each job has equal processing times on all three machines. Koulamas and Kyparisis [{\it European Journal of Operational Research}, 243:70--74, 2015] showed that the problem is solvable in polynomial time in some very special cases; for the non-solvable case, they proposed a $5/3$-approximation algorithm. In this paper, we present an improved $4/3$-approximation algorithm and show that this ratio of $4/3$ is asymptotically tight; when the largest job is a flow-shop job, we present a fully polynomial-time approximation scheme (FPTAS). On the negative side, while the $F3 \mid prpt \mid C_{\max}$ problem is polynomial-time solvable, we show an interesting hardness result: adding one open-shop job to the job set makes the problem NP-hard if this open-shop job is larger than any flow-shop job. We are able to design an FPTAS for this special case too.
[ { "created": "Sat, 15 Sep 2018 17:16:26 GMT", "version": "v1" } ]
2018-09-18
[ [ "Liu", "Longcheng", "" ], [ "Chen", "Yong", "" ], [ "Dong", "Jianming", "" ], [ "Goebel", "Randy", "" ], [ "Lin", "Guohui", "" ], [ "Luo", "Yue", "" ], [ "Ni", "Guanqun", "" ], [ "Su", "Bing", "" ], [ "Zhang", "An", "" ] ]
A mixed shop is a manufacturing infrastructure designed to process a mixture of a set of flow-shop jobs and a set of open-shop jobs. Mixed shops are in general much more complex to schedule than flow-shops and open-shops, and have been studied since the 1980s. We consider the three-machine proportionate mixed shop problem, denoted $M3 \mid prpt \mid C_{\max}$, in which each job has equal processing times on all three machines. Koulamas and Kyparisis [{\it European Journal of Operational Research}, 243:70--74, 2015] showed that the problem is solvable in polynomial time in some very special cases; for the non-solvable case, they proposed a $5/3$-approximation algorithm. In this paper, we present an improved $4/3$-approximation algorithm and show that this ratio of $4/3$ is asymptotically tight; when the largest job is a flow-shop job, we present a fully polynomial-time approximation scheme (FPTAS). On the negative side, while the $F3 \mid prpt \mid C_{\max}$ problem is polynomial-time solvable, we show an interesting hardness result: adding one open-shop job to the job set makes the problem NP-hard if this open-shop job is larger than any flow-shop job. We are able to design an FPTAS for this special case too.
2403.11397
Yujia Liu
Yujia Liu, Chenxi Yang, Dingquan Li, Jianhao Ding, Tingting Jiang
Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization
accepted by CVPR 2024
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The task of No-Reference Image Quality Assessment (NR-IQA) is to estimate the quality score of an input image without additional information. NR-IQA models play a crucial role in the media industry, aiding in performance evaluation and optimization guidance. However, these models are found to be vulnerable to adversarial attacks, which introduce imperceptible perturbations to input images, resulting in significant changes in predicted scores. In this paper, we propose a defense method to improve the stability in predicted scores when attacked by small perturbations, thus enhancing the adversarial robustness of NR-IQA models. To be specific, we present theoretical evidence showing that the magnitude of score changes is related to the $\ell_1$ norm of the model's gradient with respect to the input image. Building upon this theoretical foundation, we propose a norm regularization training strategy aimed at reducing the $\ell_1$ norm of the gradient, thereby boosting the robustness of NR-IQA models. Experiments conducted on four NR-IQA baseline models demonstrate the effectiveness of our strategy in reducing score changes in the presence of adversarial attacks. To the best of our knowledge, this work marks the first attempt to defend against adversarial attacks on NR-IQA models. Our study offers valuable insights into the adversarial robustness of NR-IQA models and provides a foundation for future research in this area.
[ { "created": "Mon, 18 Mar 2024 01:11:53 GMT", "version": "v1" } ]
2024-03-19
[ [ "Liu", "Yujia", "" ], [ "Yang", "Chenxi", "" ], [ "Li", "Dingquan", "" ], [ "Ding", "Jianhao", "" ], [ "Jiang", "Tingting", "" ] ]
The task of No-Reference Image Quality Assessment (NR-IQA) is to estimate the quality score of an input image without additional information. NR-IQA models play a crucial role in the media industry, aiding in performance evaluation and optimization guidance. However, these models are found to be vulnerable to adversarial attacks, which introduce imperceptible perturbations to input images, resulting in significant changes in predicted scores. In this paper, we propose a defense method to improve the stability in predicted scores when attacked by small perturbations, thus enhancing the adversarial robustness of NR-IQA models. To be specific, we present theoretical evidence showing that the magnitude of score changes is related to the $\ell_1$ norm of the model's gradient with respect to the input image. Building upon this theoretical foundation, we propose a norm regularization training strategy aimed at reducing the $\ell_1$ norm of the gradient, thereby boosting the robustness of NR-IQA models. Experiments conducted on four NR-IQA baseline models demonstrate the effectiveness of our strategy in reducing score changes in the presence of adversarial attacks. To the best of our knowledge, this work marks the first attempt to defend against adversarial attacks on NR-IQA models. Our study offers valuable insights into the adversarial robustness of NR-IQA models and provides a foundation for future research in this area.
2304.06906
Yang Liu
Yu-Qi Yang, Yu-Xiao Guo, Jian-Yu Xiong, Yang Liu, Hao Pan, Peng-Shuai Wang, Xin Tong, Baining Guo
Swin3D: A Pretrained Transformer Backbone for 3D Indoor Scene Understanding
Project page: https://yukichiii.github.io/project/swin3D/swin3D.html
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
The use of pretrained backbones with fine-tuning has been successful for 2D vision and natural language processing tasks, showing advantages over task-specific networks. In this work, we introduce a pretrained 3D backbone, called {\SST}, for 3D indoor scene understanding. We design a 3D Swin transformer as our backbone network, which enables efficient self-attention on sparse voxels with linear memory complexity, making the backbone scalable to large models and datasets. We also introduce a generalized contextual relative positional embedding scheme to capture various irregularities of point signals for improved network performance. We pretrained a large {\SST} model on a synthetic Structured3D dataset, which is an order of magnitude larger than the ScanNet dataset. Our model pretrained on the synthetic dataset not only generalizes well to downstream segmentation and detection on real 3D point datasets, but also outperforms state-of-the-art methods on downstream tasks with +2.3 mIoU and +2.2 mIoU on S3DIS Area5 and 6-fold semantic segmentation, +1.8 mIoU on ScanNet segmentation (val), +1.9 mAP@0.5 on ScanNet detection, and +8.1 mAP@0.5 on S3DIS detection. A series of extensive ablation studies further validate the scalability, generality, and superior performance enabled by our approach. The code and models are available at https://github.com/microsoft/Swin3D .
[ { "created": "Fri, 14 Apr 2023 02:49:08 GMT", "version": "v1" }, { "created": "Mon, 24 Apr 2023 02:46:34 GMT", "version": "v2" }, { "created": "Wed, 16 Aug 2023 01:53:02 GMT", "version": "v3" } ]
2023-08-17
[ [ "Yang", "Yu-Qi", "" ], [ "Guo", "Yu-Xiao", "" ], [ "Xiong", "Jian-Yu", "" ], [ "Liu", "Yang", "" ], [ "Pan", "Hao", "" ], [ "Wang", "Peng-Shuai", "" ], [ "Tong", "Xin", "" ], [ "Guo", "Baining", "" ] ]
The use of pretrained backbones with fine-tuning has been successful for 2D vision and natural language processing tasks, showing advantages over task-specific networks. In this work, we introduce a pretrained 3D backbone, called {\SST}, for 3D indoor scene understanding. We design a 3D Swin transformer as our backbone network, which enables efficient self-attention on sparse voxels with linear memory complexity, making the backbone scalable to large models and datasets. We also introduce a generalized contextual relative positional embedding scheme to capture various irregularities of point signals for improved network performance. We pretrained a large {\SST} model on a synthetic Structured3D dataset, which is an order of magnitude larger than the ScanNet dataset. Our model pretrained on the synthetic dataset not only generalizes well to downstream segmentation and detection on real 3D point datasets, but also outperforms state-of-the-art methods on downstream tasks with +2.3 mIoU and +2.2 mIoU on S3DIS Area5 and 6-fold semantic segmentation, +1.8 mIoU on ScanNet segmentation (val), +1.9 mAP@0.5 on ScanNet detection, and +8.1 mAP@0.5 on S3DIS detection. A series of extensive ablation studies further validate the scalability, generality, and superior performance enabled by our approach. The code and models are available at https://github.com/microsoft/Swin3D .
1510.00726
Yoav Goldberg
Yoav Goldberg
A Primer on Neural Network Models for Natural Language Processing
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the past few years, neural networks have re-emerged as powerful machine-learning models, yielding state-of-the-art results in fields such as image recognition and speech processing. More recently, neural network models have also been applied to textual natural language signals, again with very promising results. This tutorial surveys neural network models from the perspective of natural language processing research, in an attempt to bring natural-language researchers up to speed with the neural techniques. The tutorial covers input encoding for natural language tasks, feed-forward networks, convolutional networks, recurrent networks and recursive networks, as well as the computation graph abstraction for automatic gradient computation.
[ { "created": "Fri, 2 Oct 2015 20:17:33 GMT", "version": "v1" } ]
2015-10-06
[ [ "Goldberg", "Yoav", "" ] ]
Over the past few years, neural networks have re-emerged as powerful machine-learning models, yielding state-of-the-art results in fields such as image recognition and speech processing. More recently, neural network models have also been applied to textual natural language signals, again with very promising results. This tutorial surveys neural network models from the perspective of natural language processing research, in an attempt to bring natural-language researchers up to speed with the neural techniques. The tutorial covers input encoding for natural language tasks, feed-forward networks, convolutional networks, recurrent networks and recursive networks, as well as the computation graph abstraction for automatic gradient computation.
2310.09347
Yufei Liu
Yufei Liu, Manzhou Li, Qin Ma
Efficient Apple Maturity and Damage Assessment: A Lightweight Detection Model with GAN and Attention Mechanism
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study proposes a method based on lightweight convolutional neural networks (CNNs) and generative adversarial networks (GANs) for apple ripeness and damage level detection tasks. First, a lightweight CNN model is designed by optimizing the model's depth and width and employing advanced model compression techniques, successfully reducing the model's parameter count and computational requirements and thus enhancing real-time performance in practical applications. Simultaneously, attention mechanisms are introduced to dynamically adjust the importance of different feature layers, improving performance in object detection tasks. To address the issues of sample imbalance and insufficient sample size, GANs are used to generate realistic apple images, expanding the training dataset and enhancing the model's recognition capability when faced with apples of varying ripeness and damage levels. Furthermore, by applying the object detection network for damage location annotation on damaged apples, the accuracy of damage level detection is improved, providing a more precise basis for decision-making. Experimental results show that in apple ripeness grading detection, the proposed model achieves 95.6\%, 93.8\%, 95.0\%, and 56.5 in precision, recall, accuracy, and FPS, respectively. In apple damage level detection, the proposed model reaches 95.3\%, 93.7\%, and 94.5\% in precision, recall, and mAP, respectively. In both tasks, the proposed method outperforms other mainstream models, demonstrating the excellent performance and high practical value of the proposed method in apple ripeness and damage level detection tasks.
[ { "created": "Fri, 13 Oct 2023 18:22:30 GMT", "version": "v1" } ]
2023-10-17
[ [ "Liu", "Yufei", "" ], [ "Li", "Manzhou", "" ], [ "Ma", "Qin", "" ] ]
This study proposes a method based on lightweight convolutional neural networks (CNNs) and generative adversarial networks (GANs) for apple ripeness and damage level detection tasks. First, a lightweight CNN model is designed by optimizing the model's depth and width and employing advanced model compression techniques, successfully reducing the model's parameter count and computational requirements and thus enhancing real-time performance in practical applications. Simultaneously, attention mechanisms are introduced to dynamically adjust the importance of different feature layers, improving performance in object detection tasks. To address the issues of sample imbalance and insufficient sample size, GANs are used to generate realistic apple images, expanding the training dataset and enhancing the model's recognition capability when faced with apples of varying ripeness and damage levels. Furthermore, by applying the object detection network for damage location annotation on damaged apples, the accuracy of damage level detection is improved, providing a more precise basis for decision-making. Experimental results show that in apple ripeness grading detection, the proposed model achieves 95.6\%, 93.8\%, 95.0\%, and 56.5 in precision, recall, accuracy, and FPS, respectively. In apple damage level detection, the proposed model reaches 95.3\%, 93.7\%, and 94.5\% in precision, recall, and mAP, respectively. In both tasks, the proposed method outperforms other mainstream models, demonstrating the excellent performance and high practical value of the proposed method in apple ripeness and damage level detection tasks.