Dataset columns (name: type, observed size range):

  id              string   9-10 chars
  submitter       string   1-64 chars
  authors         string   4-20.7k chars
  title           string   4-246 chars
  comments        string   1-523 chars
  journal-ref     string   4-404 chars
  doi             string   11-153 chars
  report-no       string   2-254 chars
  categories      string   5-98 chars
  license         string   9 distinct values
  orig_abstract   string   14-3.35k chars
  versions        list     1-60 items
  update_date     string   10 chars
  authors_parsed  list     1-1.35k items
  abstract        string   11-3.34k chars

Each record below lists its field values in this order.
2405.14384
Marion Neumeier
Marion Neumeier, Sebastian Dorn, Michael Botsch, Wolfgang Utschick
Reliable Trajectory Prediction and Uncertainty Quantification with Conditioned Diffusion Models
Accepted at IEEE/CVF Computer Vision and Pattern Recognition Conference Workshops (CVPRW) 2024
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work introduces the conditioned Vehicle Motion Diffusion (cVMD) model, a novel network architecture for highway trajectory prediction using diffusion models. The proposed model ensures the drivability of the predicted trajectory by integrating non-holonomic motion constraints and physical constraints into the generative prediction module. Central to the architecture of cVMD is its capacity to perform uncertainty quantification, a feature that is crucial in safety-critical applications. By integrating the quantified uncertainty into the prediction process, the cVMD's trajectory prediction performance is improved considerably. The model's performance was evaluated using the publicly available highD dataset. Experiments show that the proposed architecture achieves competitive trajectory prediction accuracy compared to state-of-the-art models, while providing guaranteed drivable trajectories and uncertainty quantification.
[ { "created": "Thu, 23 May 2024 10:01:39 GMT", "version": "v1" } ]
2024-05-24
[ [ "Neumeier", "Marion", "" ], [ "Dorn", "Sebastian", "" ], [ "Botsch", "Michael", "" ], [ "Utschick", "Wolfgang", "" ] ]
This work introduces the conditioned Vehicle Motion Diffusion (cVMD) model, a novel network architecture for highway trajectory prediction using diffusion models. The proposed model ensures the drivability of the predicted trajectory by integrating non-holonomic motion constraints and physical constraints into the generative prediction module. Central to the architecture of cVMD is its capacity to perform uncertainty quantification, a feature that is crucial in safety-critical applications. By integrating the quantified uncertainty into the prediction process, the cVMD's trajectory prediction performance is improved considerably. The model's performance was evaluated using the publicly available highD dataset. Experiments show that the proposed architecture achieves competitive trajectory prediction accuracy compared to state-of-the-art models, while providing guaranteed drivable trajectories and uncertainty quantification.
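The abstract above describes keeping generated trajectories drivable by building non-holonomic and physical constraints into the prediction step. Below is a hedged, hypothetical sketch of that general idea (not the cVMD network or its diffusion sampler): a predicted control sequence is clipped to assumed acceleration and yaw-rate bounds and integrated through a unicycle model, so the resulting path is kinematically feasible by construction.

```python
import numpy as np

# Hypothetical constraint-projection step: clip controls to assumed physical
# bounds, then integrate a non-holonomic unicycle model. A sketch of the idea,
# not the cVMD architecture; all bounds below are illustrative assumptions.
A_MAX, YAWRATE_MAX, DT = 3.0, 0.5, 0.1   # assumed bounds [m/s^2, rad/s, s]

def rollout_drivable(accels, yaw_rates, x=0.0, y=0.0, v=20.0, heading=0.0):
    traj = []
    for a, w in zip(np.clip(accels, -A_MAX, A_MAX),
                    np.clip(yaw_rates, -YAWRATE_MAX, YAWRATE_MAX)):
        v = max(0.0, v + a * DT)              # no reversing on the highway
        heading += w * DT
        x += v * np.cos(heading) * DT
        y += v * np.sin(heading) * DT
        traj.append((x, y))
    return np.array(traj)

# e.g. controls sampled by a generative predictor, possibly out of bounds:
traj = rollout_drivable(np.random.normal(0, 5, 50), np.random.normal(0, 1, 50))
print(traj.shape)  # (50, 2) -- a kinematically feasible path
```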
1512.07331
Suhas Sreehari
Suhas Sreehari, S. V. Venkatakrishnan, Brendt Wohlberg, Lawrence F. Drummy, Jeffrey P. Simmons, Charles A. Bouman
Plug-and-Play Priors for Bright Field Electron Tomography and Sparse Interpolation
13 pages, 11 figures
null
10.1109/TCI.2016.2599778
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many material and biological samples in scientific imaging are characterized by non-local repeating structures. These are studied using scanning electron microscopy and electron tomography. Sparse sampling of individual pixels in a 2D image acquisition geometry, or sparse sampling of projection images with large tilt increments in a tomography experiment, can enable high speed data acquisition and minimize sample damage caused by the electron beam. In this paper, we present an algorithm for electron tomographic reconstruction and sparse image interpolation that exploits the non-local redundancy in images. We adapt a framework, termed plug-and-play (P&P) priors, to solve these imaging problems in a regularized inversion setting. The power of the P&P approach is that it allows a wide array of modern denoising algorithms to be used as a "prior model" for tomography and image interpolation. We also present sufficient mathematical conditions that ensure convergence of the P&P approach, and we use these insights to design a new non-local means denoising algorithm. Finally, we demonstrate that the algorithm produces higher quality reconstructions on both simulated and real electron microscope data, along with improved convergence properties compared to other methods.
[ { "created": "Wed, 23 Dec 2015 02:06:29 GMT", "version": "v1" } ]
2017-11-09
[ [ "Sreehari", "Suhas", "" ], [ "Venkatakrishnan", "S. V.", "" ], [ "Wohlberg", "Brendt", "" ], [ "Drummy", "Lawrence F.", "" ], [ "Simmons", "Jeffrey P.", "" ], [ "Bouman", "Charles A.", "" ] ]
Many material and biological samples in scientific imaging are characterized by non-local repeating structures. These are studied using scanning electron microscopy and electron tomography. Sparse sampling of individual pixels in a 2D image acquisition geometry, or sparse sampling of projection images with large tilt increments in a tomography experiment, can enable high speed data acquisition and minimize sample damage caused by the electron beam. In this paper, we present an algorithm for electron tomographic reconstruction and sparse image interpolation that exploits the non-local redundancy in images. We adapt a framework, termed plug-and-play (P&P) priors, to solve these imaging problems in a regularized inversion setting. The power of the P&P approach is that it allows a wide array of modern denoising algorithms to be used as a "prior model" for tomography and image interpolation. We also present sufficient mathematical conditions that ensure convergence of the P&P approach, and we use these insights to design a new non-local means denoising algorithm. Finally, we demonstrate that the algorithm produces higher quality reconstructions on both simulated and real electron microscope data, along with improved convergence properties compared to other methods.
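As a concrete illustration of the plug-and-play idea described above, here is a hedged toy for the sparse-interpolation case: the data-fidelity update has a closed form when the forward operator is a pixel-sampling mask, and an off-the-shelf Gaussian smoother stands in for the paper's non-local means prior (the paper's denoiser, parameters, and convergence conditions are not reproduced here).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy plug-and-play ADMM for sparse image interpolation: y = M*x on sampled
# pixels. A Gaussian smoother stands in for the denoiser "prior model".
rng = np.random.default_rng(0)
x_true = gaussian_filter(rng.normal(size=(64, 64)), 3)   # smooth ground truth
mask = rng.random((64, 64)) < 0.2                        # 20% of pixels sampled
y = np.where(mask, x_true, 0.0)

rho = 0.5
x = np.zeros_like(y); v = np.zeros_like(y); u = np.zeros_like(y)
for _ in range(50):
    # data-fidelity step: argmin_x 0.5*||y - M x||^2 + rho/2*||x - (v - u)||^2
    x = (mask * y + rho * (v - u)) / (mask + rho)
    # "prior" step: any denoiser can be plugged in here
    v = gaussian_filter(x + u, sigma=1.5)
    u = u + x - v

print("RMSE:", np.sqrt(np.mean((v - x_true) ** 2)))
```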
2107.07355
Stefan Marksteiner
Stefan Marksteiner, Slava Bronfman, Markus Wolf, Eddie Lazebnik
Using Cyber Digital Twins for Automated Automotive Cybersecurity Testing
6 pages, 3 figures, accepted for the joint SRCNAS/STRIVE workshop at the 6th IEEE European Symposium on Security and Privacy
2021 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW) - Safety vs Security in the Air and on the Ground
10.1109/EuroSPW54576.2021.00020
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cybersecurity testing of automotive systems has become a practical necessity, with the wide adoption of advanced driving assistance functions and vehicular communications. These functionalities require the integration of information and communication technologies that not only allow for a plethora of on-the-fly configuration abilities, but also provide a huge surface for attacks. These circumstances have also been recognized by standardization and regulation bodies, making not only proper cybersecurity engineering but also proof of the effectiveness of security measures through verification and validation testing a formal necessity. In order to keep pace with the rapidly growing demand for neutral-party security testing of vehicular systems, novel approaches are needed. This paper therefore presents a methodology to create and execute cybersecurity test cases on the fly in a black-box setting, using pattern-matching-based binary analysis, translation mechanisms to formal attack descriptions, and model-checking techniques. The approach is intended to generate meaningful attack vectors on a system with next-to-zero a priori knowledge.
[ { "created": "Thu, 15 Jul 2021 14:32:10 GMT", "version": "v1" } ]
2021-09-07
[ [ "Marksteiner", "Stefan", "" ], [ "Bronfman", "Slava", "" ], [ "Wolf", "Markus", "" ], [ "Lazebnik", "Eddie", "" ] ]
Cybersecurity testing of automotive systems has become a practical necessity, with the wide adoption of advanced driving assistance functions and vehicular communications. These functionalities require the integration of information and communication technologies that not only allow for a plethora of on-the-fly configuration abilities, but also provide a huge surface for attacks. These circumstances have also been recognized by standardization and regulation bodies, making not only proper cybersecurity engineering but also proof of the effectiveness of security measures through verification and validation testing a formal necessity. In order to keep pace with the rapidly growing demand for neutral-party security testing of vehicular systems, novel approaches are needed. This paper therefore presents a methodology to create and execute cybersecurity test cases on the fly in a black-box setting, using pattern-matching-based binary analysis, translation mechanisms to formal attack descriptions, and model-checking techniques. The approach is intended to generate meaningful attack vectors on a system with next-to-zero a priori knowledge.
1810.11274
Hao Chen
Hao Chen, Daniel Zelazo, Xiangke Wang, and Lincheng Shen
Convergence Analysis of Signed Nonlinear Networks
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work analyzes the convergence properties of signed networks with nonlinear edge functions. We consider diffusively coupled networks comprised of maximal equilibrium-independent passive (MEIP) dynamics on the nodes, and a general class of nonlinear coupling functions on the edges. The first contribution of this work is to generalize the classical notion of signed networks for graphs with scalar weights to graphs with nonlinear edge functions using notions from passivity theory. We show that the output of the network can finally form one or several steady-state clusters if all edges are positive, and in particular, all nodes can reach an output agreement if there is a connected subnetwork spanning all nodes and strictly positive edges. When there are non-positive edges added to the network, we show that the tension of the network still converges to the equilibria of the edge functions if the relative outputs of the nodes connected by non-positive edges converge to their equilibria. Furthermore, we establish the equivalent circuit models for signed nonlinear networks, and define the concept of equivalent edge functions which is a generalization of the notion of effective resistance. We finally characterize the relationship between the convergence property and the equivalent edge function, when a non-positive edge is added to a strictly positive network comprised of nonlinear integrators. We show that the convergence of the network is always guaranteed, if the sum of the equivalent edge function of the previous network and the new edge function is passive.
[ { "created": "Fri, 26 Oct 2018 11:38:58 GMT", "version": "v1" }, { "created": "Thu, 31 Jan 2019 03:07:13 GMT", "version": "v2" }, { "created": "Wed, 27 Mar 2019 05:46:44 GMT", "version": "v3" } ]
2019-03-28
[ [ "Chen", "Hao", "" ], [ "Zelazo", "Daniel", "" ], [ "Wang", "Xiangke", "" ], [ "Shen", "Lincheng", "" ] ]
This work analyzes the convergence properties of signed networks with nonlinear edge functions. We consider diffusively coupled networks comprised of maximal equilibrium-independent passive (MEIP) dynamics on the nodes, and a general class of nonlinear coupling functions on the edges. The first contribution of this work is to generalize the classical notion of signed networks for graphs with scalar weights to graphs with nonlinear edge functions using notions from passivity theory. We show that the output of the network can finally form one or several steady-state clusters if all edges are positive, and in particular, all nodes can reach an output agreement if there is a connected subnetwork spanning all nodes and strictly positive edges. When there are non-positive edges added to the network, we show that the tension of the network still converges to the equilibria of the edge functions if the relative outputs of the nodes connected by non-positive edges converge to their equilibria. Furthermore, we establish the equivalent circuit models for signed nonlinear networks, and define the concept of equivalent edge functions which is a generalization of the notion of effective resistance. We finally characterize the relationship between the convergence property and the equivalent edge function, when a non-positive edge is added to a strictly positive network comprised of nonlinear integrators. We show that the convergence of the network is always guaranteed, if the sum of the equivalent edge function of the previous network and the new edge function is passive.
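To make the setting above concrete, here is a small hedged simulation of diffusively coupled single integrators with a nonlinear (tanh) edge function and one non-positive edge. It only illustrates the agreement/clustering behaviour the abstract describes; the paper's MEIP framework and equivalent-edge-function analysis are not reproduced. The graph, weights, and step size below are illustrative assumptions.

```python
import numpy as np

# Toy signed nonlinear network: x_i' = -sum_j w_ij * psi(x_i - x_j) with a
# monotone nonlinear edge function psi and signed weights w_ij.
# The negative chord has magnitude 0.3 < 1/3, i.e. below the inverse effective
# resistance of the positive path 0-1-2-3, so agreement is still reached
# (consistent with the passivity-style condition the abstract alludes to).
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, -0.3)]
psi = np.tanh

x = np.array([2.0, -1.0, 0.5, 3.0])
dt = 0.01
for _ in range(20000):
    dx = np.zeros_like(x)
    for i, j, w in edges:
        f = w * psi(x[i] - x[j])
        dx[i] -= f
        dx[j] += f
    x += dt * dx

print("steady-state outputs:", np.round(x, 3))
```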
2207.09869
Tamás Matuszka Ph.D.
Tamas Matuszka, Daniel Kozma
A Novel Neural Network Training Method for Autonomous Driving Using Semi-Pseudo-Labels and 3D Data Augmentations
null
null
10.1007/978-3-031-21967-2_18
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Training neural networks to perform 3D object detection for autonomous driving requires a large amount of diverse annotated data. However, obtaining training data with sufficient quality and quantity is expensive and sometimes impossible due to human and sensor constraints. Therefore, a novel solution is needed for extending current training methods to overcome this limitation and enable accurate 3D object detection. Our solution for the above-mentioned problem combines semi-pseudo-labeling and novel 3D augmentations. For demonstrating the applicability of the proposed method, we have designed a convolutional neural network for 3D object detection which can significantly increase the detection range in comparison with the training data distribution.
[ { "created": "Wed, 20 Jul 2022 13:04:08 GMT", "version": "v1" } ]
2022-12-13
[ [ "Matuszka", "Tamas", "" ], [ "Kozma", "Daniel", "" ] ]
Training neural networks to perform 3D object detection for autonomous driving requires a large amount of diverse annotated data. However, obtaining training data with sufficient quality and quantity is expensive and sometimes impossible due to human and sensor constraints. Therefore, a novel solution is needed for extending current training methods to overcome this limitation and enable accurate 3D object detection. Our solution for the above-mentioned problem combines semi-pseudo-labeling and novel 3D augmentations. For demonstrating the applicability of the proposed method, we have designed a convolutional neural network for 3D object detection which can significantly increase the detection range in comparison with the training data distribution.
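The sketch below shows the generic pseudo-labeling loop that the method above builds on: a model trained on labeled data assigns labels to unlabeled samples above a confidence threshold, and those samples are folded back into training. It is a 2-D scikit-learn toy, not the paper's semi-pseudo-labeling scheme or its 3D augmentations; the threshold and data are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic pseudo-labeling / self-training loop on toy 2-D data.
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(100, 2)) + np.array([[2.0, 0.0]]) * rng.integers(0, 2, (100, 1))
y_lab = (X_lab[:, 0] > 1).astype(int)
X_unl = rng.normal(size=(1000, 2)) + np.array([[2.0, 0.0]]) * rng.integers(0, 2, (1000, 1))

clf = LogisticRegression().fit(X_lab, y_lab)
for _ in range(3):                                    # a few self-training rounds
    proba = clf.predict_proba(X_unl).max(axis=1)
    keep = proba > 0.95                               # confidence threshold (assumed)
    X_aug = np.vstack([X_lab, X_unl[keep]])
    y_aug = np.concatenate([y_lab, clf.predict(X_unl[keep])])
    clf = LogisticRegression().fit(X_aug, y_aug)

print("pseudo-labeled samples used:", int(keep.sum()))
```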
1807.00948
De'Aira Bryant
Tobi Ogunyale, De'Aira Bryant and Ayanna Howard
Does Removing Stereotype Priming Remove Bias? A Pilot Human-Robot Interaction Study
5 pages, 9 figures, 1 table, to be presented at the 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2018), Stockholm, Sweden, July 15, 2018
null
null
null
cs.RO cs.HC
http://creativecommons.org/licenses/by/4.0/
Robots capable of participating in complex social interactions have shown great potential in a variety of applications. As these robots grow more popular, it is essential to continuously evaluate the dynamics of the human-robot relationship. One factor shown to have potential impacts on this critical relationship is the human projection of stereotypes onto social robots, a practice that is implicitly known to affect both developers and users of this technology. As such, in this research, we wished to investigate the difference in participants' perceptions of the robot interaction if we removed stereotype priming, which has not yet been a common practice in similar studies. Given the stereotypes of emotions among ethnic groups, especially in the U.S., this study specifically sought to investigate the impact that robot "skin color" could potentially have on the human perception of a robot's emotional expressive behavior. A between-subjects experiment with 198 individuals was conducted. The results showed no significant differences in the overall emotion classification or intensity ratings for the different robot skin colors. These results lend credence to our hypothesis that when individuals are not primed with information related to human stereotypes, robots are evaluated based on functional attributes rather than stereotypical attributes. This provides some confidence that robots, if designed correctly, can potentially be used as a tool to override stereotype-based biases associated with human behavior.
[ { "created": "Tue, 3 Jul 2018 01:48:06 GMT", "version": "v1" } ]
2018-07-04
[ [ "Ogunyale", "Tobi", "" ], [ "Bryant", "De'Aira", "" ], [ "Howard", "Ayanna", "" ] ]
Robots capable of participating in complex social interactions have shown great potential in a variety of applications. As these robots grow more popular, it is essential to continuously evaluate the dynamics of the human-robot relationship. One factor shown to have potential impacts on this critical relationship is the human projection of stereotypes onto social robots, a practice that is implicitly known to affect both developers and users of this technology. As such, in this research, we wished to investigate the difference in participants' perceptions of the robot interaction if we removed stereotype priming, which has not yet been a common practice in similar studies. Given the stereotypes of emotions among ethnic groups, especially in the U.S., this study specifically sought to investigate the impact that robot "skin color" could potentially have on the human perception of a robot's emotional expressive behavior. A between-subjects experiment with 198 individuals was conducted. The results showed no significant differences in the overall emotion classification or intensity ratings for the different robot skin colors. These results lend credence to our hypothesis that when individuals are not primed with information related to human stereotypes, robots are evaluated based on functional attributes rather than stereotypical attributes. This provides some confidence that robots, if designed correctly, can potentially be used as a tool to override stereotype-based biases associated with human behavior.
2107.09265
Ziqi Lu
Ziqi Lu, Qiangqiang Huang, Kevin Doherty, John Leonard
Consensus-Informed Optimization Over Mixtures for Ambiguity-Aware Object SLAM
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Building object-level maps can facilitate robot-environment interactions (e.g. planning and manipulation), but objects could often have multiple probable poses when viewed from a single vantage point, due to symmetry, occlusion or perceptual failures. A robust object-level simultaneous localization and mapping (object SLAM) algorithm needs to be aware of this pose ambiguity. We propose to maintain and subsequently disambiguate the multiple pose interpretations to gradually recover a globally consistent world representation. The max-mixtures model is applied to implicitly and efficiently track all pose hypotheses, but the resulting formulation is non-convex, and therefore subject to local optima. To mitigate this problem, temporally consistent hypotheses are extracted, guiding the optimization into the global optimum. This consensus-informed inference method is applied online via landmark variable re-initialization within an incremental SLAM framework, iSAM2, for robust real-time performance. We demonstrate that this approach improves SLAM performance on both simulated and real object SLAM problems with pose ambiguity.
[ { "created": "Tue, 20 Jul 2021 05:23:20 GMT", "version": "v1" }, { "created": "Wed, 8 Sep 2021 04:32:34 GMT", "version": "v2" } ]
2021-09-09
[ [ "Lu", "Ziqi", "" ], [ "Huang", "Qiangqiang", "" ], [ "Doherty", "Kevin", "" ], [ "Leonard", "John", "" ] ]
Building object-level maps can facilitate robot-environment interactions (e.g. planning and manipulation), but objects could often have multiple probable poses when viewed from a single vantage point, due to symmetry, occlusion or perceptual failures. A robust object-level simultaneous localization and mapping (object SLAM) algorithm needs to be aware of this pose ambiguity. We propose to maintain and subsequently disambiguate the multiple pose interpretations to gradually recover a globally consistent world representation. The max-mixtures model is applied to implicitly and efficiently track all pose hypotheses, but the resulting formulation is non-convex, and therefore subject to local optima. To mitigate this problem, temporally consistent hypotheses are extracted, guiding the optimization into the global optimum. This consensus-informed inference method is applied online via landmark variable re-initialization within an incremental SLAM framework, iSAM2, for robust real-time performance. We demonstrate that this approach improves SLAM performance on both simulated and real object SLAM problems with pose ambiguity.
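For readers unfamiliar with the max-mixtures model mentioned above, the hedged sketch below evaluates a one-dimensional max-mixture factor: each pose hypothesis is a Gaussian component, and the factor's cost is that of the currently most consistent component. It is a schematic stand-in, not the paper's iSAM2 pipeline or its consensus-informed re-initialization.

```python
import numpy as np

# Minimal max-mixture factor (1-D): negative log-likelihood of the best
# component among several pose hypotheses. Weights/sigmas are illustrative.
def max_mixture_nll(residuals, weights, sigmas):
    """residuals, weights, sigmas: one entry per pose hypothesis."""
    nlls = 0.5 * (residuals / sigmas) ** 2 + np.log(sigmas) - np.log(weights)
    k = int(np.argmin(nlls))          # dominant (most consistent) hypothesis
    return nlls[k], k

r = np.array([0.1, 2.5])              # residual against each pose hypothesis
nll, active = max_mixture_nll(r, weights=np.array([0.5, 0.5]), sigmas=np.array([0.2, 0.2]))
print("active hypothesis:", active, " cost:", round(float(nll), 3))
```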
2102.00423
Reza Hadi Mogavi
Reza Hadi Mogavi, Xiaojuan Ma, Pan Hui
Characterizing Student Engagement Moods for Dropout Prediction in Question Pool Websites
Accepted in the 24th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2021)
null
10.1145/3449086
null
cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Problem-Based Learning (PBL) is a popular approach to instruction that supports students to get hands-on training by solving problems. Question Pool websites (QPs) such as LeetCode, Code Chef, and Math Playground help PBL by supplying authentic, diverse, and contextualized questions to students. Nonetheless, empirical findings suggest that 40% to 80% of students registered in QPs drop out in less than two months. This research is the first attempt to understand and predict student dropouts from QPs via exploiting students' engagement moods. Adopting a data-driven approach, we identify five different engagement moods for QP students, which are namely challenge-seeker, subject-seeker, interest-seeker, joy-seeker, and non-seeker. We find that students have collective preferences for answering questions in each engagement mood, and deviation from those preferences increases their probability of dropping out significantly. Last but not least, this paper contributes by introducing a new hybrid machine learning model (we call Dropout-Plus) for predicting student dropouts in QPs. The test results on a popular QP in China, with nearly 10K students, show that Dropout-Plus can exceed the rival algorithms' dropout prediction performance in terms of accuracy, F1-measure, and AUC. We wrap up our work by giving some design suggestions to QP managers and online learning professionals to reduce their student dropouts.
[ { "created": "Sun, 31 Jan 2021 10:30:19 GMT", "version": "v1" }, { "created": "Tue, 2 Feb 2021 19:15:09 GMT", "version": "v2" } ]
2021-02-05
[ [ "Mogavi", "Reza Hadi", "" ], [ "Ma", "Xiaojuan", "" ], [ "Hui", "Pan", "" ] ]
Problem-Based Learning (PBL) is a popular approach to instruction that supports students to get hands-on training by solving problems. Question Pool websites (QPs) such as LeetCode, Code Chef, and Math Playground help PBL by supplying authentic, diverse, and contextualized questions to students. Nonetheless, empirical findings suggest that 40% to 80% of students registered in QPs drop out in less than two months. This research is the first attempt to understand and predict student dropouts from QPs via exploiting students' engagement moods. Adopting a data-driven approach, we identify five different engagement moods for QP students, which are namely challenge-seeker, subject-seeker, interest-seeker, joy-seeker, and non-seeker. We find that students have collective preferences for answering questions in each engagement mood, and deviation from those preferences increases their probability of dropping out significantly. Last but not least, this paper contributes by introducing a new hybrid machine learning model (we call Dropout-Plus) for predicting student dropouts in QPs. The test results on a popular QP in China, with nearly 10K students, show that Dropout-Plus can exceed the rival algorithms' dropout prediction performance in terms of accuracy, F1-measure, and AUC. We wrap up our work by giving some design suggestions to QP managers and online learning professionals to reduce their student dropouts.
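The abstract above evaluates dropout prediction with accuracy, F1, and AUC. The hedged sketch below reproduces only that evaluation protocol on synthetic engagement-style features; Dropout-Plus itself (a hybrid model) and the real QP data are not shown, and the feature construction is invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Dropout prediction as binary classification, scored with accuracy/F1/AUC.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                       # per-student engagement features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]
pred = (p > 0.5).astype(int)
print(accuracy_score(y_te, pred), f1_score(y_te, pred), roc_auc_score(y_te, p))
```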
2104.04071
Gautam Srivastava
Farrah Huntinghawk, Candace Richard, Sarah Plosker, Gautam Srivastava
Expanding Cybersecurity Knowledge Through an Indigenous Lens: A First Look
9 pages, 0 figures
2020 IEEE CCECE, London, ON, Canada, 2020, pp. 1-4
10.1109/CCECE47787.2020.9255753
null
cs.CY cs.CR
http://creativecommons.org/licenses/by/4.0/
Decolonization and Indigenous education are at the forefront of Canadian content currently in academia. Over the last few decades, we have seen some major changes in the way in which we share information. In particular, we have moved into an age of electronically shared content, and there is an increasing expectation in Canada that this content is both culturally significant and relevant. In this paper, we discuss an ongoing community engagement initiative with First Nations communities in the Western Manitoba region. The initiative involves knowledge-sharing activities that focus on the topic of cybersecurity and are aimed at a public audience. This initial look into our educational project focuses on the conceptual analysis and planning stage. We are developing a "Cybersecurity 101" mini-curriculum, to be implemented over several one-hour workshops aimed at diverse groups (these public workshops may include a wide range of participants, from tech-averse to tech-savvy). Learning assessment tools have been built into the workshop program. We have created informational and promotional pamphlets, posters, lesson plans, and feedback questionnaires which we believe instill relevance and personal connection to this topic, helping to bridge gaps in accessibility for Indigenous communities while striving to build positive, reciprocal relationships. Our methodology is to approach the subject from a community needs and priorities perspective. Activities are therefore being tailored to fit each community.
[ { "created": "Tue, 30 Mar 2021 19:25:01 GMT", "version": "v1" } ]
2021-04-12
[ [ "Huntinghawk", "Farrah", "" ], [ "Richard", "Candace", "" ], [ "Plosker", "Sarah", "" ], [ "Srivastava", "Gautam", "" ] ]
Decolonization and Indigenous education are at the forefront of Canadian content currently in academia. Over the last few decades, we have seen some major changes in the way in which we share information. In particular, we have moved into an age of electronically shared content, and there is an increasing expectation in Canada that this content is both culturally significant and relevant. In this paper, we discuss an ongoing community engagement initiative with First Nations communities in the Western Manitoba region. The initiative involves knowledge-sharing activities that focus on the topic of cybersecurity and are aimed at a public audience. This initial look into our educational project focuses on the conceptual analysis and planning stage. We are developing a "Cybersecurity 101" mini-curriculum, to be implemented over several one-hour workshops aimed at diverse groups (these public workshops may include a wide range of participants, from tech-averse to tech-savvy). Learning assessment tools have been built into the workshop program. We have created informational and promotional pamphlets, posters, lesson plans, and feedback questionnaires which we believe instill relevance and personal connection to this topic, helping to bridge gaps in accessibility for Indigenous communities while striving to build positive, reciprocal relationships. Our methodology is to approach the subject from a community needs and priorities perspective. Activities are therefore being tailored to fit each community.
1708.02393
Chadarat Phipathananunth
Panuchart Bunyakiati and Chadarat Phipathananunth
Cherry-Picking of Code Commits in Long-Running, Multi-release Software
5 pages
null
10.1145/3106237.3122818
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents Tartarian, a tool that supports maintenance of software with long-running, multi-release branches in distributed version control systems. When new maintenance code, such as bug fixes and code improvements, is committed into a branch, it is likely that such code can be applied or reused in some other branches. To do so, a developer may manually identify a commit and cherry-pick it. Tartarian can support this activity by providing commit hashtags, which the developer uses as metadata to specify their intentions when committing the code. With these tags, Tartarian uses a dependency graph, which represents the dependency constraints of the branches, and a Branch Identifier, which matches the commit hashtags against the dependency graph, to identify the applicable branches for the commits. Using Tartarian, developers may be able to maintain software with multiple releases more efficiently.
[ { "created": "Tue, 8 Aug 2017 07:43:31 GMT", "version": "v1" } ]
2017-08-09
[ [ "Bunyakiati", "Panuchart", "" ], [ "Phipathananunth", "Chadarat", "" ] ]
This paper presents Tartarian, a tool that supports maintenance of software with long-running, multi-release branches in distributed version control systems. When new maintenance code, such as bug fixes and code improvements, is committed into a branch, it is likely that such code can be applied or reused in some other branches. To do so, a developer may manually identify a commit and cherry-pick it. Tartarian can support this activity by providing commit hashtags, which the developer uses as metadata to specify their intentions when committing the code. With these tags, Tartarian uses a dependency graph, which represents the dependency constraints of the branches, and a Branch Identifier, which matches the commit hashtags against the dependency graph, to identify the applicable branches for the commits. Using Tartarian, developers may be able to maintain software with multiple releases more efficiently.
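To illustrate the hashtag-driven workflow described above, here is a hedged sketch that parses hypothetical hashtags from a commit message and matches them against a branch-dependency map to propose cherry-pick targets. The tag syntax and the dependency structure are invented for illustration and are not Tartarian's actual format.

```python
import re

# Hypothetical commit-hashtag matching: propose branches to cherry-pick a
# commit into, from tags in the commit message and a branch-dependency map.
branch_deps = {                      # branch -> branches it derives from (assumed)
    "release/1.x": ["main"],
    "release/2.x": ["main"],
    "customer/acme-1.x": ["release/1.x"],
}

def candidate_branches(commit_message):
    tags = set(re.findall(r"#(\S+)", commit_message))
    if "all-releases" in tags:
        return [b for b in branch_deps if b.startswith("release/")]
    return [t for t in tags if t in branch_deps]

msg = "Fix null-pointer in session cache #release/1.x #customer/acme-1.x"
for branch in candidate_branches(msg):
    # a maintainer (or bot) would then run: git checkout <branch> && git cherry-pick <sha>
    print("cherry-pick candidate:", branch)
```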
2006.14784
Peter Vaillancourt
Peter Z. Vaillancourt, J. Eric Coulter, Richard Knepper, Brandon Barker
Self-Scaling Clusters and Reproducible Containers to Enable Scientific Computing
Accepted for publication in the IEEE conference proceedings for HPEC 2020
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Container technologies such as Docker have become a crucial component of many software industry practices especially those pertaining to reproducibility and portability. The containerization philosophy has influenced the scientific computing community, which has begun to adopt - and even develop - container technologies (such as Singularity). Leveraging containers for scientific software often poses challenges distinct from those encountered in industry, and requires different methodologies. This is especially true for HPC. With an increasing number of options for HPC in the cloud (including NSF-funded cloud projects), there is strong motivation to seek solutions that provide flexibility to develop and deploy scientific software on a variety of computational infrastructures in a portable and reproducible way. The flexibility offered by cloud services enables virtual HPC clusters that scale on-demand, and the Cyberinfrastructure Resource Integration team in the XSEDE project has developed a set of tools which provides scalable infrastructure in the cloud. We now present a solution which uses the Nix package manager in an MPI-capable Docker container that is converted to Singularity. It provides consistent installations, dependencies, and environments in each image that are reproducible and portable across scientific computing infrastructures. We demonstrate the utility of these containers with cluster benchmark runs in a self-scaling virtual cluster using the Slurm scheduler deployed in the Jetstream and Aristotle Red Cloud OpenStack clouds. We conclude this technique is useful as a template for scientific software application containers to be used in the XSEDE compute environment, other Singularity HPC environments, and cloud computing environments.
[ { "created": "Fri, 26 Jun 2020 03:57:19 GMT", "version": "v1" }, { "created": "Mon, 3 Aug 2020 23:40:15 GMT", "version": "v2" } ]
2020-08-05
[ [ "Vaillancourt", "Peter Z.", "" ], [ "Coulter", "J. Eric", "" ], [ "Knepper", "Richard", "" ], [ "Barker", "Brandon", "" ] ]
Container technologies such as Docker have become a crucial component of many software industry practices especially those pertaining to reproducibility and portability. The containerization philosophy has influenced the scientific computing community, which has begun to adopt - and even develop - container technologies (such as Singularity). Leveraging containers for scientific software often poses challenges distinct from those encountered in industry, and requires different methodologies. This is especially true for HPC. With an increasing number of options for HPC in the cloud (including NSF-funded cloud projects), there is strong motivation to seek solutions that provide flexibility to develop and deploy scientific software on a variety of computational infrastructures in a portable and reproducible way. The flexibility offered by cloud services enables virtual HPC clusters that scale on-demand, and the Cyberinfrastructure Resource Integration team in the XSEDE project has developed a set of tools which provides scalable infrastructure in the cloud. We now present a solution which uses the Nix package manager in an MPI-capable Docker container that is converted to Singularity. It provides consistent installations, dependencies, and environments in each image that are reproducible and portable across scientific computing infrastructures. We demonstrate the utility of these containers with cluster benchmark runs in a self-scaling virtual cluster using the Slurm scheduler deployed in the Jetstream and Aristotle Red Cloud OpenStack clouds. We conclude this technique is useful as a template for scientific software application containers to be used in the XSEDE compute environment, other Singularity HPC environments, and cloud computing environments.
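As a small illustration of the Docker-to-Singularity step mentioned above, the sketch below shells out to the Singularity command-line interface to build a SIF image from a Docker registry image. The image name is a placeholder, and the rest of the paper's toolchain (Nix-built layers, MPI, Slurm integration) is not reproduced; a Singularity or Apptainer installation is assumed.

```python
import subprocess

# Hedged sketch: convert a Docker image into a Singularity SIF image.
# "myorg/nix-mpi-app:latest" is a placeholder, not an image from the paper.
subprocess.run(
    ["singularity", "build", "nix-mpi-app.sif", "docker://myorg/nix-mpi-app:latest"],
    check=True,
)
```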
1710.02282
Gabriele D'Angelo
Stefano Ferretti, Gabriele D'Angelo, Vittorio Ghini, Moreno Marzolla
The Quest for Scalability and Accuracy in the Simulation of the Internet of Things: an Approach based on Multi-Level Simulation
Proceedings of the IEEE/ACM International Symposium on Distributed Simulation and Real Time Applications (DS-RT 2017)
null
10.1109/DISTRA.2017.8167672
null
cs.PF cs.DC cs.MA cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a methodology for simulating the Internet of Things (IoT) using multi-level simulation models. With respect to conventional simulators, this approach allows us to tune the level of detail of different parts of the model without compromising the scalability of the simulation. As a use case, we have developed a two-level simulator to study the deployment of smart services over rural territories. The higher level is based on a coarse-grained, agent-based, adaptive parallel and distributed simulator. When needed, this simulator spawns OMNeT++ model instances to evaluate in more detail the issues concerned with wireless communications in restricted areas of the simulated world. The performance evaluation confirms the viability of multi-level simulations for IoT environments.
[ { "created": "Fri, 6 Oct 2017 06:05:58 GMT", "version": "v1" }, { "created": "Tue, 7 Aug 2018 07:12:41 GMT", "version": "v2" } ]
2018-08-08
[ [ "Ferretti", "Stefano", "" ], [ "D'Angelo", "Gabriele", "" ], [ "Ghini", "Vittorio", "" ], [ "Marzolla", "Moreno", "" ] ]
This paper presents a methodology for simulating the Internet of Things (IoT) using multi-level simulation models. With respect to conventional simulators, this approach allows us to tune the level of detail of different parts of the model without compromising the scalability of the simulation. As a use case, we have developed a two-level simulator to study the deployment of smart services over rural territories. The higher level is based on a coarse-grained, agent-based, adaptive parallel and distributed simulator. When needed, this simulator spawns OMNeT++ model instances to evaluate in more detail the issues concerned with wireless communications in restricted areas of the simulated world. The performance evaluation confirms the viability of multi-level simulations for IoT environments.
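The two-level structure described above can be summarized with the hedged skeleton below: a coarse agent-based loop advances all devices cheaply, and a fine-grained sub-simulation is spawned only for regions that need detailed wireless modeling. A stub function stands in for the OMNeT++ instances; the mobility model and region layout are invented.

```python
import random

# Skeleton of a two-level simulation: coarse agent loop + on-demand detail.
def detailed_wireless_sim(region, agents):
    # placeholder standing in for a packet-level (OMNeT++-style) model
    return {"region": region, "delivered": sum(random.random() > 0.1 for _ in agents)}

agents = [{"id": i, "region": random.randrange(4)} for i in range(100)]
hotspots = {2}                                    # regions needing fine-grained detail

for step in range(10):                            # coarse, agent-based level
    for a in agents:
        a["region"] = (a["region"] + random.choice([0, 1])) % 4   # cheap mobility model
    for region in hotspots:                       # refine only where it matters
        local = [a for a in agents if a["region"] == region]
        if local:
            report = detailed_wireless_sim(region, local)
            # the coarse model could be corrected with `report` here
print("done")
```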
2002.02071
Jiangsheng You Dr.
Jason You
Finite Hilbert Transform in Weighted L2 Spaces
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several new properties of the weighted Hilbert transform are obtained. When mu is zero, two Plancherel-like equations and the isotropic properties are derived. When mu is a real number, a coercivity property is derived and two iterative sequences are constructed to find the inversion. The proposed iterative sequences are also applicable to the case of a pure imaginary constant mu=i*eta with |eta|<pi/4. For mu=0.0 and 3.0, we present computer simulation results obtained using the Chebyshev series representation of the finite Hilbert transform. The results in this paper are useful for the half-scan problem in several imaging applications.
[ { "created": "Thu, 6 Feb 2020 02:13:18 GMT", "version": "v1" }, { "created": "Tue, 11 Feb 2020 03:47:58 GMT", "version": "v2" } ]
2020-02-12
[ [ "You", "Jason", "" ] ]
Several new properties of the weighted Hilbert transform are obtained. When mu is zero, two Plancherel-like equations and the isotropic properties are derived. When mu is a real number, a coercivity property is derived and two iterative sequences are constructed to find the inversion. The proposed iterative sequences are also applicable to the case of a pure imaginary constant mu=i*eta with |eta|<pi/4. For mu=0.0 and 3.0, we present computer simulation results obtained using the Chebyshev series representation of the finite Hilbert transform. The results in this paper are useful for the half-scan problem in several imaging applications.
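For context, the finite Hilbert transform that the abstract refers to is usually written as the principal-value integral below (one common sign convention; the paper's weighted-L2 setting with parameter mu is not restated here).

```latex
% Finite Hilbert transform on (-1, 1), principal-value integral
% (one standard convention; the weighted-space setup of the paper is not restated).
(Tf)(y) \;=\; \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-1}^{1} \frac{f(x)}{x-y}\,\mathrm{d}x,
\qquad y \in (-1,1).
```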
1806.00194
Chen Huang
Chen Huang, Yining Li, Chen Change Loy, Xiaoou Tang
Deep Imbalanced Learning for Face Recognition and Attribute Prediction
14 pages, 10 figures, 8 tables. Accepted to TPAMI
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data for face analysis often exhibit highly-skewed class distribution, i.e., most data belong to a few majority classes, while the minority classes only contain a scarce amount of instances. To mitigate this issue, contemporary deep learning methods typically follow classic strategies such as class re-sampling or cost-sensitive training. In this paper, we conduct extensive and systematic experiments to validate the effectiveness of these classic schemes for representation learning on class-imbalanced data. We further demonstrate that more discriminative deep representation can be learned by enforcing a deep network to maintain inter-cluster margins both within and between classes. This tight constraint effectively reduces the class imbalance inherent in the local data neighborhood, thus carving much more balanced class boundaries locally. We show that it is easy to deploy angular margins between the cluster distributions on a hypersphere manifold. Such learned Cluster-based Large Margin Local Embedding (CLMLE), when combined with a simple k-nearest cluster algorithm, shows significant improvements in accuracy over existing methods on both face recognition and face attribute prediction tasks that exhibit imbalanced class distribution.
[ { "created": "Fri, 1 Jun 2018 04:55:47 GMT", "version": "v1" }, { "created": "Tue, 30 Apr 2019 03:49:42 GMT", "version": "v2" } ]
2019-05-01
[ [ "Huang", "Chen", "" ], [ "Li", "Yining", "" ], [ "Loy", "Chen Change", "" ], [ "Tang", "Xiaoou", "" ] ]
Data for face analysis often exhibit highly-skewed class distribution, i.e., most data belong to a few majority classes, while the minority classes only contain a scarce amount of instances. To mitigate this issue, contemporary deep learning methods typically follow classic strategies such as class re-sampling or cost-sensitive training. In this paper, we conduct extensive and systematic experiments to validate the effectiveness of these classic schemes for representation learning on class-imbalanced data. We further demonstrate that more discriminative deep representation can be learned by enforcing a deep network to maintain inter-cluster margins both within and between classes. This tight constraint effectively reduces the class imbalance inherent in the local data neighborhood, thus carving much more balanced class boundaries locally. We show that it is easy to deploy angular margins between the cluster distributions on a hypersphere manifold. Such learned Cluster-based Large Margin Local Embedding (CLMLE), when combined with a simple k-nearest cluster algorithm, shows significant improvements in accuracy over existing methods on both face recognition and face attribute prediction tasks that exhibit imbalanced class distribution.
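As a point of reference for the classic baselines the abstract mentions (re-sampling and cost-sensitive training), the hedged sketch below computes inverse-frequency class weights and feeds them to a standard classifier on synthetic skewed data; the paper's CLMLE method itself is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

# Classic cost-sensitive baseline on skewed data (not the paper's CLMLE):
# inverse-frequency class weights fed to a standard classifier.
rng = np.random.default_rng(0)
n_major, n_minor = 950, 50
X = np.vstack([rng.normal(0, 1, (n_major, 2)), rng.normal(2, 1, (n_minor, 2))])
y = np.array([0] * n_major + [1] * n_minor)

weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print("class weights:", dict(zip([0, 1], weights)))   # minority class up-weighted

clf = LogisticRegression(class_weight={0: weights[0], 1: weights[1]}).fit(X, y)
print("minority-class accuracy:", clf.score(X[y == 1], y[y == 1]))
```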
2309.12578
Bokyeong Yoon
Bokyeong Yoon, Yoonsang Han, Gordon Euhyun Moon
SPION: Layer-Wise Sparse Training of Transformer via Convolutional Flood Filling
null
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by/4.0/
Sparsifying the Transformer has garnered considerable interest, as training the Transformer is very computationally demanding. Prior efforts to sparsify the Transformer have either used a fixed pattern or data-driven approach to reduce the number of operations involving the computation of multi-head attention, which is the main bottleneck of the Transformer. However, existing methods suffer from inevitable problems, such as the potential loss of essential sequence features due to the uniform fixed pattern applied across all layers, and an increase in the model size resulting from the use of additional parameters to learn sparsity patterns in attention operations. In this paper, we propose a novel sparsification scheme for the Transformer that integrates convolution filters and the flood filling method to efficiently capture the layer-wise sparse pattern in attention operations. Our sparsification approach reduces the computational complexity and memory footprint of the Transformer during training. Efficient implementations of the layer-wise sparsified attention algorithm on GPUs are developed, demonstrating a new SPION that achieves up to 3.08X speedup over existing state-of-the-art sparse Transformer models, with better evaluation quality.
[ { "created": "Fri, 22 Sep 2023 02:14:46 GMT", "version": "v1" } ]
2023-09-25
[ [ "Yoon", "Bokyeong", "" ], [ "Han", "Yoonsang", "" ], [ "Moon", "Gordon Euhyun", "" ] ]
Sparsifying the Transformer has garnered considerable interest, as training the Transformer is very computationally demanding. Prior efforts to sparsify the Transformer have either used a fixed pattern or data-driven approach to reduce the number of operations involving the computation of multi-head attention, which is the main bottleneck of the Transformer. However, existing methods suffer from inevitable problems, such as the potential loss of essential sequence features due to the uniform fixed pattern applied across all layers, and an increase in the model size resulting from the use of additional parameters to learn sparsity patterns in attention operations. In this paper, we propose a novel sparsification scheme for the Transformer that integrates convolution filters and the flood filling method to efficiently capture the layer-wise sparse pattern in attention operations. Our sparsification approach reduces the computational complexity and memory footprint of the Transformer during training. Efficient implementations of the layer-wise sparsified attention algorithm on GPUs are developed, demonstrating a new SPION that achieves up to 3.08X speedup over existing state-of-the-art sparse Transformer models, with better evaluation quality.
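The core operation behind any sparse-attention scheme like the one described above is masking the attention-score matrix before the softmax. The hedged numpy sketch below shows only that step, with a simple banded mask; SPION's convolution/flood-fill construction of the layer-wise pattern and its GPU kernels are not reproduced.

```python
import numpy as np

# Masked (sparse) attention in numpy: positions where mask == 0 are excluded
# before the softmax. The banded mask is illustrative only.
def sparse_attention(Q, K, V, mask):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores = np.where(mask, scores, -1e9)            # drop masked positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

n, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, n, d))
band = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) <= 2   # local band mask
print(sparse_attention(Q, K, V, band).shape)          # (8, 4)
```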
2312.12006
Md. Rafiul Biswas Mr.
Md. Rafiul Biswas, Ashhadul Islam, Zubair Shah, Wajdi Zaghouani, Samir Brahim Belhaouari
Can ChatGPT be Your Personal Medical Assistant?
5 pages, 7 figures, two tables, Accepted on The International Symposium on Foundation and Large Language Models (FLLM2023)
The International Symposium on Foundation and Large Language Models (FLLM2023) https://fllm-conference.org/2023/
null
null
cs.CL cs.SI
http://creativecommons.org/licenses/by/4.0/
The advanced large language model (LLM) ChatGPT has shown its potential in different domains and remains unbeaten due to its characteristics compared to other LLMs. This study aims to evaluate the potential of using a fine-tuned ChatGPT model as a personal medical assistant in the Arabic language. To do so, this study uses publicly available online question-and-answer datasets in Arabic. There are almost 430K questions and answers across 20 disease-specific categories. The GPT-3.5-turbo model was fine-tuned on a portion of this dataset. The performance of this fine-tuned model was evaluated through automated and human evaluation. The automated evaluations include perplexity, coherence, similarity, and token count. Native Arabic speakers with medical knowledge evaluated the generated text by rating relevance, accuracy, precision, logic, and originality. The overall result shows that ChatGPT has a bright future in medical assistance.
[ { "created": "Tue, 19 Dec 2023 09:54:27 GMT", "version": "v1" } ]
2023-12-20
[ [ "Biswas", "Md. Rafiul", "" ], [ "Islam", "Ashhadul", "" ], [ "Shah", "Zubair", "" ], [ "Zaghouani", "Wajdi", "" ], [ "Belhaouari", "Samir Brahim", "" ] ]
The advanced large language model (LLM) ChatGPT has shown its potential in different domains and remains unbeaten due to its characteristics compared to other LLMs. This study aims to evaluate the potential of using a fine-tuned ChatGPT model as a personal medical assistant in the Arabic language. To do so, this study uses publicly available online question-and-answer datasets in Arabic. There are almost 430K questions and answers across 20 disease-specific categories. The GPT-3.5-turbo model was fine-tuned on a portion of this dataset. The performance of this fine-tuned model was evaluated through automated and human evaluation. The automated evaluations include perplexity, coherence, similarity, and token count. Native Arabic speakers with medical knowledge evaluated the generated text by rating relevance, accuracy, precision, logic, and originality. The overall result shows that ChatGPT has a bright future in medical assistance.
2010.01247
Zhun Deng
Zhun Deng, Cynthia Dwork, Jialiang Wang, Linjun Zhang
Interpreting Robust Optimization via Adversarial Influence Functions
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust optimization has been widely used in modern data science, especially in adversarial training. However, little research has been done to quantify how robust optimization changes the optimizers and the prediction losses compared to standard training. In this paper, inspired by the influence function in robust statistics, we introduce the Adversarial Influence Function (AIF) as a tool to investigate the solution produced by robust optimization. The proposed AIF enjoys a closed form and can be calculated efficiently. To illustrate the usage of AIF, we apply it to study model sensitivity -- a quantity defined to capture the change of prediction losses on the natural data after implementing robust optimization. We use AIF to analyze how model complexity and randomized smoothing affect the model sensitivity with respect to specific models. We further derive AIF for kernel regressions, with a particular application to neural tangent kernels, and experimentally demonstrate the effectiveness of the proposed AIF. Lastly, the theory of AIF is extended to distributionally robust optimization.
[ { "created": "Sat, 3 Oct 2020 01:19:10 GMT", "version": "v1" } ]
2020-10-06
[ [ "Deng", "Zhun", "" ], [ "Dwork", "Cynthia", "" ], [ "Wang", "Jialiang", "" ], [ "Zhang", "Linjun", "" ] ]
Robust optimization has been widely used in modern data science, especially in adversarial training. However, little research has been done to quantify how robust optimization changes the optimizers and the prediction losses compared to standard training. In this paper, inspired by the influence function in robust statistics, we introduce the Adversarial Influence Function (AIF) as a tool to investigate the solution produced by robust optimization. The proposed AIF enjoys a closed form and can be calculated efficiently. To illustrate the usage of AIF, we apply it to study model sensitivity -- a quantity defined to capture the change of prediction losses on the natural data after implementing robust optimization. We use AIF to analyze how model complexity and randomized smoothing affect the model sensitivity with respect to specific models. We further derive AIF for kernel regressions, with a particular application to neural tangent kernels, and experimentally demonstrate the effectiveness of the proposed AIF. Lastly, the theory of AIF is extended to distributionally robust optimization.
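To make the abstract's question concrete ("how does robust optimization change the optimizer?"), here is a hedged numerical toy: for 1-D linear regression with an L2-bounded adversarial perturbation of the input, the inner maximization is analytic, and the sensitivity of the robust optimizer to the perturbation budget is estimated by finite differences. This is only a numerical stand-in for the closed-form AIF derived in the paper, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy: 1-D regression with an adversarial perturbation |d| <= eps of x.
# The inner max is analytic: max_{|d|<=eps} (w*(x+d) - y)^2 = (|w*x - y| + eps*|w|)^2.
# d(w*)/d(eps) is estimated by finite differences (numerical stand-in for the AIF).
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.5 * x + rng.normal(scale=0.3, size=200)

def robust_loss(w, eps):
    return np.mean((np.abs(w * x - y) + eps * abs(w)) ** 2)

def robust_optimizer(eps):
    return minimize_scalar(lambda w: robust_loss(w, eps), bounds=(-10, 10), method="bounded").x

eps, h = 0.1, 1e-3
sensitivity = (robust_optimizer(eps + h) - robust_optimizer(eps - h)) / (2 * h)
print("w*(0.1) =", robust_optimizer(eps), " d w*/d eps ~", sensitivity)
```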
2011.08529
Zhaoyi Wan
Zhaoyi Wan, Yimin Chen, Sutao Deng, Kunpeng Chen, Cong Yao, Jiebo Luo
Slender Object Detection: Diagnoses and Improvements
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we are concerned with the detection of a particular type of object with extreme aspect ratios, namely \textbf{slender objects}. In real-world scenarios, slender objects are actually very common and crucial to the objective of a detection system. However, this type of object has been largely overlooked by previous object detection algorithms. Upon our investigation, for a classical object detection method, a drastic drop of $18.9\%$ mAP on COCO is observed, if solely evaluated on slender objects. Therefore, we systematically study the problem of slender object detection in this work. Accordingly, an analytical framework with carefully designed benchmark and evaluation protocols is established, in which different algorithms and modules can be inspected and compared. Our study reveals that effective slender object detection can be achieved \textbf{with none of} (1) anchor-based localization; (2) specially designed box representations. Instead, \textbf{the critical aspect of improving slender object detection is feature adaptation}. It identifies and extends the insights of existing methods that were previously underexploited. Furthermore, we propose a feature adaptation strategy that achieves clear and consistent improvements over current representative object detection methods.
[ { "created": "Tue, 17 Nov 2020 09:39:42 GMT", "version": "v1" }, { "created": "Sat, 21 Nov 2020 05:33:07 GMT", "version": "v2" }, { "created": "Thu, 24 Dec 2020 09:14:36 GMT", "version": "v3" }, { "created": "Wed, 7 Apr 2021 02:35:15 GMT", "version": "v4" } ]
2021-04-08
[ [ "Wan", "Zhaoyi", "" ], [ "Chen", "Yimin", "" ], [ "Deng", "Sutao", "" ], [ "Chen", "Kunpeng", "" ], [ "Yao", "Cong", "" ], [ "Luo", "Jiebo", "" ] ]
In this paper, we are concerned with the detection of a particular type of object with extreme aspect ratios, namely \textbf{slender objects}. In real-world scenarios, slender objects are actually very common and crucial to the objective of a detection system. However, this type of object has been largely overlooked by previous object detection algorithms. Upon our investigation, for a classical object detection method, a drastic drop of $18.9\%$ mAP on COCO is observed, if solely evaluated on slender objects. Therefore, we systematically study the problem of slender object detection in this work. Accordingly, an analytical framework with carefully designed benchmark and evaluation protocols is established, in which different algorithms and modules can be inspected and compared. Our study reveals that effective slender object detection can be achieved \textbf{with none of} (1) anchor-based localization; (2) specially designed box representations. Instead, \textbf{the critical aspect of improving slender object detection is feature adaptation}. It identifies and extends the insights of existing methods that were previously underexploited. Furthermore, we propose a feature adaptation strategy that achieves clear and consistent improvements over current representative object detection methods.
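The hedged helper below shows the kind of evaluation-subset filter implied by the abstract: selecting "slender" boxes by aspect ratio from COCO-style annotations so that a detector can be scored on that subset alone. The 1:5 threshold is an assumption for illustration; the paper defines its own protocol.

```python
# Select "slender" ground-truth boxes (extreme aspect ratio) from COCO-style
# annotations so a detector can be evaluated on that subset only.
def slender_subset(annotations, ratio=5.0):
    keep = []
    for ann in annotations:
        x, y, w, h = ann["bbox"]                 # COCO format: [x, y, width, height]
        if min(w, h) > 0 and max(w / h, h / w) >= ratio:
            keep.append(ann)
    return keep

anns = [{"bbox": [0, 0, 100, 10]}, {"bbox": [0, 0, 32, 30]}, {"bbox": [5, 5, 4, 60]}]
print(len(slender_subset(anns)))   # 2 of the 3 boxes are slender
```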
1012.5041
Pablo Sánchez-Moreno
P. Sánchez-Moreno, A. Zarzo and J.S. Dehesa
Jensen divergence based on Fisher's information
8 pages, 8 figures
J. Phys. A: Math. Theor. 45 (2012) 125305
10.1088/1751-8113/45/12/125305
null
cs.IT math.IT physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The measure of Jensen-Fisher divergence between probability distributions is introduced and its theoretical grounds set up. This quantity, in contrast to the remaining Jensen divergences, is very sensitive to the fluctuations of the probability distributions because it is controlled by the (local) Fisher information, which is a gradient functional of the distribution. So, it is appropriate and informative when studying the similarity of distributions, mainly for those having oscillatory character. The new Jensen-Fisher divergence shares with the Jensen-Shannon divergence the following properties: non-negativity, additivity when applied to an arbitrary number of probability densities, symmetry under exchange of these densities, vanishing if and only if all the densities are equal, and definiteness even when these densities present non-common zeros. Moreover, the Jensen-Fisher divergence is shown to be expressed in terms of the relative Fisher information as the Jensen-Shannon divergence does in terms of the Kullback-Leibler or relative Shannon entropy. Finally the Jensen-Shannon and Jensen-Fisher divergences are compared for the following three large, non-trivial and qualitatively different families of probability distributions: the sinusoidal, generalized gamma-like and Rakhmanov-Hermite distributions.
[ { "created": "Wed, 22 Dec 2010 17:15:17 GMT", "version": "v1" } ]
2013-01-08
[ [ "Sánchez-Moreno", "P.", "" ], [ "Zarzo", "A.", "" ], [ "Dehesa", "J. S.", "" ] ]
The measure of Jensen-Fisher divergence between probability distributions is introduced and its theoretical grounds set up. This quantity, in contrast to the remaining Jensen divergences, is very sensitive to the fluctuations of the probability distributions because it is controlled by the (local) Fisher information, which is a gradient functional of the distribution. So, it is appropriate and informative when studying the similarity of distributions, mainly for those having oscillatory character. The new Jensen-Fisher divergence shares with the Jensen-Shannon divergence the following properties: non-negativity, additivity when applied to an arbitrary number of probability densities, symmetry under exchange of these densities, vanishing if and only if all the densities are equal, and definiteness even when these densities present non-common zeros. Moreover, the Jensen-Fisher divergence is shown to be expressed in terms of the relative Fisher information as the Jensen-Shannon divergence does in terms of the Kullback-Leibler or relative Shannon entropy. Finally the Jensen-Shannon and Jensen-Fisher divergences are compared for the following three large, non-trivial and qualitatively different families of probability distributions: the sinusoidal, generalized gamma-like and Rakhmanov-Hermite distributions.
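For reference, the parallel the abstract draws can be written out as below for two densities with equal weights; the two-density, equal-weight form is an assumption of this sketch (the paper treats arbitrary numbers of densities and weights), and F(.||.) denotes the relative Fisher information.

```latex
% Jensen-Shannon vs. Jensen-Fisher divergence for two densities with equal
% weights (the paper's general, weighted, n-density form is not restated).
% Here m = (rho_1 + rho_2)/2 and F(.||.) is the relative Fisher information.
\mathrm{JSD}(\rho_1,\rho_2) = \tfrac{1}{2}\,D_{\mathrm{KL}}(\rho_1\,\|\,m) + \tfrac{1}{2}\,D_{\mathrm{KL}}(\rho_2\,\|\,m),
\qquad
\mathrm{JFD}(\rho_1,\rho_2) = \tfrac{1}{2}\,F(\rho_1\,\|\,m) + \tfrac{1}{2}\,F(\rho_2\,\|\,m),
\quad
F(\rho\,\|\,\sigma) = \int \rho(x)\,\Bigl[\tfrac{\mathrm{d}}{\mathrm{d}x}\,\ln\frac{\rho(x)}{\sigma(x)}\Bigr]^{2}\mathrm{d}x .
```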
1505.07293
Vijay Badrinarayanan
Vijay Badrinarayanan, Ankur Handa, Roberto Cipolla
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Robust Semantic Pixel-Wise Labelling
This version was first submitted to CVPR' 15 on November 14, 2014 with paper Id 1468. A similar architecture was proposed more recently on May 17, 2015, see http://arxiv.org/pdf/1505.04366.pdf
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel deep architecture, SegNet, for semantic pixel wise image labelling. SegNet has several attractive properties; (i) it only requires forward evaluation of a fully learnt function to obtain smooth label predictions, (ii) with increasing depth, a larger context is considered for pixel labelling which improves accuracy, and (iii) it is easy to visualise the effect of feature activation(s) in the pixel label space at any depth. SegNet is composed of a stack of encoders followed by a corresponding decoder stack which feeds into a soft-max classification layer. The decoders help map low resolution feature maps at the output of the encoder stack to full input image size feature maps. This addresses an important drawback of recent deep learning approaches which have adopted networks designed for object categorization for pixel wise labelling. These methods lack a mechanism to map deep layer feature maps to input dimensions. They resort to ad hoc methods to upsample features, e.g. by replication. This results in noisy predictions and also restricts the number of pooling layers in order to avoid too much upsampling and thus reduces spatial context. SegNet overcomes these problems by learning to map encoder outputs to image pixel labels. We test the performance of SegNet on outdoor RGB scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results show that SegNet achieves state-of-the-art performance even without use of additional cues such as depth, video frames or post-processing with CRF models.
[ { "created": "Wed, 27 May 2015 12:54:17 GMT", "version": "v1" } ]
2015-05-28
[ [ "Badrinarayanan", "Vijay", "" ], [ "Handa", "Ankur", "" ], [ "Cipolla", "Roberto", "" ] ]
We propose a novel deep architecture, SegNet, for semantic pixel wise image labelling. SegNet has several attractive properties; (i) it only requires forward evaluation of a fully learnt function to obtain smooth label predictions, (ii) with increasing depth, a larger context is considered for pixel labelling which improves accuracy, and (iii) it is easy to visualise the effect of feature activation(s) in the pixel label space at any depth. SegNet is composed of a stack of encoders followed by a corresponding decoder stack which feeds into a soft-max classification layer. The decoders help map low resolution feature maps at the output of the encoder stack to full input image size feature maps. This addresses an important drawback of recent deep learning approaches which have adopted networks designed for object categorization for pixel wise labelling. These methods lack a mechanism to map deep layer feature maps to input dimensions. They resort to ad hoc methods to upsample features, e.g. by replication. This results in noisy predictions and also restricts the number of pooling layers in order to avoid too much upsampling and thus reduces spatial context. SegNet overcomes these problems by learning to map encoder outputs to image pixel labels. We test the performance of SegNet on outdoor RGB scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results show that SegNet achieves state-of-the-art performance even without use of additional cues such as depth, video frames or post-processing with CRF models.
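For intuition only, here is a minimal encoder-decoder sketch in the spirit of the description above; it is not the authors' SegNet architecture (layer counts, channel widths, the learned upsampling operator, and the class count are illustrative assumptions).

```python
# Hedged sketch: a tiny encoder-decoder pixel labeller, NOT the authors' SegNet.
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, num_classes=11):  # class count is illustrative
        super().__init__()
        # Encoder: each stage halves spatial resolution.
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Decoder: learned upsampling back to the input resolution,
        # instead of ad hoc feature replication.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.dec(self.enc(x))  # per-pixel class scores (logits)

if __name__ == "__main__":
    model = TinyEncoderDecoder()
    scores = model(torch.randn(1, 3, 64, 64))
    print(scores.shape)  # torch.Size([1, 11, 64, 64])
```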
2308.14326
Maximilian St\"abler
Maximilian Staebler, Frank Koester, Christoph Schlueter-Langdon
Towards solving ontological dissonance using network graphs
5 pages, AMCIS 2023 proceedings
null
null
null
cs.AI cs.SI
http://creativecommons.org/licenses/by/4.0/
Data Spaces are an emerging concept for the trusted implementation of data-based applications and business models, offering a high degree of flexibility and sovereignty to all stakeholders. As Data Spaces are currently emerging in different domains such as mobility, health or food, semantic interfaces need to be identified and implemented to ensure the technical interoperability of these Data Spaces. This paper consolidates data models from 13 different domains and analyzes the ontological dissonance of these domains. Using a network graph, central data models and ontology attributes are identified, while the semantic heterogeneity of these domains is described qualitatively. The research outlook describes how these results help to connect different Data Spaces across domains.
[ { "created": "Mon, 28 Aug 2023 06:10:26 GMT", "version": "v1" } ]
2023-08-29
[ [ "Staebler", "Maximilian", "" ], [ "Koester", "Frank", "" ], [ "Schlueter-Langdon", "Christoph", "" ] ]
Data Spaces are an emerging concept for the trusted implementation of data-based applications and business models, offering a high degree of flexibility and sovereignty to all stakeholders. As Data Spaces are currently emerging in different domains such as mobility, health or food, semantic interfaces need to be identified and implemented to ensure the technical interoperability of these Data Spaces. This paper consolidates data models from 13 different domains and analyzes the ontological dissonance of these domains. Using a network graph, central data models and ontology attributes are identified, while the semantic heterogeneity of these domains is described qualitatively. The research outlook describes how these results help to connect different Data Spaces across domains.
2012.11334
Viacheslav Dubeyko
Viacheslav Dubeyko
Cognitive Computing in Data-centric Paradigm
null
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge is the most precious asset of humankind. People extract experience from data that convey reality to us through our feelings. Generally speaking, an analogy can be drawn between how humankind and an artificial system elaborate knowledge. Digital data are the "feelings" of an artificial system, and such a system needs a method to extract knowledge from its universe of data. The cognitive computing paradigm implies that a system should be able to extract knowledge from raw data without any human-made algorithm. The first step of the paradigm is the analysis of raw data streams through the discovery of repeatable data patterns. Knowledge of the relationships among these patterns provides a way to see structures and to generalize concepts, with the goal of synthesizing new statements. The cognitive computing paradigm is capable of mimicking the human ability to generalize notions. The generalization step provides the basis for discovering abstract notions, revealing abstract relations among patterns, and deriving general rules of structure synthesis. By continuing the process of structure generalization, it is possible to build a multi-level hierarchy of abstract notions. Moreover, discovering generalized classes of notions is the first step towards a paradigm of artificial analytical thinking. The most critical responsibility of cognitive computing could be the classification of data and the recognition of the input data stream's states. The synthesis of new statements creates the foundation for foreseeing possible data states and for elaborating knowledge about new data classes by synthesizing and checking hypotheses.
[ { "created": "Mon, 14 Dec 2020 22:39:53 GMT", "version": "v1" } ]
2020-12-22
[ [ "Dubeyko", "Viacheslav", "" ] ]
Knowledge is the most precious asset of humankind. People extract experience from data that convey reality to us through our feelings. Generally speaking, an analogy can be drawn between how humankind and an artificial system elaborate knowledge. Digital data are the "feelings" of an artificial system, and such a system needs a method to extract knowledge from its universe of data. The cognitive computing paradigm implies that a system should be able to extract knowledge from raw data without any human-made algorithm. The first step of the paradigm is the analysis of raw data streams through the discovery of repeatable data patterns. Knowledge of the relationships among these patterns provides a way to see structures and to generalize concepts, with the goal of synthesizing new statements. The cognitive computing paradigm is capable of mimicking the human ability to generalize notions. The generalization step provides the basis for discovering abstract notions, revealing abstract relations among patterns, and deriving general rules of structure synthesis. By continuing the process of structure generalization, it is possible to build a multi-level hierarchy of abstract notions. Moreover, discovering generalized classes of notions is the first step towards a paradigm of artificial analytical thinking. The most critical responsibility of cognitive computing could be the classification of data and the recognition of the input data stream's states. The synthesis of new statements creates the foundation for foreseeing possible data states and for elaborating knowledge about new data classes by synthesizing and checking hypotheses.
1603.02381
Ragesh K Ramachandran
Ragesh K Ramachandran and Spring Berman
The Effect of Communication Topology on Scalar Field Estimation by Networked Robotic Swarms
null
null
null
null
cs.RO cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the problem of reconstructing a two-dimensional scalar field using a swarm of networked robots with local communication capabilities. We consider the communication network of the robots to form either a chain or a grid topology. We formulate the reconstruction problem as an optimization problem that is constrained by first-order linear dynamics on a large, interconnected system. To solve this problem, we employ an optimization-based scheme that uses a gradient-based method with an analytical computation of the gradient. In addition, we derive bounds on the trace of the observability Gramian of the system, which help us to quantify and compare the estimation capability of chain and grid networks. A comparison based on a performance measure related to the H2 norm of the system is also used to study the robustness of the network topologies. Our results are validated using both simulated scalar fields and actual ocean salinity data.
[ { "created": "Tue, 8 Mar 2016 04:51:09 GMT", "version": "v1" } ]
2016-03-09
[ [ "Ramachandran", "Ragesh K", "" ], [ "Berman", "Spring", "" ] ]
This paper studies the problem of reconstructing a two-dimensional scalar field using a swarm of networked robots with local communication capabilities. We consider the communication network of the robots to form either a chain or a grid topology. We formulate the reconstruction problem as an optimization problem that is constrained by first-order linear dynamics on a large, interconnected system. To solve this problem, we employ an optimization-based scheme that uses a gradient-based method with an analytical computation of the gradient. In addition, we derive bounds on the trace of the observability Gramian of the system, which help us to quantify and compare the estimation capability of chain and grid networks. A comparison based on a performance measure related to the H2 norm of the system is also used to study the robustness of the network topologies. Our results are validated using both simulated scalar fields and actual ocean salinity data.
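The observability-Gramian comparison mentioned above can be illustrated with a small sketch; the system matrices below are illustrative placeholders, not the chain or grid network models analyzed in the paper.

```python
# Hedged sketch: finite-horizon observability Gramian of a discrete-time
# linear system x_{k+1} = A x_k, y_k = C x_k. The matrices are toy examples.
import numpy as np

def observability_gramian(A, C, horizon=50):
    """W_o = sum_{k=0}^{horizon-1} (A^T)^k C^T C A^k."""
    n = A.shape[0]
    W, Ak = np.zeros((n, n)), np.eye(n)
    for _ in range(horizon):
        W += Ak.T @ (C.T @ C) @ Ak
        Ak = A @ Ak
    return W

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # stable toy dynamics
C = np.array([[1.0, 0.0]])               # single scalar measurement
W = observability_gramian(A, C)
print("trace of observability Gramian:", np.trace(W))
```

A larger trace loosely indicates that the measurements carry more information about the state, which is the kind of quantity one might compare across communication topologies.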
2304.00173
Rami Botros
Rami Botros, Rohit Prabhavalkar, Johan Schalkwyk, Ciprian Chelba, Tara N. Sainath, Fran\c{c}oise Beaufays
Lego-Features: Exporting modular encoder features for streaming and deliberation ASR
null
null
null
null
cs.CL cs.AI cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
In end-to-end (E2E) speech recognition models, a representational tight-coupling inevitably emerges between the encoder and the decoder. We build upon recent work that has begun to explore building encoders with modular encoded representations, such that encoders and decoders from different models can be stitched together in a zero-shot manner without further fine-tuning. While previous research only addresses full-context speech models, we explore the problem in a streaming setting as well. Our framework builds on top of existing encoded representations, converting them to modular features, dubbed as Lego-Features, without modifying the pre-trained model. The features remain interchangeable when the model is retrained with distinct initializations. Though sparse, we show that the Lego-Features are powerful when tested with RNN-T or LAS decoders, maintaining high-quality downstream performance. They are also rich enough to represent the first-pass prediction during two-pass deliberation. In this scenario, they outperform the N-best hypotheses, since they do not need to be supplemented with acoustic features to deliver the best results. Moreover, generating the Lego-Features does not require beam search or auto-regressive computation. Overall, they present a modular, powerful and cheap alternative to the standard encoder output, as well as the N-best hypotheses.
[ { "created": "Fri, 31 Mar 2023 23:33:21 GMT", "version": "v1" } ]
2023-04-04
[ [ "Botros", "Rami", "" ], [ "Prabhavalkar", "Rohit", "" ], [ "Schalkwyk", "Johan", "" ], [ "Chelba", "Ciprian", "" ], [ "Sainath", "Tara N.", "" ], [ "Beaufays", "Françoise", "" ] ]
In end-to-end (E2E) speech recognition models, a representational tight-coupling inevitably emerges between the encoder and the decoder. We build upon recent work that has begun to explore building encoders with modular encoded representations, such that encoders and decoders from different models can be stitched together in a zero-shot manner without further fine-tuning. While previous research only addresses full-context speech models, we explore the problem in a streaming setting as well. Our framework builds on top of existing encoded representations, converting them to modular features, dubbed as Lego-Features, without modifying the pre-trained model. The features remain interchangeable when the model is retrained with distinct initializations. Though sparse, we show that the Lego-Features are powerful when tested with RNN-T or LAS decoders, maintaining high-quality downstream performance. They are also rich enough to represent the first-pass prediction during two-pass deliberation. In this scenario, they outperform the N-best hypotheses, since they do not need to be supplemented with acoustic features to deliver the best results. Moreover, generating the Lego-Features does not require beam search or auto-regressive computation. Overall, they present a modular, powerful and cheap alternative to the standard encoder output, as well as the N-best hypotheses.
1809.01906
Felix Leibfried
Felix Leibfried, Peter Vrancx
Model-Based Regularization for Deep Reinforcement Learning with Transcoder Networks
Presented at the NIPS Deep Reinforcement Learning Workshop, Montreal, Canada, 2018
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a new optimization objective for value-based deep reinforcement learning. We extend conventional Deep Q-Networks (DQNs) by adding a model-learning component yielding a transcoder network. The prediction errors for the model are included in the basic DQN loss as additional regularizers. This augmented objective leads to a richer training signal that provides feedback at every time step. Moreover, because learning an environment model shares a common structure with the RL problem, we hypothesize that the resulting objective improves both sample efficiency and performance. We empirically confirm our hypothesis on a range of 20 games from the Atari benchmark attaining superior results over vanilla DQN without model-based regularization.
[ { "created": "Thu, 6 Sep 2018 09:49:18 GMT", "version": "v1" }, { "created": "Tue, 20 Nov 2018 13:30:16 GMT", "version": "v2" } ]
2018-11-21
[ [ "Leibfried", "Felix", "" ], [ "Vrancx", "Peter", "" ] ]
This paper proposes a new optimization objective for value-based deep reinforcement learning. We extend conventional Deep Q-Networks (DQNs) by adding a model-learning component yielding a transcoder network. The prediction errors for the model are included in the basic DQN loss as additional regularizers. This augmented objective leads to a richer training signal that provides feedback at every time step. Moreover, because learning an environment model shares a common structure with the RL problem, we hypothesize that the resulting objective improves both sample efficiency and performance. We empirically confirm our hypothesis on a range of 20 games from the Atari benchmark attaining superior results over vanilla DQN without model-based regularization.
2202.13677
Sean Kauffman
Sean Kauffman, Martin Zimmermann
The Complexity of Evaluating nfer
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nfer is a rule-based language for abstracting event streams into a hierarchy of intervals with data. Nfer has multiple implementations and has been applied in the analysis of spacecraft telemetry and autonomous vehicle logs. This work provides the first complexity analysis of nfer evaluation, i.e., the problem of deciding whether a given interval is generated by applying rules. We show that the full nfer language is undecidable and that this depends on both recursion in the rules and an infinite data domain. By restricting either or both of those capabilities, we obtain tight decidability results. We also examine the impact on complexity of exclusive rules and minimality. For the most practical case, which is minimality with finite data, we provide a polynomial-time algorithm.
[ { "created": "Mon, 28 Feb 2022 10:53:09 GMT", "version": "v1" }, { "created": "Fri, 1 Jul 2022 12:37:22 GMT", "version": "v2" }, { "created": "Mon, 21 Nov 2022 12:08:18 GMT", "version": "v3" } ]
2022-11-22
[ [ "Kauffman", "Sean", "" ], [ "Zimmermann", "Martin", "" ] ]
Nfer is a rule-based language for abstracting event streams into a hierarchy of intervals with data. Nfer has multiple implementations and has been applied in the analysis of spacecraft telemetry and autonomous vehicle logs. This work provides the first complexity analysis of nfer evaluation, i.e., the problem of deciding whether a given interval is generated by applying rules. We show that the full nfer language is undecidable and that this depends on both recursion in the rules and an infinite data domain. By restricting either or both of those capabilities, we obtain tight decidability results. We also examine the impact on complexity of exclusive rules and minimality. For the most practical case, which is minimality with finite data, we provide a polynomial-time algorithm.
2209.10767
Srikanth Malla
Srikanth Malla, Chiho Choi, Isht Dwivedi, Joon Hee Choi, Jiachen Li
DRAMA: Joint Risk Localization and Captioning in Driving
WACV 2023 (Winter Conference on Applications of Computer Vision)
null
null
null
cs.CV cs.AI cs.LG cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Considering the functionality of situational awareness in safety-critical automation systems, the perception of risk in driving scenes and its explainability is of particular importance for autonomous and cooperative driving. Toward this goal, this paper proposes a new research direction of joint risk localization in driving scenes and its risk explanation as a natural language description. Due to the lack of standard benchmarks, we collected a large-scale dataset, DRAMA (Driving Risk Assessment Mechanism with A captioning module), which consists of 17,785 interactive driving scenarios collected in Tokyo, Japan. Our DRAMA dataset accommodates video- and object-level questions on driving risks with associated important objects to achieve the goal of visual captioning as a free-form language description utilizing closed and open-ended responses for multi-level questions, which can be used to evaluate a range of visual captioning capabilities in driving scenarios. We make this data available to the community for further research. Using DRAMA, we explore multiple facets of joint risk localization and captioning in interactive driving scenarios. In particular, we benchmark various multi-task prediction architectures and provide a detailed analysis of joint risk localization and risk captioning. The data set is available at https://usa.honda-ri.com/drama
[ { "created": "Thu, 22 Sep 2022 03:53:56 GMT", "version": "v1" }, { "created": "Wed, 5 Oct 2022 21:09:10 GMT", "version": "v2" } ]
2022-10-07
[ [ "Malla", "Srikanth", "" ], [ "Choi", "Chiho", "" ], [ "Dwivedi", "Isht", "" ], [ "Choi", "Joon Hee", "" ], [ "Li", "Jiachen", "" ] ]
Considering the functionality of situational awareness in safety-critical automation systems, the perception of risk in driving scenes and its explainability is of particular importance for autonomous and cooperative driving. Toward this goal, this paper proposes a new research direction of joint risk localization in driving scenes and its risk explanation as a natural language description. Due to the lack of standard benchmarks, we collected a large-scale dataset, DRAMA (Driving Risk Assessment Mechanism with A captioning module), which consists of 17,785 interactive driving scenarios collected in Tokyo, Japan. Our DRAMA dataset accommodates video- and object-level questions on driving risks with associated important objects to achieve the goal of visual captioning as a free-form language description utilizing closed and open-ended responses for multi-level questions, which can be used to evaluate a range of visual captioning capabilities in driving scenarios. We make this data available to the community for further research. Using DRAMA, we explore multiple facets of joint risk localization and captioning in interactive driving scenarios. In particular, we benchmark various multi-task prediction architectures and provide a detailed analysis of joint risk localization and risk captioning. The data set is available at https://usa.honda-ri.com/drama
2211.06223
Linqi Ye Dr.
Linqi Ye, Xueqian Wang, Houde Liu, Bin Liang
The Simplest Balance Controller for Dynamic Walking
null
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Humans can balance very well during walking, even when perturbed. But it seems difficult to achieve robust walking for bipedal robots. Here we describe the simplest balance controller that leads to robust walking for a linear inverted pendulum (LIP) model. The main idea is to use a linear function of the body velocity to determine the next foot placement, which we call linear foot placement control (LFPC). By using the Poincar\'e map, a balance criterion is derived, which shows that LFPC is stable when the velocity-feedback coefficient is located in a certain range. And that range is much bigger when stepping faster, which indicates "faster stepping, easier to balance". We show that various gaits can be generated by adjusting the controller parameters in LFPC. Particularly, a dead-beat controller is discovered that can lead to steady-state walking in just one step. The effectiveness of LFPC is verified through Matlab simulation as well as V-REP simulation for both 2D and 3D walking. The main feature of LFPC is its simplicity and inherent robustness, which may help us understand the essence of how to maintain balance in dynamic walking.
[ { "created": "Fri, 11 Nov 2022 14:19:40 GMT", "version": "v1" } ]
2022-11-14
[ [ "Ye", "Linqi", "" ], [ "Wang", "Xueqian", "" ], [ "Liu", "Houde", "" ], [ "Liang", "Bin", "" ] ]
Humans can balance very well during walking, even when perturbed. But it seems difficult to achieve robust walking for bipedal robots. Here we describe the simplest balance controller that leads to robust walking for a linear inverted pendulum (LIP) model. The main idea is to use a linear function of the body velocity to determine the next foot placement, which we call linear foot placement control (LFPC). By using the Poincar\'e map, a balance criterion is derived, which shows that LFPC is stable when the velocity-feedback coefficient is located in a certain range. And that range is much bigger when stepping faster, which indicates "faster stepping, easier to balance". We show that various gaits can be generated by adjusting the controller parameters in LFPC. Particularly, a dead-beat controller is discovered that can lead to steady-state walking in just one step. The effectiveness of LFPC is verified through Matlab simulation as well as V-REP simulation for both 2D and 3D walking. The main feature of LFPC is its simplicity and inherent robustness, which may help us understand the essence of how to maintain balance in dynamic walking.
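A minimal one-dimensional sketch of the idea, assuming a fixed step time and the standard LIP step-to-step map; the gains and parameters are illustrative, and the authors' derivation may differ in detail.

```python
# Hedged sketch: step-to-step velocity map of a 1-D linear inverted pendulum
# with linear foot placement p = k * v (foot placed k*v ahead of the CoM at
# each step transition). This is a simplified reading of the controller.
import numpy as np

g, h, T = 9.81, 1.0, 0.4            # gravity, CoM height, fixed step time
w = np.sqrt(g / h)                   # LIP natural frequency
c, s = np.cosh(w * T), np.sinh(w * T)

# Step-to-step map: v_next = (c - k*w*s) * v  =>  stable iff |c - k*w*s| < 1.
k_lo, k_hi = (c - 1) / (w * s), (c + 1) / (w * s)
k_deadbeat = c / (w * s)             # drives the velocity error to zero in one step
print(f"stabilizing range of k: ({k_lo:.3f}, {k_hi:.3f}), dead-beat k = {k_deadbeat:.3f}")

v = 0.8                              # initial (perturbed) CoM velocity, m/s
k = 0.3                              # any gain inside the stabilizing range
for step in range(6):
    v = (c - k * w * s) * v
    print(f"step {step + 1}: CoM velocity at transition = {v:.4f} m/s")
```

In this toy map, shortening the step time T shrinks sinh(wT) and therefore widens the stabilizing range of k, which is one way to read the observation that faster stepping makes balancing easier.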
2309.16783
David Widemann
Lakshmi Nair, David Widemann, Brad Turcott, Nick Moore, Alexandra Wleklinski, Darius Bunandar, Ioannis Papavasileiou, Shihu Wang, Eric Logan
Photonic Accelerators for Image Segmentation in Autonomous Driving and Defect Detection
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Photonic computing promises faster and more energy-efficient deep neural network (DNN) inference than traditional digital hardware. Advances in photonic computing can have profound impacts on applications such as autonomous driving and defect detection that depend on fast, accurate and energy efficient execution of image segmentation models. In this paper, we investigate image segmentation on photonic accelerators to explore: a) the types of image segmentation DNN architectures that are best suited for photonic accelerators, and b) the throughput and energy efficiency of executing the different image segmentation models on photonic accelerators, along with the trade-offs involved therein. Specifically, we demonstrate that certain segmentation models exhibit negligible loss in accuracy (compared to digital float32 models) when executed on photonic accelerators, and explore the empirical reasoning for their robustness. We also discuss techniques for recovering accuracy in the case of models that do not perform well. Further, we compare throughput (inferences-per-second) and energy consumption estimates for different image segmentation workloads on photonic accelerators. We discuss the challenges and potential optimizations that can help improve the application of photonic accelerators to such computer vision tasks.
[ { "created": "Thu, 28 Sep 2023 18:22:41 GMT", "version": "v1" }, { "created": "Tue, 3 Oct 2023 16:34:13 GMT", "version": "v2" } ]
2023-10-04
[ [ "Nair", "Lakshmi", "" ], [ "Widemann", "David", "" ], [ "Turcott", "Brad", "" ], [ "Moore", "Nick", "" ], [ "Wleklinski", "Alexandra", "" ], [ "Bunandar", "Darius", "" ], [ "Papavasileiou", "Ioannis", "" ], [ "Wang", "Shihu", "" ], [ "Logan", "Eric", "" ] ]
Photonic computing promises faster and more energy-efficient deep neural network (DNN) inference than traditional digital hardware. Advances in photonic computing can have profound impacts on applications such as autonomous driving and defect detection that depend on fast, accurate and energy efficient execution of image segmentation models. In this paper, we investigate image segmentation on photonic accelerators to explore: a) the types of image segmentation DNN architectures that are best suited for photonic accelerators, and b) the throughput and energy efficiency of executing the different image segmentation models on photonic accelerators, along with the trade-offs involved therein. Specifically, we demonstrate that certain segmentation models exhibit negligible loss in accuracy (compared to digital float32 models) when executed on photonic accelerators, and explore the empirical reasoning for their robustness. We also discuss techniques for recovering accuracy in the case of models that do not perform well. Further, we compare throughput (inferences-per-second) and energy consumption estimates for different image segmentation workloads on photonic accelerators. We discuss the challenges and potential optimizations that can help improve the application of photonic accelerators to such computer vision tasks.
2102.08085
Fouzia Altaf Ms
Fouzia Altaf, Syed M.S. Islam, Naeem K. Janjua, Naveed Akhtar
Boosting Deep Transfer Learning for COVID-19 Classification
5 pages
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
COVID-19 classification using chest Computed Tomography (CT) has been found pragmatically useful by several studies. Due to the lack of annotated samples, these studies recommend transfer learning and explore the choices of pre-trained models and data augmentation. However, it is still unknown if there are better strategies than vanilla transfer learning for more accurate COVID-19 classification with limited CT data. This paper provides an affirmative answer, devising a novel `model' augmentation technique that allows a considerable performance boost to transfer learning for the task. Our method systematically reduces the distributional shift between the source and target domains and considers augmenting deep learning with complementary representation learning techniques. We establish the efficacy of our method with publicly available datasets and models, along with identifying contrasting observations in the previous studies.
[ { "created": "Tue, 16 Feb 2021 11:15:23 GMT", "version": "v1" } ]
2021-02-17
[ [ "Altaf", "Fouzia", "" ], [ "Islam", "Syed M. S.", "" ], [ "Janjua", "Naeem K.", "" ], [ "Akhtar", "Naveed", "" ] ]
COVID-19 classification using chest Computed Tomography (CT) has been found pragmatically useful by several studies. Due to the lack of annotated samples, these studies recommend transfer learning and explore the choices of pre-trained models and data augmentation. However, it is still unknown if there are better strategies than vanilla transfer learning for more accurate COVID-19 classification with limited CT data. This paper provides an affirmative answer, devising a novel `model' augmentation technique that allows a considerable performance boost to transfer learning for the task. Our method systematically reduces the distributional shift between the source and target domains and considers augmenting deep learning with complementary representation learning techniques. We establish the efficacy of our method with publicly available datasets and models, along with identifying contrasting observations in the previous studies.
2011.01671
Christian Berger
Christian Berger, Hans P. Reiser, Jo\~ao Sousa, Alysson Bessani
AWARE: Adaptive Wide-Area Replication for Fast and Resilient Byzantine Consensus
This paper consists of 16 pages in total. This paper is the accepted version to be published in IEEE Transactions on Dependable and Secure Computing (2020). For the published version refer to DOI https://doi.org/10.1109/TDSC.2020.3030605
null
10.1109/TDSC.2020.3030605
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With upcoming blockchain infrastructures, world-spanning Byzantine consensus is getting practical and necessary. In geographically distributed systems, the pace at which consensus is achieved is limited by the heterogeneous latencies of connections between replicas. If deployed on a wide-area network, consensus-based systems benefit from weighted replication, an approach that utilizes extra replicas and assigns higher voting power to well-connected replicas. This enables more choice in quorum formation and replicas can leverage proportionally smaller quorums to advance, thus decreasing consensus latency. However, the system needs a solution to autonomously adjust to its environment if network conditions change or faults occur. We present Adaptive Wide-Area REplication (AWARE), a mechanism which improves the geographical scalability of consensus with nodes being widely spread across the world. Essentially, AWARE is an automated and dynamic voting weight tuning and leader positioning scheme, which supports the emergence of fast quorums in the system. It employs a reliable self-monitoring process and provides a prediction model seeking to minimize the system's consensus latency. In experiments using several AWS EC2 regions, AWARE dynamically optimizes consensus latency by self-reliantly finding a fast weight configuration yielding latency gains observed by clients located across the globe.
[ { "created": "Tue, 3 Nov 2020 12:58:39 GMT", "version": "v1" } ]
2020-11-04
[ [ "Berger", "Christian", "" ], [ "Reiser", "Hans P.", "" ], [ "Sousa", "João", "" ], [ "Bessani", "Alysson", "" ] ]
With upcoming blockchain infrastructures, world-spanning Byzantine consensus is getting practical and necessary. In geographically distributed systems, the pace at which consensus is achieved is limited by the heterogeneous latencies of connections between replicas. If deployed on a wide-area network, consensus-based systems benefit from weighted replication, an approach that utilizes extra replicas and assigns higher voting power to well-connected replicas. This enables more choice in quorum formation and replicas can leverage proportionally smaller quorums to advance, thus decreasing consensus latency. However, the system needs a solution to autonomously adjust to its environment if network conditions change or faults occur. We present Adaptive Wide-Area REplication (AWARE), a mechanism which improves the geographical scalability of consensus with nodes being widely spread across the world. Essentially, AWARE is an automated and dynamic voting weight tuning and leader positioning scheme, which supports the emergence of fast quorums in the system. It employs a reliable self-monitoring process and provides a prediction model seeking to minimize the system's consensus latency. In experiments using several AWS EC2 regions, AWARE dynamically optimizes consensus latency by self-reliantly finding a fast weight configuration yielding latency gains observed by clients located across the globe.
2403.18133
Erkan Karabulut
Erkan Karabulut, Victoria Degeler, Paul Groth
AE SemRL: Learning Semantic Association Rules with Autoencoders
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Association Rule Mining (ARM) is the task of learning associations among data features in the form of logical rules. Mining association rules from high-dimensional numerical data, for example, time series data from a large number of sensors in a smart environment, is a computationally intensive task. In this study, we propose an Autoencoder-based approach to learn and extract association rules from time series data (AE SemRL). Moreover, we argue that in the presence of semantic information related to time series data sources, semantics can facilitate learning generalizable and explainable association rules. Despite enriching time series data with additional semantic features, AE SemRL makes learning association rules from high-dimensional data feasible. Our experiments show that semantic association rules can be extracted from a latent representation created by an Autoencoder, and that this method executes on the order of hundreds of times faster than state-of-the-art ARM approaches in many scenarios. We believe that this study advances a new way of extracting associations from representations and has the potential to inspire more research in this field.
[ { "created": "Tue, 26 Mar 2024 22:28:43 GMT", "version": "v1" } ]
2024-03-28
[ [ "Karabulut", "Erkan", "" ], [ "Degeler", "Victoria", "" ], [ "Groth", "Paul", "" ] ]
Association Rule Mining (ARM) is the task of learning associations among data features in the form of logical rules. Mining association rules from high-dimensional numerical data, for example, time series data from a large number of sensors in a smart environment, is a computationally intensive task. In this study, we propose an Autoencoder-based approach to learn and extract association rules from time series data (AE SemRL). Moreover, we argue that in the presence of semantic information related to time series data sources, semantics can facilitate learning generalizable and explainable association rules. Despite enriching time series data with additional semantic features, AE SemRL makes learning association rules from high-dimensional data feasible. Our experiments show that semantic association rules can be extracted from a latent representation created by an Autoencoder, and that this method executes on the order of hundreds of times faster than state-of-the-art ARM approaches in many scenarios. We believe that this study advances a new way of extracting associations from representations and has the potential to inspire more research in this field.
1704.03928
Noah Stephens-Davidowitz
Huck Bennett, Alexander Golovnev, Noah Stephens-Davidowitz
On the Quantitative Hardness of CVP
null
FOCS 2017
null
null
cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
$ \newcommand{\eps}{\varepsilon} \newcommand{\problem}[1]{\ensuremath{\mathrm{#1}} } \newcommand{\CVP}{\problem{CVP}} \newcommand{\SVP}{\problem{SVP}} \newcommand{\CVPP}{\problem{CVPP}} \newcommand{\ensuremath}[1]{#1} $For odd integers $p \geq 1$ (and $p = \infty$), we show that the Closest Vector Problem in the $\ell_p$ norm ($\CVP_p$) over rank $n$ lattices cannot be solved in $2^{(1-\eps) n}$ time for any constant $\eps > 0$ unless the Strong Exponential Time Hypothesis (SETH) fails. We then extend this result to "almost all" values of $p \geq 1$, not including the even integers. This comes tantalizingly close to settling the quantitative time complexity of the important special case of $\CVP_2$ (i.e., $\CVP$ in the Euclidean norm), for which a $2^{n +o(n)}$-time algorithm is known. In particular, our result applies for any $p = p(n) \neq 2$ that approaches $2$ as $n \to \infty$. We also show a similar SETH-hardness result for $\SVP_\infty$; hardness of approximating $\CVP_p$ to within some constant factor under the so-called Gap-ETH assumption; and other quantitative hardness results for $\CVP_p$ and $\CVPP_p$ for any $1 \leq p < \infty$ under different assumptions.
[ { "created": "Wed, 12 Apr 2017 20:55:59 GMT", "version": "v1" }, { "created": "Thu, 5 Oct 2017 19:05:01 GMT", "version": "v2" } ]
2019-01-28
[ [ "Bennett", "Huck", "" ], [ "Golovnev", "Alexander", "" ], [ "Stephens-Davidowitz", "Noah", "" ] ]
$ \newcommand{\eps}{\varepsilon} \newcommand{\problem}[1]{\ensuremath{\mathrm{#1}} } \newcommand{\CVP}{\problem{CVP}} \newcommand{\SVP}{\problem{SVP}} \newcommand{\CVPP}{\problem{CVPP}} \newcommand{\ensuremath}[1]{#1} $For odd integers $p \geq 1$ (and $p = \infty$), we show that the Closest Vector Problem in the $\ell_p$ norm ($\CVP_p$) over rank $n$ lattices cannot be solved in $2^{(1-\eps) n}$ time for any constant $\eps > 0$ unless the Strong Exponential Time Hypothesis (SETH) fails. We then extend this result to "almost all" values of $p \geq 1$, not including the even integers. This comes tantalizingly close to settling the quantitative time complexity of the important special case of $\CVP_2$ (i.e., $\CVP$ in the Euclidean norm), for which a $2^{n +o(n)}$-time algorithm is known. In particular, our result applies for any $p = p(n) \neq 2$ that approaches $2$ as $n \to \infty$. We also show a similar SETH-hardness result for $\SVP_\infty$; hardness of approximating $\CVP_p$ to within some constant factor under the so-called Gap-ETH assumption; and other quantitative hardness results for $\CVP_p$ and $\CVPP_p$ for any $1 \leq p < \infty$ under different assumptions.
2310.00922
Hong Huy Nguyen
Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
How Close are Other Computer Vision Tasks to Deepfake Detection?
Accepted to be Published in Proceedings of the IEEE International Joint Conference on Biometrics (IJCB 2023)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we challenge the conventional belief that supervised ImageNet-trained models have strong generalizability and are suitable for use as feature extractors in deepfake detection. We present a new measurement, "model separability," for visually and quantitatively assessing a model's raw capacity to separate data in an unsupervised manner. We also present a systematic benchmark for determining the correlation between deepfake detection and other computer vision tasks using pre-trained models. Our analysis shows that pre-trained face recognition models are more closely related to deepfake detection than other models. Additionally, models trained using self-supervised methods are more effective in separation than those trained using supervised methods. After fine-tuning all models on a small deepfake dataset, we found that self-supervised models deliver the best results, but there is a risk of overfitting. Our results provide valuable insights that should help researchers and practitioners develop more effective deepfake detection models.
[ { "created": "Mon, 2 Oct 2023 06:32:35 GMT", "version": "v1" } ]
2023-10-03
[ [ "Nguyen", "Huy H.", "" ], [ "Yamagishi", "Junichi", "" ], [ "Echizen", "Isao", "" ] ]
In this paper, we challenge the conventional belief that supervised ImageNet-trained models have strong generalizability and are suitable for use as feature extractors in deepfake detection. We present a new measurement, "model separability," for visually and quantitatively assessing a model's raw capacity to separate data in an unsupervised manner. We also present a systematic benchmark for determining the correlation between deepfake detection and other computer vision tasks using pre-trained models. Our analysis shows that pre-trained face recognition models are more closely related to deepfake detection than other models. Additionally, models trained using self-supervised methods are more effective in separation than those trained using supervised methods. After fine-tuning all models on a small deepfake dataset, we found that self-supervised models deliver the best results, but there is a risk of overfitting. Our results provide valuable insights that should help researchers and practitioners develop more effective deepfake detection models.
2107.07983
Zhi-Gang Liu
Zhi-Gang Liu, Paul N. Whatmough, Yuhao Zhu, Matthew Mattina
S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration
Accepted by the HPCA 20222, the 28th IEEE International Symposium on High-Performance Computer Architecture (HPCA-28)
null
null
null
cs.AR cs.LG
http://creativecommons.org/licenses/by/4.0/
Exploiting sparsity is a key technique in accelerating quantized convolutional neural network (CNN) inference on mobile devices. Prior sparse CNN accelerators largely exploit un-structured sparsity and achieve significant speedups. Due to the unbounded, largely unpredictable sparsity patterns, however, exploiting unstructured sparsity requires complicated hardware design with significant energy and area overhead, which is particularly detrimental to mobile/IoT inference scenarios where energy and area efficiency are crucial. We propose to exploit structured sparsity, more specifically, Density Bound Block (DBB) sparsity for both weights and activations. DBB block tensors bound the maximum number of non-zeros per block. DBB thus exposes statically predictable sparsity patterns that enable lean sparsity-exploiting hardware. We propose new hardware primitives to implement DBB sparsity for (static) weights and (dynamic) activations, respectively, with very low overheads. Building on top of the primitives, we describe S2TA, a systolic array-based CNN accelerator that exploits joint weight and activation DBB sparsity and new dimensions of data reuse unavailable on the traditional systolic array. S2TA in 16nm achieves more than 2x speedup and energy reduction compared to a strong baseline of a systolic array with zero-value clock gating, over five popular CNN benchmarks. Compared to two recent non-systolic sparse accelerators, Eyeriss v2 (65nm) and SparTen (45nm), S2TA in 65nm uses about 2.2x and 3.1x less energy per inference, respectively.
[ { "created": "Fri, 16 Jul 2021 15:57:06 GMT", "version": "v1" }, { "created": "Thu, 6 Jan 2022 16:23:55 GMT", "version": "v2" } ]
2022-01-07
[ [ "Liu", "Zhi-Gang", "" ], [ "Whatmough", "Paul N.", "" ], [ "Zhu", "Yuhao", "" ], [ "Mattina", "Matthew", "" ] ]
Exploiting sparsity is a key technique in accelerating quantized convolutional neural network (CNN) inference on mobile devices. Prior sparse CNN accelerators largely exploit un-structured sparsity and achieve significant speedups. Due to the unbounded, largely unpredictable sparsity patterns, however, exploiting unstructured sparsity requires complicated hardware design with significant energy and area overhead, which is particularly detrimental to mobile/IoT inference scenarios where energy and area efficiency are crucial. We propose to exploit structured sparsity, more specifically, Density Bound Block (DBB) sparsity for both weights and activations. DBB block tensors bound the maximum number of non-zeros per block. DBB thus exposes statically predictable sparsity patterns that enable lean sparsity-exploiting hardware. We propose new hardware primitives to implement DBB sparsity for (static) weights and (dynamic) activations, respectively, with very low overheads. Building on top of the primitives, we describe S2TA, a systolic array-based CNN accelerator that exploits joint weight and activation DBB sparsity and new dimensions of data reuse unavailable on the traditional systolic array. S2TA in 16nm achieves more than 2x speedup and energy reduction compared to a strong baseline of a systolic array with zero-value clock gating, over five popular CNN benchmarks. Compared to two recent non-systolic sparse accelerators, Eyeriss v2 (65nm) and SparTen (45nm), S2TA in 65nm uses about 2.2x and 3.1x less energy per inference, respectively.
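As a rough illustration of the DBB constraint (the block size and non-zero bound below are illustrative assumptions, and this ignores the retraining typically needed to preserve accuracy):

```python
# Hedged sketch: enforcing Density Bound Block (DBB) sparsity on a weight
# matrix by keeping at most `nnz_max` largest-magnitude entries per block.
import numpy as np

def prune_dbb(weights, block_size=8, nnz_max=2):
    w = weights.copy().reshape(-1, block_size)       # flatten into blocks
    for block in w:                                   # each row is one block (a view)
        keep = np.argsort(np.abs(block))[-nnz_max:]   # indices of largest magnitudes
        mask = np.zeros_like(block, dtype=bool)
        mask[keep] = True
        block[~mask] = 0.0                            # zero everything else
    return w.reshape(weights.shape)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16)).astype(np.float32)
W_dbb = prune_dbb(W)
print("non-zeros per 8-wide block:",
      np.count_nonzero(W_dbb.reshape(-1, 8), axis=1))  # each entry <= 2
```

Bounding the non-zeros per block is what makes the sparsity pattern statically predictable, which is the property the hardware design exploits.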
1804.06682
Mostafa Wahby
Mostafa Wahby, Mary Katherine Heinrich, Daniel Nicolas Hofstadler, Payam Zahadat, Sebastian Risi, Phil Ayres, Thomas Schmickl and Heiko Hamann
A Robot to Shape your Natural Plant: The Machine Learning Approach to Model and Control Bio-Hybrid Systems
null
null
10.1145/3205455.3205516
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bio-hybrid systems---close couplings of natural organisms with technology---are high potential and still underexplored. In existing work, robots have mostly influenced group behaviors of animals. We explore the possibilities of mixing robots with natural plants, merging useful attributes. Significant synergies arise by combining the plants' ability to efficiently produce shaped material and the robots' ability to extend sensing and decision-making behaviors. However, programming robots to control plant motion and shape requires good knowledge of complex plant behaviors. Therefore, we use machine learning to create a holistic plant model and evolve robot controllers. As a benchmark task we choose obstacle avoidance. We use computer vision to construct a model of plant stem stiffening and motion dynamics by training an LSTM network. The LSTM network acts as a forward model predicting change in the plant, driving the evolution of neural network robot controllers. The evolved controllers augment the plants' natural light-finding and tissue-stiffening behaviors to avoid obstacles and grow desired shapes. We successfully verify the robot controllers and bio-hybrid behavior in reality, with a physical setup and actual plants.
[ { "created": "Wed, 18 Apr 2018 12:30:18 GMT", "version": "v1" }, { "created": "Thu, 19 Apr 2018 09:26:34 GMT", "version": "v2" } ]
2018-04-20
[ [ "Wahby", "Mostafa", "" ], [ "Heinrich", "Mary Katherine", "" ], [ "Hofstadler", "Daniel Nicolas", "" ], [ "Zahadat", "Payam", "" ], [ "Risi", "Sebastian", "" ], [ "Ayres", "Phil", "" ], [ "Schmickl", "Thomas", "" ], [ "Hamann", "Heiko", "" ] ]
Bio-hybrid systems---close couplings of natural organisms with technology---are high potential and still underexplored. In existing work, robots have mostly influenced group behaviors of animals. We explore the possibilities of mixing robots with natural plants, merging useful attributes. Significant synergies arise by combining the plants' ability to efficiently produce shaped material and the robots' ability to extend sensing and decision-making behaviors. However, programming robots to control plant motion and shape requires good knowledge of complex plant behaviors. Therefore, we use machine learning to create a holistic plant model and evolve robot controllers. As a benchmark task we choose obstacle avoidance. We use computer vision to construct a model of plant stem stiffening and motion dynamics by training an LSTM network. The LSTM network acts as a forward model predicting change in the plant, driving the evolution of neural network robot controllers. The evolved controllers augment the plants' natural light-finding and tissue-stiffening behaviors to avoid obstacles and grow desired shapes. We successfully verify the robot controllers and bio-hybrid behavior in reality, with a physical setup and actual plants.
2008.10715
Binghui Wang
Binghui Wang, Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong
Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation
Accepted by ACM SIGKDD'21
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph neural networks (GNNs) have recently gained much attention for node and graph classification tasks on graph-structured data. However, multiple recent works showed that an attacker can easily make GNNs predict incorrectly via perturbing the graph structure, i.e., adding or deleting edges in the graph. We aim to defend against such attacks via developing certifiably robust GNNs. Specifically, we prove the certified robustness guarantee of any GNN for both node and graph classifications against structural perturbation. Moreover, we show that our certified robustness guarantee is tight. Our results are based on a recently proposed technique called randomized smoothing, which we extend to graph data. We also empirically evaluate our method for both node and graph classifications on multiple GNNs and multiple benchmark datasets. For instance, on the Cora dataset, Graph Convolutional Network with our randomized smoothing can achieve a certified accuracy of 0.49 when the attacker can arbitrarily add/delete at most 15 edges in the graph.
[ { "created": "Mon, 24 Aug 2020 21:39:10 GMT", "version": "v1" }, { "created": "Fri, 4 Jun 2021 02:34:29 GMT", "version": "v2" }, { "created": "Fri, 16 Jul 2021 01:54:43 GMT", "version": "v3" } ]
2021-07-19
[ [ "Wang", "Binghui", "" ], [ "Jia", "Jinyuan", "" ], [ "Cao", "Xiaoyu", "" ], [ "Gong", "Neil Zhenqiang", "" ] ]
Graph neural networks (GNNs) have recently gained much attention for node and graph classification tasks on graph-structured data. However, multiple recent works showed that an attacker can easily make GNNs predict incorrectly via perturbing the graph structure, i.e., adding or deleting edges in the graph. We aim to defend against such attacks via developing certifiably robust GNNs. Specifically, we prove the certified robustness guarantee of any GNN for both node and graph classifications against structural perturbation. Moreover, we show that our certified robustness guarantee is tight. Our results are based on a recently proposed technique called randomized smoothing, which we extend to graph data. We also empirically evaluate our method for both node and graph classifications on multiple GNNs and multiple benchmark datasets. For instance, on the Cora dataset, Graph Convolutional Network with our randomized smoothing can achieve a certified accuracy of 0.49 when the attacker can arbitrarily add/delete at most 15 edges in the graph.
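The randomized-smoothing idea extended to graph structure can be sketched as follows; the base classifier, noise distribution, and missing certification bound here are stand-ins, not the paper's exact construction.

```python
# Hedged sketch: smooth a base graph classifier by sampling random edge flips
# and taking a majority vote over the predictions.
import numpy as np

def smoothed_predict(adj, base_classifier, flip_prob=0.1, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        flips = rng.random(adj.shape) < flip_prob
        flips = np.triu(flips, 1)                     # flip each undirected edge once
        noisy = np.bitwise_xor(adj, flips | flips.T)  # symmetric edge additions/deletions
        label = base_classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get), votes

# toy base classifier: "class 1" if the noisy graph has more than 3 edges
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=bool)
toy_clf = lambda a: int(np.sum(np.triu(a, 1)) > 3)
print(smoothed_predict(adj, toy_clf))
```

The certified guarantee itself comes from bounding how much the vote distribution can change under a limited number of edge additions/deletions, which this sketch does not reproduce.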
2307.16651
Yu Wu
Yu Wu, Dimitris Spathis, Hong Jia, Ignacio Perez-Pozuelo, Tomas Gonzales, Soren Brage, Nicholas Wareham, Cecilia Mascolo
UDAMA: Unsupervised Domain Adaptation through Multi-discriminator Adversarial Training with Noisy Labels Improves Cardio-fitness Prediction
Accepted at Machine Learning for Healthcare (MLHC) 2023
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning models have shown great promise in various healthcare monitoring applications. However, most healthcare datasets with high-quality (gold-standard) labels are small-scale, as directly collecting ground truth is often costly and time-consuming. As a result, models developed and validated on small-scale datasets often suffer from overfitting and do not generalize well to unseen scenarios. At the same time, large amounts of imprecise (silver-standard) labeled data, annotated by approximate methods with the help of modern wearables and in the absence of ground truth validation, are starting to emerge. However, due to measurement differences, this data displays significant label distribution shifts, which motivates the use of domain adaptation. To this end, we introduce UDAMA, a method with two key components: Unsupervised Domain Adaptation and Multidiscriminator Adversarial Training, where we pre-train on the silver-standard data and employ adversarial adaptation with the gold-standard data along with two domain discriminators. In particular, we showcase the practical potential of UDAMA by applying it to Cardio-respiratory fitness (CRF) prediction. CRF is a crucial determinant of metabolic disease and mortality, and it presents labels with various levels of noise (gold- and silver-standard), making it challenging to establish an accurate prediction model. Our results show promising performance by alleviating distribution shifts in various label shift settings. Additionally, by using data from two free-living cohort studies (Fenland and BBVS), we show that UDAMA consistently outperforms competitive transfer learning and state-of-the-art domain adaptation models by up to 12%, paving the way for leveraging noisy labeled data to improve fitness estimation at scale.
[ { "created": "Mon, 31 Jul 2023 13:31:53 GMT", "version": "v1" } ]
2023-08-01
[ [ "Wu", "Yu", "" ], [ "Spathis", "Dimitris", "" ], [ "Jia", "Hong", "" ], [ "Perez-Pozuelo", "Ignacio", "" ], [ "Gonzales", "Tomas", "" ], [ "Brage", "Soren", "" ], [ "Wareham", "Nicholas", "" ], [ "Mascolo", "Cecilia", "" ] ]
Deep learning models have shown great promise in various healthcare monitoring applications. However, most healthcare datasets with high-quality (gold-standard) labels are small-scale, as directly collecting ground truth is often costly and time-consuming. As a result, models developed and validated on small-scale datasets often suffer from overfitting and do not generalize well to unseen scenarios. At the same time, large amounts of imprecise (silver-standard) labeled data, annotated by approximate methods with the help of modern wearables and in the absence of ground truth validation, are starting to emerge. However, due to measurement differences, this data displays significant label distribution shifts, which motivates the use of domain adaptation. To this end, we introduce UDAMA, a method with two key components: Unsupervised Domain Adaptation and Multidiscriminator Adversarial Training, where we pre-train on the silver-standard data and employ adversarial adaptation with the gold-standard data along with two domain discriminators. In particular, we showcase the practical potential of UDAMA by applying it to Cardio-respiratory fitness (CRF) prediction. CRF is a crucial determinant of metabolic disease and mortality, and it presents labels with various levels of noise (gold- and silver-standard), making it challenging to establish an accurate prediction model. Our results show promising performance by alleviating distribution shifts in various label shift settings. Additionally, by using data from two free-living cohort studies (Fenland and BBVS), we show that UDAMA consistently outperforms competitive transfer learning and state-of-the-art domain adaptation models by up to 12%, paving the way for leveraging noisy labeled data to improve fitness estimation at scale.
2403.14614
Yuning Cui
Yuning Cui and Syed Waqas Zamir and Salman Khan and Alois Knoll and Mubarak Shah and Fahad Shahbaz Khan
AdaIR: Adaptive All-in-One Image Restoration via Frequency Mining and Modulation
28 pages,15 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the image acquisition process, various forms of degradation, including noise, haze, and rain, are frequently introduced. These degradations typically arise from the inherent limitations of cameras or unfavorable ambient conditions. To recover clean images from degraded versions, numerous specialized restoration methods have been developed, each targeting a specific type of degradation. Recently, all-in-one algorithms have garnered significant attention by addressing different types of degradations within a single model without requiring prior information of the input degradation type. However, these methods purely operate in the spatial domain and do not delve into the distinct frequency variations inherent to different degradation types. To address this gap, we propose an adaptive all-in-one image restoration network based on frequency mining and modulation. Our approach is motivated by the observation that different degradation types impact the image content on different frequency subbands, thereby requiring different treatments for each restoration task. Specifically, we first mine low- and high-frequency information from the input features, guided by the adaptively decoupled spectra of the degraded image. The extracted features are then modulated by a bidirectional operator to facilitate interactions between different frequency components. Finally, the modulated features are merged into the original input for a progressively guided restoration. With this approach, the model achieves adaptive reconstruction by accentuating the informative frequency subbands according to different input degradations. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance on different image restoration tasks, including denoising, dehazing, deraining, motion deblurring, and low-light image enhancement. Our code is available at https://github.com/c-yn/AdaIR.
[ { "created": "Thu, 21 Mar 2024 17:58:14 GMT", "version": "v1" } ]
2024-03-22
[ [ "Cui", "Yuning", "" ], [ "Zamir", "Syed Waqas", "" ], [ "Khan", "Salman", "" ], [ "Knoll", "Alois", "" ], [ "Shah", "Mubarak", "" ], [ "Khan", "Fahad Shahbaz", "" ] ]
In the image acquisition process, various forms of degradation, including noise, haze, and rain, are frequently introduced. These degradations typically arise from the inherent limitations of cameras or unfavorable ambient conditions. To recover clean images from degraded versions, numerous specialized restoration methods have been developed, each targeting a specific type of degradation. Recently, all-in-one algorithms have garnered significant attention by addressing different types of degradations within a single model without requiring prior information of the input degradation type. However, these methods purely operate in the spatial domain and do not delve into the distinct frequency variations inherent to different degradation types. To address this gap, we propose an adaptive all-in-one image restoration network based on frequency mining and modulation. Our approach is motivated by the observation that different degradation types impact the image content on different frequency subbands, thereby requiring different treatments for each restoration task. Specifically, we first mine low- and high-frequency information from the input features, guided by the adaptively decoupled spectra of the degraded image. The extracted features are then modulated by a bidirectional operator to facilitate interactions between different frequency components. Finally, the modulated features are merged into the original input for a progressively guided restoration. With this approach, the model achieves adaptive reconstruction by accentuating the informative frequency subbands according to different input degradations. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance on different image restoration tasks, including denoising, dehazing, deraining, motion deblurring, and low-light image enhancement. Our code is available at https://github.com/c-yn/AdaIR.
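The frequency-mining idea above can be illustrated with a plain FFT-based split of a feature map into low- and high-frequency parts. The tensor shapes and the radial cutoff below are assumptions made for illustration only; the actual mining and modulation modules are in the repository linked in the abstract.

```python
# Illustrative FFT-based frequency decomposition of a feature map (assumed shapes/cutoff;
# the real AdaIR modules live at https://github.com/c-yn/AdaIR).
import torch

def split_frequencies(feat: torch.Tensor, cutoff: float = 0.25):
    """Split (B, C, H, W) features into low- and high-frequency components."""
    B, C, H, W = feat.shape
    spec = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
    # radial low-pass mask around the spectrum centre
    yy, xx = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    dist = torch.sqrt((yy - H / 2) ** 2 + (xx - W / 2) ** 2)
    mask = (dist <= cutoff * min(H, W) / 2).to(feat.dtype)
    low_spec = spec * mask
    high_spec = spec * (1 - mask)
    to_spatial = lambda s: torch.fft.ifft2(torch.fft.ifftshift(s, dim=(-2, -1))).real
    return to_spatial(low_spec), to_spatial(high_spec)

low, high = split_frequencies(torch.randn(2, 16, 64, 64))
print(low.shape, high.shape)  # both torch.Size([2, 16, 64, 64])
```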
2202.02524
Harichandana B S S
Harichandana B S S, Vibhav Agarwal, Sourav Ghosh, Gopi Ramena, Sumit Kumar and Barath Raj Kandur Raja
PrivPAS: A real time Privacy-Preserving AI System and applied ethics
Accepted at 16th IEEE International Conference on Semantic Computing (ICSC), January 26-28, 2022 [update: Best Paper candidate at ICSC 2022]
2022 IEEE 16th International Conference on Semantic Computing (ICSC), Laguna Hills, CA, USA, 2022, pp. 9-16
10.1109/ICSC52841.2022.00010
null
cs.CV cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
With 3.78 billion social media users worldwide in 2021 (48% of the human population), almost 3 billion images are shared daily. At the same time, a consistent evolution of smartphone cameras has led to a photography explosion with 85% of all new pictures being captured using smartphones. However, lately, there has been an increased discussion of privacy concerns when a person being photographed is unaware of the picture being taken or has reservations about it being shared. These privacy violations are amplified for people with disabilities, who may find it challenging to raise dissent even if they are aware. Such unauthorized image captures may also be misused to gain sympathy by third-party organizations, leading to a privacy breach. Privacy for people with disabilities has so far received comparatively less attention from the AI community. This motivates us to work towards a solution to generate privacy-conscious cues for raising awareness in smartphone users of any sensitivity in their viewfinder content. To this end, we introduce PrivPAS (A real time Privacy-Preserving AI System), a novel framework to identify sensitive content. Additionally, we curate and annotate a dataset to identify and localize accessibility markers and classify whether an image is sensitive to a featured subject with a disability. We demonstrate that the proposed lightweight architecture, with a memory footprint of a mere 8.49MB, achieves a high mAP of 89.52% on resource-constrained devices. Furthermore, our pipeline, trained on face anonymized data, achieves an F1-score of 73.1%.
[ { "created": "Sat, 5 Feb 2022 09:52:54 GMT", "version": "v1" }, { "created": "Tue, 8 Feb 2022 14:23:15 GMT", "version": "v2" } ]
2022-04-05
[ [ "S", "Harichandana B S", "" ], [ "Agarwal", "Vibhav", "" ], [ "Ghosh", "Sourav", "" ], [ "Ramena", "Gopi", "" ], [ "Kumar", "Sumit", "" ], [ "Raja", "Barath Raj Kandur", "" ] ]
With 3.78 billion social media users worldwide in 2021 (48% of the human population), almost 3 billion images are shared daily. At the same time, a consistent evolution of smartphone cameras has led to a photography explosion with 85% of all new pictures being captured using smartphones. However, lately, there has been an increased discussion of privacy concerns when a person being photographed is unaware of the picture being taken or has reservations about it being shared. These privacy violations are amplified for people with disabilities, who may find it challenging to raise dissent even if they are aware. Such unauthorized image captures may also be misused to gain sympathy by third-party organizations, leading to a privacy breach. Privacy for people with disabilities has so far received comparatively less attention from the AI community. This motivates us to work towards a solution to generate privacy-conscious cues for raising awareness in smartphone users of any sensitivity in their viewfinder content. To this end, we introduce PrivPAS (A real time Privacy-Preserving AI System), a novel framework to identify sensitive content. Additionally, we curate and annotate a dataset to identify and localize accessibility markers and classify whether an image is sensitive to a featured subject with a disability. We demonstrate that the proposed lightweight architecture, with a memory footprint of a mere 8.49MB, achieves a high mAP of 89.52% on resource-constrained devices. Furthermore, our pipeline, trained on face anonymized data, achieves an F1-score of 73.1%.
1510.05860
Ya-Feng Liu
Ya-Feng Liu
Dynamic Spectrum Management: A Complete Complexity Characterization
The paper has been accepted for publication in IEEE Transactions on Information Theory
null
null
null
cs.IT cs.CC math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Consider a multi-user multi-carrier communication system where multiple users share multiple discrete subcarriers. To achieve high spectrum efficiency, the users in the system must choose their transmit power dynamically in response to fast channel fluctuations. Assuming perfect channel state information, two formulations for the spectrum management (power control) problem are considered in this paper: the first is to minimize the total transmission power subject to all users' transmission data rate constraints, and the second is to maximize the min-rate utility subject to individual power constraints at each user. It is known in the literature that both formulations of the problem are polynomial time solvable when the number of subcarriers is one and strongly NP-hard when the number of subcarriers is greater than or equal to three. However, the complexity characterization of the problem when the number of subcarriers is two has been missing for a long time. This paper answers this long-standing open question: both formulations of the problem are strongly NP-hard when the number of subcarriers is two.
[ { "created": "Tue, 20 Oct 2015 12:24:35 GMT", "version": "v1" }, { "created": "Sat, 29 Oct 2016 00:26:26 GMT", "version": "v2" } ]
2016-11-01
[ [ "Liu", "Ya-Feng", "" ] ]
Consider a multi-user multi-carrier communication system where multiple users share multiple discrete subcarriers. To achieve high spectrum efficiency, the users in the system must choose their transmit power dynamically in response to fast channel fluctuations. Assuming perfect channel state information, two formulations for the spectrum management (power control) problem are considered in this paper: the first is to minimize the total transmission power subject to all users' transmission data rate constraints, and the second is to maximize the min-rate utility subject to individual power constraints at each user. It is known in the literature that both formulations of the problem are polynomial time solvable when the number of subcarriers is one and strongly NP-hard when the number of subcarriers is greater than or equal to three. However, the complexity characterization of the problem when the number of subcarriers is two has been missing for a long time. This paper answers this long-standing open question: both formulations of the problem are strongly NP-hard when the number of subcarriers is two.
2408.07191
Jonas Linkerh\"agner
Jonas Linkerh\"agner, Cheng Shi, Ivan Dokmani\'c
Joint Graph Rewiring and Feature Denoising via Spectral Resonance
null
null
null
null
cs.LG cs.SI stat.ML
http://creativecommons.org/licenses/by/4.0/
Graph neural networks (GNNs) take as input the graph structure and the feature vectors associated with the nodes. Both contain noisy information about the labels. Here we propose joint denoising and rewiring (JDR)--an algorithm to jointly denoise the graph structure and features, which can improve the performance of any downstream algorithm. We do this by defining and maximizing the alignment between the leading eigenspaces of graph and feature matrices. To approximately solve this computationally hard problem, we propose a heuristic that efficiently handles real-world graph datasets with many classes and different levels of homophily or heterophily. We experimentally verify the effectiveness of our approach on synthetic data and real-world graph datasets. The results show that JDR consistently outperforms existing rewiring methods on node classification tasks using GNNs as downstream models.
[ { "created": "Tue, 13 Aug 2024 20:16:11 GMT", "version": "v1" } ]
2024-08-15
[ [ "Linkerhägner", "Jonas", "" ], [ "Shi", "Cheng", "" ], [ "Dokmanić", "Ivan", "" ] ]
Graph neural networks (GNNs) take as input the graph structure and the feature vectors associated with the nodes. Both contain noisy information about the labels. Here we propose joint denoising and rewiring (JDR)--an algorithm to jointly denoise the graph structure and features, which can improve the performance of any downstream algorithm. We do this by defining and maximizing the alignment between the leading eigenspaces of graph and feature matrices. To approximately solve this computationally hard problem, we propose a heuristic that efficiently handles real-world graph datasets with many classes and different levels of homophily or heterophily. We experimentally verify the effectiveness of our approach on synthetic data and real-world graph datasets. The results show that JDR consistently outperforms existing rewiring methods on node classification tasks using GNNs as downstream models.
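The alignment objective above can be made concrete with a toy score: the mean cosine of the principal angles between the leading eigenspaces of the adjacency matrix and of the feature Gram matrix. This is only an illustration of what "alignment" measures, not the JDR optimization itself; the graph construction and k=4 below are arbitrary assumptions.

```python
# Toy illustration (not the JDR algorithm): measure how aligned the leading eigenspaces
# of a graph adjacency matrix and a feature Gram matrix are, via principal angles.
import numpy as np

def leading_eigenspace(M, k):
    # symmetric matrix -> eigenvectors of the k largest eigenvalues
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, np.argsort(vals)[-k:]]

def alignment(A, X, k=4):
    """A: (n, n) symmetric adjacency, X: (n, d) node features."""
    U = leading_eigenspace(A, k)
    V = leading_eigenspace(X @ X.T, k)
    # singular values of U^T V are cosines of the principal angles; 1.0 = identical subspaces
    return np.linalg.svd(U.T @ V, compute_uv=False).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
A = (X @ X.T > 1.0).astype(float)   # a crude graph built from the features
np.fill_diagonal(A, 0)
A = np.maximum(A, A.T)              # keep it symmetric
print(round(alignment(A, X), 3))
```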
2201.04402
Ekrem \c{C}etinkaya
Ekrem \c{C}etinkaya and Minh Nguyen and Christian Timmerer
MoViDNN: A Mobile Platform for Evaluating Video Quality Enhancement with Deep Neural Networks
8 pages, 3 figures
MMM 2022: MultiMedia Modeling pp 465-472
10.1007/978-3-030-98355-0_40
null
cs.CV cs.MM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Deep neural network (DNN) based approaches have been intensively studied to improve video quality thanks to their fast advancement in recent years. These approaches are designed mainly for desktop devices due to their high computational cost. However, with the increasing performance of mobile devices in recent years, it became possible to execute DNN based approaches on mobile devices. Despite having the required computational power, utilizing DNNs to improve the video quality for mobile devices is still an active research area. In this paper, we propose an open-source mobile platform, namely MoViDNN, to evaluate DNN based video quality enhancement methods, such as super-resolution, denoising, and deblocking. Our proposed platform can be used to evaluate the DNN based approaches both objectively and subjectively. For objective evaluation, we report common metrics such as execution time, PSNR, and SSIM. For subjective evaluation, the Mean Opinion Score (MOS) is reported. The proposed platform is available publicly at https://github.com/cd-athena/MoViDNN
[ { "created": "Wed, 12 Jan 2022 10:38:04 GMT", "version": "v1" } ]
2022-03-22
[ [ "Çetinkaya", "Ekrem", "" ], [ "Nguyen", "Minh", "" ], [ "Timmerer", "Christian", "" ] ]
Deep neural network (DNN) based approaches have been intensively studied to improve video quality thanks to their fast advancement in recent years. These approaches are designed mainly for desktop devices due to their high computational cost. However, with the increasing performance of mobile devices in recent years, it became possible to execute DNN based approaches on mobile devices. Despite having the required computational power, utilizing DNNs to improve the video quality for mobile devices is still an active research area. In this paper, we propose an open-source mobile platform, namely MoViDNN, to evaluate DNN based video quality enhancement methods, such as super-resolution, denoising, and deblocking. Our proposed platform can be used to evaluate the DNN based approaches both objectively and subjectively. For objective evaluation, we report common metrics such as execution time, PSNR, and SSIM. For subjective evaluation, the Mean Opinion Score (MOS) is reported. The proposed platform is available publicly at https://github.com/cd-athena/MoViDNN
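Of the objective metrics listed above, PSNR is simple enough to state directly. The snippet below uses the standard definition for 8-bit images; it mirrors what MoViDNN reports but is independent of the app's own implementation.

```python
# Standard PSNR between a restored frame and its reference (8-bit images assumed);
# this is the textbook definition, not code taken from the MoViDNN platform.
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, noisy):.2f} dB")
```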
1908.08332
Luis Cruz
Luis Cruz, Rui Abreu, John Grundy, Li Li, Xin Xia
Do Energy-oriented Changes Hinder Maintainability?
International Conference on Software Maintenance and Evolution - ICSME 2019
null
null
null
cs.SE cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Energy efficiency is a crucial quality requirement for mobile applications. However, improving energy efficiency is far from trivial as developers lack the knowledge and tools to aid in this activity. In this paper we study the impact of changes to improve energy efficiency on the maintainability of Android applications. Using a dataset containing 539 energy efficiency-oriented commits, we measure maintainability -- as computed by the Software Improvement Group's web-based source code analysis service Better Code Hub (BCH) -- before and after energy efficiency-related code changes. Results show that in general improving energy efficiency comes with a significant decrease in maintainability. This is particularly evident in code changes to accommodate the Power Save Mode and Wakelock Addition energy patterns. In addition, we perform manual analysis to assess how real examples of energy-oriented changes affect maintainability. Our results help mobile app developers to 1) avoid common maintainability issues when improving the energy efficiency of their apps; and 2) adopt development processes to build maintainable and energy-efficient code. We also support researchers by identifying challenges in mobile app development that still need to be addressed.
[ { "created": "Thu, 22 Aug 2019 12:21:08 GMT", "version": "v1" } ]
2019-08-29
[ [ "Cruz", "Luis", "" ], [ "Abreu", "Rui", "" ], [ "Grundy", "John", "" ], [ "Li", "Li", "" ], [ "Xia", "Xin", "" ] ]
Energy efficiency is a crucial quality requirement for mobile applications. However, improving energy efficiency is far from trivial as developers lack the knowledge and tools to aid in this activity. In this paper we study the impact of changes to improve energy efficiency on the maintainability of Android applications. Using a dataset containing 539 energy efficiency-oriented commits, we measure maintainability -- as computed by the Software Improvement Group's web-based source code analysis service Better Code Hub (BCH) -- before and after energy efficiency-related code changes. Results show that in general improving energy efficiency comes with a significant decrease in maintainability. This is particularly evident in code changes to accommodate the Power Save Mode and Wakelock Addition energy patterns. In addition, we perform manual analysis to assess how real examples of energy-oriented changes affect maintainability. Our results help mobile app developers to 1) avoid common maintainability issues when improving the energy efficiency of their apps; and 2) adopt development processes to build maintainable and energy-efficient code. We also support researchers by identifying challenges in mobile app development that still need to be addressed.
1403.2508
Rajib Das
Sunirmal Khatua, Preetam K. Sur, Rajib K. Das and Nandini Mukherjee
Heuristic-based Optimal Resource Provisioning in Application-centric Cloud
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cloud Service Providers (CSPs) adopt different pricing models for their offered services. Some of the models are suitable for short-term requirements while others may be suitable for the Cloud Service User's (CSU) long-term requirements. In this paper, we look at the problem of finding the amount of resources to be reserved to satisfy the CSU's long-term demands with the aim of minimizing the total cost. Finding the optimal resource requirement to satisfy the CSU's demand for resources requires considerable research effort. Various algorithms have been proposed in the last couple of years for finding the optimal resource requirement, but most of them are based on the Integer Programming Problem (IPP), which is NP-hard in nature. In this paper, we derive some heuristic-based polynomial-time algorithms to find near-optimal solutions to the problem. We show that the cost for the CSU using our approach is comparable to that of the solution obtained using the optimal IPP.
[ { "created": "Tue, 11 Mar 2014 09:07:16 GMT", "version": "v1" } ]
2014-03-12
[ [ "Khatua", "Sunirmal", "" ], [ "Sur", "Preetam K.", "" ], [ "Das", "Rajib K.", "" ], [ "Mukherjee", "Nandini", "" ] ]
Cloud Service Providers (CSPs) adopt different pricing models for their offered services. Some of the models are suitable for short-term requirements while others may be suitable for the Cloud Service User's (CSU) long-term requirements. In this paper, we look at the problem of finding the amount of resources to be reserved to satisfy the CSU's long-term demands with the aim of minimizing the total cost. Finding the optimal resource requirement to satisfy the CSU's demand for resources requires considerable research effort. Various algorithms have been proposed in the last couple of years for finding the optimal resource requirement, but most of them are based on the Integer Programming Problem (IPP), which is NP-hard in nature. In this paper, we derive some heuristic-based polynomial-time algorithms to find near-optimal solutions to the problem. We show that the cost for the CSU using our approach is comparable to that of the solution obtained using the optimal IPP.
1211.3719
Athanasios Lioumpas S.
Athanasios S. Lioumpas, Petros S. Bithas, Angeliki Alexiou
Partitioning of Distributed MIMO Systems based on Overhead Considerations
IEEE Wireless Communications Letters
null
10.1109/WCL.2013.072913.130449
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed Multiple-Input Multiple-Output (D-MIMO) networks are a promising enabler to address the challenges of high traffic demand in future wireless networks. A limiting factor that is directly related to the performance of these systems is the overhead signaling required for distributing data and control information among the network elements. In this paper, the concept of orthogonal partitioning is extended to D-MIMO networks employing joint multi-user beamforming, aiming to maximize the effective sum-rate, i.e., the actual transmitted information data. Furthermore, in order to comply with practical requirements, the overhead subframe size is considered to be constrained. In this context, a novel formulation of constrained orthogonal partitioning is introduced as an elegant Knapsack optimization problem, which allows the derivation of quick and accurate solutions. Several numerical results give insight into the capabilities of D-MIMO networks and the actual sum-rate scaling under overhead constraints.
[ { "created": "Thu, 15 Nov 2012 20:21:29 GMT", "version": "v1" }, { "created": "Fri, 16 Nov 2012 17:18:49 GMT", "version": "v2" }, { "created": "Sun, 21 Jul 2013 19:49:23 GMT", "version": "v3" } ]
2016-11-18
[ [ "Lioumpas", "Athanasios S.", "" ], [ "Bithas", "Petros S.", "" ], [ "Alexiou", "Angeliki", "" ] ]
Distributed Multiple-Input Multiple-Output (D-MIMO) networks are a promising enabler to address the challenges of high traffic demand in future wireless networks. A limiting factor that is directly related to the performance of these systems is the overhead signaling required for distributing data and control information among the network elements. In this paper, the concept of orthogonal partitioning is extended to D-MIMO networks employing joint multi-user beamforming, aiming to maximize the effective sum-rate, i.e., the actual transmitted information data. Furthermore, in order to comply with practical requirements, the overhead subframe size is considered to be constrained. In this context, a novel formulation of constrained orthogonal partitioning is introduced as an elegant Knapsack optimization problem, which allows the derivation of quick and accurate solutions. Several numerical results give insight into the capabilities of D-MIMO networks and the actual sum-rate scaling under overhead constraints.
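Since the abstract casts overhead-constrained partitioning as a Knapsack problem, a plain 0/1 knapsack dynamic program conveys the kind of optimization involved. The item values and weights below are illustrative stand-ins for rate contributions and overhead costs, not the paper's actual formulation.

```python
# Generic 0/1 knapsack DP: pick a subset of items maximizing total value under a weight budget.
# "Items" stand in for candidate partitions, "weight" for overhead cost, "value" for rate
# contribution -- illustrative stand-ins only.
def knapsack(values, weights, capacity):
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]
            if weights[i - 1] <= c:
                dp[i][c] = max(dp[i][c], dp[i - 1][c - weights[i - 1]] + values[i - 1])
    # backtrack the chosen items
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return dp[n][capacity], sorted(chosen)

best, items = knapsack(values=[10, 7, 12, 4], weights=[3, 2, 4, 1], capacity=6)
print(best, items)  # 21 [0, 1, 3]
```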
2006.12779
Francesco Cicala
Francesco Cicala, Luca Bortolussi
Density-embedding layers: a general framework for adaptive receptive fields
13 pages, 2 figures, submitted to NeurIPS 2020
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The effectiveness and performance of artificial neural networks, particularly for visual tasks, depends in crucial ways on the receptive field of neurons. The receptive field itself depends on the interplay between several architectural aspects, including sparsity, pooling, and activation functions. In recent literature there are several ad hoc proposals trying to make receptive fields more flexible and adaptive to data. For instance, different parameterizations of convolutional and pooling layers have been proposed to increase their adaptivity. In this paper, we propose the novel theoretical framework of density-embedded layers, generalizing the transformation represented by a neuron. Specifically, the affine transformation applied on the input is replaced by a scalar product of the input, suitably represented as a piecewise constant function, with a density function associated with the neuron. This density is shown to describe directly the receptive field of the neuron. Crucially, by suitably representing such a density as a linear combination of a parametric family of functions, we can efficiently train the densities by means of any automatic differentiation system, making it adaptable to the problem at hand, and computationally efficient to evaluate. This framework captures and generalizes recent methods, allowing a fine tuning of the receptive field. In the paper, we define some novel layers and we experimentally validate them on the classic MNIST dataset.
[ { "created": "Tue, 23 Jun 2020 06:09:16 GMT", "version": "v1" }, { "created": "Mon, 6 Jul 2020 07:36:24 GMT", "version": "v2" } ]
2020-07-07
[ [ "Cicala", "Francesco", "" ], [ "Bortolussi", "Luca", "" ] ]
The effectiveness and performance of artificial neural networks, particularly for visual tasks, depends in crucial ways on the receptive field of neurons. The receptive field itself depends on the interplay between several architectural aspects, including sparsity, pooling, and activation functions. In recent literature there are several ad hoc proposals trying to make receptive fields more flexible and adaptive to data. For instance, different parameterizations of convolutional and pooling layers have been proposed to increase their adaptivity. In this paper, we propose the novel theoretical framework of density-embedded layers, generalizing the transformation represented by a neuron. Specifically, the affine transformation applied on the input is replaced by a scalar product of the input, suitably represented as a piecewise constant function, with a density function associated with the neuron. This density is shown to describe directly the receptive field of the neuron. Crucially, by suitably representing such a density as a linear combination of a parametric family of functions, we can efficiently train the densities by means of any automatic differentiation system, making it adaptable to the problem at hand, and computationally efficient to evaluate. This framework captures and generalizes recent methods, allowing a fine tuning of the receptive field. In the paper, we define some novel layers and we experimentally validate them on the classic MNIST dataset.
2303.14828
Dina Bashkirova
Dina Bashkirova, Samarth Mishra, Diala Lteif, Piotr Teterwak, Donghyun Kim, Fadi Alladkani, James Akl, Berk Calli, Sarah Adel Bargal, Kate Saenko, Daehan Kim, Minseok Seo, YoungJin Jeon, Dong-Geol Choi, Shahaf Ettedgui, Raja Giryes, Shady Abu-Hussein, Binhui Xie, Shuang Li
VisDA 2022 Challenge: Domain Adaptation for Industrial Waste Sorting
Proceedings of Machine Learning Research
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Label-efficient and reliable semantic segmentation is essential for many real-life applications, especially for industrial settings with high visual diversity, such as waste sorting. In industrial waste sorting, one of the biggest challenges is the extreme diversity of the input stream depending on factors like the location of the sorting facility, the equipment available in the facility, and the time of year, all of which significantly impact the composition and visual appearance of the waste stream. These changes in the data are called ``visual domains'', and label-efficient adaptation of models to such domains is needed for successful semantic segmentation of industrial waste. To test the abilities of computer vision models on this task, we present the VisDA 2022 Challenge on Domain Adaptation for Industrial Waste Sorting. Our challenge incorporates a fully-annotated waste sorting dataset, ZeroWaste, collected from two real material recovery facilities in different locations and seasons, as well as a novel procedurally generated synthetic waste sorting dataset, SynthWaste. In this competition, we aim to answer two questions: 1) can we leverage domain adaptation techniques to minimize the domain gap? and 2) can synthetic data augmentation improve performance on this task and help adapt to changing data distributions? The results of the competition show that industrial waste detection poses a real domain adaptation problem, that domain generalization techniques such as augmentations, ensembling, etc., improve the overall performance on the unlabeled target domain examples, and that leveraging synthetic data effectively remains an open problem. See https://ai.bu.edu/visda-2022/
[ { "created": "Sun, 26 Mar 2023 21:38:38 GMT", "version": "v1" } ]
2023-03-28
[ [ "Bashkirova", "Dina", "" ], [ "Mishra", "Samarth", "" ], [ "Lteif", "Diala", "" ], [ "Teterwak", "Piotr", "" ], [ "Kim", "Donghyun", "" ], [ "Alladkani", "Fadi", "" ], [ "Akl", "James", "" ], [ "Calli", "Berk", "" ], [ "Bargal", "Sarah Adel", "" ], [ "Saenko", "Kate", "" ], [ "Kim", "Daehan", "" ], [ "Seo", "Minseok", "" ], [ "Jeon", "YoungJin", "" ], [ "Choi", "Dong-Geol", "" ], [ "Ettedgui", "Shahaf", "" ], [ "Giryes", "Raja", "" ], [ "Abu-Hussein", "Shady", "" ], [ "Xie", "Binhui", "" ], [ "Li", "Shuang", "" ] ]
Label-efficient and reliable semantic segmentation is essential for many real-life applications, especially for industrial settings with high visual diversity, such as waste sorting. In industrial waste sorting, one of the biggest challenges is the extreme diversity of the input stream depending on factors like the location of the sorting facility, the equipment available in the facility, and the time of year, all of which significantly impact the composition and visual appearance of the waste stream. These changes in the data are called ``visual domains'', and label-efficient adaptation of models to such domains is needed for successful semantic segmentation of industrial waste. To test the abilities of computer vision models on this task, we present the VisDA 2022 Challenge on Domain Adaptation for Industrial Waste Sorting. Our challenge incorporates a fully-annotated waste sorting dataset, ZeroWaste, collected from two real material recovery facilities in different locations and seasons, as well as a novel procedurally generated synthetic waste sorting dataset, SynthWaste. In this competition, we aim to answer two questions: 1) can we leverage domain adaptation techniques to minimize the domain gap? and 2) can synthetic data augmentation improve performance on this task and help adapt to changing data distributions? The results of the competition show that industrial waste detection poses a real domain adaptation problem, that domain generalization techniques such as augmentations, ensembling, etc., improve the overall performance on the unlabeled target domain examples, and that leveraging synthetic data effectively remains an open problem. See https://ai.bu.edu/visda-2022/
1405.2199
Madhumangal Pal Dr.
Madhumangal Pal and Anita Pal
Scheduling algorithm to select $k$ optimal programme slots in television channels: A graph theoretic approach
25 pages
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, it is shown that all programmes of all television channels can be modelled as an interval graph. The programme slots are taken as the vertices of the graph and if the time durations of two programme slots have a non-empty intersection, the corresponding vertices are considered to be connected by an edge. The number of viewers of a programme is taken as the weight of the vertex. A set of programmes that are mutually exclusive in respect of time scheduling is called a session. We assume that a company sets the objective of selecting the popular programmes in $k$ parallel sessions among different channels so as to make its commercial advertisement reach the maximum number of viewers, that is, a company selects $k$ suitable programme slots simultaneously for advertisement. The aim of the paper is, therefore, to help the companies to select the programme slots, which are mutually exclusive with respect to the time schedule of telecasting time, in such a way that the total number of viewers of the selected programmes in the $k$ parallel slots rises to the optimum level. It is shown that the solution of this problem is obtained by solving the maximum weight $k$-colouring problem on an interval graph. An algorithm is designed to solve this just-in-time optimization problem using $O(kMn^2)$ time, where $n$ and $M$ represent the total number of programmes of all channels and the upper bound of the viewers of all programmes of all channels respectively. The problem considered in this paper is a daily life problem which is modelled as the $k$-colouring problem on an interval graph.
[ { "created": "Fri, 9 May 2014 10:29:10 GMT", "version": "v1" } ]
2014-05-12
[ [ "Pal", "Madhumangal", "" ], [ "Pal", "Anita", "" ] ]
In this paper, it is shown that all programmes of all television channels can be modelled as an interval graph. The programme slots are taken as the vertices of the graph and if the time durations of two programme slots have a non-empty intersection, the corresponding vertices are considered to be connected by an edge. The number of viewers of a programme is taken as the weight of the vertex. A set of programmes that are mutually exclusive in respect of time scheduling is called a session. We assume that a company sets the objective of selecting the popular programmes in $k$ parallel sessions among different channels so as to make its commercial advertisement reach the maximum number of viewers, that is, a company selects $k$ suitable programme slots simultaneously for advertisement. The aim of the paper is, therefore, to help the companies to select the programme slots, which are mutually exclusive with respect to the time schedule of telecasting time, in such a way that the total number of viewers of the selected programmes in the $k$ parallel slots rises to the optimum level. It is shown that the solution of this problem is obtained by solving the maximum weight $k$-colouring problem on an interval graph. An algorithm is designed to solve this just-in-time optimization problem using $O(kMn^2)$ time, where $n$ and $M$ represent the total number of programmes of all channels and the upper bound of the viewers of all programmes of all channels respectively. The problem considered in this paper is a daily life problem which is modelled as the $k$-colouring problem on an interval graph.
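A much simpler greedy heuristic than the paper's $O(kMn^2)$ algorithm, sketched below, still conveys the flavour of the problem: repeatedly solve weighted interval scheduling and remove the chosen slots to form $k$ sessions. It is not guaranteed optimal and is not the authors' method; the slot data are made up, and slots whose endpoints merely touch are treated as compatible.

```python
# Greedy heuristic (NOT the paper's algorithm, and not guaranteed optimal): build k sessions
# by repeatedly solving weighted interval scheduling on the remaining programme slots.
# Each slot is (start, end, viewers); slots in one session must not overlap in time.
from bisect import bisect_right

def weighted_interval_scheduling(slots):
    slots = sorted(slots, key=lambda s: s[1])            # sort by end time
    ends = [s[1] for s in slots]
    p = [bisect_right(ends, slots[j][0]) for j in range(len(slots))]  # slots ending by start j
    dp = [0] * (len(slots) + 1)
    for j in range(1, len(slots) + 1):
        dp[j] = max(dp[j - 1], slots[j - 1][2] + dp[p[j - 1]])
    chosen, j = [], len(slots)
    while j > 0:
        if slots[j - 1][2] + dp[p[j - 1]] >= dp[j - 1]:
            chosen.append(slots[j - 1]); j = p[j - 1]
        else:
            j -= 1
    return dp[-1], chosen

def k_sessions(slots, k):
    sessions, remaining = [], list(slots)
    for _ in range(k):
        _, picked = weighted_interval_scheduling(remaining)
        sessions.append(picked)
        remaining = [s for s in remaining if s not in picked]
    return sessions

slots = [(0, 2, 50), (1, 3, 60), (2, 4, 30), (0, 1, 20), (3, 5, 40)]
print(k_sessions(slots, k=2))
```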
2103.10107
Luk\'a\v{s} Picek
Luk\'a\v{s} Picek, Milan \v{S}ulc, Ji\v{r}\'i Matas, Jacob Heilmann-Clausen, Thomas S. Jeppesen, Thomas L{\ae}ss{\o}e, Tobias Fr{\o}slev
Danish Fungi 2020 -- Not Just Another Image Recognition Dataset
null
null
10.1109/WACV51458.2022.00334
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
We introduce a novel fine-grained dataset and benchmark, the Danish Fungi 2020 (DF20). The dataset, constructed from observations submitted to the Atlas of Danish Fungi, is unique in its taxonomy-accurate class labels, small number of errors, highly unbalanced long-tailed class distribution, rich observation metadata, and well-defined class hierarchy. DF20 has zero overlap with ImageNet, allowing unbiased comparison of models fine-tuned from publicly available ImageNet checkpoints. The proposed evaluation protocol enables testing the ability to improve classification using metadata -- e.g. precise geographic location, habitat, and substrate, facilitates classifier calibration testing, and finally allows to study the impact of the device settings on the classification performance. Experiments using Convolutional Neural Networks (CNN) and the recent Vision Transformers (ViT) show that DF20 presents a challenging task. Interestingly, ViT achieves results superior to CNN baselines with 80.45% accuracy and 0.743 macro F1 score, reducing the CNN error by 9% and 12% respectively. A simple procedure for including metadata into the decision process improves the classification accuracy by more than 2.95 percentage points, reducing the error rate by 15%. The source code for all methods and experiments is available at https://sites.google.com/view/danish-fungi-dataset.
[ { "created": "Thu, 18 Mar 2021 09:33:11 GMT", "version": "v1" }, { "created": "Fri, 19 Mar 2021 12:15:47 GMT", "version": "v2" }, { "created": "Mon, 22 Mar 2021 08:43:04 GMT", "version": "v3" }, { "created": "Fri, 20 Aug 2021 14:35:44 GMT", "version": "v4" } ]
2022-06-13
[ [ "Picek", "Lukáš", "" ], [ "Šulc", "Milan", "" ], [ "Matas", "Jiří", "" ], [ "Heilmann-Clausen", "Jacob", "" ], [ "Jeppesen", "Thomas S.", "" ], [ "Læssøe", "Thomas", "" ], [ "Frøslev", "Tobias", "" ] ]
We introduce a novel fine-grained dataset and benchmark, the Danish Fungi 2020 (DF20). The dataset, constructed from observations submitted to the Atlas of Danish Fungi, is unique in its taxonomy-accurate class labels, small number of errors, highly unbalanced long-tailed class distribution, rich observation metadata, and well-defined class hierarchy. DF20 has zero overlap with ImageNet, allowing unbiased comparison of models fine-tuned from publicly available ImageNet checkpoints. The proposed evaluation protocol enables testing the ability to improve classification using metadata -- e.g. precise geographic location, habitat, and substrate, facilitates classifier calibration testing, and finally allows to study the impact of the device settings on the classification performance. Experiments using Convolutional Neural Networks (CNN) and the recent Vision Transformers (ViT) show that DF20 presents a challenging task. Interestingly, ViT achieves results superior to CNN baselines with 80.45% accuracy and 0.743 macro F1 score, reducing the CNN error by 9% and 12% respectively. A simple procedure for including metadata into the decision process improves the classification accuracy by more than 2.95 percentage points, reducing the error rate by 15%. The source code for all methods and experiments is available at https://sites.google.com/view/danish-fungi-dataset.
1605.00398
Akshay Khatri
Akshay Khatri, Sankalp Kolhe, Nupur Giri
Dynamic Address Allocation Algorithm for Mobile Ad hoc Networks
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A Mobile Ad hoc network (MANET) consists of nodes which use multi-hop communication to establish connection between nodes. Traditional infrastructure based systems use a centralized architecture for address allocation. However, this is not possible in Ad hoc networks due to their dynamic structure. Many schemes have been proposed to solve this problem, but most of them use network-wide broadcasts to ensure the availability of a new address. This becomes extremely difficult as network size grows. In this paper, we propose an address allocation algorithm which avoids network-wide broadcasts to allocate address to a new node. Moreover, the algorithm allocates addresses dynamically such that the network maintains an "IP resembles topology" state. In such a state, routing becomes easier and the overall overhead in communication is reduced. This algorithm is particularly useful for routing protocols which use topology information to route messages in the network. Our solution is designed with scalability in mind such that the cost of address assignment to a new node is independent of the number of nodes in the network.
[ { "created": "Mon, 2 May 2016 09:10:44 GMT", "version": "v1" } ]
2016-05-03
[ [ "Khatri", "Akshay", "" ], [ "Kolhe", "Sankalp", "" ], [ "Giri", "Nupur", "" ] ]
A Mobile Ad hoc network (MANET) consists of nodes which use multi-hop communication to establish connection between nodes. Traditional infrastructure based systems use a centralized architecture for address allocation. However, this is not possible in Ad hoc networks due to their dynamic structure. Many schemes have been proposed to solve this problem, but most of them use network-wide broadcasts to ensure the availability of a new address. This becomes extremely difficult as network size grows. In this paper, we propose an address allocation algorithm which avoids network-wide broadcasts to allocate address to a new node. Moreover, the algorithm allocates addresses dynamically such that the network maintains an "IP resembles topology" state. In such a state, routing becomes easier and the overall overhead in communication is reduced. This algorithm is particularly useful for routing protocols which use topology information to route messages in the network. Our solution is designed with scalability in mind such that the cost of address assignment to a new node is independent of the number of nodes in the network.
2404.14406
Kartik Narayan
Kartik Narayan, Vishal M. Patel
Hyp-OC: Hyperbolic One Class Classification for Face Anti-Spoofing
Accepted in FG2024, Project Page - https://kartik-3004.github.io/hyp-oc/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face recognition technology has become an integral part of modern security systems and user authentication processes. However, these systems are vulnerable to spoofing attacks and can easily be circumvented. Most prior research in face anti-spoofing (FAS) approaches it as a two-class classification task where models are trained on real samples and known spoof attacks and tested for detection performance on unknown spoof attacks. However, in practice, FAS should be treated as a one-class classification task where, while training, one cannot assume any knowledge regarding the spoof samples a priori. In this paper, we reformulate the face anti-spoofing task from a one-class perspective and propose a novel hyperbolic one-class classification framework. To train our network, we use a pseudo-negative class sampled from the Gaussian distribution with a weighted running mean and propose two novel loss functions: (1) Hyp-PC: Hyperbolic Pairwise Confusion loss, and (2) Hyp-CE: Hyperbolic Cross Entropy loss, which operate in the hyperbolic space. Additionally, we employ Euclidean feature clipping and gradient clipping to stabilize the training in the hyperbolic space. To the best of our knowledge, this is the first work extending hyperbolic embeddings for face anti-spoofing in a one-class manner. With extensive experiments on five benchmark datasets: Rose-Youtu, MSU-MFSD, CASIA-MFSD, Idiap Replay-Attack, and OULU-NPU, we demonstrate that our method significantly outperforms the state-of-the-art, achieving better spoof detection performance.
[ { "created": "Mon, 22 Apr 2024 17:59:18 GMT", "version": "v1" } ]
2024-04-23
[ [ "Narayan", "Kartik", "" ], [ "Patel", "Vishal M.", "" ] ]
Face recognition technology has become an integral part of modern security systems and user authentication processes. However, these systems are vulnerable to spoofing attacks and can easily be circumvented. Most prior research in face anti-spoofing (FAS) approaches it as a two-class classification task where models are trained on real samples and known spoof attacks and tested for detection performance on unknown spoof attacks. However, in practice, FAS should be treated as a one-class classification task where, while training, one cannot assume any knowledge regarding the spoof samples a priori. In this paper, we reformulate the face anti-spoofing task from a one-class perspective and propose a novel hyperbolic one-class classification framework. To train our network, we use a pseudo-negative class sampled from the Gaussian distribution with a weighted running mean and propose two novel loss functions: (1) Hyp-PC: Hyperbolic Pairwise Confusion loss, and (2) Hyp-CE: Hyperbolic Cross Entropy loss, which operate in the hyperbolic space. Additionally, we employ Euclidean feature clipping and gradient clipping to stabilize the training in the hyperbolic space. To the best of our knowledge, this is the first work extending hyperbolic embeddings for face anti-spoofing in a one-class manner. With extensive experiments on five benchmark datasets: Rose-Youtu, MSU-MFSD, CASIA-MFSD, Idiap Replay-Attack, and OULU-NPU, we demonstrate that our method significantly outperforms the state-of-the-art, achieving better spoof detection performance.
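The hyperbolic losses above operate on embeddings in the Poincaré ball, whose distance function and the Euclidean feature clipping mentioned in the abstract are standard formulas. The snippet below states them generically; it is not the authors' released implementation, and the 0.9 clipping radius is an assumption.

```python
# Standard Poincare-ball distance plus Euclidean feature clipping (generic hyperbolic-geometry
# formulas; not the Hyp-OC authors' code).
import torch

def clip_features(x: torch.Tensor, max_norm: float = 0.9) -> torch.Tensor:
    # keep embeddings strictly inside the unit ball so the distance stays finite
    norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-12)
    scale = torch.clamp(max_norm / norm, max=1.0)
    return x * scale

def poincare_distance(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # d(x, y) = arccosh(1 + 2 * ||x - y||^2 / ((1 - ||x||^2) * (1 - ||y||^2)))
    sq = ((x - y) ** 2).sum(-1)
    denom = (1 - (x ** 2).sum(-1)) * (1 - (y ** 2).sum(-1))
    return torch.acosh(1 + 2 * sq / denom.clamp_min(1e-12))

a = clip_features(torch.randn(4, 16))
b = clip_features(torch.randn(4, 16))
print(poincare_distance(a, b))
```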
2212.07618
Mengnan Shi
Bohao Li, Chang Liu, Mengnan Shi, Xiaozhong Chen, Xiangyang Ji, Qixiang Ye
Proposal Distribution Calibration for Few-Shot Object Detection
This paper is under review in IEEE TNNLS
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adapting object detectors learned with sufficient supervision to novel classes under low data regimes is charming yet challenging. In few-shot object detection (FSOD), the two-step training paradigm is widely adopted to mitigate the severe sample imbalance, i.e., holistic pre-training on base classes, then partial fine-tuning in a balanced setting with all classes. Since unlabeled instances are suppressed as backgrounds in the base training phase, the learned RPN is prone to produce biased proposals for novel instances, resulting in dramatic performance degradation. Unfortunately, the extreme data scarcity aggravates the proposal distribution bias, hindering the RoI head from evolving toward novel classes. In this paper, we introduce a simple yet effective proposal distribution calibration (PDC) approach to neatly enhance the localization and classification abilities of the RoI head by recycling its localization ability endowed in base training and enriching high-quality positive samples for semantic fine-tuning. Specifically, we sample proposals based on the base proposal statistics to calibrate the distribution bias and impose additional localization and classification losses upon the sampled proposals for fast expanding the base detector to novel classes. Experiments on the commonly used Pascal VOC and MS COCO datasets with explicit state-of-the-art performances justify the efficacy of our PDC for FSOD. Code is available at github.com/Bohao-Lee/PDC.
[ { "created": "Thu, 15 Dec 2022 05:09:11 GMT", "version": "v1" } ]
2022-12-16
[ [ "Li", "Bohao", "" ], [ "Liu", "Chang", "" ], [ "Shi", "Mengnan", "" ], [ "Chen", "Xiaozhong", "" ], [ "Ji", "Xiangyang", "" ], [ "Ye", "Qixiang", "" ] ]
Adapting object detectors learned with sufficient supervision to novel classes under low data regimes is charming yet challenging. In few-shot object detection (FSOD), the two-step training paradigm is widely adopted to mitigate the severe sample imbalance, i.e., holistic pre-training on base classes, then partial fine-tuning in a balanced setting with all classes. Since unlabeled instances are suppressed as backgrounds in the base training phase, the learned RPN is prone to produce biased proposals for novel instances, resulting in dramatic performance degradation. Unfortunately, the extreme data scarcity aggravates the proposal distribution bias, hindering the RoI head from evolving toward novel classes. In this paper, we introduce a simple yet effective proposal distribution calibration (PDC) approach to neatly enhance the localization and classification abilities of the RoI head by recycling its localization ability endowed in base training and enriching high-quality positive samples for semantic fine-tuning. Specifically, we sample proposals based on the base proposal statistics to calibrate the distribution bias and impose additional localization and classification losses upon the sampled proposals for fast expanding the base detector to novel classes. Experiments on the commonly used Pascal VOC and MS COCO datasets with explicit state-of-the-art performances justify the efficacy of our PDC for FSOD. Code is available at github.com/Bohao-Lee/PDC.
1612.08845
Toni Heidenreich
Toni Heidenreich
The formal-logical characterisation of lies, deception, and associated notions
Literature review prepared as a student at King's College London
null
null
null
cs.LO cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Defining various dishonest notions in a formal way is a key step to enable intelligent agents to act in untrustworthy environments. This review evaluates the literature on this topic by looking at formal definitions based on modal logic as well as other formal approaches. Criteria from philosophical groundwork are used to assess the definitions for correctness and completeness. The key contribution of this review is to show that only a few definitions fully comply with this gold standard and to point out the missing steps towards a successful application of these definitions in an actual agent environment.
[ { "created": "Wed, 28 Dec 2016 10:35:05 GMT", "version": "v1" } ]
2016-12-30
[ [ "Heidenreich", "Toni", "" ] ]
Defining various dishonest notions in a formal way is a key step to enable intelligent agents to act in untrustworthy environments. This review evaluates the literature on this topic by looking at formal definitions based on modal logic as well as other formal approaches. Criteria from philosophical groundwork are used to assess the definitions for correctness and completeness. The key contribution of this review is to show that only a few definitions fully comply with this gold standard and to point out the missing steps towards a successful application of these definitions in an actual agent environment.
2006.11456
Abiola Osho
Abiola Osho and Ethan Tucker and George Amariucai
Implicit Crowdsourcing for Identifying Abusive Behavior in Online Social Networks
null
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increased use of online social networks for the dissemination of information comes with the misuse of the internet for cyberbullying, cybercrime, spam, vandalism, amongst other things. To proactively identify abuse in the networks, we propose a model to identify abusive posts by crowdsourcing. The crowdsourcing part of the detection mechanism is implemented implicitly, by simply observing the natural interaction between users encountering the messages. We explore the node-to-node spread of information on Twitter and propose a model that predicts the abuse level (abusive, hate, spam, normal) associated with the tweet by observing the attributes of the message, along with those of the users interacting with it. We demonstrate that the difference in users' interactions with abusive posts can be leveraged in identifying posts of varying abuse levels.
[ { "created": "Sat, 20 Jun 2020 01:14:30 GMT", "version": "v1" } ]
2020-06-23
[ [ "Osho", "Abiola", "" ], [ "Tucker", "Ethan", "" ], [ "Amariucai", "George", "" ] ]
The increased use of online social networks for the dissemination of information comes with the misuse of the internet for cyberbullying, cybercrime, spam, vandalism, amongst other things. To proactively identify abuse in the networks, we propose a model to identify abusive posts by crowdsourcing. The crowdsourcing part of the detection mechanism is implemented implicitly, by simply observing the natural interaction between users encountering the messages. We explore the node-to-node spread of information on Twitter and propose a model that predicts the abuse level (abusive, hate, spam, normal) associated with the tweet by observing the attributes of the message, along with those of the users interacting with it. We demonstrate that the difference in users' interactions with abusive posts can be leveraged in identifying posts of varying abuse levels.
2001.09046
Bart Smets
Bart Smets, Jim Portegies, Erik Bekkers, Remco Duits
PDE-based Group Equivariant Convolutional Neural Networks
27 pages, 18 figures. v2 changes: - mentioned KerCNNs - added section Generalization of G-CNNs - clarification that the experiments utilized automatic differentiation and SGD. v3 changes: - streamlined theoretical framework - formulation and proof Thm.1 & 2 - expanded experiments. v4 changes: typos in Prop.5 and (20) v5/6 changes: minor revision
null
10.1007/s10851-022-01114-x
null
cs.LG cs.CV math.DG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a PDE-based framework that generalizes Group equivariant Convolutional Neural Networks (G-CNNs). In this framework, a network layer is seen as a set of PDE-solvers where geometrically meaningful PDE-coefficients become the layer's trainable weights. Formulating our PDEs on homogeneous spaces allows these networks to be designed with built-in symmetries such as rotation in addition to the standard translation equivariance of CNNs. Having all the desired symmetries included in the design obviates the need to include them by means of costly techniques such as data augmentation. We will discuss our PDE-based G-CNNs (PDE-G-CNNs) in a general homogeneous space setting while also going into the specifics of our primary case of interest: roto-translation equivariance. We solve the PDE of interest by a combination of linear group convolutions and non-linear morphological group convolutions with analytic kernel approximations that we underpin with formal theorems. Our kernel approximations allow for fast GPU-implementation of the PDE-solvers, we release our implementation with this article in the form of the LieTorch extension to PyTorch, available at https://gitlab.com/bsmetsjr/lietorch . Just like for linear convolution a morphological convolution is specified by a kernel that we train in our PDE-G-CNNs. In PDE-G-CNNs we do not use non-linearities such as max/min-pooling and ReLUs as they are already subsumed by morphological convolutions. We present a set of experiments to demonstrate the strength of the proposed PDE-G-CNNs in increasing the performance of deep learning based imaging applications with far fewer parameters than traditional CNNs.
[ { "created": "Fri, 24 Jan 2020 15:00:46 GMT", "version": "v1" }, { "created": "Mon, 9 Mar 2020 14:16:16 GMT", "version": "v2" }, { "created": "Mon, 12 Jul 2021 07:56:22 GMT", "version": "v3" }, { "created": "Sat, 24 Jul 2021 11:14:06 GMT", "version": "v4" }, { "created": "Tue, 26 Apr 2022 10:17:22 GMT", "version": "v5" }, { "created": "Mon, 30 May 2022 19:05:29 GMT", "version": "v6" } ]
2022-08-24
[ [ "Smets", "Bart", "" ], [ "Portegies", "Jim", "" ], [ "Bekkers", "Erik", "" ], [ "Duits", "Remco", "" ] ]
We present a PDE-based framework that generalizes Group equivariant Convolutional Neural Networks (G-CNNs). In this framework, a network layer is seen as a set of PDE-solvers where geometrically meaningful PDE-coefficients become the layer's trainable weights. Formulating our PDEs on homogeneous spaces allows these networks to be designed with built-in symmetries such as rotation in addition to the standard translation equivariance of CNNs. Having all the desired symmetries included in the design obviates the need to include them by means of costly techniques such as data augmentation. We will discuss our PDE-based G-CNNs (PDE-G-CNNs) in a general homogeneous space setting while also going into the specifics of our primary case of interest: roto-translation equivariance. We solve the PDE of interest by a combination of linear group convolutions and non-linear morphological group convolutions with analytic kernel approximations that we underpin with formal theorems. Our kernel approximations allow for fast GPU-implementation of the PDE-solvers, we release our implementation with this article in the form of the LieTorch extension to PyTorch, available at https://gitlab.com/bsmetsjr/lietorch . Just like for linear convolution a morphological convolution is specified by a kernel that we train in our PDE-G-CNNs. In PDE-G-CNNs we do not use non-linearities such as max/min-pooling and ReLUs as they are already subsumed by morphological convolutions. We present a set of experiments to demonstrate the strength of the proposed PDE-G-CNNs in increasing the performance of deep learning based imaging applications with far fewer parameters than traditional CNNs.
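The morphological group convolutions above generalize the classic dilation (max-plus convolution). The 1-D sketch below shows only that basic operation; the actual group-equivariant solvers are in the LieTorch package linked in the abstract, and the example signal and kernel are arbitrary.

```python
# Minimal 1-D morphological (max-plus) convolution: out[i] = max_j (f[i - j] + k[j]).
# This is only the basic dilation operation that PDE-G-CNNs generalize; the real
# group-equivariant implementation is in the LieTorch package.
import numpy as np

def morphological_conv1d(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    n, m = len(signal), len(kernel)
    pad = m // 2
    padded = np.pad(signal, pad, constant_values=-np.inf)   # -inf is the max-plus identity
    out = np.empty(n)
    for i in range(n):
        out[i] = np.max(padded[i:i + m] + kernel[::-1])
    return out

f = np.array([0.0, 1.0, 0.0, 3.0, 0.0])
k = np.array([-1.0, 0.0, -1.0])   # concave "structuring" kernel
print(morphological_conv1d(f, k))
```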
1304.0954
Marko Horvat
Marko Horvat, Anton Grbin, Gordan Gledec
Labeling and Retrieval of Emotionally-Annotated Images using WordNet
16 pages, 4 figures. arXiv admin note: substantial text overlap with arXiv:1302.2223
International Journal of Knowledge-Based and Intelligent Engineering Systems, Vol. 17, No. 2, pp. 157-166, 2013
null
null
cs.IR cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Repositories of images with semantic and emotion content descriptions are valuable tools in many areas such as Affective Computing and Human-Computer Interaction, but they are also important in the development of multimodal searchable online databases. The ever-growing number of image documents available on the Internet continuously motivates research into better annotation models and more efficient retrieval methods which use a mash-up of available data on semantics, scenes, objects, events, context and emotion. Formal knowledge representation of such high-level semantics requires rich, explicit, human- but also machine-processable information. To achieve these goals we present an online ontology-based image annotation tool WNtags and demonstrate its usefulness in knowledge representation and image retrieval using the International Affective Picture System database. WNtags uses WordNet as its image tagging glossary but considers the Suggested Upper Merged Ontology as the preferred upper labeling formalism. The retrieval is performed using node distance metrics to establish semantic relatedness between a query and the collaboratively weighted tags describing high-level image semantics, after which the result is ranked according to the derived importance. We also elaborate plans to improve WNtags to create a collaborative Web-based multimedia repository for research in human emotion and attention.
[ { "created": "Wed, 3 Apr 2013 13:58:56 GMT", "version": "v1" }, { "created": "Fri, 10 Jan 2014 23:27:00 GMT", "version": "v2" } ]
2017-12-06
[ [ "Horvat", "Marko", "" ], [ "Grbin", "Anton", "" ], [ "Gledec", "Gordan", "" ] ]
Repositories of images with semantic and emotion content descriptions are valuable tools in many areas such as Affective Computing and Human-Computer Interaction, but they are also important in the development of multimodal searchable online databases. The ever-growing number of image documents available on the Internet continuously motivates research into better annotation models and more efficient retrieval methods which use a mash-up of available data on semantics, scenes, objects, events, context and emotion. Formal knowledge representation of such high-level semantics requires rich, explicit, human- but also machine-processable information. To achieve these goals we present an online ontology-based image annotation tool WNtags and demonstrate its usefulness in knowledge representation and image retrieval using the International Affective Picture System database. WNtags uses WordNet as its image tagging glossary but considers the Suggested Upper Merged Ontology as the preferred upper labeling formalism. The retrieval is performed using node distance metrics to establish semantic relatedness between a query and the collaboratively weighted tags describing high-level image semantics, after which the result is ranked according to the derived importance. We also elaborate plans to improve WNtags to create a collaborative Web-based multimedia repository for research in human emotion and attention.
2006.14683
Itzik Malkiel
Itzik Malkiel, Lior Wolf
MTAdam: Automatic Balancing of Multiple Training Loss Terms
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When training neural models, it is common to combine multiple loss terms. The balancing of these terms requires considerable human effort and is computationally demanding. Moreover, the optimal trade-off between the loss terms can change as training progresses, especially for adversarial terms. In this work, we generalize the Adam optimization algorithm to handle multiple loss terms. The guiding principle is that for every layer, the gradient magnitude of the terms should be balanced. To this end, the Multi-Term Adam (MTAdam) computes the derivative of each loss term separately, infers the first and second moments per parameter and loss term, and calculates a first moment for the magnitude per layer of the gradients arising from each loss. This magnitude is used to continuously balance the gradients across all layers, in a manner that both varies from one layer to the next and dynamically changes over time. Our results show that training with the new method leads to fast recovery from suboptimal initial loss weighting and to training outcomes that match conventional training with the prescribed hyperparameters of each method.
[ { "created": "Thu, 25 Jun 2020 20:27:27 GMT", "version": "v1" } ]
2020-06-29
[ [ "Malkiel", "Itzik", "" ], [ "Wolf", "Lior", "" ] ]
When training neural models, it is common to combine multiple loss terms. The balancing of these terms requires considerable human effort and is computationally demanding. Moreover, the optimal trade-off between the loss terms can change as training progresses, especially for adversarial terms. In this work, we generalize the Adam optimization algorithm to handle multiple loss terms. The guiding principle is that for every layer, the gradient magnitude of the terms should be balanced. To this end, the Multi-Term Adam (MTAdam) computes the derivative of each loss term separately, infers the first and second moments per parameter and loss term, and calculates a first moment for the magnitude per layer of the gradients arising from each loss. This magnitude is used to continuously balance the gradients across all layers, in a manner that both varies from one layer to the next and dynamically changes over time. Our results show that training with the new method leads to fast recovery from suboptimal initial loss weighting and to training outcomes that match conventional training with the prescribed hyperparameters of each method.
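The balancing idea described above can be pictured with a toy sketch: compute each loss term's gradients separately, track a running magnitude per parameter tensor and term, and rescale every term to the magnitude of an anchor term before summing. This is an illustrative simplification in PyTorch, not the authors' exact MTAdam update; the function names and the EMA scheme here are assumptions.

```python
import torch

def balanced_multi_loss_step(params, losses, opt, mag_ema, beta=0.999, eps=1e-12):
    """One step that balances several loss terms per parameter tensor (simplified sketch)."""
    grads = [torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
             for loss in losses]
    for p_idx, p in enumerate(params):
        per_term = []
        for t_idx, g in enumerate(grads):
            gi = g[p_idx] if g[p_idx] is not None else torch.zeros_like(p)
            key = (p_idx, t_idx)
            mag_ema[key] = beta * mag_ema.get(key, gi.norm().item()) + (1 - beta) * gi.norm().item()
            per_term.append((gi, mag_ema[key]))
        ref = per_term[0][1]                      # magnitude of the first (anchor) loss term
        p.grad = sum(gi * (ref / (mag + eps)) for gi, mag in per_term)
    opt.step()
    opt.zero_grad()

# Hypothetical usage with two loss terms on a tiny linear model.
model = torch.nn.Linear(4, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mag_ema = {}
x, y = torch.randn(8, 4), torch.randn(8, 1)
pred = model(x)
losses = [torch.nn.functional.mse_loss(pred, y), pred.abs().mean()]
balanced_multi_loss_step(list(model.parameters()), losses, opt, mag_ema)
```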
1907.02841
Li Qiang
Wenxiang Zuo, Qiang Li, Xianming Liu
Depth Restoration: A fast low-rank matrix completion via dual-graph regularization
More experiments need to be added and the main idea of the paper needs to be revamped. Please consider the paper withdrawn
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As an approach to sensing real scenes, depth information has found widespread applications. However, owing to the limitations of depth sensing technology, depth maps captured in practice usually suffer from severe noise and missing values at many pixels. In this paper, we propose a fast low-rank matrix completion method via dual-graph regularization for depth restoration. Specifically, depth restoration can be transformed into a low-rank matrix completion problem. In order to complete the low-rank matrix and restore the depth map, the proposed dual-graph method, containing local and non-local graph regularizations, exploits the local similarity of depth maps and the gradient consistency of their depth-color counterparts, respectively. In addition, the proposed approach achieves high-speed depth restoration thanks to its closed-form solution. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods with respect to both objective and subjective quality evaluations, especially for severe depth degradation.
[ { "created": "Fri, 5 Jul 2019 14:09:31 GMT", "version": "v1" }, { "created": "Mon, 28 Oct 2019 11:06:38 GMT", "version": "v2" }, { "created": "Thu, 31 Oct 2019 13:14:36 GMT", "version": "v3" }, { "created": "Wed, 8 Jan 2020 09:29:44 GMT", "version": "v4" } ]
2020-01-09
[ [ "Zuo", "Wenxiang", "" ], [ "Li", "Qiang", "" ], [ "Liu", "Xianming", "" ] ]
As an approach to sensing real scenes, depth information has found widespread applications. However, owing to the limitations of depth sensing technology, depth maps captured in practice usually suffer from severe noise and missing values at many pixels. In this paper, we propose a fast low-rank matrix completion method via dual-graph regularization for depth restoration. Specifically, depth restoration can be transformed into a low-rank matrix completion problem. In order to complete the low-rank matrix and restore the depth map, the proposed dual-graph method, containing local and non-local graph regularizations, exploits the local similarity of depth maps and the gradient consistency of their depth-color counterparts, respectively. In addition, the proposed approach achieves high-speed depth restoration thanks to its closed-form solution. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods with respect to both objective and subjective quality evaluations, especially for severe depth degradation.
2004.10495
Dong Wang
Dong Wang, Xiaoqian Qin, Fengyi Song, Li Cheng
Stabilizing Training of Generative Adversarial Nets via Langevin Stein Variational Gradient Descent
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative adversarial networks (GANs), famous for their capability of learning complex underlying data distributions, are nevertheless known to be tricky to train, which can result in mode collapse or performance deterioration. Current approaches to dealing with GANs' issues mostly rely on practical training techniques for the purpose of regularization, which in turn undermine the convergence and theoretical soundness of GANs. In this paper, we propose to stabilize GAN training via a novel particle-based variational inference method -- Langevin Stein variational gradient descent (LSVGD) -- which not only inherits the flexibility and efficiency of the original SVGD but also aims to address its instability issues by incorporating an extra disturbance into the update dynamics. We further demonstrate that, by properly adjusting the noise variance, LSVGD simulates a Langevin process whose stationary distribution is exactly the target distribution. We also show that the LSVGD dynamics has an implicit regularization which is able to enhance particle spread and diversity. Finally, we present an efficient way of applying particle-based variational inference to a general GAN training procedure, no matter what loss function is adopted. Experimental results on one synthetic dataset and three popular benchmark datasets -- CIFAR-10, Tiny-ImageNet and CelebA -- validate that LSVGD can remarkably improve the performance and stability of various GAN models.
[ { "created": "Wed, 22 Apr 2020 11:20:04 GMT", "version": "v1" } ]
2020-04-23
[ [ "Wang", "Dong", "" ], [ "Qin", "Xiaoqian", "" ], [ "Song", "Fengyi", "" ], [ "Cheng", "Li", "" ] ]
Generative adversarial networks (GANs), famous for their capability of learning complex underlying data distributions, are nevertheless known to be tricky to train, which can result in mode collapse or performance deterioration. Current approaches to dealing with GANs' issues mostly rely on practical training techniques for the purpose of regularization, which in turn undermine the convergence and theoretical soundness of GANs. In this paper, we propose to stabilize GAN training via a novel particle-based variational inference method -- Langevin Stein variational gradient descent (LSVGD) -- which not only inherits the flexibility and efficiency of the original SVGD but also aims to address its instability issues by incorporating an extra disturbance into the update dynamics. We further demonstrate that, by properly adjusting the noise variance, LSVGD simulates a Langevin process whose stationary distribution is exactly the target distribution. We also show that the LSVGD dynamics has an implicit regularization which is able to enhance particle spread and diversity. Finally, we present an efficient way of applying particle-based variational inference to a general GAN training procedure, no matter what loss function is adopted. Experimental results on one synthetic dataset and three popular benchmark datasets -- CIFAR-10, Tiny-ImageNet and CelebA -- validate that LSVGD can remarkably improve the performance and stability of various GAN models.
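A back-of-the-envelope sketch of the particle update the abstract describes: a standard SVGD step (RBF kernel, median-heuristic bandwidth) with an added Gaussian disturbance. The step size and noise scale are illustrative, and this is not the paper's derivation of the optimal noise variance.

```python
import numpy as np

def svgd_langevin_step(x, score, step=0.1, noise_std=1e-2, rng=np.random):
    """One SVGD update with an extra Gaussian disturbance on a set of particles x (n, d)."""
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    h = np.median(d2) / np.log(len(x) + 1.0) + 1e-8           # median-heuristic bandwidth
    k = np.exp(-d2 / h)
    repulsive = (2.0 / h) * ((x[:, None, :] - x[None, :, :]) * k[:, :, None]).sum(axis=1)
    phi = (k @ score(x) + repulsive) / len(x)                  # driving + repulsive term
    return x + step * phi + noise_std * rng.standard_normal(x.shape)

# Toy target: standard 2-D Gaussian, whose score is score(x) = -x.
score = lambda x: -x
particles = np.random.randn(100, 2) + 3.0
for _ in range(1000):
    particles = svgd_langevin_step(particles, score)
print(particles.mean(axis=0), particles.std(axis=0))           # roughly [0, 0] and [1, 1]
```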
2112.13050
Susmit Agrawal
K. Ram Prabhakar, Susmit Agrawal, R. Venkatesh Babu
Self-Gated Memory Recurrent Network for Efficient Scalable HDR Deghosting
12 pages
IEEE Transactions on Computational Imaging (Volume 7, 2021) 1228-1239
10.1109/TCI.2021.3112920
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We propose a novel recurrent network-based HDR deghosting method for fusing arbitrary-length dynamic sequences. The proposed method uses convolutional and recurrent architectures to generate visually pleasing, ghosting-free HDR images. We introduce a new recurrent cell architecture, namely the Self-Gated Memory (SGM) cell, that outperforms the standard LSTM cell while containing fewer parameters and having faster running times. In the SGM cell, the information flow through a gate is controlled by multiplying the gate's output by a function of itself. Additionally, we use two SGM cells in a bidirectional setting to improve output quality. The proposed approach achieves state-of-the-art performance compared to existing HDR deghosting methods quantitatively across three publicly available datasets while simultaneously achieving scalability to fuse variable-length input sequences without necessitating re-training. Through extensive ablations, we demonstrate the importance of individual components in our proposed approach. The code is available at https://val.cds.iisc.ac.in/HDR/HDRRNN/index.html.
[ { "created": "Fri, 24 Dec 2021 12:36:33 GMT", "version": "v1" } ]
2021-12-28
[ [ "Prabhakar", "K. Ram", "" ], [ "Agrawal", "Susmit", "" ], [ "Babu", "R. Venkatesh", "" ] ]
We propose a novel recurrent network-based HDR deghosting method for fusing arbitrary-length dynamic sequences. The proposed method uses convolutional and recurrent architectures to generate visually pleasing, ghosting-free HDR images. We introduce a new recurrent cell architecture, namely the Self-Gated Memory (SGM) cell, that outperforms the standard LSTM cell while containing fewer parameters and having faster running times. In the SGM cell, the information flow through a gate is controlled by multiplying the gate's output by a function of itself. Additionally, we use two SGM cells in a bidirectional setting to improve output quality. The proposed approach achieves state-of-the-art performance compared to existing HDR deghosting methods quantitatively across three publicly available datasets while simultaneously achieving scalability to fuse variable-length input sequences without necessitating re-training. Through extensive ablations, we demonstrate the importance of individual components in our proposed approach. The code is available at https://val.cds.iisc.ac.in/HDR/HDRRNN/index.html.
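The exact SGM equations are given in the paper; the cell below is only one plausible reading of "multiplying the gate's output by a function of itself", written as a drop-in recurrent cell in PyTorch. The self-gating function, layer sizes and blend rule are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SelfGatedCell(nn.Module):
    """A speculative single-state recurrent cell with self-gated information flow."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.in_proj = nn.Linear(input_size + hidden_size, hidden_size)
        self.gate_proj = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, h):
        z = torch.cat([x, h], dim=-1)
        candidate = torch.tanh(self.in_proj(z))
        g = torch.sigmoid(self.gate_proj(z))
        g = g * torch.sigmoid(g)               # self-gating: gate output times a function of itself
        return (1.0 - g) * h + g * candidate   # convex blend of old state and candidate

# Hypothetical usage: unroll over three time steps for a batch of four sequences.
cell = SelfGatedCell(input_size=8, hidden_size=16)
h = torch.zeros(4, 16)
for _ in range(3):
    h = cell(torch.randn(4, 8), h)
print(h.shape)   # torch.Size([4, 16])
```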
1904.10522
Hyunsu Cho
Theodore Vasiloudis, Hyunsu Cho, Henrik Bostr\"om
Block-distributed Gradient Boosted Trees
SIGIR 2019
null
null
null
cs.LG cs.IR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Gradient Boosted Tree (GBT) algorithm is one of the most popular machine learning algorithms used in production, for tasks that include Click-Through Rate (CTR) prediction and learning-to-rank. To deal with the massive datasets available today, many distributed GBT methods have been proposed. However, they all assume a row-distributed dataset, addressing scalability only with respect to the number of data points and not the number of features, and increasing communication cost for high-dimensional data. In order to allow for scalability across both the data point and feature dimensions, and reduce communication cost, we propose block-distributed GBTs. We achieve communication efficiency by making full use of the data sparsity and adapting the Quickscorer algorithm to the block-distributed setting. We evaluate our approach using datasets with millions of features, and demonstrate that we are able to achieve multiple orders of magnitude reduction in communication cost for sparse data, with no loss in accuracy, while providing a more scalable design. As a result, we are able to reduce the training time for high-dimensional data, and allow more cost-effective scale-out without the need for expensive network communication.
[ { "created": "Tue, 23 Apr 2019 20:10:36 GMT", "version": "v1" }, { "created": "Tue, 28 May 2019 19:32:35 GMT", "version": "v2" } ]
2019-05-30
[ [ "Vasiloudis", "Theodore", "" ], [ "Cho", "Hyunsu", "" ], [ "Boström", "Henrik", "" ] ]
The Gradient Boosted Tree (GBT) algorithm is one of the most popular machine learning algorithms used in production, for tasks that include Click-Through Rate (CTR) prediction and learning-to-rank. To deal with the massive datasets available today, many distributed GBT methods have been proposed. However, they all assume a row-distributed dataset, addressing scalability only with respect to the number of data points and not the number of features, and increasing communication cost for high-dimensional data. In order to allow for scalability across both the data point and feature dimensions, and reduce communication cost, we propose block-distributed GBTs. We achieve communication efficiency by making full use of the data sparsity and adapting the Quickscorer algorithm to the block-distributed setting. We evaluate our approach using datasets with millions of features, and demonstrate that we are able to achieve multiple orders of magnitude reduction in communication cost for sparse data, with no loss in accuracy, while providing a more scalable design. As a result, we are able to reduce the training time for high-dimensional data, and allow more cost-effective scale-out without the need for expensive network communication.
2306.09547
Daria Reshetova
Daria Reshetova, Wei-Ning Chen, Ayfer \"Ozg\"ur
Training generative models from privatized data
null
null
null
null
cs.LG cs.CR cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Local differential privacy is a powerful method for privacy-preserving data collection. In this paper, we develop a framework for training Generative Adversarial Networks (GANs) on differentially privatized data. We show that entropic regularization of optimal transport - a popular regularization method in the literature that has often been leveraged for its computational benefits - enables the generator to learn the raw (unprivatized) data distribution even though it only has access to privatized samples. We prove that at the same time this leads to fast statistical convergence at the parametric rate. This shows that entropic regularization of optimal transport uniquely enables the mitigation of both the effects of privatization noise and the curse of dimensionality in statistical convergence. We provide experimental evidence to support the efficacy of our framework in practice.
[ { "created": "Thu, 15 Jun 2023 23:28:45 GMT", "version": "v1" }, { "created": "Fri, 1 Mar 2024 01:54:15 GMT", "version": "v2" } ]
2024-03-04
[ [ "Reshetova", "Daria", "" ], [ "Chen", "Wei-Ning", "" ], [ "Özgür", "Ayfer", "" ] ]
Local differential privacy is a powerful method for privacy-preserving data collection. In this paper, we develop a framework for training Generative Adversarial Networks (GANs) on differentially privatized data. We show that entropic regularization of optimal transport - a popular regularization method in the literature that has often been leveraged for its computational benefits - enables the generator to learn the raw (unprivatized) data distribution even though it only has access to privatized samples. We prove that at the same time this leads to fast statistical convergence at the parametric rate. This shows that entropic regularization of optimal transport uniquely enables the mitigation of both the effects of privatization noise and the curse of dimensionality in statistical convergence. We provide experimental evidence to support the efficacy of our framework in practice.
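The entropic regularization of optimal transport at the heart of the abstract above is easy to picture with plain Sinkhorn iterations between generator samples and locally privatized samples. The Laplace privatization, sample sizes and regularization strength below are toy choices for illustration, not the paper's setup.

```python
import numpy as np

def sinkhorn_plan(x, y, eps=0.5, iters=200):
    """Entropy-regularized OT plan between empirical samples x (n, d) and y (m, d)."""
    cost = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)   # squared Euclidean cost
    k = np.exp(-cost / eps)
    a, b = np.ones(len(x)) / len(x), np.ones(len(y)) / len(y)      # uniform marginals
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):                                         # alternating marginal scaling
        u = a / (k @ v)
        v = b / (k.T @ u)
    return u[:, None] * k * v[None, :]

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(256, 2))
privatized = real + rng.laplace(scale=0.5, size=real.shape)        # toy local-DP style noise
fake = rng.normal(0.5, 1.2, size=(256, 2))                         # stand-in for generator samples
plan = sinkhorn_plan(fake, privatized)
cost = np.sum((fake[:, None, :] - privatized[None, :, :]) ** 2, axis=-1)
print("entropic OT cost:", float(np.sum(plan * cost)))
```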
1809.01301
Pamela Shapiro
Pamela Shapiro and Kevin Duh
BPE and CharCNNs for Translation of Morphology: A Cross-Lingual Comparison and Analysis
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural Machine Translation (NMT) in low-resource settings and of morphologically rich languages is made difficult in part by data sparsity of vocabulary words. Several methods have been used to help reduce this sparsity, notably Byte-Pair Encoding (BPE) and a character-based CNN layer (charCNN). However, the charCNN has largely been neglected, possibly because it has only been compared to BPE rather than combined with it. We argue for a reconsideration of the charCNN, based on cross-lingual improvements on low-resource data. We translate from 8 languages into English, using a multi-way parallel collection of TED transcripts. We find that in most cases, using both BPE and a charCNN performs best, while in Hebrew, using a charCNN over words is best.
[ { "created": "Wed, 5 Sep 2018 02:26:09 GMT", "version": "v1" }, { "created": "Sat, 8 Sep 2018 23:36:53 GMT", "version": "v2" } ]
2018-09-11
[ [ "Shapiro", "Pamela", "" ], [ "Duh", "Kevin", "" ] ]
Neural Machine Translation (NMT) in low-resource settings and of morphologically rich languages is made difficult in part by data sparsity of vocabulary words. Several methods have been used to help reduce this sparsity, notably Byte-Pair Encoding (BPE) and a character-based CNN layer (charCNN). However, the charCNN has largely been neglected, possibly because it has only been compared to BPE rather than combined with it. We argue for a reconsideration of the charCNN, based on cross-lingual improvements on low-resource data. We translate from 8 languages into English, using a multi-way parallel collection of TED transcripts. We find that in most cases, using both BPE and a charCNN performs best, while in Hebrew, using a charCNN over words is best.
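As a point of reference for the charCNN layer discussed above, the module below builds a token (word or BPE subword) embedding from its characters with 1-D convolutions and max-pooling over the character axis. Filter widths and sizes are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class CharCNNEncoder(nn.Module):
    """Character-level CNN that composes a token embedding from its characters."""
    def __init__(self, n_chars=128, char_dim=16, out_dim=64, widths=(3, 4, 5)):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(char_dim, out_dim // len(widths), kernel_size=w, padding=w // 2)
            for w in widths
        )

    def forward(self, char_ids):                                      # (batch, tokens, chars)
        b, t, c = char_ids.shape
        x = self.char_emb(char_ids.view(b * t, c)).transpose(1, 2)    # (b*t, char_dim, chars)
        feats = [conv(x).max(dim=-1).values for conv in self.convs]   # max-pool over characters
        return torch.cat(feats, dim=-1).view(b, t, -1)                # (batch, tokens, features)

# Hypothetical usage: 2 sentences, 5 tokens each, up to 12 characters per token.
enc = CharCNNEncoder()
ids = torch.randint(1, 128, (2, 5, 12))
print(enc(ids).shape)   # torch.Size([2, 5, 63]), since 64 // 3 channels per filter width
```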
cs/0109012
Michael Geist
Michael Geist
Is There a There There: Towards Greater Certainty for Internet Jurisdiction
29th TPRC Conference, 2001
16 (3) Berkeley Tech. LJ (forthcoming 2001)
null
TPRC-2001-017
cs.CY
null
The unique challenge presented by the Internet is that compliance with local laws is rarely sufficient to assure a business that it has limited its exposure to legal risk. The paper identifies why the challenge of adequately accounting for the legal risk arising from Internet jurisdiction has been aggravated in recent years by the adoption of the Zippo legal framework, commonly referred to as the passive versus active test. The test provides parties with only limited guidance and often results in detrimental judicial decisions from a policy perspective. Given the inadequacies of the Zippo passive versus active test, the paper argues that it is now fitting to identify a more effective standard for determining when it is appropriate to assert jurisdiction in cases involving predominantly Internet-based contacts. The solution submitted in the paper is to move toward a targeting-based analysis. Unlike the Zippo approach, a targeting analysis would seek to identify the intentions of the parties and to assess the steps taken to either enter or avoid a particular jurisdiction. Targeting would also lessen the reliance on effects-based analysis, the source of considerable uncertainty since Internet-based activity can ordinarily be said to create some effects in most jurisdictions. To identify the appropriate criteria for a targeting test, the paper recommends returning to the core jurisdictional principle -- foreseeability. Foreseeability in the targeting context depends on three factors -- contracts, technology, and actual or implied knowledge.
[ { "created": "Tue, 11 Sep 2001 03:22:25 GMT", "version": "v1" } ]
2007-05-23
[ [ "Geist", "Michael", "" ] ]
The unique challenge presented by the Internet is that compliance with local laws is rarely sufficient to assure a business that it has limited its exposure to legal risk. The paper identifies why the challenge of adequately accounting for the legal risk arising from Internet jurisdiction has been aggravated in recent years by the adoption of the Zippo legal framework, commonly referred to as the passive versus active test. The test provides parties with only limited guidance and often results in detrimental judicial decisions from a policy perspective. Given the inadequacies of the Zippo passive versus active test, the paper argues that it is now fitting to identify a more effective standard for determining when it is appropriate to assert jurisdiction in cases involving predominantly Internet-based contacts. The solution submitted in the paper is to move toward a targeting-based analysis. Unlike the Zippo approach, a targeting analysis would seek to identify the intentions of the parties and to assess the steps taken to either enter or avoid a particular jurisdiction. Targeting would also lessen the reliance on effects-based analysis, the source of considerable uncertainty since Internet-based activity can ordinarily be said to create some effects in most jurisdictions. To identify the appropriate criteria for a targeting test, the paper recommends returning to the core jurisdictional principle -- foreseeability. Foreseeability in the targeting context depends on three factors -- contracts, technology, and actual or implied knowledge.
2303.06611
Weilin Lin
Weilin Lin, Xiangyu Zhao, Yejing Wang, Yuanshao Zhu, Wanyu Wang
AutoDenoise: Automatic Data Instance Denoising for Recommendations
9 pages, 4 figures, 5 tables, conference
null
10.1145/3543507.3583339
null
cs.IR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Historical user-item interaction datasets are essential in training modern recommender systems for predicting user preferences. However, the arbitrary user behaviors in most recommendation scenarios lead to a large volume of noisy data instances being recorded, which cannot fully represent users' true interests. While a large number of denoising studies are emerging in the recommender system community, all of them suffer from highly dynamic data distributions. In this paper, we propose a Deep Reinforcement Learning (DRL) based framework, AutoDenoise, with an Instance Denoising Policy Network, for denoising data instances in an instance-selection manner in deep recommender systems. To be specific, AutoDenoise serves as an agent in DRL to adaptively select noise-free and predictive data instances, which can then be utilized directly in training representative recommendation models. In addition, we design an alternate two-phase optimization strategy to train and validate AutoDenoise properly. In the searching phase, we aim to train the policy network with the capacity of instance denoising; in the validation phase, we identify and evaluate the denoised subset of data instances selected by the trained policy network, so as to validate its denoising ability. We conduct extensive experiments to validate the effectiveness of AutoDenoise combined with multiple representative recommender system models.
[ { "created": "Sun, 12 Mar 2023 08:36:15 GMT", "version": "v1" } ]
2023-03-14
[ [ "Lin", "Weilin", "" ], [ "Zhao", "Xiangyu", "" ], [ "Wang", "Yejing", "" ], [ "Zhu", "Yuanshao", "" ], [ "Wang", "Wanyu", "" ] ]
Historical user-item interaction datasets are essential in training modern recommender systems for predicting user preferences. However, the arbitrary user behaviors in most recommendation scenarios lead to a large volume of noisy data instances being recorded, which cannot fully represent users' true interests. While a large number of denoising studies are emerging in the recommender system community, all of them suffer from highly dynamic data distributions. In this paper, we propose a Deep Reinforcement Learning (DRL) based framework, AutoDenoise, with an Instance Denoising Policy Network, for denoising data instances in an instance-selection manner in deep recommender systems. To be specific, AutoDenoise serves as an agent in DRL to adaptively select noise-free and predictive data instances, which can then be utilized directly in training representative recommendation models. In addition, we design an alternate two-phase optimization strategy to train and validate AutoDenoise properly. In the searching phase, we aim to train the policy network with the capacity of instance denoising; in the validation phase, we identify and evaluate the denoised subset of data instances selected by the trained policy network, so as to validate its denoising ability. We conduct extensive experiments to validate the effectiveness of AutoDenoise combined with multiple representative recommender system models.
2301.07849
Giovanni Viglietta
Giuseppe A. Di Luna and Giovanni Viglietta
Efficient Computation in Congested Anonymous Dynamic Networks
26 pages, 2 figures
null
null
null
cs.DC cs.DM
http://creativecommons.org/licenses/by/4.0/
An anonymous dynamic network is a network of indistinguishable processes whose communication links may appear or disappear unpredictably over time. Previous research has shown that deterministically computing an arbitrary function of a multiset of input values given to these processes takes only a linear number of communication rounds (Di Luna-Viglietta, FOCS 2022). However, fast algorithms for anonymous dynamic networks rely on the construction and transmission of large data structures called "history trees", whose size is polynomial in the number of processes. This approach is unfeasible if the network is congested, and only messages of logarithmic size can be sent through its links. Observe that sending a large message piece by piece over several rounds is not in itself a solution, due to the anonymity of the processes combined with the dynamic nature of the network. Moreover, it is known that certain basic tasks such as all-to-all token dissemination (by means of single-token forwarding) require $\Omega(n^2/\log n)$ rounds in congested networks (Dutta et al., SODA 2013). In this work, we develop a series of practical and efficient techniques that make it possible to use history trees in congested anonymous dynamic networks. Among other applications, we show how to compute arbitrary functions in such networks in $O(n^3)$ communication rounds, greatly improving upon previous state-of-the-art algorithms for congested networks.
[ { "created": "Thu, 19 Jan 2023 02:11:47 GMT", "version": "v1" }, { "created": "Sat, 6 May 2023 15:22:15 GMT", "version": "v2" }, { "created": "Tue, 5 Sep 2023 03:03:07 GMT", "version": "v3" }, { "created": "Sat, 29 Jun 2024 12:53:12 GMT", "version": "v4" } ]
2024-07-02
[ [ "Di Luna", "Giuseppe A.", "" ], [ "Viglietta", "Giovanni", "" ] ]
An anonymous dynamic network is a network of indistinguishable processes whose communication links may appear or disappear unpredictably over time. Previous research has shown that deterministically computing an arbitrary function of a multiset of input values given to these processes takes only a linear number of communication rounds (Di Luna-Viglietta, FOCS 2022). However, fast algorithms for anonymous dynamic networks rely on the construction and transmission of large data structures called "history trees", whose size is polynomial in the number of processes. This approach is unfeasible if the network is congested, and only messages of logarithmic size can be sent through its links. Observe that sending a large message piece by piece over several rounds is not in itself a solution, due to the anonymity of the processes combined with the dynamic nature of the network. Moreover, it is known that certain basic tasks such as all-to-all token dissemination (by means of single-token forwarding) require $\Omega(n^2/\log n)$ rounds in congested networks (Dutta et al., SODA 2013). In this work, we develop a series of practical and efficient techniques that make it possible to use history trees in congested anonymous dynamic networks. Among other applications, we show how to compute arbitrary functions in such networks in $O(n^3)$ communication rounds, greatly improving upon previous state-of-the-art algorithms for congested networks.
2407.18813
Chenming Wu
Zhe Xin and Yufeng Yue and Liangjun Zhang and Chenming Wu
HERO-SLAM: Hybrid Enhanced Robust Optimization of Neural SLAM
Accepted to ICRA 2024
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simultaneous Localization and Mapping (SLAM) is a fundamental task in robotics, driving numerous applications such as autonomous driving and virtual reality. Recent progress on neural implicit SLAM has shown encouraging and impressive results. However, the robustness of neural SLAM, particularly in challenging or data-limited situations, remains an unresolved issue. This paper presents HERO-SLAM, a Hybrid Enhanced Robust Optimization method for neural SLAM, which combines the benefits of neural implicit field and feature-metric optimization. This hybrid method optimizes a multi-resolution implicit field and enhances robustness in challenging environments with sudden viewpoint changes or sparse data collection. Our comprehensive experimental results on benchmarking datasets validate the effectiveness of our hybrid approach, demonstrating its superior performance over existing implicit field-based methods in challenging scenarios. HERO-SLAM provides a new pathway to enhance the stability, performance, and applicability of neural SLAM in real-world scenarios. Code is available on the project page: https://hero-slam.github.io.
[ { "created": "Fri, 26 Jul 2024 15:22:14 GMT", "version": "v1" } ]
2024-07-29
[ [ "Xin", "Zhe", "" ], [ "Yue", "Yufeng", "" ], [ "Zhang", "Liangjun", "" ], [ "Wu", "Chenming", "" ] ]
Simultaneous Localization and Mapping (SLAM) is a fundamental task in robotics, driving numerous applications such as autonomous driving and virtual reality. Recent progress on neural implicit SLAM has shown encouraging and impressive results. However, the robustness of neural SLAM, particularly in challenging or data-limited situations, remains an unresolved issue. This paper presents HERO-SLAM, a Hybrid Enhanced Robust Optimization method for neural SLAM, which combines the benefits of neural implicit field and feature-metric optimization. This hybrid method optimizes a multi-resolution implicit field and enhances robustness in challenging environments with sudden viewpoint changes or sparse data collection. Our comprehensive experimental results on benchmarking datasets validate the effectiveness of our hybrid approach, demonstrating its superior performance over existing implicit field-based methods in challenging scenarios. HERO-SLAM provides a new pathway to enhance the stability, performance, and applicability of neural SLAM in real-world scenarios. Code is available on the project page: https://hero-slam.github.io.
cs/0702083
Serebrenik Alexander
Alexander Serebrenik, Tom Schrijvers, Bart Demoen
Improving Prolog programs: Refactoring for Prolog
To appear in Theory and Practice of Logic Programming (TPLP)
null
null
2006-1
cs.SE
null
Refactoring is an established technique from the object-oriented (OO) programming community to restructure code: it aims at improving software readability, maintainability and extensibility. Although refactoring is not tied to the OO-paradigm in particular, its ideas have not been applied to Logic Programming until now. This paper applies the ideas of refactoring to Prolog programs. A catalogue is presented listing refactorings classified according to scope. Some of the refactorings have been adapted from the OO-paradigm, while others have been specifically designed for Prolog. The discrepancy between intended and operational semantics in Prolog is also addressed by some of the refactorings. In addition, ViPReSS, a semi-automatic refactoring browser, is discussed and the experience with applying ViPReSS to a large Prolog legacy system is reported. The main conclusion is that refactoring is both a viable technique in Prolog and a rather desirable one.
[ { "created": "Wed, 14 Feb 2007 09:53:37 GMT", "version": "v1" } ]
2007-05-23
[ [ "Serebrenik", "Alexander", "" ], [ "Schrijvers", "Tom", "" ], [ "Demoen", "Bart", "" ] ]
Refactoring is an established technique from the object-oriented (OO) programming community to restructure code: it aims at improving software readability, maintainability and extensibility. Although refactoring is not tied to the OO-paradigm in particular, its ideas have not been applied to Logic Programming until now. This paper applies the ideas of refactoring to Prolog programs. A catalogue is presented listing refactorings classified according to scope. Some of the refactorings have been adapted from the OO-paradigm, while others have been specifically designed for Prolog. The discrepancy between intended and operational semantics in Prolog is also addressed by some of the refactorings. In addition, ViPReSS, a semi-automatic refactoring browser, is discussed and the experience with applying ViPReSS to a large Prolog legacy system is reported. The main conclusion is that refactoring is both a viable technique in Prolog and a rather desirable one.
2306.01116
Julien Launay
Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, Julien Launay
The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models are commonly trained on a mixture of filtered web data and curated high-quality corpora, such as social media conversations, books, or technical papers. This curation process is believed to be necessary to produce performant models with broad zero-shot generalization abilities. However, as larger models requiring pretraining on trillions of tokens are considered, it is unclear how scalable curation is and whether we will run out of unique high-quality data soon. At variance with previous beliefs, we show that properly filtered and deduplicated web data alone can lead to powerful models, even significantly outperforming state-of-the-art models trained on The Pile. Despite extensive filtering, the high-quality data we extract from the web is still plentiful, and we are able to obtain five trillion tokens from CommonCrawl. We publicly release an extract of 600 billion tokens from our RefinedWeb dataset, and 1.3B/7.5B-parameter language models trained on it.
[ { "created": "Thu, 1 Jun 2023 20:03:56 GMT", "version": "v1" } ]
2023-06-05
[ [ "Penedo", "Guilherme", "" ], [ "Malartic", "Quentin", "" ], [ "Hesslow", "Daniel", "" ], [ "Cojocaru", "Ruxandra", "" ], [ "Cappelli", "Alessandro", "" ], [ "Alobeidli", "Hamza", "" ], [ "Pannier", "Baptiste", "" ], [ "Almazrouei", "Ebtesam", "" ], [ "Launay", "Julien", "" ] ]
Large language models are commonly trained on a mixture of filtered web data and curated high-quality corpora, such as social media conversations, books, or technical papers. This curation process is believed to be necessary to produce performant models with broad zero-shot generalization abilities. However, as larger models requiring pretraining on trillions of tokens are considered, it is unclear how scalable curation is and whether we will run out of unique high-quality data soon. At variance with previous beliefs, we show that properly filtered and deduplicated web data alone can lead to powerful models, even significantly outperforming state-of-the-art models trained on The Pile. Despite extensive filtering, the high-quality data we extract from the web is still plentiful, and we are able to obtain five trillion tokens from CommonCrawl. We publicly release an extract of 600 billion tokens from our RefinedWeb dataset, and 1.3B/7.5B-parameter language models trained on it.
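The two operations the abstract emphasizes, filtering and deduplication, can be pictured with the toy pipeline below: crude quality heuristics followed by exact-hash deduplication of normalized text. The actual RefinedWeb pipeline is far more elaborate (fuzzy deduplication, URL filtering, and so on); the thresholds and heuristics here are invented for illustration.

```python
import hashlib
import re

def quality_filter(doc, min_words=50, max_symbol_ratio=0.1):
    """Very rough quality heuristics: minimum length and limited non-alphanumeric noise."""
    words = doc.split()
    if len(words) < min_words:
        return False
    symbols = sum(1 for ch in doc if not (ch.isalnum() or ch.isspace()))
    return symbols / max(len(doc), 1) <= max_symbol_ratio

def normalize(doc):
    """Lowercase and collapse whitespace so trivially different copies hash identically."""
    return re.sub(r"\s+", " ", doc.lower()).strip()

def filter_and_deduplicate(docs):
    seen, kept = set(), []
    for doc in docs:
        if not quality_filter(doc):
            continue
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest in seen:                      # exact duplicate after normalization
            continue
        seen.add(digest)
        kept.append(doc)
    return kept

corpus = ["lorem ipsum " * 30, "LOREM IPSUM " * 30, "short doc", "@@@###!!!" * 40]
print(len(filter_and_deduplicate(corpus)))      # 1: only one long, clean document survives
```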
1701.00416
Tom Mens
Alexandre Decan, Mathieu Goeminne, Tom Mens
On the Interaction of Relational Database Access Technologies in Open Source Java Projects
Postproceeding of the SATTOSE 2015 Research Seminar on Advanced Tools and Techniques for Software Evolution. To be published in CEUR.WS workshop proceedings (2017)
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article presents an empirical study of how the use of relational database access technologies in open source Java projects evolves over time. Our observations may be useful to project managers to make more informed decisions on which technologies to introduce into an existing project and when. We selected 2,457 Java projects on GitHub using the low-level JDBC technology and higher-level object relational mappings such as Hibernate XML configuration files and JPA annotations. At a coarse-grained level, we analysed the probability of introducing such technologies over time, as well as the likelihood that multiple technologies co-occur within the same project. At a fine-grained level, we analysed to which extent these different technologies are used within the same set of project files. We also explored how the introduction of a new database technology in a Java project impacts the use of existing ones. We observed that, contrary to what could have been expected, object-relational mapping technologies do not tend to replace existing ones but rather complement them.
[ { "created": "Mon, 2 Jan 2017 15:07:36 GMT", "version": "v1" } ]
2017-01-03
[ [ "Decan", "Alexandre", "" ], [ "Goeminne", "Mathieu", "" ], [ "Mens", "Tom", "" ] ]
This article presents an empirical study of how the use of relational database access technologies in open source Java projects evolves over time. Our observations may be useful to project managers to make more informed decisions on which technologies to introduce into an existing project and when. We selected 2,457 Java projects on GitHub using the low-level JDBC technology and higher-level object relational mappings such as Hibernate XML configuration files and JPA annotations. At a coarse-grained level, we analysed the probability of introducing such technologies over time, as well as the likelihood that multiple technologies co-occur within the same project. At a fine-grained level, we analysed to which extent these different technologies are used within the same set of project files. We also explored how the introduction of a new database technology in a Java project impacts the use of existing ones. We observed that, contrary to what could have been expected, object-relational mapping technologies do not tend to replace existing ones but rather complement them.
1401.3556
Sergiy Vorobyov A.
Alex E. Geyer, Reza Nikjah, Sergiy A. Vorobyov, and Norman C. Beaulieu
Equivalent Codes, Optimality, and Performance Analysis of OSTBC: Textbook Study
33 pages, 12 figures, 5 tables, full size journal paper, Finished in Oct. 2009, Unpublished
IEEE Trans. Communications, vol. 63, no. 8, pp. 2912-2923, Aug. 2015
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An equivalent model for a multi-input multi-output (MIMO) communication system with orthogonal space-time block codes (OSTBCs) is proposed based on a newly revealed connection between OSTBCs and Euclidean codes. Examples of distance spectra, signal constellations, and signal coordinate diagrams of Euclidean codes equivalent to simplest OSTBCs are given. A new asymptotic upper bound for the symbol error rate (SER) of OSTBCs, based on the distance spectra of the introduced equivalent Euclidean codes is derived, and new general design criteria for signal constellations of the optimal OSTBC are proposed. Some bounds relating distance properties, dimensionality, and cardinality of OSTBCs with constituent signals of equal energy are given, and new optimal signal constellations with cardinalities M = 8 and M = 16 for Alamouti's code are designed. Using the new model for MIMO communication systems with OSTBCs, a general methodology for performance analysis of OSTBCs is developed. As an example of the application of this methodology, an exact evaluation of the SER of any OSTBC is given. Namely, a new expression for the SER of Alamouti's OSTBC with binary phase shift keying (BPSK) signals is derived.
[ { "created": "Wed, 15 Jan 2014 12:07:56 GMT", "version": "v1" } ]
2016-03-03
[ [ "Geyer", "Alex E.", "" ], [ "Nikjah", "Reza", "" ], [ "Vorobyov", "Sergiy A.", "" ], [ "Beaulieu", "Norman C.", "" ] ]
An equivalent model for a multi-input multi-output (MIMO) communication system with orthogonal space-time block codes (OSTBCs) is proposed based on a newly revealed connection between OSTBCs and Euclidean codes. Examples of distance spectra, signal constellations, and signal coordinate diagrams of Euclidean codes equivalent to simplest OSTBCs are given. A new asymptotic upper bound for the symbol error rate (SER) of OSTBCs, based on the distance spectra of the introduced equivalent Euclidean codes is derived, and new general design criteria for signal constellations of the optimal OSTBC are proposed. Some bounds relating distance properties, dimensionality, and cardinality of OSTBCs with constituent signals of equal energy are given, and new optimal signal constellations with cardinalities M = 8 and M = 16 for Alamouti's code are designed. Using the new model for MIMO communication systems with OSTBCs, a general methodology for performance analysis of OSTBCs is developed. As an example of the application of this methodology, an exact evaluation of the SER of any OSTBC is given. Namely, a new expression for the SER of Alamouti's OSTBC with binary phase shift keying (BPSK) signals is derived.
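The closed-form SER expression mentioned at the end of the abstract can be checked against simulation. The snippet below is a plain Monte Carlo estimate of the SER of Alamouti's 2x1 OSTBC with BPSK over flat Rayleigh fading, using the standard orthogonal combining; it is a baseline sketch, not the paper's equivalent-code analysis.

```python
import numpy as np

def alamouti_bpsk_ser(snr_db, n_blocks=200_000, rng=np.random.default_rng(1)):
    """Monte Carlo SER of Alamouti's 2x1 scheme with BPSK over flat Rayleigh fading."""
    snr = 10 ** (snr_db / 10)
    s = rng.choice([-1.0, 1.0], size=(n_blocks, 2))                     # BPSK symbols s1, s2
    h = (rng.standard_normal((n_blocks, 2)) + 1j * rng.standard_normal((n_blocks, 2))) / np.sqrt(2)
    noise_std = np.sqrt(1.0 / (2.0 * snr))                              # unit total transmit power
    n = noise_std * (rng.standard_normal((n_blocks, 2)) + 1j * rng.standard_normal((n_blocks, 2)))
    # Slot 1 transmits (s1, s2)/sqrt(2); slot 2 transmits (-s2*, s1*)/sqrt(2); BPSK symbols are real.
    r1 = (h[:, 0] * s[:, 0] + h[:, 1] * s[:, 1]) / np.sqrt(2) + n[:, 0]
    r2 = (-h[:, 0] * s[:, 1] + h[:, 1] * s[:, 0]) / np.sqrt(2) + n[:, 1]
    s1_hat = np.conj(h[:, 0]) * r1 + h[:, 1] * np.conj(r2)              # orthogonal combining
    s2_hat = np.conj(h[:, 1]) * r1 - h[:, 0] * np.conj(r2)
    decisions = np.sign(np.real(np.stack([s1_hat, s2_hat], axis=1)))
    return float(np.mean(decisions != s))

for snr_db in (0, 5, 10, 15):
    print(snr_db, "dB ->", alamouti_bpsk_ser(snr_db))
```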
1910.05268
Asier Mujika
Florian Meier and Asier Mujika and Marcelo Matheus Gauy and Angelika Steger
Improving Gradient Estimation in Evolutionary Strategies With Past Descent Directions
null
null
null
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary Strategies (ES) are known to be an effective black-box optimization technique for deep neural networks when the true gradients cannot be computed, such as in Reinforcement Learning. We continue a recent line of research that uses surrogate gradients to improve the gradient estimation of ES. We propose a novel method to optimally incorporate surrogate gradient information. Our approach, unlike previous work, needs no information about the quality of the surrogate gradients and is always guaranteed to find a descent direction that is better than the surrogate gradient. This allows to iteratively use the previous gradient estimate as surrogate gradient for the current search point. We theoretically prove that this yields fast convergence to the true gradient for linear functions and show under simplifying assumptions that it significantly improves gradient estimates for general functions. Finally, we evaluate our approach empirically on MNIST and reinforcement learning tasks and show that it considerably improves the gradient estimation of ES at no extra computational cost.
[ { "created": "Fri, 11 Oct 2019 16:00:39 GMT", "version": "v1" } ]
2019-10-14
[ [ "Meier", "Florian", "" ], [ "Mujika", "Asier", "" ], [ "Gauy", "Marcelo Matheus", "" ], [ "Steger", "Angelika", "" ] ]
Evolutionary Strategies (ES) are known to be an effective black-box optimization technique for deep neural networks when the true gradients cannot be computed, such as in Reinforcement Learning. We continue a recent line of research that uses surrogate gradients to improve the gradient estimation of ES. We propose a novel method to optimally incorporate surrogate gradient information. Our approach, unlike previous work, needs no information about the quality of the surrogate gradients and is always guaranteed to find a descent direction that is better than the surrogate gradient. This allows to iteratively use the previous gradient estimate as surrogate gradient for the current search point. We theoretically prove that this yields fast convergence to the true gradient for linear functions and show under simplifying assumptions that it significantly improves gradient estimates for general functions. Finally, we evaluate our approach empirically on MNIST and reinforcement learning tasks and show that it considerably improves the gradient estimation of ES at no extra computational cost.
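A rough sketch of the reuse idea in the abstract above: bias part of each ES perturbation along a surrogate direction, and take the previous gradient estimate as that surrogate. The combination rule below is a simple guided-search heuristic, not the paper's provably better descent direction; alpha, sigma and the sample count are illustrative.

```python
import numpy as np

def es_gradient(f, x, surrogate, sigma=0.1, n_pairs=16, alpha=0.5, rng=np.random.default_rng(0)):
    """Antithetic ES gradient estimate with perturbations biased toward a surrogate direction."""
    u = surrogate / (np.linalg.norm(surrogate) + 1e-12)
    grad = np.zeros_like(x)
    for _ in range(n_pairs):
        eps = np.sqrt(1 - alpha) * rng.standard_normal(x.size) + np.sqrt(alpha) * rng.standard_normal() * u
        grad += (f(x + sigma * eps) - f(x - sigma * eps)) / (2 * sigma) * eps
    return grad / n_pairs

# Toy quadratic: the true gradient is 2x; the previous estimate is reused as the surrogate.
f = lambda x: np.sum(x ** 2)
x = np.ones(20)
estimate = np.zeros_like(x)
for _ in range(50):
    surrogate = estimate if np.any(estimate) else np.ones_like(x)
    estimate = es_gradient(f, x, surrogate)
    x = x - 0.1 * estimate
print(f(x))   # should be far below the starting value f(ones) = 20
```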
2204.03479
Zuzana Jel\v{c}icov\'a
Zuzana Jel\v{c}icov\'a and Marian Verhelst
Delta Keyword Transformer: Bringing Transformers to the Edge through Dynamically Pruned Multi-Head Self-Attention
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Multi-head self-attention forms the core of Transformer networks. However, their quadratically growing complexity with respect to the input sequence length impedes their deployment on resource-constrained edge devices. We address this challenge by proposing a dynamic pruning method, which exploits the temporal stability of data across tokens to reduce inference cost. The threshold-based method only retains significant differences between the subsequent tokens, effectively reducing the number of multiply-accumulates, as well as the internal tensor data sizes. The approach is evaluated on the Google Speech Commands Dataset for keyword spotting, and the performance is compared against the baseline Keyword Transformer. Our experiments show that we can reduce ~80% of operations while maintaining the original 98.4% accuracy. Moreover, a reduction of ~87-94% operations can be achieved when only degrading the accuracy by 1-4%, speeding up the multi-head self-attention inference by a factor of ~7.5-16.
[ { "created": "Sun, 20 Mar 2022 20:59:13 GMT", "version": "v1" } ]
2022-04-08
[ [ "Jelčicová", "Zuzana", "" ], [ "Verhelst", "Marian", "" ] ]
Multi-head self-attention forms the core of Transformer networks. However, their quadratically growing complexity with respect to the input sequence length impedes their deployment on resource-constrained edge devices. We address this challenge by proposing a dynamic pruning method, which exploits the temporal stability of data across tokens to reduce inference cost. The threshold-based method only retains significant differences between the subsequent tokens, effectively reducing the number of multiply-accumulates, as well as the internal tensor data sizes. The approach is evaluated on the Google Speech Commands Dataset for keyword spotting, and the performance is compared against the baseline Keyword Transformer. Our experiments show that we can reduce ~80% of operations while maintaining the original 98.4% accuracy. Moreover, a reduction of ~87-94% operations can be achieved when only degrading the accuracy by 1-4%, speeding up the multi-head self-attention inference by a factor of ~7.5-16.
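The thresholded delta idea in the abstract above can be sketched as follows: between consecutive tokens, only activation changes larger than a threshold are propagated through a weight matrix, and the output is updated incrementally. The threshold, shapes and synthetic slowly varying input are illustrative, not the Keyword Transformer configuration.

```python
import numpy as np

def delta_threshold_matmul(x, w, theta=0.05):
    """Propagate only significant token-to-token activation changes through w (sketch)."""
    prev_in = np.zeros(x.shape[1])
    prev_out = np.zeros(w.shape[1])
    outputs, kept = [], 0
    for t in range(x.shape[0]):
        delta = x[t] - prev_in
        mask = np.abs(delta) > theta               # keep only significant changes
        kept += int(mask.sum())
        prev_out = prev_out + (delta * mask) @ w   # incremental update: MACs scale with kept deltas
        prev_in = prev_in + delta * mask           # remember what was actually propagated
        outputs.append(prev_out.copy())
    return np.stack(outputs), kept / x.size

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(scale=0.02, size=(40, 64)), axis=0)   # slowly varying token features
w = rng.normal(size=(64, 32))
y, density = delta_threshold_matmul(x, w)
print("kept fraction:", round(density, 3),
      "max abs error vs exact matmul:", round(float(np.abs(y - x @ w).max()), 3))
```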
2404.03081
Hans De Sterck
Yifan Qu, Oliver Krzysik, Hans De Sterck, Omer Ege Kara
First-order PDES for Graph Neural Networks: Advection And Burgers Equation Models
null
null
null
null
cs.LG cs.NA math.NA
http://creativecommons.org/licenses/by/4.0/
Graph Neural Networks (GNNs) have established themselves as the preferred methodology in a multitude of domains, ranging from computer vision to computational biology, especially in contexts where data inherently conform to graph structures. While many existing methods have endeavored to model GNNs using various techniques, a prevalent challenge they grapple with is the issue of over-smoothing. This paper presents new Graph Neural Network models that incorporate two first-order Partial Differential Equations (PDEs). These models do not increase complexity but effectively mitigate the over-smoothing problem. Our experimental findings highlight the capacity of our new PDE model to achieve comparable results with higher-order PDE models and fix the over-smoothing problem up to 64 layers. These results underscore the adaptability and versatility of GNNs, indicating that unconventional approaches can yield outcomes on par with established techniques.
[ { "created": "Wed, 3 Apr 2024 21:47:02 GMT", "version": "v1" } ]
2024-04-05
[ [ "Qu", "Yifan", "" ], [ "Krzysik", "Oliver", "" ], [ "De Sterck", "Hans", "" ], [ "Kara", "Omer Ege", "" ] ]
Graph Neural Networks (GNNs) have established themselves as the preferred methodology in a multitude of domains, ranging from computer vision to computational biology, especially in contexts where data inherently conform to graph structures. While many existing methods have endeavored to model GNNs using various techniques, a prevalent challenge they grapple with is the issue of over-smoothing. This paper presents new Graph Neural Network models that incorporate two first-order Partial Differential Equations (PDEs). These models do not increase complexity but effectively mitigate the over-smoothing problem. Our experimental findings highlight the capacity of our new PDE model to achieve comparable results with higher-order PDE models and fix the over-smoothing problem up to 64 layers. These results underscore the adaptability and versatility of GNNs, indicating that unconventional approaches can yield outcomes on par with established techniques.
1705.02038
Jiazi Zhang
Jiazi Zhang and Zhigang Chu and Lalitha Sankar and Oliver Kosut
False Data Injection Attacks on Phasor Measurements That Bypass Low-rank Decomposition
6 pages, 4 figures, submitted to 2017 IEEE International Conference on Smart Grid Communications (SmartGridComm)
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the vulnerability of phasor measurement units (PMUs) to false data injection (FDI) attacks. Prior work demonstrated that unobservable FDI attacks that can bypass traditional bad data detectors based on measurement residuals can be identified by a detector based on low-rank decomposition (LD). In this work, a class of more sophisticated FDI attacks that captures the temporal correlation of PMU data is introduced. Such attacks are designed via a convex optimization problem and can always bypass the LD detector. The vulnerability to this attack model is illustrated on both the IEEE 24-bus RTS and the IEEE 118-bus systems.
[ { "created": "Thu, 4 May 2017 22:33:04 GMT", "version": "v1" } ]
2017-05-08
[ [ "Zhang", "Jiazi", "" ], [ "Chu", "Zhigang", "" ], [ "Sankar", "Lalitha", "" ], [ "Kosut", "Oliver", "" ] ]
This paper studies the vulnerability of phasor measurement units (PMUs) to false data injection (FDI) attacks. Prior work demonstrated that unobservable FDI attacks that can bypass traditional bad data detectors based on measurement residuals can be identified by a detector based on low-rank decomposition (LD). In this work, a class of more sophisticated FDI attacks that captures the temporal correlation of PMU data is introduced. Such attacks are designed via a convex optimization problem and can always bypass the LD detector. The vulnerability to this attack model is illustrated on both the IEEE 24-bus RTS and the IEEE 118-bus systems.
1907.12430
Alexander V Terekhov
Alexander V. Terekhov and J. Kevin O'Regan
Learning abstract perceptual notions: the example of space
arXiv admin note: text overlap with arXiv:1308.2124
null
null
null
cs.AI q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans are extremely swift learners. We are able to grasp highly abstract notions, whether they come from art perception or pure mathematics. Current machine learning techniques demonstrate astonishing results in extracting patterns in information. Yet the abstract notions we possess are more than just statistical patterns in the incoming information. Sensorimotor theory suggests that they represent functions, laws, describing how the information can be transformed, or, in other words, they represent the statistics of sensorimotor changes rather than sensory inputs themselves. The aim of our work is to suggest a way for machine learning and sensorimotor theory to benefit from each other so as to pave the way toward new horizons in learning. We show in this study that a highly abstract notion, that of space, can be seen as a collection of laws of transformations of sensory information and that these laws could in theory be learned by a naive agent. As an illustration we do a one-dimensional simulation in which an agent extracts spatial knowledge in the form of internalized ("sensible") rigid displacements. The agent uses them to encode its own displacements in a way which is isometrically related to external space. Though the algorithm allowing acquisition of rigid displacements is designed \emph{ad hoc}, we believe it can stimulate the development of unsupervised learning techniques leading to similar results.
[ { "created": "Wed, 24 Jul 2019 17:57:54 GMT", "version": "v1" } ]
2019-07-30
[ [ "Terekhov", "Alexander V.", "" ], [ "O'Regan", "J. Kevin", "" ] ]
Humans are extremely swift learners. We are able to grasp highly abstract notions, whether they come from art perception or pure mathematics. Current machine learning techniques demonstrate astonishing results in extracting patterns in information. Yet the abstract notions we possess are more than just statistical patterns in the incoming information. Sensorimotor theory suggests that they represent functions, laws, describing how the information can be transformed, or, in other words, they represent the statistics of sensorimotor changes rather than sensory inputs themselves. The aim of our work is to suggest a way for machine learning and sensorimotor theory to benefit from each other so as to pave the way toward new horizons in learning. We show in this study that a highly abstract notion, that of space, can be seen as a collection of laws of transformations of sensory information and that these laws could in theory be learned by a naive agent. As an illustration we do a one-dimensional simulation in which an agent extracts spatial knowledge in the form of internalized ("sensible") rigid displacements. The agent uses them to encode its own displacements in a way which is isometrically related to external space. Though the algorithm allowing acquisition of rigid displacements is designed \emph{ad hoc}, we believe it can stimulate the development of unsupervised learning techniques leading to similar results.
2203.05735
Quoc Nguyen
Huu-Quoc Nguyen, Tien-Dung Nguyen, Van-Nam Pham, Xuan-Qui Pham, Quang-Thai Ngo, Eui-Nam Huh
An Efficient Video Streaming Architecture with QoS Control for Virtual Desktop Infrastructure in Cloud Computing
26 pages, Multimedia Tools and Applications Journal
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
In virtual desktop infrastructure (VDI) environments, the remote display protocol bears a large responsibility for transmitting video data from a data-center-hosted desktop to the endpoint. The protocol must ensure a high level of client-perceived end-to-end quality of service (QoS) under heavy workload conditions. Each remote display protocol works differently depending on the network and which applications are being delivered. In healthcare applications, doctors and nurses can use mobile devices directly to monitor patients. Moreover, the ability to run tasks requiring high consumption of CPU and other resources is applicable to a variety of applications, including research and cloud gaming. Such computer games and complex processes will run on powerful cloud servers and the screen contents will be transmitted to the client. To enable such applications, remote display technology requires further enhancements to meet more stringent requirements on bandwidth and QoS, and to allow real-time operation. In this paper, we present an architecture including flexible QoS control to improve the user quality of experience (QoE). The QoS control is developed based on linear regression modeling using historical network data. Additionally, the architecture includes a novel compression algorithm for 2D images, designed to guarantee the best image quality and to reduce video delay; this algorithm is based on k-means clustering and can satisfy the requirements of real-time onboard processing. Through simulations with a real-world dataset collected by the MIT Computer Science and Artificial Intelligence Laboratory, we present experimental results as well as an explanation of the performance of the QoS system.
[ { "created": "Fri, 11 Mar 2022 03:22:11 GMT", "version": "v1" } ]
2022-03-14
[ [ "Nguyen", "Huu-Quoc", "" ], [ "Nguyen", "Tien-Dung", "" ], [ "Pham", "Van-Nam", "" ], [ "Pham", "Xuan-Qui", "" ], [ "Ngo", "Quang-Thai", "" ], [ "Huh", "Eui-Nam", "" ] ]
In virtual desktop infrastructure (VDI) environments, the remote display protocol has a big responsibility to transmit video data from a data center-hosted desktop to the endpoint. The protocol must ensure a high level of client-perceived end-to-end quality of service (QoS) under heavy workload conditions. Each remote display protocol works differently depending on the network and which applications are being delivered. In healthcare applications, doctors and nurses can use mobile devices directly to monitor patients. Moreover, the ability to implement tasks requiring high consumption of CPU and other resources is applicable to a variety of applications including research and cloud gaming. Such computer games and complex processes will run on powerful cloud servers and the screen contents will be transmitted to the client. To enable such applications, remote display technology requires further enhancements to meet more stringent requirements on bandwidth and QoS, and to allow realtime operation. In this paper, we present an architecture including flexible QoS control to improve the user quality of experience (QoE). The QoS control is developed based on linear regression modeling using historical network data. Additionally, the architecture includes a novel compression algorithm for 2D images, designed to guarantee the best image quality and to reduce video delay; this algorithm is based on k-means clustering and can satisfy the requirements of realtime onboard processing. Through simulations with a real-world dataset collected by the MIT Computer Science and Artificial Intelligence Lab, we present experimental results and explain the performance of the QoS system.
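The abstract above describes a k-means-clustering-based compressor for 2D screen frames. The sketch below is only an illustration of how such a compressor could work (a plain k-means colour quantiser producing a small palette plus per-pixel indices); the palette size k=16 and the representation are assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch: k-means colour quantisation of a 2D frame (hypothetical,
# not the paper's actual compression algorithm).
import numpy as np

def kmeans_quantize(frame, k=16, iters=10, seed=0):
    """Quantise an (H, W, 3) uint8 frame to k colours with plain k-means."""
    rng = np.random.default_rng(seed)
    pixels = frame.reshape(-1, 3).astype(np.float64)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest centre.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centres; keep the old centre if a cluster goes empty.
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    # The "compressed" representation: a small palette plus one index per pixel.
    return centers.astype(np.uint8), labels.reshape(frame.shape[:2]).astype(np.uint8)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)
    palette, indices = kmeans_quantize(frame, k=16)
    print(palette.shape, indices.shape)  # (16, 3) (120, 160)
```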
2008.07689
Yaorui Zhang
Yitong Deng, Yaorui Zhang, Xingzhe He, Shuqi Yang, Yunjin Tong, Michael Zhang, Daniel DiPietro, Bo Zhu
Soft Multicopter Control using Neural Dynamics Identification
null
null
null
null
cs.RO cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dynamic control of a soft-body robot to deliver complex behaviors with low-dimensional actuation inputs is challenging. In this paper, we present a computational approach to automatically generate versatile, underactuated control policies that drive soft-bodied machines with complicated structures and nonlinear dynamics. Our target application is focused on the autonomous control of a soft multicopter, featured by its elastic material components, non-conventional shapes, and asymmetric rotor layouts, to precisely deliver compliant deformation and agile locomotion. The central piece of our approach lies in a lightweight neural surrogate model to identify and predict the temporal evolution of a set of geometric variables characterizing an elastic soft body. This physics-based learning model is further integrated into a Linear Quadratic Regulator (LQR) control loop enhanced by a novel online fixed-point relinearization scheme to accommodate the dynamic body balance, allowing an aggressive reduction of the computational overhead caused by the conventional full-scale sensing-simulation-control workflow. We demonstrate the efficacy of our approach by generating controllers for a broad spectrum of customized soft multicopter designs and testing them in a high-fidelity physics simulation environment. The control algorithm enables the multicopters to perform a variety of tasks, including hovering, trajectory tracking, cruising and active deforming.
[ { "created": "Tue, 18 Aug 2020 01:38:18 GMT", "version": "v1" }, { "created": "Mon, 31 Aug 2020 19:37:18 GMT", "version": "v2" }, { "created": "Wed, 2 Sep 2020 09:44:15 GMT", "version": "v3" }, { "created": "Tue, 1 Dec 2020 09:11:02 GMT", "version": "v4" } ]
2020-12-02
[ [ "Deng", "Yitong", "" ], [ "Zhang", "Yaorui", "" ], [ "He", "Xingzhe", "" ], [ "Yang", "Shuqi", "" ], [ "Tong", "Yunjin", "" ], [ "Zhang", "Michael", "" ], [ "DiPietro", "Daniel", "" ], [ "Zhu", "Bo", "" ] ]
Dynamic control of a soft-body robot to deliver complex behaviors with low-dimensional actuation inputs is challenging. In this paper, we present a computational approach to automatically generate versatile, underactuated control policies that drive soft-bodied machines with complicated structures and nonlinear dynamics. Our target application is focused on the autonomous control of a soft multicopter, featured by its elastic material components, non-conventional shapes, and asymmetric rotor layouts, to precisely deliver compliant deformation and agile locomotion. The central piece of our approach lies in a lightweight neural surrogate model to identify and predict the temporal evolution of a set of geometric variables characterizing an elastic soft body. This physics-based learning model is further integrated into a Linear Quadratic Regulator (LQR) control loop enhanced by a novel online fixed-point relinearization scheme to accommodate the dynamic body balance, allowing an aggressive reduction of the computational overhead caused by the conventional full-scale sensing-simulation-control workflow. We demonstrate the efficacy of our approach by generating controllers for a broad spectrum of customized soft multicopter designs and testing them in a high-fidelity physics simulation environment. The control algorithm enables the multicopters to perform a variety of tasks, including hovering, trajectory tracking, cruising and active deforming.
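The abstract above integrates a learned surrogate into an LQR loop. As background, here is a minimal sketch of the standard discrete-time LQR gain computed by iterating the Riccati recursion; the double-integrator dynamics are a stand-in for illustration only, not the soft-multicopter model or the neural surrogate and relinearization scheme of the paper.

```python
# Minimal sketch of a discrete-time LQR gain computed by iterating the Riccati
# recursion; the 2-state double-integrator dynamics here are a stand-in, not
# the soft-multicopter model from the paper.
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

if __name__ == "__main__":
    dt = 0.05
    A = np.array([[1.0, dt], [0.0, 1.0]])      # position/velocity
    B = np.array([[0.0], [dt]])                # force input
    Q = np.diag([10.0, 1.0])
    R = np.array([[0.1]])
    K = dlqr(A, B, Q, R)
    x = np.array([1.0, 0.0])                   # start 1 m from the target
    for _ in range(100):
        u = -K @ x
        x = A @ x + (B @ u).ravel()
    print("final state:", x)                   # should be close to the origin
```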
1206.4126
Yousuf Ibrahim Khan
Yousuf Ibrahim Khan
Image based Cryptography from a distance
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information is a message which is received and understood. Information can be sent from one person to another over a long range, but the process of sending information must be done in a secure way, especially in the case of a private message. Mathematicians and engineers have historically relied on different algorithmic techniques to secure messages and signals. Cryptography, to most people, is concerned with keeping communications private. Indeed, the protection of sensitive communications has been the emphasis of cryptography throughout much of its history. Sometimes it is safer to send a message using an image, and thus cryptography can also be done using images during an emergency. The need to extract information from images and interpret their contents has been one of the driving factors in the development of image processing and cryptography during the past decades. In this paper, a simple cryptographic method was used to decode a message which was in an image, and it was done using popular computational software.
[ { "created": "Tue, 19 Jun 2012 06:02:32 GMT", "version": "v1" } ]
2012-06-20
[ [ "Khan", "Yousuf Ibrahim", "" ] ]
Information is a message which is received and understood. Information can be sent from one person to another over a long range, but the process of sending information must be done in a secure way, especially in the case of a private message. Mathematicians and engineers have historically relied on different algorithmic techniques to secure messages and signals. Cryptography, to most people, is concerned with keeping communications private. Indeed, the protection of sensitive communications has been the emphasis of cryptography throughout much of its history. Sometimes it is safer to send a message using an image, and thus cryptography can also be done using images during an emergency. The need to extract information from images and interpret their contents has been one of the driving factors in the development of image processing and cryptography during the past decades. In this paper, a simple cryptographic method was used to decode a message which was in an image, and it was done using popular computational software.
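The abstract above does not state which method was used to hide and recover the message, so the sketch below is purely illustrative: it shows one common way a text message can be embedded in and extracted from an image, least-significant-bit (LSB) steganography, and should not be read as the paper's technique.

```python
# Illustrative sketch only: the abstract does not state which method was used,
# so this shows one common way a text message can be hidden in / recovered from
# an image -- least-significant-bit (LSB) steganography.
import numpy as np

def embed(pixels, message):
    bits = [int(b) for byte in message.encode() for b in f"{byte:08b}"]
    bits += [0] * 8                                  # NUL terminator
    flat = pixels.flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)
    return flat.reshape(pixels.shape)

def extract(pixels):
    bits = pixels.flatten() & 1
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = int("".join(map(str, bits[i:i + 8])), 2)
        if byte == 0:
            break
        out.append(byte)
    return out.decode(errors="replace")

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    stego = embed(img, "meet at dawn")
    print(extract(stego))                            # -> meet at dawn
```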
1908.01650
Chunming Tang
Sihem Mesnager, Yanfeng Qi, Hongming Ru, Chunming Tang
Minimal linear codes from characteristic functions
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Minimal linear codes have interesting applications in secret sharing schemes and secure two-party computation. This paper uses characteristic functions of some subsets of $\mathbb{F}_q$ to construct minimal linear codes. By properties of characteristic functions, we can obtain more minimal binary linear codes from known minimal binary linear codes, which generalizes results of Ding et al. [IEEE Trans. Inf. Theory, vol. 64, no. 10, pp. 6536-6545, 2018]. By characteristic functions corresponding to some subspaces of $\mathbb{F}_q$, we obtain many minimal linear codes, which generalizes results of [IEEE Trans. Inf. Theory, vol. 64, no. 10, pp. 6536-6545, 2018] and [IEEE Trans. Inf. Theory, vol. 65, no. 11, pp. 7067-7078, 2019]. Finally, we use characteristic functions to present a characterization of minimal linear codes from the defining set method and present a class of minimal linear codes.
[ { "created": "Mon, 5 Aug 2019 14:40:23 GMT", "version": "v1" }, { "created": "Wed, 20 Nov 2019 11:45:55 GMT", "version": "v2" } ]
2019-11-21
[ [ "Mesnager", "Sihem", "" ], [ "Qi", "Yanfeng", "" ], [ "Ru", "Hongming", "" ], [ "Tang", "Chunming", "" ] ]
Minimal linear codes have interesting applications in secret sharing schemes and secure two-party computation. This paper uses characteristic functions of some subsets of $\mathbb{F}_q$ to construct minimal linear codes. By properties of characteristic functions, we can obtain more minimal binary linear codes from known minimal binary linear codes, which generalizes results of Ding et al. [IEEE Trans. Inf. Theory, vol. 64, no. 10, pp. 6536-6545, 2018]. By characteristic functions corresponding to some subspaces of $\mathbb{F}_q$, we obtain many minimal linear codes, which generalizes results of [IEEE Trans. Inf. Theory, vol. 64, no. 10, pp. 6536-6545, 2018] and [IEEE Trans. Inf. Theory, vol. 65, no. 11, pp. 7067-7078, 2019]. Finally, we use characteristic functions to present a characterization of minimal linear codes from the defining set method and present a class of minimal linear codes.
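For context on the record above, the LaTeX snippet below sketches the generic function-based construction from Ding et al. (IEEE TIT 2018) that this line of work generalizes; the binary case, the inner-product form, and the characteristic-function choice f = 1_S are background assumptions, not necessarily the exact construction used in the paper.

```latex
% Background only: the generic function-based construction from Ding et al.
% (IEEE TIT 2018) that this line of work builds on; the characteristic-function
% variants of the paper above may differ in detail.
% For a Boolean function $f:\mathbb{F}_2^m\to\mathbb{F}_2$ with $f(0)=0$
% (e.g.\ the characteristic function $f=\mathbf{1}_S$ of a subset $S$), define
\[
  \mathcal{C}_f \;=\; \bigl\{\, \bigl(u f(x) + v\cdot x\bigr)_{x \in \mathbb{F}_2^m \setminus \{0\}} \;:\; u \in \mathbb{F}_2,\ v \in \mathbb{F}_2^m \,\bigr\},
\]
% a binary linear code of length $2^m-1$; conditions on the Walsh spectrum of
% $f$ determine when $\mathcal{C}_f$ is minimal.
```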
2202.04076
Kun Wang
Kun Wang, Jingyi Wang, Christopher M. Poskitt, Xiangxiang Chen, Jun Sun, and Peng Cheng
K-ST: A Formal Executable Semantics of the Structured Text Language for PLCs
Accepted by IEEE Transactions on Software Engineering
IEEE Trans. Software Eng., 2023
10.1109/TSE.2023.3315292
null
cs.PL cs.SE
http://creativecommons.org/licenses/by/4.0/
Programmable Logic Controllers (PLCs) are responsible for automating process control in many industrial systems (e.g. in manufacturing and public infrastructure), and thus it is critical to ensure that they operate correctly and safely. The majority of PLCs are programmed in languages such as Structured Text (ST). However, a lack of formal semantics makes it difficult to ascertain the correctness of their translators and compilers, which vary from vendor-to-vendor. In this work, we develop K-ST, a formal executable semantics for ST in the K framework. Defined with respect to the IEC 61131-3 standard and PLC vendor manuals, K-ST is a high-level reference semantics that can be used to evaluate the correctness and consistency of different ST implementations. We validate K-ST by executing 509 ST programs extracted from Github and comparing the results against existing commercial compilers (i.e., CODESYS, CX-Programmer, and GX Works2). We then apply K-ST to validate the implementation of the open source OpenPLC platform, comparing the executions of several test programs to uncover five bugs and nine functional defects in the compiler.
[ { "created": "Tue, 8 Feb 2022 17:34:08 GMT", "version": "v1" }, { "created": "Tue, 12 Sep 2023 02:05:17 GMT", "version": "v2" } ]
2023-09-19
[ [ "Wang", "Kun", "" ], [ "Wang", "Jingyi", "" ], [ "Poskitt", "Christopher M.", "" ], [ "Chen", "Xiangxiang", "" ], [ "Sun", "Jun", "" ], [ "Cheng", "Peng", "" ] ]
Programmable Logic Controllers (PLCs) are responsible for automating process control in many industrial systems (e.g. in manufacturing and public infrastructure), and thus it is critical to ensure that they operate correctly and safely. The majority of PLCs are programmed in languages such as Structured Text (ST). However, a lack of formal semantics makes it difficult to ascertain the correctness of their translators and compilers, which vary from vendor-to-vendor. In this work, we develop K-ST, a formal executable semantics for ST in the K framework. Defined with respect to the IEC 61131-3 standard and PLC vendor manuals, K-ST is a high-level reference semantics that can be used to evaluate the correctness and consistency of different ST implementations. We validate K-ST by executing 509 ST programs extracted from Github and comparing the results against existing commercial compilers (i.e., CODESYS, CX-Programmer, and GX Works2). We then apply K-ST to validate the implementation of the open source OpenPLC platform, comparing the executions of several test programs to uncover five bugs and nine functional defects in the compiler.
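The validation described in the abstract above is essentially differential testing: execute the same ST programs under the reference semantics and under vendor tools, then compare outputs. Below is a generic harness sketch in that spirit; the command names ("ref_st_run", "vendor_st_run") and the "st_programs" directory are placeholders, not the actual K-ST or vendor tooling.

```python
# Generic differential-testing harness in the spirit of the validation above.
# The command names ("ref_st_run", "vendor_st_run") are placeholders, not the
# actual K-ST or vendor tooling.
import subprocess
from pathlib import Path

def run(cmd, program):
    out = subprocess.run(cmd + [str(program)], capture_output=True, text=True, timeout=60)
    return out.stdout.strip()

def differential_test(programs_dir, ref_cmd, vendor_cmd):
    mismatches = []
    for program in sorted(Path(programs_dir).glob("*.st")):
        ref, vendor = run(ref_cmd, program), run(vendor_cmd, program)
        if ref != vendor:
            mismatches.append((program.name, ref, vendor))
    return mismatches

if __name__ == "__main__":
    for name, ref, vendor in differential_test("st_programs",
                                               ["ref_st_run"], ["vendor_st_run"]):
        print(f"{name}: reference={ref!r} vendor={vendor!r}")
```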
2306.17804
Darren Strash
Anthony Hevia, Benjamin Kallus, Summer McClintic, Samantha Reisner, Darren Strash, and Johnathan Wilson
Solving Edge Clique Cover Exactly via Synergistic Data Reduction
22 pages, 5 figures, 6 tables, accepted at the 31st Annual European Symposium on Algorithms (ESA 2023)
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
The edge clique cover (ECC) problem -- where the goal is to find a minimum cardinality set of cliques that cover all the edges of a graph -- is a classic NP-hard problem that has received much attention from both the theoretical and experimental algorithms communities. While small sparse graphs can be solved exactly via the branch-and-reduce algorithm of Gramm et al. [JEA 2009], larger instances can currently only be solved inexactly using heuristics with unknown overall solution quality. We revisit computing minimum ECCs exactly in practice by combining data reduction for both the ECC \emph{and} vertex clique cover (VCC) problems. We do so by modifying the polynomial-time reduction of Kou et al. [Commun. ACM 1978] to transform a reduced ECC instance to a VCC instance; alternatively, we show it is possible to ``lift'' some VCC reductions to the ECC problem. Our experiments show that combining data reduction for both problems (which we call \emph{synergistic data reduction}) enables finding exact minimum ECCs orders of magnitude faster than the technique of Gramm et al., and allows solving large sparse graphs on up to millions of vertices and edges that have never before been solved. With these new exact solutions, we evaluate the quality of recent heuristic algorithms on large instances for the first time. The most recent of these, \textsf{EO-ECC} by Abdullah et al. [ICCS 2022], solves 8 of the 27 instances for which we have exact solutions. It is our hope that our strategy rallies researchers to seek improved algorithms for the ECC problem.
[ { "created": "Fri, 30 Jun 2023 17:06:04 GMT", "version": "v1" }, { "created": "Tue, 4 Jul 2023 18:04:39 GMT", "version": "v2" } ]
2023-07-06
[ [ "Hevia", "Anthony", "" ], [ "Kallus", "Benjamin", "" ], [ "McClintic", "Summer", "" ], [ "Reisner", "Samantha", "" ], [ "Strash", "Darren", "" ], [ "Wilson", "Johnathan", "" ] ]
The edge clique cover (ECC) problem -- where the goal is to find a minimum cardinality set of cliques that cover all the edges of a graph -- is a classic NP-hard problem that has received much attention from both the theoretical and experimental algorithms communities. While small sparse graphs can be solved exactly via the branch-and-reduce algorithm of Gramm et al. [JEA 2009], larger instances can currently only be solved inexactly using heuristics with unknown overall solution quality. We revisit computing minimum ECCs exactly in practice by combining data reduction for both the ECC \emph{and} vertex clique cover (VCC) problems. We do so by modifying the polynomial-time reduction of Kou et al. [Commun. ACM 1978] to transform a reduced ECC instance to a VCC instance; alternatively, we show it is possible to ``lift'' some VCC reductions to the ECC problem. Our experiments show that combining data reduction for both problems (which we call \emph{synergistic data reduction}) enables finding exact minimum ECCs orders of magnitude faster than the technique of Gramm et al., and allows solving large sparse graphs on up to millions of vertices and edges that have never before been solved. With these new exact solutions, we evaluate the quality of recent heuristic algorithms on large instances for the first time. The most recent of these, \textsf{EO-ECC} by Abdullah et al. [ICCS 2022], solves 8 of the 27 instances for which we have exact solutions. It is our hope that our strategy rallies researchers to seek improved algorithms for the ECC problem.
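To make the problem in the abstract above concrete, here is a classic greedy baseline for edge clique cover (grow each uncovered edge into a maximal clique); it is context only, not the synergistic data-reduction algorithm of the paper.

```python
# A simple greedy baseline for edge clique cover: take an uncovered edge and
# grow it into a maximal clique, repeating until every edge is covered. This
# is context only, not the synergistic data-reduction algorithm of the paper.
def greedy_ecc(adj):
    """adj: dict mapping each vertex to a set of neighbours (undirected)."""
    uncovered = {frozenset((u, v)) for u in adj for v in adj[u] if u < v}
    cover = []
    while uncovered:
        u, v = tuple(next(iter(uncovered)))
        clique = {u, v}
        # Greedily add vertices adjacent to every vertex in the clique so far.
        for w in adj:
            if w not in clique and clique <= adj[w]:
                clique.add(w)
        cover.append(clique)
        uncovered -= {frozenset((a, b)) for a in clique for b in clique if a < b}
    return cover

if __name__ == "__main__":
    # Two triangles sharing the edge (2, 3).
    adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
    print(greedy_ecc(adj))   # e.g. [{1, 2, 3}, {2, 3, 4}]
```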
2209.14468
Yiheng Shen
Kamesh Munagala, Yiheng Shen, Kangning Wang
Auditing for Core Stability in Participatory Budgeting
accepted by the 18th Conference on Web and Internet Economics (WINE 2022)
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the participatory budgeting problem where each of $n$ voters specifies additive utilities over $m$ candidate projects with given sizes, and the goal is to choose a subset of projects (i.e., a committee) with total size at most $k$. Participatory budgeting mathematically generalizes multiwinner elections, and both have received great attention in computational social choice recently. A well-studied notion of group fairness in this setting is core stability: Each voter is assigned an "entitlement" of $\frac{k}{n}$, so that a subset $S$ of voters can pay for a committee of size at most $|S| \cdot \frac{k}{n}$. A given committee is in the core if no subset of voters can pay for another committee that provides each of them strictly larger utility. This provides proportional representation to all voters in a strong sense. In this paper, we study the following auditing question: Given a committee computed by some preference aggregation method, how close is it to the core? Concretely, how much does the entitlement of each voter need to be scaled down by, so that the core property subsequently holds? As our main contribution, we present computational hardness results for this problem, as well as a logarithmic approximation algorithm via linear program rounding. We show that our analysis is tight against the linear programming bound. Additionally, we consider two related notions of group fairness that have similar audit properties. The first is Lindahl priceability, which audits the closeness of a committee to a market clearing solution. We show that this is related to the linear programming relaxation of auditing the core, leading to efficient exact and approximation algorithms for auditing. The second is a novel weakening of the core that we term the sub-core, and we present computational results for auditing this notion as well.
[ { "created": "Wed, 28 Sep 2022 23:13:06 GMT", "version": "v1" } ]
2022-09-30
[ [ "Munagala", "Kamesh", "" ], [ "Shen", "Yiheng", "" ], [ "Wang", "Kangning", "" ] ]
We consider the participatory budgeting problem where each of $n$ voters specifies additive utilities over $m$ candidate projects with given sizes, and the goal is to choose a subset of projects (i.e., a committee) with total size at most $k$. Participatory budgeting mathematically generalizes multiwinner elections, and both have received great attention in computational social choice recently. A well-studied notion of group fairness in this setting is core stability: Each voter is assigned an "entitlement" of $\frac{k}{n}$, so that a subset $S$ of voters can pay for a committee of size at most $|S| \cdot \frac{k}{n}$. A given committee is in the core if no subset of voters can pay for another committee that provides each of them strictly larger utility. This provides proportional representation to all voters in a strong sense. In this paper, we study the following auditing question: Given a committee computed by some preference aggregation method, how close is it to the core? Concretely, how much does the entitlement of each voter need to be scaled down by, so that the core property subsequently holds? As our main contribution, we present computational hardness results for this problem, as well as a logarithmic approximation algorithm via linear program rounding. We show that our analysis is tight against the linear programming bound. Additionally, we consider two related notions of group fairness that have similar audit properties. The first is Lindahl priceability, which audits the closeness of a committee to a market clearing solution. We show that this is related to the linear programming relaxation of auditing the core, leading to efficient exact and approximation algorithms for auditing. The second is a novel weakening of the core that we term the sub-core, and we present computational results for auditing this notion as well.
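The audit question in the abstract above can be checked directly by brute force on tiny instances, which makes the definition concrete: scan candidate entitlement scales alpha and test whether any coalition can afford a bundle that all its members strictly prefer. The grid of alphas and the toy instance below are assumptions for illustration; the paper's actual algorithm is LP rounding, not this exponential check.

```python
# Brute-force audit of core stability on a tiny participatory-budgeting
# instance: find the largest entitlement scale alpha (from a candidate grid)
# at which no voter coalition S can afford an alternative bundle T with
# size(T) <= |S| * alpha * k / n that every member of S strictly prefers.
# Purely illustrative of the definition; the paper uses LP rounding instead.
from itertools import combinations

def utility(util, voter, bundle):
    return sum(util[voter][p] for p in bundle)

def blocked(util, sizes, k, committee, alpha):
    n, projects = len(util), list(sizes)
    for s in range(1, n + 1):
        budget = s * alpha * k / n
        for S in combinations(range(n), s):
            for t in range(1, len(projects) + 1):
                for T in combinations(projects, t):
                    if sum(sizes[p] for p in T) > budget:
                        continue
                    if all(utility(util, v, T) > utility(util, v, committee) for v in S):
                        return True
    return False

def audit(util, sizes, k, committee, grid=20):
    alphas = [i / grid for i in range(grid, 0, -1)]
    return next((a for a in alphas if not blocked(util, sizes, k, committee, a)), 0.0)

if __name__ == "__main__":
    sizes = {"a": 1, "b": 1, "c": 1}
    util = [{"a": 3, "b": 0, "c": 1}, {"a": 0, "b": 3, "c": 1}]   # two voters
    print(audit(util, sizes, k=2, committee=("a", "b")))          # -> 1.0 (in the core)
```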
2408.01196
Zhang Shanfan
Shanfan Zhang, Xiaoting Shen, Zhan Bu
Game Theory Based Community-Aware Opinion Dynamics
36 pages, 15 figures
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Examining the mechanisms underlying the formation and evolution of opinions within real-world social systems, which consist of numerous individuals, can provide valuable insights for effective social functioning and informed business decision making. The focus of our study is on the dynamics of opinions inside a networked multi-agent system. We provide a novel approach called the Game Theory Based Community-Aware Opinion Formation Process (GCAOFP) to accurately represent the co-evolutionary dynamics of communities and opinions in real-world social systems. The GCAOFP algorithm comprises two distinct steps in each iteration. 1) The Community Dynamics Process conceptualizes the process of community formation as a non-cooperative game involving a finite number of agents. Each individual agent aims to maximize their own utility by adopting a response that leads to the most favorable update of the community label. 2) The Opinion Formation Process involves the updating of an individual agent's opinion within a community-aware framework that incorporates bounded confidence. This process takes into account the updated matrix of community members and ensures that an agent's opinion aligns with the opinions of others within their community, within certain defined limits. The present study provides a theoretical proof that under any initial conditions, the aforementioned co-evolutionary dynamics process will ultimately reach an equilibrium state. In this state, both the opinion vector and community member matrix will stabilize after a finite number of iterations. In contrast to conventional opinion dynamics models, the guaranteed convergence of agent opinion within the same community ensures that the convergence of opinions takes place exclusively inside a given community.
[ { "created": "Fri, 2 Aug 2024 11:24:56 GMT", "version": "v1" } ]
2024-08-05
[ [ "Zhang", "Shanfan", "" ], [ "Shen", "Xiaoting", "" ], [ "Bu", "Zhan", "" ] ]
Examining the mechanisms underlying the formation and evolution of opinions within real-world social systems, which consist of numerous individuals, can provide valuable insights for effective social functioning and informed business decision making. The focus of our study is on the dynamics of opinions inside a networked multi-agent system. We provide a novel approach called the Game Theory Based Community-Aware Opinion Formation Process (GCAOFP) to accurately represent the co-evolutionary dynamics of communities and opinions in real-world social systems. The GCAOFP algorithm comprises two distinct steps in each iteration. 1) The Community Dynamics Process conceptualizes the process of community formation as a non-cooperative game involving a finite number of agents. Each individual agent aims to maximize their own utility by adopting a response that leads to the most favorable update of the community label. 2) The Opinion Formation Process involves the updating of an individual agent's opinion within a community-aware framework that incorporates bounded confidence. This process takes into account the updated matrix of community members and ensures that an agent's opinion aligns with the opinions of others within their community, within certain defined limits. The present study provides a theoretical proof that under any initial conditions, the aforementioned co-evolutionary dynamics process will ultimately reach an equilibrium state. In this state, both the opinion vector and community member matrix will stabilize after a finite number of iterations. In contrast to conventional opinion dynamics models, the guaranteed convergence of agent opinion within the same community ensures that the convergence of opinions takes place exclusively inside a given community.
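The opinion-formation step described in the abstract above can be sketched as a community-aware bounded-confidence update: each agent averages only with same-community agents whose opinions lie within a confidence bound. In the sketch below the communities are held fixed and eps is an assumed parameter, so it illustrates only the second step of GCAOFP, not the game-theoretic community update.

```python
# Minimal sketch of the community-aware bounded-confidence opinion update:
# each agent averages with same-community agents whose opinions are within
# a confidence bound eps. The game-theoretic community-update step of GCAOFP
# is omitted here (communities are held fixed), so this is illustrative only.
import numpy as np

def opinion_step(x, community, eps=0.2):
    new = x.copy()
    for i in range(len(x)):
        peers = [j for j in range(len(x))
                 if community[j] == community[i] and abs(x[j] - x[i]) <= eps]
        new[i] = np.mean([x[j] for j in peers])   # i is always its own peer
    return new

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.uniform(0, 1, size=10)
    community = np.array([0] * 5 + [1] * 5)
    for _ in range(30):
        x = opinion_step(x, community)
    print(np.round(x, 3))     # opinions cluster within each community
```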
1912.12204
Boyi Liu
Boyi Liu, Lujia Wang, Ming Liu, Cheng-Zhong Xu
Federated Imitation Learning: A Novel Framework for Cloud Robotic Systems with Heterogeneous Sensor Data
arXiv admin note: substantial text overlap with arXiv:1909.00895
null
null
null
cs.RO cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans are capable of learning a new behavior by observing others perform the skill. Similarly, robots can also implement this by imitation learning. Furthermore, with external guidance, humans can master the new behavior more efficiently. So, how can robots achieve this? To address the issue, we present a novel framework named FIL. It provides a heterogeneous knowledge fusion mechanism for cloud robotic systems. Then, a knowledge fusion algorithm in FIL is proposed. It enables the cloud to fuse heterogeneous knowledge from local robots and generate guide models for robots with service requests. After that, we introduce a knowledge transfer scheme to facilitate local robots acquiring knowledge from the cloud. With FIL, a robot is capable of utilizing knowledge from other robots to improve the accuracy and efficiency of its imitation learning. Compared with transfer learning and meta-learning, FIL is more suitable to be deployed in cloud robotic systems. Finally, we conduct experiments of a self-driving task for robots (cars). The experimental results demonstrate that the shared model generated by FIL increases the imitation learning efficiency of local robots in cloud robotic systems.
[ { "created": "Tue, 24 Dec 2019 11:23:23 GMT", "version": "v1" } ]
2019-12-30
[ [ "Liu", "Boyi", "" ], [ "Wang", "Lujia", "" ], [ "Liu", "Ming", "" ], [ "Xu", "Cheng-Zhong", "" ] ]
Humans are capable of learning a new behavior by observing others perform the skill. Similarly, robots can also implement this by imitation learning. Furthermore, with external guidance, humans can master the new behavior more efficiently. So, how can robots achieve this? To address the issue, we present a novel framework named FIL. It provides a heterogeneous knowledge fusion mechanism for cloud robotic systems. Then, a knowledge fusion algorithm in FIL is proposed. It enables the cloud to fuse heterogeneous knowledge from local robots and generate guide models for robots with service requests. After that, we introduce a knowledge transfer scheme to facilitate local robots acquiring knowledge from the cloud. With FIL, a robot is capable of utilizing knowledge from other robots to improve the accuracy and efficiency of its imitation learning. Compared with transfer learning and meta-learning, FIL is more suitable to be deployed in cloud robotic systems. Finally, we conduct experiments of a self-driving task for robots (cars). The experimental results demonstrate that the shared model generated by FIL increases the imitation learning efficiency of local robots in cloud robotic systems.
2311.15512
Dong Yonghao
Yonghao Dong, Le Wang, Sanpin Zhou, Gang Hua, and Changyin Sun
Sparse Pedestrian Character Learning for Trajectory Prediction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pedestrian trajectory prediction in a first-person view has recently attracted much attention due to its importance in autonomous driving. Recent work utilizes pedestrian character information, \textit{i.e.}, action and appearance, to improve the learned trajectory embedding and achieves state-of-the-art performance. However, it neglects the invalid and negative pedestrian character information, which is harmful to trajectory representation and thus leads to performance degradation. To address this issue, we present a two-stream sparse-character-based network~(TSNet) for pedestrian trajectory prediction. Specifically, TSNet learns the negative-removed characters in the sparse character representation stream to improve the trajectory embedding obtained in the trajectory representation stream. Moreover, to model the negative-removed characters, we propose a novel sparse character graph, including the sparse category and sparse temporal character graphs, to learn the different effects of various characters in category and temporal dimensions, respectively. Extensive experiments on two first-person view datasets, PIE and JAAD, show that our method outperforms existing state-of-the-art methods. In addition, ablation studies demonstrate different effects of various characters and prove that TSNet outperforms approaches without eliminating negative characters.
[ { "created": "Mon, 27 Nov 2023 03:15:48 GMT", "version": "v1" } ]
2023-11-28
[ [ "Dong", "Yonghao", "" ], [ "Wang", "Le", "" ], [ "Zhou", "Sanpin", "" ], [ "Hua", "Gang", "" ], [ "Sun", "Changyin", "" ] ]
Pedestrian trajectory prediction in a first-person view has recently attracted much attention due to its importance in autonomous driving. Recent work utilizes pedestrian character information, \textit{i.e.}, action and appearance, to improve the learned trajectory embedding and achieves state-of-the-art performance. However, it neglects the invalid and negative pedestrian character information, which is harmful to trajectory representation and thus leads to performance degradation. To address this issue, we present a two-stream sparse-character-based network~(TSNet) for pedestrian trajectory prediction. Specifically, TSNet learns the negative-removed characters in the sparse character representation stream to improve the trajectory embedding obtained in the trajectory representation stream. Moreover, to model the negative-removed characters, we propose a novel sparse character graph, including the sparse category and sparse temporal character graphs, to learn the different effects of various characters in category and temporal dimensions, respectively. Extensive experiments on two first-person view datasets, PIE and JAAD, show that our method outperforms existing state-of-the-art methods. In addition, ablation studies demonstrate different effects of various characters and prove that TSNet outperforms approaches without eliminating negative characters.
2209.09653
Tonio Ball
Maryna Kapitonova, Philipp Kellmeyer, Simon Vogt and Tonio Ball
A Framework for Preserving Privacy and Cybersecurity in Brain-Computer Interfacing Applications
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Brain-Computer Interfaces (BCIs) comprise a rapidly evolving field of technology with the potential of far-reaching impact in domains ranging from medical over industrial to artistic, gaming, and military. Today, these emerging BCI applications are typically still at early technology readiness levels, but because BCIs create novel, technical communication channels for the human brain, they have raised privacy and security concerns. To mitigate such risks, a large body of countermeasures has been proposed in the literature, but a general framework is lacking which would describe how privacy and security of BCI applications can be protected by design, i.e., already as an integral part of the early BCI design process, in a systematic manner, and allowing suitable depth of analysis for different contexts such as commercial BCI product development vs. academic research and lab prototypes. Here we propose the adoption of recent systems-engineering methodologies for privacy threat modeling, risk assessment, and privacy engineering to the BCI field. These methodologies address privacy and security concerns in a more systematic and holistic way than previous approaches, and provide reusable patterns on how to move from principles to actions. We apply these methodologies to BCI and data flows and derive a generic, extensible, and actionable framework for brain-privacy-preserving cybersecurity in BCI applications. This framework is designed for flexible application to the wide range of current and future BCI applications. We also propose a range of novel privacy-by-design features for BCIs, with an emphasis on features promoting BCI transparency as a prerequisite for informational self-determination of BCI users, as well as design features for ensuring BCI user autonomy. We anticipate that our framework will contribute to the development of privacy-respecting, trustworthy BCI technologies.
[ { "created": "Mon, 19 Sep 2022 15:45:13 GMT", "version": "v1" } ]
2022-09-21
[ [ "Kapitonova", "Maryna", "" ], [ "Kellmeyer", "Philipp", "" ], [ "Vogt", "Simon", "" ], [ "Ball", "Tonio", "" ] ]
Brain-Computer Interfaces (BCIs) comprise a rapidly evolving field of technology with the potential of far-reaching impact in domains ranging from medical over industrial to artistic, gaming, and military. Today, these emerging BCI applications are typically still at early technology readiness levels, but because BCIs create novel, technical communication channels for the human brain, they have raised privacy and security concerns. To mitigate such risks, a large body of countermeasures has been proposed in the literature, but a general framework is lacking which would describe how privacy and security of BCI applications can be protected by design, i.e., already as an integral part of the early BCI design process, in a systematic manner, and allowing suitable depth of analysis for different contexts such as commercial BCI product development vs. academic research and lab prototypes. Here we propose the adoption of recent systems-engineering methodologies for privacy threat modeling, risk assessment, and privacy engineering to the BCI field. These methodologies address privacy and security concerns in a more systematic and holistic way than previous approaches, and provide reusable patterns on how to move from principles to actions. We apply these methodologies to BCI and data flows and derive a generic, extensible, and actionable framework for brain-privacy-preserving cybersecurity in BCI applications. This framework is designed for flexible application to the wide range of current and future BCI applications. We also propose a range of novel privacy-by-design features for BCIs, with an emphasis on features promoting BCI transparency as a prerequisite for informational self-determination of BCI users, as well as design features for ensuring BCI user autonomy. We anticipate that our framework will contribute to the development of privacy-respecting, trustworthy BCI technologies.
2112.14890
Ke Wang
Jiayi Wang, Ke Wang, Boxing Chen, Yu Zhao, Weihua Luo, and Yuqi Zhang
QEMind: Alibaba's Submission to the WMT21 Quality Estimation Shared Task
Winner of WMT 2021 QE shared task 1
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Quality Estimation, as a crucial step of quality control for machine translation, has been explored for years. The goal is to investigate automatic methods for estimating the quality of machine translation results without reference translations. In this year's WMT QE shared task, we utilize the large-scale XLM-Roberta pre-trained model and additionally propose several useful features to evaluate the uncertainty of the translations to build our QE system, named \textit{QEMind}. The system has been applied to the sentence-level scoring task of Direct Assessment and the binary score prediction task of Critical Error Detection. In this paper, we present our submissions to the WMT 2021 QE shared task, and an extensive set of experimental results shows that our multilingual systems outperform the best system in the Direct Assessment QE task of WMT 2020.
[ { "created": "Thu, 30 Dec 2021 02:27:29 GMT", "version": "v1" } ]
2022-01-03
[ [ "Wang", "Jiayi", "" ], [ "Wang", "Ke", "" ], [ "Chen", "Boxing", "" ], [ "Zhao", "Yu", "" ], [ "Luo", "Weihua", "" ], [ "Zhang", "Yuqi", "" ] ]
Quality Estimation, as a crucial step of quality control for machine translation, has been explored for years. The goal is to investigate automatic methods for estimating the quality of machine translation results without reference translations. In this year's WMT QE shared task, we utilize the large-scale XLM-Roberta pre-trained model and additionally propose several useful features to evaluate the uncertainty of the translations to build our QE system, named \textit{QEMind}. The system has been applied to the sentence-level scoring task of Direct Assessment and the binary score prediction task of Critical Error Detection. In this paper, we present our submissions to the WMT 2021 QE shared task, and an extensive set of experimental results shows that our multilingual systems outperform the best system in the Direct Assessment QE task of WMT 2020.
2008.05297
Umberto Straccia
Franco Alberto Cardillo and Umberto Straccia
Fuzzy OWL-BOOST: Learning Fuzzy Concept Inclusions via Real-Valued Boosting
null
null
10.1016/j.fss.2021.07.002
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
OWL ontologies are nowadays a quite popular way to describe structured knowledge in terms of classes, relations among classes and class instances. In this paper, given a target class T of an OWL ontology, we address the problem of learning fuzzy concept inclusion axioms that describe sufficient conditions for being an individual instance of T. To do so, we present Fuzzy OWL-BOOST, which relies on the Real AdaBoost boosting algorithm adapted to the (fuzzy) OWL case. We illustrate its effectiveness by means of experimentation. An interesting feature is that the learned rules can be represented directly in Fuzzy OWL 2. As a consequence, any Fuzzy OWL 2 reasoner can then be used to automatically determine/classify (and to which degree) whether an individual belongs to the target class T.
[ { "created": "Mon, 3 Aug 2020 15:19:31 GMT", "version": "v1" }, { "created": "Fri, 26 Mar 2021 07:10:04 GMT", "version": "v2" } ]
2022-03-10
[ [ "Cardillo", "Franco Alberto", "" ], [ "Straccia", "Umberto", "" ] ]
OWL ontologies are nowadays a quite popular way to describe structured knowledge in terms of classes, relations among classes and class instances. In this paper, given a target class T of an OWL ontology, we address the problem of learning fuzzy concept inclusion axioms that describe sufficient conditions for being an individual instance of T. To do so, we present Fuzzy OWL-BOOST, which relies on the Real AdaBoost boosting algorithm adapted to the (fuzzy) OWL case. We illustrate its effectiveness by means of experimentation. An interesting feature is that the learned rules can be represented directly in Fuzzy OWL 2. As a consequence, any Fuzzy OWL 2 reasoner can then be used to automatically determine/classify (and to which degree) whether an individual belongs to the target class T.
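The record above builds on Real AdaBoost. As background, the sketch below shows compact Real AdaBoost with threshold stumps on plain tabular data (labels in {-1, +1}); the tabular setting, the stump weak learner, and all parameters are assumptions for illustration, not the ontology-based learner of the paper.

```python
# Compact Real AdaBoost sketch with threshold stumps on tabular data
# (labels in {-1, +1}). Illustrates the base boosting algorithm only; the
# paper adapts this style of real-valued boosting to fuzzy OWL axioms.
import numpy as np

def fit_real_adaboost(X, y, rounds=20, eps=1e-6):
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(rounds):
        best = None
        for f in range(X.shape[1]):
            for thr in np.unique(X[:, f]):
                left = X[:, f] <= thr
                # Weighted probability of class +1 on each side of the split,
                # turned into a real-valued confidence score.
                scores = []
                for mask in (left, ~left):
                    p = w[mask & (y == 1)].sum() / max(w[mask].sum(), eps)
                    scores.append(0.5 * np.log((p + eps) / (1 - p + eps)))
                h = np.where(left, scores[0], scores[1])
                loss = np.sum(w * np.exp(-y * h))
                if best is None or loss < best[0]:
                    best = (loss, f, thr, scores)
        _, f, thr, scores = best
        h = np.where(X[:, f] <= thr, scores[0], scores[1])
        w *= np.exp(-y * h)
        w /= w.sum()
        stumps.append((f, thr, scores))
    return stumps

def predict(stumps, X):
    F = sum(np.where(X[:, f] <= thr, s[0], s[1]) for f, thr, s in stumps)
    return np.sign(F)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    stumps = fit_real_adaboost(X, y)
    print("train accuracy:", (predict(stumps, X) == y).mean())
```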
2305.09281
Fatma Elsafoury
Fatma Elsafoury, Gavin Abercrombie
On the Origins of Bias in NLP through the Lens of the Jim Code
10 pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In this paper, we trace the biases in current natural language processing (NLP) models back to their origins in racism, sexism, and homophobia over the last 500 years. We review literature from critical race theory, gender studies, data ethics, and digital humanities studies, and summarize the origins of bias in NLP models from these social science perspectives. We show how the causes of the biases in the NLP pipeline are rooted in social issues. Finally, we argue that the only way to fix the bias and unfairness in NLP is by addressing the social problems that caused them in the first place and by incorporating social sciences and social scientists in efforts to mitigate bias in NLP models. We provide actionable recommendations for the NLP research community to do so.
[ { "created": "Tue, 16 May 2023 08:37:13 GMT", "version": "v1" } ]
2023-05-17
[ [ "Elsafoury", "Fatma", "" ], [ "Abercrombie", "Gavin", "" ] ]
In this paper, we trace the biases in current natural language processing (NLP) models back to their origins in racism, sexism, and homophobia over the last 500 years. We review literature from critical race theory, gender studies, data ethics, and digital humanities studies, and summarize the origins of bias in NLP models from these social science perspectives. We show how the causes of the biases in the NLP pipeline are rooted in social issues. Finally, we argue that the only way to fix the bias and unfairness in NLP is by addressing the social problems that caused them in the first place and by incorporating social sciences and social scientists in efforts to mitigate bias in NLP models. We provide actionable recommendations for the NLP research community to do so.
2305.11059
Muhammad Husnain Mubarik
Ramakrishna Kanungo, Swamynathan Siva, Nathaniel Bleier, Muhammad Husnain Mubarik, Lav Varshney and Rakesh Kumar
Understanding Interactions Between Chip Architecture and Uncertainties in Semiconductor Supply and Demand
null
null
null
null
cs.AR cs.CE
http://creativecommons.org/licenses/by/4.0/
Mitigating losses from supply and demand volatility in the semiconductor supply chain and market has traditionally been cast as a logistics and forecasting problem. We investigate how the architecture of a family of chips influences how it is affected by supply and demand uncertainties. We observe that semiconductor supply chains become fragile, in part, due to single demand paths, where one chip can satisfy only one demand. Chip architects can enable multiple paths to satisfy a chip demand, which improves supply chain resilience. Based on this observation, we study composition and adaptation as architectural strategies to improve resilience to volatility and also introduce a third strategy of dispersion. These strategies allow multiple paths to satisfy a given chip demand. We develop a model to analyze the impact of these architectural techniques on supply chain costs under different regimes of uncertainties and evaluate what happens when they are combined. We present several interesting and even counterintuitive observations about the product configurations and market conditions where these interventions are impactful and where they are not. In all, we show that product redesign supported by architectural changes can mitigate nearly half of the losses caused by supply and demand volatility. As far as we know, this is the first such investigation concerning chip architecture.
[ { "created": "Wed, 10 May 2023 18:07:34 GMT", "version": "v1" } ]
2023-05-19
[ [ "Kanungo", "Ramakrishna", "" ], [ "Siva", "Swamynathan", "" ], [ "Bleier", "Nathaniel", "" ], [ "Mubarik", "Muhammad Husnain", "" ], [ "Varshney", "Lav", "" ], [ "Kumar", "Rakesh", "" ] ]
Mitigating losses from supply and demand volatility in the semiconductor supply chain and market has traditionally been cast as a logistics and forecasting problem. We investigate how the architecture of a family of chips influences how it is affected by supply and demand uncertainties. We observe that semiconductor supply chains become fragile, in part, due to single demand paths, where one chip can satisfy only one demand. Chip architects can enable multiple paths to satisfy a chip demand, which improves supply chain resilience. Based on this observation, we study composition and adaptation as architectural strategies to improve resilience to volatility and also introduce a third strategy of dispersion. These strategies allow multiple paths to satisfy a given chip demand. We develop a model to analyze the impact of these architectural techniques on supply chain costs under different regimes of uncertainties and evaluate what happens when they are combined. We present several interesting and even counterintuitive observations about the product configurations and market conditions where these interventions are impactful and where they are not. In all, we show that product redesign supported by architectural changes can mitigate nearly half of the losses caused by supply and demand volatility. As far as we know, this is the first such investigation concerning chip architecture.
2204.02464
Stefan Bosse
Stefan Bosse
BeeTS: Smart Distributed Sensor Tuple Spaces combined with Agents using Bluetooth and IP Broadcasting
null
null
null
null
cs.NI cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most Internet-of-Things (IoT) devices and smart sensors are connected via the Internet using IP communication, directly accessed by a server that collects sensor information periodically or event-based. Although Internet access is widely available, there are places that are not covered, and WLAN and mobile cell communication requires a decent amount of power that is not always available. Finally, the spatial context (the environment in which the sensor or device is situated) is not considered (or lost) by Internet connectivity. In this work, smart devices communicate connectionless and ad-hoc by using low-energy Bluetooth broadcasting available in any smartphone and in most embedded computers, e.g., the Raspberry PI devices. Bi-directional connectionless communication is established via the advertisement and scanning modes. The communication nodes can exchange data via functional tuples using a tuple space service on each node. Tuple space access is performed by simple event-based agents. Mobile devices act as tuple carriers that can carry tuples between different locations. Additionally, UDP-based Intranet communication can be used to access tuple spaces on a wider spatial range. The Bluetooth Low Energy Tuple Space (BeeTS) service enables opportunistic, ad-hoc and loosely coupled device communication with a spatial context.
[ { "created": "Tue, 5 Apr 2022 19:47:21 GMT", "version": "v1" } ]
2022-04-07
[ [ "Bosse", "Stefan", "" ] ]
Most Internet-of-Things (IoT) devices and smart sensors are connected via the Internet using IP communication, directly accessed by a server that collects sensor information periodically or event-based. Although Internet access is widely available, there are places that are not covered, and WLAN and mobile cell communication requires a decent amount of power that is not always available. Finally, the spatial context (the environment in which the sensor or device is situated) is not considered (or lost) by Internet connectivity. In this work, smart devices communicate connectionless and ad-hoc by using low-energy Bluetooth broadcasting available in any smartphone and in most embedded computers, e.g., the Raspberry PI devices. Bi-directional connectionless communication is established via the advertisement and scanning modes. The communication nodes can exchange data via functional tuples using a tuple space service on each node. Tuple space access is performed by simple event-based agents. Mobile devices act as tuple carriers that can carry tuples between different locations. Additionally, UDP-based Intranet communication can be used to access tuple spaces on a wider spatial range. The Bluetooth Low Energy Tuple Space (BeeTS) service enables opportunistic, ad-hoc and loosely coupled device communication with a spatial context.
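To make the tuple-space idea in the abstract above concrete, here is a minimal Linda-style tuple space with out/rd/take operations plus a UDP broadcast of outgoing tuples; the JSON wire format, port number, and operation names are assumptions for illustration, not the actual BeeTS protocol or implementation.

```python
# Minimal sketch of a tuple space with out/rd/take operations and a UDP
# broadcast of outgoing tuples, in the spirit of the architecture above.
# The JSON wire format and port are hypothetical, not the BeeTS protocol.
import json
import socket

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        self.tuples.append(tuple(tup))

    def _find(self, pattern):
        for t in self.tuples:
            if len(t) == len(pattern) and all(p is None or p == v
                                              for p, v in zip(pattern, t)):
                return t
        return None

    def rd(self, pattern):            # non-destructive read; None is a wildcard
        return self._find(pattern)

    def take(self, pattern):          # destructive read ("in")
        t = self._find(pattern)
        if t is not None:
            self.tuples.remove(t)
        return t

def broadcast(tup, port=50123):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(json.dumps(tup).encode(), ("255.255.255.255", port))
    sock.close()

if __name__ == "__main__":
    ts = TupleSpace()
    ts.out(("temp", "node7", 21.5))
    print(ts.rd(("temp", None, None)))     # ('temp', 'node7', 21.5)
    broadcast(["temp", "node7", 21.5])     # announce the tuple on the LAN
```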
1901.06212
Dmitry Kangin
Dmitry Kangin and Nicolas Pugeault
On-Policy Trust Region Policy Optimisation with Replay Buffers
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Building upon the recent success of deep reinforcement learning methods, we investigate the possibility of on-policy reinforcement learning improvement by reusing the data from several consecutive policies. On-policy methods bring many benefits, such as the ability to evaluate each resulting policy. However, they usually discard all the information about the policies which existed before. In this work, we propose an adaptation of the replay buffer concept, borrowed from the off-policy learning setting, to create a method combining the advantages of on- and off-policy learning. To achieve this, the proposed algorithm generalises the $Q$-, value and advantage functions for data from multiple policies. The method uses trust region optimisation, while avoiding some of the common problems of algorithms such as TRPO or ACKTR: it uses hyperparameters to replace the trust region selection heuristics, as well as a trainable covariance matrix instead of a fixed one. In many cases, the method not only improves the results compared to the state-of-the-art trust region on-policy learning algorithms such as PPO, ACKTR and TRPO, but also with respect to their off-policy counterpart DDPG.
[ { "created": "Fri, 18 Jan 2019 13:09:18 GMT", "version": "v1" } ]
2019-01-21
[ [ "Kangin", "Dmitry", "" ], [ "Pugeault", "Nicolas", "" ] ]
Building upon the recent success of deep reinforcement learning methods, we investigate the possibility of on-policy reinforcement learning improvement by reusing the data from several consecutive policies. On-policy methods bring many benefits, such as the ability to evaluate each resulting policy. However, they usually discard all the information about the policies which existed before. In this work, we propose an adaptation of the replay buffer concept, borrowed from the off-policy learning setting, to create a method combining the advantages of on- and off-policy learning. To achieve this, the proposed algorithm generalises the $Q$-, value and advantage functions for data from multiple policies. The method uses trust region optimisation, while avoiding some of the common problems of algorithms such as TRPO or ACKTR: it uses hyperparameters to replace the trust region selection heuristics, as well as a trainable covariance matrix instead of a fixed one. In many cases, the method not only improves the results compared to the state-of-the-art trust region on-policy learning algorithms such as PPO, ACKTR and TRPO, but also with respect to their off-policy counterpart DDPG.
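The core idea in the abstract above is to retain data from several consecutive policies. The sketch below shows only that buffer organization (transitions tagged by the policy that generated them, with the oldest policy's data dropped); the number of retained policies is an assumed parameter, and the trust-region update itself is not shown.

```python
# Minimal sketch of the buffer idea: keep transitions from the last few
# policies, tagged by the policy that generated them, so updates can reuse
# data from several consecutive policies. The trust-region update itself is
# not shown; this is illustrative only.
from collections import deque

class MultiPolicyReplayBuffer:
    def __init__(self, num_policies=3):
        self.buffers = deque(maxlen=num_policies)   # one list per policy

    def start_new_policy(self):
        self.buffers.append([])                     # oldest policy's data drops out

    def add(self, state, action, reward, next_state, done):
        self.buffers[-1].append((state, action, reward, next_state, done))

    def all_transitions(self):
        for policy_idx, buf in enumerate(self.buffers):
            for transition in buf:
                yield policy_idx, transition

if __name__ == "__main__":
    rb = MultiPolicyReplayBuffer(num_policies=2)
    for policy in range(3):
        rb.start_new_policy()
        rb.add(state=policy, action=0, reward=1.0, next_state=policy + 1, done=False)
    # Only data from the two most recent policies is kept.
    print([(i, t[0]) for i, t in rb.all_transitions()])   # [(0, 1), (1, 2)]
```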
2306.06791
Shugang Hao
Shugang Hao and Lingjie Duan
To Save Mobile Crowdsourcing from Cheap-talk: A Game Theoretic Learning Approach
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today mobile crowdsourcing platforms invite users to provide anonymous reviews about service experiences, yet many reviews are found biased to be extremely positive or negative. The existing methods find it difficult to learn from biased reviews to infer the actual service state, as the state can also be extreme and the platform cannot verify the truthfulness of reviews immediately. Further, reviewers can hide their (positive or negative) bias types and proactively adjust their anonymous reviews against the platform's inference. To our best knowledge, we are the first to study how to save mobile crowdsourcing from cheap-talk and strategically learn from biased users' reviews. We formulate the problem as a dynamic Bayesian game, including users' service-type messaging and the platform's follow-up rating/inference. Our closed-form PBE shows that an extremely-biased user may still honestly message to convince the platform of listening to his review. Such Bayesian game-theoretic learning obviously outperforms the latest common schemes especially when there are multiple diversely-biased users to compete. For the challenging single-user case, we further propose a time-evolving mechanism with the platform's commitment inferences to ensure the biased user's truthful messaging all the time, whose performance improves with more time periods to learn from more historical data.
[ { "created": "Sun, 11 Jun 2023 22:07:18 GMT", "version": "v1" }, { "created": "Fri, 29 Dec 2023 05:10:44 GMT", "version": "v2" } ]
2024-01-01
[ [ "Hao", "Shugang", "" ], [ "Duan", "Lingjie", "" ] ]
Today mobile crowdsourcing platforms invite users to provide anonymous reviews about service experiences, yet many reviews are found biased to be extremely positive or negative. The existing methods find it difficult to learn from biased reviews to infer the actual service state, as the state can also be extreme and the platform cannot verify the truthfulness of reviews immediately. Further, reviewers can hide their (positive or negative) bias types and proactively adjust their anonymous reviews against the platform's inference. To our best knowledge, we are the first to study how to save mobile crowdsourcing from cheap-talk and strategically learn from biased users' reviews. We formulate the problem as a dynamic Bayesian game, including users' service-type messaging and the platform's follow-up rating/inference. Our closed-form PBE shows that an extremely-biased user may still honestly message to convince the platform of listening to his review. Such Bayesian game-theoretic learning obviously outperforms the latest common schemes especially when there are multiple diversely-biased users to compete. For the challenging single-user case, we further propose a time-evolving mechanism with the platform's commitment inferences to ensure the biased user's truthful messaging all the time, whose performance improves with more time periods to learn from more historical data.
2210.14638
Marcin Pilipczuk
Daniel Lokshtanov and Marcin Pilipczuk and Micha{\l} Pilipczuk and Saket Saurabh
Fixed-parameter tractability of Graph Isomorphism in graphs with an excluded minor
Part I of a full version of a paper accepted at STOC 2022
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove that Graph Isomorphism and Canonization in graphs excluding a fixed graph $H$ as a minor can be solved by an algorithm working in time $f(H)\cdot n^{O(1)}$, where $f$ is some function. In other words, we show that these problems are fixed-parameter tractable when parameterized by the size of the excluded minor, with the caveat that the bound on the running time is not necessarily computable. The underlying approach is based on decomposing the graph in a canonical way into unbreakable (intuitively, well-connected) parts, which essentially provides a reduction to the case where the given $H$-minor-free graph is unbreakable itself. This is complemented by an analysis of unbreakable $H$-minor-free graphs, performed in a second subordinate manuscript, which reveals that every such graph can be canonically decomposed into a part that admits few automorphisms and a part that has bounded treewidth.
[ { "created": "Wed, 26 Oct 2022 11:32:55 GMT", "version": "v1" } ]
2022-10-27
[ [ "Lokshtanov", "Daniel", "" ], [ "Pilipczuk", "Marcin", "" ], [ "Pilipczuk", "Michał", "" ], [ "Saurabh", "Saket", "" ] ]
We prove that Graph Isomorphism and Canonization in graphs excluding a fixed graph $H$ as a minor can be solved by an algorithm working in time $f(H)\cdot n^{O(1)}$, where $f$ is some function. In other words, we show that these problems are fixed-parameter tractable when parameterized by the size of the excluded minor, with the caveat that the bound on the running time is not necessarily computable. The underlying approach is based on decomposing the graph in a canonical way into unbreakable (intuitively, well-connected) parts, which essentially provides a reduction to the case where the given $H$-minor-free graph is unbreakable itself. This is complemented by an analysis of unbreakable $H$-minor-free graphs, performed in a second subordinate manuscript, which reveals that every such graph can be canonically decomposed into a part that admits few automorphisms and a part that has bounded treewidth.
1911.02423
Dorjan Hitaj
Fabio De Gaspari, Dorjan Hitaj, Giulio Pagnotta, Lorenzo De Carli, Luigi V. Mancini
The Naked Sun: Malicious Cooperation Between Benign-Looking Processes
15 pages, 6 figures, 4 tables
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent progress in machine learning has generated promising results in behavioral malware detection. Behavioral modeling identifies malicious processes via features derived by their runtime behavior. Behavioral features hold great promise as they are intrinsically related to the functioning of each malware, and are therefore considered difficult to evade. Indeed, while a significant amount of results exists on evasion of static malware features, evasion of dynamic features has seen limited work. This paper thoroughly examines the robustness of behavioral malware detectors to evasion, focusing particularly on anti-ransomware evasion. We choose ransomware as its behavior tends to differ significantly from that of benign processes, making it a low-hanging fruit for behavioral detection (and a difficult candidate for evasion). Our analysis identifies a set of novel attacks that distribute the overall malware workload across a small set of cooperating processes to avoid the generation of significant behavioral features. Our most effective attack decreases the accuracy of a state-of-the-art classifier from 98.6% to 0% using only 18 cooperating processes. Furthermore, we show our attacks to be effective against commercial ransomware detectors even in a black-box setting.
[ { "created": "Wed, 6 Nov 2019 15:04:07 GMT", "version": "v1" } ]
2019-11-07
[ [ "De Gaspari", "Fabio", "" ], [ "Hitaj", "Dorjan", "" ], [ "Pagnotta", "Giulio", "" ], [ "De Carli", "Lorenzo", "" ], [ "Mancini", "Luigi V.", "" ] ]
Recent progress in machine learning has generated promising results in behavioral malware detection. Behavioral modeling identifies malicious processes via features derived from their runtime behavior. Behavioral features hold great promise as they are intrinsically related to the functioning of each malware, and are therefore considered difficult to evade. Indeed, while a significant body of results exists on the evasion of static malware features, the evasion of dynamic features has seen limited work. This paper thoroughly examines the robustness of behavioral malware detectors to evasion, focusing particularly on anti-ransomware evasion. We choose ransomware because its behavior tends to differ significantly from that of benign processes, making it low-hanging fruit for behavioral detection (and a difficult candidate for evasion). Our analysis identifies a set of novel attacks that distribute the overall malware workload across a small set of cooperating processes to avoid the generation of significant behavioral features. Our most effective attack decreases the accuracy of a state-of-the-art classifier from 98.6% to 0% using only 18 cooperating processes. Furthermore, we show our attacks to be effective against commercial ransomware detectors even in a black-box setting.
cs/0609009
Virginia Vassilevska
Virginia Vassilevska, Ryan Williams and Raphael Yuster
Finding heaviest H-subgraphs in real weighted graphs, with applications
23 pages
null
null
null
cs.DS cs.DM
null
For a graph G with real weights assigned to the vertices (edges), the MAX H-SUBGRAPH problem is to find an H-subgraph of G with maximum total weight, if one exists. The all-pairs MAX H-SUBGRAPH problem is to find for every pair of vertices u,v, a maximum H-subgraph containing both u and v, if one exists. Our main results are new strongly polynomial algorithms for the all-pairs MAX H-SUBGRAPH problem for vertex weighted graphs. We also give improved algorithms for the MAX H-SUBGRAPH problem for edge weighted graphs, and various related problems, including computing the first k most significant bits of the distance product of two matrices. Some of our algorithms are based, in part, on fast matrix multiplication.
[ { "created": "Mon, 4 Sep 2006 08:08:00 GMT", "version": "v1" } ]
2007-05-23
[ [ "Vassilevska", "Virginia", "" ], [ "Williams", "Ryan", "" ], [ "Yuster", "Raphael", "" ] ]
For a graph G with real weights assigned to the vertices (edges), the MAX H-SUBGRAPH problem is to find an H-subgraph of G with maximum total weight, if one exists. The all-pairs MAX H-SUBGRAPH problem is to find for every pair of vertices u,v, a maximum H-subgraph containing both u and v, if one exists. Our main results are new strongly polynomial algorithms for the all-pairs MAX H-SUBGRAPH problem for vertex weighted graphs. We also give improved algorithms for the MAX H-SUBGRAPH problem for edge weighted graphs, and various related problems, including computing the first k most significant bits of the distance product of two matrices. Some of our algorithms are based, in part, on fast matrix multiplication.
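The distance product referred to at the end of the abstract has a simple cubic-time definition, sketched below in the (min,+) convention. This naive routine is only the definition written out in code; the abstract's results concern computing (the most significant bits of) such products faster via fast matrix multiplication. The function name and example matrix are illustrative.

```python
import math

def distance_product(A, B):
    """Naive (min,+) distance product of two n x n real matrices.

    (A * B)[i][j] = min over k of A[i][k] + B[k][j].
    This O(n^3) routine is only the definition; the abstract's results are
    about beating cubic time for (bits of) this product.
    """
    n = len(A)
    C = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            C[i][j] = min(A[i][k] + B[k][j] for k in range(n))
    return C

# Example: squaring a weighted adjacency-style matrix yields shortest
# distances using at most two hops.
A = [[0, 3, math.inf],
     [math.inf, 0, 2],
     [1, math.inf, 0]]
print(distance_product(A, A))  # [[0, 3, 5], [3, 0, 2], [1, 4, 0]]
```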
0912.1216
Ying Cui
Ying Cui, Vincent K.N. Lau and Rui Wang
Distributive Subband Allocation, Power and Rate Control for Relay-Assisted OFDMA Cellular System with Imperfect System State Knowledge
11 pages, 8 figures
null
null
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider distributive subband, power and rate allocation for a two-hop transmission in an orthogonal frequency-division multiple-access (OFDMA) cellular system with fixed relays that operate under a decode-and-forward strategy. We take system fairness into account by adopting the weighted sum goodput as our optimization objective. Based on the cluster-based architecture, we obtain a fast-converging distributive solution requiring only local and imperfect CSIT by decomposing the optimization problem. To further reduce the signaling overhead and computational complexity, we propose a reduced-feedback distributive solution, which achieves asymptotically optimal performance for a large number of users with arbitrarily small feedback overhead per user. We also derive the asymptotic average system throughput of the relay-assisted OFDMA system so as to obtain useful design insights.
[ { "created": "Mon, 7 Dec 2009 12:30:57 GMT", "version": "v1" } ]
2009-12-08
[ [ "Cui", "Ying", "" ], [ "Lau", "Vincent K. N.", "" ], [ "Wang", "Rui", "" ] ]
In this paper, we consider distributive subband, power and rate allocation for a two-hop transmission in an orthogonal frequency-division multiple-access (OFDMA) cellular system with fixed relays that operate under a decode-and-forward strategy. We take system fairness into account by adopting the weighted sum goodput as our optimization objective. Based on the cluster-based architecture, we obtain a fast-converging distributive solution requiring only local and imperfect CSIT by decomposing the optimization problem. To further reduce the signaling overhead and computational complexity, we propose a reduced-feedback distributive solution, which achieves asymptotically optimal performance for a large number of users with arbitrarily small feedback overhead per user. We also derive the asymptotic average system throughput of the relay-assisted OFDMA system so as to obtain useful design insights.
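As a toy illustration of the weighted sum goodput objective only, the sketch below greedily assigns each subband to the user with the largest weighted achievable rate. The rate table, fairness weights and function name are assumptions made for the example; the sketch ignores power control, relaying and imperfect CSIT, and is not the decomposition-based distributive algorithm described in the abstract.

```python
def greedy_subband_allocation(rates, weights):
    """Toy centralized baseline: give each subband to the user that
    maximizes the weighted rate on that subband.

    rates[k][u] : achievable rate of user u on subband k (illustrative numbers)
    weights[u]  : fairness weight of user u
    Returns (assignment, weighted_sum_goodput).  Power control, relaying and
    imperfect CSIT are ignored; only the weighted-sum objective is shown.
    """
    assignment = []
    total = 0.0
    for per_user in rates:
        best_u = max(range(len(per_user)), key=lambda u: weights[u] * per_user[u])
        assignment.append(best_u)
        total += weights[best_u] * per_user[best_u]
    return assignment, total

# Example: 3 subbands, 2 users; user 1 carries a larger fairness weight.
rates = [[1.2, 0.7], [0.5, 1.0], [2.0, 1.9]]
weights = [1.0, 1.5]
print(greedy_subband_allocation(rates, weights))
# -> ([0, 1, 1], 5.55) up to floating-point rounding
```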
1810.03717
Judy Hanwen Shen
Judy Hanwen Shen, Matthias Hofer, Bjarke Felbo, Roger Levy
Comparing Models of Associative Meaning: An Empirical Investigation of Reference in Simple Language Games
Conference on Computational Natural Language Learning (CoNLL) 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simple reference games are of central theoretical and empirical importance in the study of situated language use. Although language provides rich, compositional truth-conditional semantics to facilitate reference, speakers and listeners may sometimes lack the overall lexical and cognitive resources to guarantee successful reference through these means alone. However, language also has rich associational structures that can serve as a further resource for achieving successful reference. Here we investigate this use of associational information in a setting where only associational information is available: a simplified version of the popular game Codenames. Using optimal experiment design techniques, we compare a range of models varying in the type of associative information deployed and in level of pragmatic sophistication against human behavior. In this setting, we find that listeners' behavior reflects direct bigram collocational associations more strongly than word-embedding or semantic knowledge graph-based associations and that there is little evidence for pragmatically sophisticated behavior by either speakers or listeners of the type that might be predicted by recursive-reasoning models such as the Rational Speech Acts theory. These results shed light on the nature of the lexical resources that speakers and listeners can bring to bear in achieving reference through associative meaning alone.
[ { "created": "Mon, 8 Oct 2018 21:51:44 GMT", "version": "v1" } ]
2018-10-10
[ [ "Shen", "Judy Hanwen", "" ], [ "Hofer", "Matthias", "" ], [ "Felbo", "Bjarke", "" ], [ "Levy", "Roger", "" ] ]
Simple reference games are of central theoretical and empirical importance in the study of situated language use. Although language provides rich, compositional truth-conditional semantics to facilitate reference, speakers and listeners may sometimes lack the overall lexical and cognitive resources to guarantee successful reference through these means alone. However, language also has rich associational structures that can serve as a further resource for achieving successful reference. Here we investigate this use of associational information in a setting where only associational information is available: a simplified version of the popular game Codenames. Using optimal experiment design techniques, we compare a range of models varying in the type of associative information deployed and in level of pragmatic sophistication against human behavior. In this setting, we find that listeners' behavior reflects direct bigram collocational associations more strongly than word-embedding or semantic knowledge graph-based associations and that there is little evidence for pragmatically sophisticated behavior by either speakers or listeners of the type that might be predicted by recursive-reasoning models such as the Rational Speech Acts theory. These results shed light on the nature of the lexical resources that speakers and listeners can bring to bear in achieving reference through associative meaning alone.
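As a concrete (and deliberately simple) example of the kind of listener model being compared, the sketch below ranks board words by a raw association score with the clue, i.e., a purely literal, non-pragmatic listener. The association dictionary and all names are made up for illustration; in the study, such scores would come from bigram collocations, word embeddings, or a semantic knowledge graph, and pragmatically sophisticated variants are considered as well.

```python
def literal_listener_guess(clue, board, assoc, n_targets=2):
    """Rank board words by their association with the clue and return the
    top n_targets as the listener's guess.

    assoc: dict mapping (clue, candidate) pairs to a non-negative
    association score (e.g. a bigram collocation count); missing pairs
    score 0.  This is a literal listener only -- no recursive reasoning
    about the speaker is performed.
    """
    scored = sorted(board, key=lambda w: assoc.get((clue, w), 0.0), reverse=True)
    return scored[:n_targets]

# Illustrative association scores (made up for the example).
assoc = {
    ("water", "river"): 12.0,
    ("water", "bottle"): 9.0,
    ("water", "desert"): 1.0,
    ("water", "piano"): 0.0,
}
board = ["river", "piano", "desert", "bottle"]
print(literal_listener_guess("water", board, assoc))  # ['river', 'bottle']
```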