column          dtype           min    max
id              stringlengths   9      10
submitter       stringlengths   1      64
authors         stringlengths   4      20.7k
title           stringlengths   4      246
comments        stringlengths   1      523
journal-ref     stringlengths   4      404
doi             stringlengths   11     153
report-no       stringlengths   2      254
categories      stringlengths   5      98
license         stringclasses   9 values
orig_abstract   stringlengths   14     3.35k
versions        listlengths     1      60
update_date     stringlengths   10     10
authors_parsed  listlengths     1      1.35k
abstract        stringlengths   11     3.34k
2110.04667
Shivam Bajaj
Shivam Bajaj, Eric Torng, Shaunak D. Bopardikar, Alexander Von Moll, Isaac Weintraub, Eloy Garcia, David W. Casbeer
Competitive Perimeter Defense of Conical Environments
Version 2 has additional images
null
null
null
cs.DS cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
We consider a perimeter defense problem in a planar conical environment in which a single vehicle, having a finite capture radius, aims to defend a concentric perimeter from mobile intruders. The intruders are arbitrarily released at the circumference of the environment and they move radially toward the perimeter with fixed speed. We present a competitive analysis approach to this problem by measuring the performance of multiple online algorithms for the vehicle against arbitrary inputs, relative to an optimal offline algorithm that has information about the entire input sequence in advance. In particular, we establish two necessary conditions on the parameter space to guarantee (i) finite competitiveness of any algorithm and (ii) a competitive ratio of at least 2 for any algorithm. We then design and analyze three online algorithms and characterize parameter regimes in which they have finite competitive ratios. Specifically, our first two algorithms are provably 1- and 2-competitive, respectively, whereas our third algorithm exhibits different competitive ratios in different regimes of problem parameters. Finally, we provide a numerical plot in the parameter space to reveal additional insights into the relative performance of our algorithms.
[ { "created": "Sun, 10 Oct 2021 00:19:46 GMT", "version": "v1" }, { "created": "Wed, 30 Mar 2022 03:55:25 GMT", "version": "v2" } ]
2022-03-31
[ [ "Bajaj", "Shivam", "" ], [ "Torng", "Eric", "" ], [ "Bopardikar", "Shaunak D.", "" ], [ "Von Moll", "Alexander", "" ], [ "Weintraub", "Isaac", "" ], [ "Garcia", "Eloy", "" ], [ "Casbeer", "David W.", "" ] ]
We consider a perimeter defense problem in a planar conical environment in which a single vehicle, having a finite capture radius, aims to defend a concentric perimeter from mobile intruders. The intruders are arbitrarily released at the circumference of the environment and they move radially toward the perimeter with fixed speed. We present a competitive analysis approach to this problem by measuring the performance of multiple online algorithms for the vehicle against arbitrary inputs, relative to an optimal offline algorithm that has information about the entire input sequence in advance. In particular, we establish two necessary conditions on the parameter space to guarantee (i) finite competitiveness of any algorithm and (ii) a competitive ratio of at least 2 for any algorithm. We then design and analyze three online algorithms and characterize parameter regimes in which they have finite competitive ratios. Specifically, our first two algorithms are provably 1- and 2-competitive, respectively, whereas our third algorithm exhibits different competitive ratios in different regimes of problem parameters. Finally, we provide a numerical plot in the parameter space to reveal additional insights into the relative performance of our algorithms.
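The competitive-analysis setup above can be made concrete with a toy 1-D abstraction: intruders are (arrival-time, angle) pairs on a unit circle, the vehicle moves along the perimeter with unit speed, and an online greedy policy is compared against a brute-force clairvoyant baseline. The greedy rule, the brute-force baseline, and the circular (rather than conical) environment are all illustrative assumptions, not the paper's model or algorithms.

```python
import math
from itertools import permutations

TWO_PI = 2 * math.pi

def ang_dist(a, b):
    """Shorter angular distance between two perimeter positions."""
    d = abs(a - b) % TWO_PI
    return min(d, TWO_PI - d)

def offline_captures(intruders, start=0.0, speed=1.0):
    """Clairvoyant baseline: brute-force the largest feasible capture sequence.

    A sequence is feasible if the vehicle can reach each intruder's angle
    before its arrival time, moving at most `speed` radians per unit time.
    """
    for r in range(len(intruders), 0, -1):
        for perm in permutations(intruders, r):
            pos, t, ok = start, 0.0, True
            for arrival, theta in perm:
                if arrival < t or ang_dist(pos, theta) > speed * (arrival - t):
                    ok = False
                    break
                pos, t = theta, arrival
            if ok:
                return r
    return 0

def greedy_online(intruders, start=0.0, speed=1.0):
    """Online policy: always chase the earliest still-reachable intruder."""
    pos, t, captured = start, 0.0, 0
    pending = sorted(intruders)  # by arrival time
    while pending:
        reachable = [(a, th) for a, th in pending
                     if a >= t and ang_dist(pos, th) <= speed * (a - t)]
        if not reachable:
            break
        a, th = reachable[0]
        pending.remove((a, th))
        pos, t, captured = th, a, captured + 1
    return captured
```

On the instance `[(1.0, 0.9), (1.05, -0.9), (1.2, -0.8)]`, greedy chases the first intruder and forfeits the pair behind it while the clairvoyant baseline captures both, giving an empirical ratio of 2 — the same flavor of gap the competitive bounds formalize.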
2311.12345
Yanan Jian
Yanan Jian, Fuxun Yu, Simranjit Singh, Dimitrios Stamoulis
Stable Diffusion For Aerial Object Detection
Accepted at NeurIPS 2023 Synthetic Data Generation with Generative AI workshop
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Aerial object detection is a challenging task, in which one major obstacle lies in the limitations of large-scale data collection and the long-tail distribution of certain classes. Synthetic data offers a promising solution, especially with recent advances in diffusion-based methods like stable diffusion (SD). However, the direct application of diffusion methods to aerial domains poses unique challenges: stable diffusion's optimization for rich ground-level semantics doesn't align with the sparse nature of aerial objects, and the extraction of post-synthesis object coordinates remains problematic. To address these challenges, we introduce a synthetic data augmentation framework tailored for aerial images. It encompasses sparse-to-dense region of interest (ROI) extraction to bridge the semantic gap, fine-tuning the diffusion model with low-rank adaptation (LoRA) to circumvent exhaustive retraining, and finally, a Copy-Paste method to compose synthesized objects with backgrounds, providing a nuanced approach to aerial object detection through synthetic data.
[ { "created": "Tue, 21 Nov 2023 04:38:21 GMT", "version": "v1" } ]
2023-11-22
[ [ "Jian", "Yanan", "" ], [ "Yu", "Fuxun", "" ], [ "Singh", "Simranjit", "" ], [ "Stamoulis", "Dimitrios", "" ] ]
Aerial object detection is a challenging task, in which one major obstacle lies in the limitations of large-scale data collection and the long-tail distribution of certain classes. Synthetic data offers a promising solution, especially with recent advances in diffusion-based methods like stable diffusion (SD). However, the direct application of diffusion methods to aerial domains poses unique challenges: stable diffusion's optimization for rich ground-level semantics doesn't align with the sparse nature of aerial objects, and the extraction of post-synthesis object coordinates remains problematic. To address these challenges, we introduce a synthetic data augmentation framework tailored for aerial images. It encompasses sparse-to-dense region of interest (ROI) extraction to bridge the semantic gap, fine-tuning the diffusion model with low-rank adaptation (LoRA) to circumvent exhaustive retraining, and finally, a Copy-Paste method to compose synthesized objects with backgrounds, providing a nuanced approach to aerial object detection through synthetic data.
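The final Copy-Paste composition step described above can be sketched in a few lines of NumPy: a masked object crop overwrites a region of the background, and the paste location directly yields the bounding-box annotation. The function name, mask convention, and bbox format are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def copy_paste(background, obj, mask, top, left):
    """Composite a masked object crop onto a background image.

    Returns the composite and the pasted object's bounding box as
    (x_min, y_min, x_max, y_max) in pixel coordinates — which is why
    Copy-Paste sidesteps the post-synthesis coordinate-extraction problem.
    """
    out = background.copy()
    h, w = obj.shape[:2]
    region = out[top:top + h, left:left + w]
    region[mask] = obj[mask]  # only object pixels overwrite the background
    return out, (left, top, left + w, top + h)
```

In a full pipeline the mask would come from the sparse-to-dense ROI extraction; here it is simply given.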
2105.01899
Tsung Wei Tsai
Tsung Wei Tsai, Chongxuan Li, Jun Zhu
MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering
International Conference on Learning Representations (ICLR) 2021
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
We present Mixture of Contrastive Experts (MiCE), a unified probabilistic clustering framework that simultaneously exploits the discriminative representations learned by contrastive learning and the semantic structures captured by a latent mixture model. Motivated by the mixture of experts, MiCE employs a gating function to partition an unlabeled dataset into subsets according to the latent semantics and multiple experts to discriminate distinct subsets of instances assigned to them in a contrastive learning manner. To solve the nontrivial inference and learning problems caused by the latent variables, we further develop a scalable variant of the Expectation-Maximization (EM) algorithm for MiCE and provide proof of the convergence. Empirically, we evaluate the clustering performance of MiCE on four widely adopted natural image datasets. MiCE achieves significantly better results than various previous methods and a strong contrastive learning baseline.
[ { "created": "Wed, 5 May 2021 07:17:57 GMT", "version": "v1" } ]
2021-05-06
[ [ "Tsai", "Tsung Wei", "" ], [ "Li", "Chongxuan", "" ], [ "Zhu", "Jun", "" ] ]
We present Mixture of Contrastive Experts (MiCE), a unified probabilistic clustering framework that simultaneously exploits the discriminative representations learned by contrastive learning and the semantic structures captured by a latent mixture model. Motivated by the mixture of experts, MiCE employs a gating function to partition an unlabeled dataset into subsets according to the latent semantics and multiple experts to discriminate distinct subsets of instances assigned to them in a contrastive learning manner. To solve the nontrivial inference and learning problems caused by the latent variables, we further develop a scalable variant of the Expectation-Maximization (EM) algorithm for MiCE and provide proof of the convergence. Empirically, we evaluate the clustering performance of MiCE on four widely adopted natural image datasets. MiCE achieves significantly better results than various previous methods and a strong contrastive learning baseline.
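The E-step of such an EM scheme computes per-instance responsibilities over experts. The additive gating-plus-expert-score form below is a generic mixture-of-experts sketch, not MiCE's exact posterior, which involves contrastive instance discrimination terms.

```python
import numpy as np

def e_step(gate_logits, expert_scores):
    """Posterior responsibilities over experts, one row per instance.

    Responsibilities are proportional to exp(gating logit + expert score),
    normalized across experts; the max is subtracted first for numerical
    stability, as usual for softmax.
    """
    z = gate_logits + expert_scores
    z = z - z.max(axis=1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)
```

The M-step would then update the gating function and experts under these (soft) assignments; the scalable variant in the paper makes this tractable for contrastive objectives.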
1805.05758
Thomas Wolf
Thomas Wolf, Julien Chaumond, Clement Delangue
Continuous Learning in a Hierarchical Multiscale Neural Network
5 pages, 2 figures, accepted as short paper at ACL 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We reformulate the problem of encoding a multi-scale representation of a sequence in a language model by casting it in a continuous learning framework. We propose a hierarchical multi-scale language model in which short time-scale dependencies are encoded in the hidden state of a lower-level recurrent neural network while longer time-scale dependencies are encoded in the dynamic of the lower-level network by having a meta-learner update the weights of the lower-level neural network in an online meta-learning fashion. We use elastic weight consolidation as a higher-level mechanism to prevent catastrophic forgetting in our continuous learning framework.
[ { "created": "Tue, 15 May 2018 13:37:33 GMT", "version": "v1" } ]
2018-05-16
[ [ "Wolf", "Thomas", "" ], [ "Chaumond", "Julien", "" ], [ "Delangue", "Clement", "" ] ]
We reformulate the problem of encoding a multi-scale representation of a sequence in a language model by casting it in a continuous learning framework. We propose a hierarchical multi-scale language model in which short time-scale dependencies are encoded in the hidden state of a lower-level recurrent neural network while longer time-scale dependencies are encoded in the dynamic of the lower-level network by having a meta-learner update the weights of the lower-level neural network in an online meta-learning fashion. We use elastic weight consolidation as a higher-level mechanism to prevent catastrophic forgetting in our continuous learning framework.
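The elastic weight consolidation term referred to above is the standard quadratic penalty of Kirkpatrick et al.: parameters important to earlier experience (as measured by Fisher information) are anchored to their previous values. The sketch shows the penalty itself, not the authors' meta-learner integration.

```python
import numpy as np

def ewc_penalty(theta, theta_prev, fisher, lam=1.0):
    """EWC regularizer: (lam/2) * sum_i F_i * (theta_i - theta_prev_i)**2.

    `fisher` holds per-parameter Fisher-information estimates, so weights
    that mattered for previous data are pulled back more strongly while
    unimportant weights remain free to adapt.
    """
    return 0.5 * lam * float(np.sum(fisher * (theta - theta_prev) ** 2))
```

During continual training this term is simply added to the task loss before computing gradients.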
2312.06202
Yitong Wang
Yitong Wang, Chang Liu, Jun Zhao
Transforms for Multiplicative and Fractional Programming with Broad Applications in Edge Computing and Communication Networks
null
null
null
null
cs.CC cs.DM cs.NA cs.PF math.NA
http://creativecommons.org/licenses/by/4.0/
Multiplicative Programming (MP) pertains to a spectrum of optimization problems that involve product term(s). As computational paradigms of communication systems continue to evolve, particularly concerning the offloading strategies of computationally intensive tasks simultaneously to centralized or decentralized servers, designing or optimizing effective communication systems with MP techniques becomes increasingly indispensable. Similarly, Fractional Programming (FP) is another significant branch in the optimization domain, addressing various essential scenarios in communication. For instance, in minimization optimization problems, transmission power and processing delay of communication systems are considered critical metrics. In a very recent JSAC paper by Zhao et al. [2], an innovative transform (Zhao's Optimization Transform) was proposed for solving the minimization of MP and FP problems. Nevertheless, the resolution of optimization problems in communication systems encounters several limitations when adopting Zhao's optimization transform, especially in MP problems. Primarily, objective functions proposed in these optimization problems typically involve sum-of-products terms, and the optimization variables are always discrete, leading to NP-hard problems. Furthermore, multiple functions mapping to the non-negative domain in these scenarios can result in auxiliary variables being zero values, while the same situation is avoidable in FP problems due to the presence of these functions in the denominator. In this paper, we introduce an updated transform, building on the foundations of Zhao's original method, designed to effectively overcome these challenges by reformulating the original problem into a series of convex or concave problems. This introduced problem reformulation provides a superior iteration algorithm with demonstrable convergence to a stationary point.
[ { "created": "Mon, 11 Dec 2023 08:36:17 GMT", "version": "v1" } ]
2023-12-12
[ [ "Wang", "Yitong", "" ], [ "Liu", "Chang", "" ], [ "Zhao", "Jun", "" ] ]
Multiplicative Programming (MP) pertains to a spectrum of optimization problems that involve product term(s). As computational paradigms of communication systems continue to evolve, particularly concerning the offloading strategies of computationally intensive tasks simultaneously to centralized or decentralized servers, designing or optimizing effective communication systems with MP techniques becomes increasingly indispensable. Similarly, Fractional Programming (FP) is another significant branch in the optimization domain, addressing various essential scenarios in communication. For instance, in minimization optimization problems, transmission power and processing delay of communication systems are considered critical metrics. In a very recent JSAC paper by Zhao et al. [2], an innovative transform (Zhao's Optimization Transform) was proposed for solving the minimization of MP and FP problems. Nevertheless, the resolution of optimization problems in communication systems encounters several limitations when adopting Zhao's optimization transform, especially in MP problems. Primarily, objective functions proposed in these optimization problems typically involve sum-of-products terms, and the optimization variables are always discrete, leading to NP-hard problems. Furthermore, multiple functions mapping to the non-negative domain in these scenarios can result in auxiliary variables being zero values, while the same situation is avoidable in FP problems due to the presence of these functions in the denominator. In this paper, we introduce an updated transform, building on the foundations of Zhao's original method, designed to effectively overcome these challenges by reformulating the original problem into a series of convex or concave problems. This introduced problem reformulation provides a superior iteration algorithm with demonstrable convergence to a stationary point.
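For context on this family of reformulations: a widely used classical FP device is the quadratic transform of Shen and Yu, which replaces a ratio a/b by 2y*sqrt(a) - y^2*b in an auxiliary variable y; maximizing over y recovers a/b exactly at y* = sqrt(a)/b, which is what enables alternating updates. This is a related classical transform shown purely for illustration — it is not the transform proposed in this paper or in [2].

```python
import math

def quadratic_transform(a, b, y):
    """Reformulated single-ratio objective: 2*y*sqrt(a) - y**2 * b (a >= 0, b > 0)."""
    return 2.0 * y * math.sqrt(a) - y * y * b

def optimal_y(a, b):
    """Closed-form maximizer of the transform in y for fixed (a, b)."""
    return math.sqrt(a) / b
```

Because the transform is concave in y and equals a/b at the maximizer, alternating between updating y in closed form and updating the original variables yields the usual monotone iteration.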
1302.5848
Kalyana Babu Nakshatrala
D. Z. Turner, K. B. Nakshatrala, and M. J. Martinez
A framework for coupling flow and deformation of the porous solid
null
null
null
null
cs.NA math.NA physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider the flow of an incompressible fluid in a deformable porous solid. We present a mathematical model using the framework offered by the theory of interacting continua. In its most general form, this framework provides a mechanism for capturing multiphase flow, deformation, chemical reactions and thermal processes, as well as interactions between the various physics in a conveniently implemented fashion. To simplify the presentation of the framework, results are presented for a particular model that can be seen as an extension of Darcy's equation (which assumes that the porous solid is rigid) that takes into account elastic deformation of the porous solid. The model also considers the effect of deformation on porosity. We show that using this model one can recover identical results as in the framework proposed by Biot and Terzaghi. Some salient features of the framework are as follows: (a) It is a consistent mixture theory model, and adheres to the laws and principles of continuum thermodynamics, (b) the model is capable of simulating various important phenomena like consolidation and surface subsidence, and (c) the model is amenable to several extensions. We also present numerical coupling algorithms to obtain coupled flow-deformation response. Several representative numerical examples are presented to illustrate the capability of the mathematical model and the performance of the computational framework.
[ { "created": "Sat, 23 Feb 2013 22:03:45 GMT", "version": "v1" }, { "created": "Tue, 26 Feb 2013 05:16:27 GMT", "version": "v2" } ]
2013-02-27
[ [ "Turner", "D. Z.", "" ], [ "Nakshatrala", "K. B.", "" ], [ "Martinez", "M. J.", "" ] ]
In this paper, we consider the flow of an incompressible fluid in a deformable porous solid. We present a mathematical model using the framework offered by the theory of interacting continua. In its most general form, this framework provides a mechanism for capturing multiphase flow, deformation, chemical reactions and thermal processes, as well as interactions between the various physics in a conveniently implemented fashion. To simplify the presentation of the framework, results are presented for a particular model that can be seen as an extension of Darcy's equation (which assumes that the porous solid is rigid) that takes into account elastic deformation of the porous solid. The model also considers the effect of deformation on porosity. We show that using this model one can recover identical results as in the framework proposed by Biot and Terzaghi. Some salient features of the framework are as follows: (a) It is a consistent mixture theory model, and adheres to the laws and principles of continuum thermodynamics, (b) the model is capable of simulating various important phenomena like consolidation and surface subsidence, and (c) the model is amenable to several extensions. We also present numerical coupling algorithms to obtain coupled flow-deformation response. Several representative numerical examples are presented to illustrate the capability of the mathematical model and the performance of the computational framework.
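The consolidation phenomenon mentioned in (b) reduces, in one dimension, to Terzaghi's pore-pressure diffusion equation dp/dt = c_v * d2p/dz2. A minimal explicit finite-difference step with drained boundaries is sketched below; the discretization is an illustrative textbook scheme, not the authors' coupled flow-deformation solver.

```python
import numpy as np

def consolidation_step(p, cv, dz, dt):
    """One explicit FD step of 1-D Terzaghi consolidation.

    Drained boundaries (p = 0 at both ends); the explicit scheme is
    stable for cv * dt / dz**2 <= 0.5.
    """
    q = p.copy()
    q[1:-1] = p[1:-1] + cv * dt / dz**2 * (p[2:] - 2.0 * p[1:-1] + p[:-2])
    q[0] = q[-1] = 0.0
    return q
```

Iterating this step from a uniform excess pore pressure reproduces the familiar monotone dissipation toward the drained boundaries.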
2005.05465
Tom Kr\"uger
Tom Kr\"uger, Wolfgang Mauerer
Quantum Annealing-Based Software Components: An Experimental Case Study with SAT Solving
null
null
10.1145/3387940.3391472
null
cs.ET quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantum computers have the potential of solving problems more efficiently than classical computers. While first commercial prototypes have become available, the performance of such machines in practical application is still subject to exploration. Quantum computers will not entirely replace classical machines, but serve as accelerators for specific problems. This necessitates integrating quantum computational primitives into existing applications. In this paper, we perform a case study on how to augment existing software with quantum computational primitives for the Boolean satisfiability problem (SAT) implemented using a quantum annealer (QA). We discuss relevant quality measures for quantum components, and show that mathematically equivalent, but structurally different ways of transforming SAT to a QA can lead to substantial differences regarding these qualities. We argue that engineers need to be aware that (and which) details, although they may be less relevant in traditional software engineering, require considerable attention in quantum computing.
[ { "created": "Mon, 11 May 2020 22:20:17 GMT", "version": "v1" } ]
2020-05-13
[ [ "Krüger", "Tom", "" ], [ "Mauerer", "Wolfgang", "" ] ]
Quantum computers have the potential of solving problems more efficiently than classical computers. While first commercial prototypes have become available, the performance of such machines in practical application is still subject to exploration. Quantum computers will not entirely replace classical machines, but serve as accelerators for specific problems. This necessitates integrating quantum computational primitives into existing applications. In this paper, we perform a case study on how to augment existing software with quantum computational primitives for the Boolean satisfiability problem (SAT) implemented using a quantum annealer (QA). We discuss relevant quality measures for quantum components, and show that mathematically equivalent, but structurally different ways of transforming SAT to a QA can lead to substantial differences regarding these qualities. We argue that engineers need to be aware that (and which) details, although they may be less relevant in traditional software engineering, require considerable attention in quantum computing.
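As a concrete instance of "mathematically equivalent but structurally different" encodings: a two-literal clause (x OR y) can be penalized in QUBO form as (1-x)(1-y), which is 0 exactly when the clause is satisfied and 1 otherwise. The dictionary layout below is an illustrative sketch, not one of the encodings compared in the paper; three-literal clauses additionally require ancilla variables, which is one source of the structural differences the paper studies.

```python
def clause_qubo(i, j):
    """QUBO penalty for clause (x_i OR x_j): (1-x_i)(1-x_j) = 1 - x_i - x_j + x_i*x_j.

    Returns (quadratic coefficients, constant offset); diagonal entries
    (i, i) encode the linear terms, since x*x = x for binary x.
    """
    return {(i, i): -1.0, (j, j): -1.0, (i, j): 1.0}, 1.0

def qubo_energy(Q, offset, x):
    """Evaluate a QUBO on a binary assignment x (indexable by variable id)."""
    return offset + sum(w * x[i] * x[j] for (i, j), w in Q.items())
```

Summing such penalties over all clauses yields a QUBO whose ground states are exactly the satisfying assignments, which is what the annealer then searches for.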
2202.06297
Alexey Ovchinnikov
Mariya Bessonov, Ilia Ilmer, Tatiana Konstantinova, Alexey Ovchinnikov, Gleb Pogudin, Pedro Soto
Faster Gr\"obner bases for Lie derivatives of ODE systems via monomial orderings
null
null
10.1145/3666000.3669695
null
cs.SC cs.MS q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Symbolic computation for systems of differential equations is often computationally expensive. Many practical differential models have a form of polynomial or rational ODE system with specified outputs. A basic symbolic approach to analyze these models is to compute and then symbolically process the polynomial system obtained by sufficiently many Lie derivatives of the output functions with respect to the vector field given by the ODE system. In this paper, we present a method for speeding up Gr\"obner basis computation for such a class of polynomial systems by using specific monomial ordering, including weights for the variables, coming from the structure of the ODE model. We provide empirical results that show improvement across different symbolic computing frameworks and apply the method to speed up structural identifiability analysis of ODE models.
[ { "created": "Sun, 13 Feb 2022 12:40:11 GMT", "version": "v1" }, { "created": "Thu, 2 Feb 2023 17:01:39 GMT", "version": "v2" }, { "created": "Thu, 6 Jun 2024 21:18:53 GMT", "version": "v3" } ]
2024-06-10
[ [ "Bessonov", "Mariya", "" ], [ "Ilmer", "Ilia", "" ], [ "Konstantinova", "Tatiana", "" ], [ "Ovchinnikov", "Alexey", "" ], [ "Pogudin", "Gleb", "" ], [ "Soto", "Pedro", "" ] ]
Symbolic computation for systems of differential equations is often computationally expensive. Many practical differential models have a form of polynomial or rational ODE system with specified outputs. A basic symbolic approach to analyze these models is to compute and then symbolically process the polynomial system obtained by sufficiently many Lie derivatives of the output functions with respect to the vector field given by the ODE system. In this paper, we present a method for speeding up Gr\"obner basis computation for such a class of polynomial systems by using specific monomial ordering, including weights for the variables, coming from the structure of the ODE model. We provide empirical results that show improvement across different symbolic computing frameworks and apply the method to speed up structural identifiability analysis of ODE models.
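The core device — a monomial ordering with variable weights — compares monomials by weighted degree, breaking ties with a standard order. A minimal comparison on exponent vectors is sketched below; the specific weights and lexicographic tie-break are illustrative, whereas the paper derives the weights from the structure of the ODE model.

```python
def weighted_degree(exponents, weights):
    """Weighted total degree of a monomial given by its exponent vector."""
    return sum(w * e for w, e in zip(weights, exponents))

def wgt_greater(a, b, weights):
    """True if monomial a precedes monomial b under the weighted order:
    higher weighted degree first, lexicographic comparison on ties."""
    da, db = weighted_degree(a, weights), weighted_degree(b, weights)
    return da > db if da != db else a > b
```

Choosing weights that reflect how variables enter the Lie derivatives changes which monomials lead, and hence how much work the Buchberger/F4-style reduction performs.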
1611.06468
Rui Liu
Rui Liu, Xiaoli Zhang
Generating machine-executable plans from end-user's natural-language instructions
16 pages, 10 figures, article submitted to Robotics and Computer-Integrated Manufacturing, 2016 Aug
null
null
null
cs.AI cs.CL cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is critical for advanced manufacturing machines to autonomously execute a task by following an end-user's natural language (NL) instructions. However, NL instructions are usually ambiguous and abstract so that the machines may misunderstand and incorrectly execute the task. To address this NL-based human-machine communication problem and enable the machines to appropriately execute tasks by following the end-user's NL instructions, we developed a Machine-Executable-Plan-Generation (exePlan) method. The exePlan method conducts task-centered semantic analysis to extract task-related information from ambiguous NL instructions. In addition, the method specifies machine execution parameters to generate a machine-executable plan by interpreting abstract NL instructions. To evaluate the exePlan method, an industrial robot Baxter was instructed by NL to perform three types of industrial tasks {'drill a hole', 'clean a spot', 'install a screw'}. The experiment results proved that the exePlan method was effective in generating machine-executable plans from the end-user's NL instructions. Such a method has the promise to endow a machine with the ability of NL-instructed task execution.
[ { "created": "Sun, 20 Nov 2016 04:06:47 GMT", "version": "v1" } ]
2016-11-22
[ [ "Liu", "Rui", "" ], [ "Zhang", "Xiaoli", "" ] ]
It is critical for advanced manufacturing machines to autonomously execute a task by following an end-user's natural language (NL) instructions. However, NL instructions are usually ambiguous and abstract so that the machines may misunderstand and incorrectly execute the task. To address this NL-based human-machine communication problem and enable the machines to appropriately execute tasks by following the end-user's NL instructions, we developed a Machine-Executable-Plan-Generation (exePlan) method. The exePlan method conducts task-centered semantic analysis to extract task-related information from ambiguous NL instructions. In addition, the method specifies machine execution parameters to generate a machine-executable plan by interpreting abstract NL instructions. To evaluate the exePlan method, an industrial robot Baxter was instructed by NL to perform three types of industrial tasks {'drill a hole', 'clean a spot', 'install a screw'}. The experiment results proved that the exePlan method was effective in generating machine-executable plans from the end-user's NL instructions. Such a method has the promise to endow a machine with the ability of NL-instructed task execution.
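A drastically simplified stand-in for the slot-extraction step — a regular expression pulling an action verb and target object out of an instruction — is shown below. The real exePlan method performs full task-centered semantic analysis and parameter specification, so this toy parser is only meant to make the input/output shape of that step tangible.

```python
import re

def parse_instruction(text):
    """Toy extraction of (action, target) from instructions like 'drill a hole'.

    Returns None when no verb + article + object pattern is found.
    """
    m = re.match(r"(?P<action>\w+)\s+(?:a|an|the)\s+(?P<target>\w+)",
                 text.strip().lower())
    return (m.group("action"), m.group("target")) if m else None
```

A machine-executable plan would then attach execution parameters (tool, position, depth) to the extracted action-target pair.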
1801.02358
Priyanka Mukhopadhyay Ms
Divesh Aggarwal, Priyanka Mukhopadhyay
Improved algorithms for the Shortest Vector Problem and the Closest Vector Problem in the infinity norm
Changed the title
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Blomer and Naewe [BN09] modified the randomized sieving algorithm of Ajtai, Kumar and Sivakumar [AKS01] to solve the shortest vector problem (SVP). The algorithm starts with $N = 2^{O(n)}$ randomly chosen vectors in the lattice and employs a sieving procedure to iteratively obtain shorter vectors in the lattice. The running time of the sieving procedure is quadratic in $N$. We study this problem for the special but important case of the $\ell_\infty$ norm. We give a new sieving procedure that runs in time linear in $N$, thereby significantly improving the running time of the algorithm for SVP in the $\ell_\infty$ norm. As in [AKS02, BN09], we also extend this algorithm to obtain significantly faster algorithms for approximate versions of the shortest vector problem and the closest vector problem (CVP) in the $\ell_\infty$ norm. We also show that the heuristic sieving algorithms of Nguyen and Vidick [NV08] and Wang et al. [WLTB11] can also be analyzed in the $\ell_{\infty}$ norm. The main technical contribution in this part is to calculate the expected volume of intersection of a unit ball centred at the origin and another ball of a different radius centred at a uniformly random point on the boundary of the unit ball. This might be of independent interest.
[ { "created": "Mon, 8 Jan 2018 09:43:43 GMT", "version": "v1" }, { "created": "Tue, 15 May 2018 17:04:41 GMT", "version": "v2" } ]
2018-05-16
[ [ "Aggarwal", "Divesh", "" ], [ "Mukhopadhyay", "Priyanka", "" ] ]
Blomer and Naewe [BN09] modified the randomized sieving algorithm of Ajtai, Kumar and Sivakumar [AKS01] to solve the shortest vector problem (SVP). The algorithm starts with $N = 2^{O(n)}$ randomly chosen vectors in the lattice and employs a sieving procedure to iteratively obtain shorter vectors in the lattice. The running time of the sieving procedure is quadratic in $N$. We study this problem for the special but important case of the $\ell_\infty$ norm. We give a new sieving procedure that runs in time linear in $N$, thereby significantly improving the running time of the algorithm for SVP in the $\ell_\infty$ norm. As in [AKS02, BN09], we also extend this algorithm to obtain significantly faster algorithms for approximate versions of the shortest vector problem and the closest vector problem (CVP) in the $\ell_\infty$ norm. We also show that the heuristic sieving algorithms of Nguyen and Vidick [NV08] and Wang et al. [WLTB11] can also be analyzed in the $\ell_{\infty}$ norm. The main technical contribution in this part is to calculate the expected volume of intersection of a unit ball centred at the origin and another ball of a different radius centred at a uniformly random point on the boundary of the unit ball. This might be of independent interest.
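The linear-in-$N$ flavor of the new sieve can be conveyed by bucketing: in the $\ell_\infty$ norm, hypercube cells of side R partition space, and subtracting one representative per cell from every other vector in the same cell yields differences of $\ell_\infty$ norm below R in a single dictionary pass. This bucketing sketch conveys why one pass suffices; it is not the paper's exact sieving procedure.

```python
import math

def linf(v):
    """l_infinity norm of a vector."""
    return max(abs(c) for c in v)

def sieve_round(vectors, R):
    """One linear-time sieving pass in the l_inf norm.

    Vectors are bucketed by the hypercube cell of side R containing them;
    the first vector in each cell is kept as a representative, and every
    later vector is replaced by its difference from the representative.
    Two points in the same cell differ by less than R in every coordinate,
    so each output difference has l_inf norm strictly below R.
    """
    reps, shorter = {}, []
    for v in vectors:
        cell = tuple(math.floor(c / R) for c in v)
        if cell in reps:
            shorter.append(tuple(a - b for a, b in zip(v, reps[cell])))
        else:
            reps[cell] = v
    return shorter
```

Each pass touches every vector once, in contrast to pairwise-comparison sieves whose cost is quadratic in $N$.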
2009.11180
Lance Eliot
Lance Eliot
AI and Legal Argumentation: Aligning the Autonomous Levels of AI Legal Reasoning
26 pages, 9 figures. arXiv admin note: text overlap with arXiv:2009.02243
null
null
null
cs.CY cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Legal argumentation is a vital cornerstone of justice, underpinning an adversarial form of law, and extensive research has attempted to augment or undertake legal argumentation via the use of computer-based automation including Artificial Intelligence (AI). AI advances in Natural Language Processing (NLP) and Machine Learning (ML) have especially furthered the capabilities of leveraging AI for aiding legal professionals, doing so in ways that are modeled here as CARE, namely Crafting, Assessing, Refining, and Engaging in legal argumentation. In addition to AI-enabled legal argumentation serving to augment human-based lawyering, an aspirational goal of this multi-disciplinary field consists of ultimately achieving autonomously effected human-equivalent legal argumentation. As such, an innovative meta-approach is proposed to apply the Levels of Autonomy (LoA) of AI Legal Reasoning (AILR) to the maturation of AI and Legal Argumentation (AILA), proffering a new means of gauging progress in this ever-evolving and rigorously sought domain.
[ { "created": "Fri, 11 Sep 2020 22:05:40 GMT", "version": "v1" } ]
2020-09-24
[ [ "Eliot", "Lance", "" ] ]
Legal argumentation is a vital cornerstone of justice, underpinning an adversarial form of law, and extensive research has attempted to augment or undertake legal argumentation via the use of computer-based automation including Artificial Intelligence (AI). AI advances in Natural Language Processing (NLP) and Machine Learning (ML) have especially furthered the capabilities of leveraging AI for aiding legal professionals, doing so in ways that are modeled here as CARE, namely Crafting, Assessing, Refining, and Engaging in legal argumentation. In addition to AI-enabled legal argumentation serving to augment human-based lawyering, an aspirational goal of this multi-disciplinary field consists of ultimately achieving autonomously effected human-equivalent legal argumentation. As such, an innovative meta-approach is proposed to apply the Levels of Autonomy (LoA) of AI Legal Reasoning (AILR) to the maturation of AI and Legal Argumentation (AILA), proffering a new means of gauging progress in this ever-evolving and rigorously sought domain.
2312.08888
Kyra Ahrens
Kyra Ahrens, Hans Hergen Lehmann, Jae Hee Lee, Stefan Wermter
Read Between the Layers: Leveraging Multi-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models
Accepted for publication in Transactions of Machine Learning Research (TMLR) journal
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
We address the Continual Learning (CL) problem, wherein a model must learn a sequence of tasks from non-stationary distributions while preserving prior knowledge upon encountering new experiences. With the advancement of foundation models, CL research has pivoted from the initial learning-from-scratch paradigm towards utilizing generic features from large-scale pre-training. However, existing approaches to CL with pre-trained models primarily focus on separating class-specific features from the final representation layer and neglect the potential of intermediate representations to capture low- and mid-level features, which are more invariant to domain shifts. In this work, we propose LayUP, a new prototype-based approach to CL that leverages second-order feature statistics from multiple intermediate layers of a pre-trained network. Our method is conceptually simple, does not require access to prior data, and works out of the box with any foundation model. LayUP surpasses the state of the art in four of the seven class-incremental learning benchmarks, all three domain-incremental learning benchmarks and in six of the seven online continual learning benchmarks, while significantly reducing memory and computational requirements compared to existing baselines. Our results demonstrate that fully exhausting the representational capacities of pre-trained models in CL goes well beyond their final embeddings.
[ { "created": "Wed, 13 Dec 2023 13:11:44 GMT", "version": "v1" }, { "created": "Wed, 17 Apr 2024 19:32:47 GMT", "version": "v2" }, { "created": "Fri, 5 Jul 2024 09:43:41 GMT", "version": "v3" } ]
2024-07-08
[ [ "Ahrens", "Kyra", "" ], [ "Lehmann", "Hans Hergen", "" ], [ "Lee", "Jae Hee", "" ], [ "Wermter", "Stefan", "" ] ]
We address the Continual Learning (CL) problem, wherein a model must learn a sequence of tasks from non-stationary distributions while preserving prior knowledge upon encountering new experiences. With the advancement of foundation models, CL research has pivoted from the initial learning-from-scratch paradigm towards utilizing generic features from large-scale pre-training. However, existing approaches to CL with pre-trained models primarily focus on separating class-specific features from the final representation layer and neglect the potential of intermediate representations to capture low- and mid-level features, which are more invariant to domain shifts. In this work, we propose LayUP, a new prototype-based approach to CL that leverages second-order feature statistics from multiple intermediate layers of a pre-trained network. Our method is conceptually simple, does not require access to prior data, and works out of the box with any foundation model. LayUP surpasses the state of the art in four of the seven class-incremental learning benchmarks, all three domain-incremental learning benchmarks and in six of the seven online continual learning benchmarks, while significantly reducing memory and computational requirements compared to existing baselines. Our results demonstrate that fully exhausting the representational capacities of pre-trained models in CL goes well beyond their final embeddings.
1911.07875
Sandhya Tripathi
Aditya Petety, Sandhya Tripathi, N Hemachandra
Attribute noise robust binary classification
Accepted for Student Abstract presentation at AAAI2020
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
We consider the problem of learning linear classifiers when both features and labels are binary. In addition, the features are noisy, i.e., they could be flipped with an unknown probability. In the Sy-De attribute noise model, where all features could be noisy together with the same probability, we show that the $0$-$1$ loss ($l_{0-1}$) need not be robust but a popular surrogate, the squared loss ($l_{sq}$), is. In the Asy-In attribute noise model, we prove that $l_{0-1}$ is robust for any distribution over a 2-dimensional feature space. However, due to the computational intractability of $l_{0-1}$, we resort to $l_{sq}$ and observe that it need not be Asy-In noise robust. Our empirical results support the Sy-De robustness of the squared loss for low to moderate noise rates.
[ { "created": "Mon, 18 Nov 2019 19:03:02 GMT", "version": "v1" } ]
2019-11-20
[ [ "Petety", "Aditya", "" ], [ "Tripathi", "Sandhya", "" ], [ "Hemachandra", "N", "" ] ]
We consider the problem of learning linear classifiers when both features and labels are binary. In addition, the features are noisy, i.e., they could be flipped with an unknown probability. In the Sy-De attribute noise model, where all features could be noisy together with the same probability, we show that the $0$-$1$ loss ($l_{0-1}$) need not be robust but a popular surrogate, the squared loss ($l_{sq}$), is. In the Asy-In attribute noise model, we prove that $l_{0-1}$ is robust for any distribution over a 2-dimensional feature space. However, due to the computational intractability of $l_{0-1}$, we resort to $l_{sq}$ and observe that it need not be Asy-In noise robust. Our empirical results support the Sy-De robustness of the squared loss for low to moderate noise rates.
2404.12821
Geoffrey Goodell
William Macpherson and Geoffrey Goodell
Benchmarking the performance of a self-custody, non-ledger-based, obliviously managed digital payment system
23 pages, 10 figures
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As global governments intensify efforts to operationalize retail central bank digital currencies (CBDCs), the imperative for architectures that preserve user privacy has never been more pronounced. This paper advances an existing retail CBDC framework developed at University College London. Utilizing the capabilities of the Comet research framework, our proposed design allows users to retain direct custody of their assets without the need for intermediary service providers, all while preserving transactional anonymity. The study unveils a novel technique to expedite the retrieval of Proof of Provenance, significantly accelerating the verification of transaction legitimacy through the refinement of Merkle Trie structures. In parallel, we introduce a streamlined Digital Ledger designed to offer fast, immutable, and decentralized transaction validation within a permissioned ecosystem. The ultimate objective of this research is to benchmark the performance of the legacy system formulated by the original Comet research team against the newly devised system elucidated in this paper. Our endeavour is to establish a foundational design for a scalable national infrastructure proficient in seamlessly processing thousands of transactions in real-time, without compromising consumer privacy or data integrity.
[ { "created": "Fri, 19 Apr 2024 11:57:32 GMT", "version": "v1" } ]
2024-04-22
[ [ "Macpherson", "William", "" ], [ "Goodell", "Geoffrey", "" ] ]
As global governments intensify efforts to operationalize retail central bank digital currencies (CBDCs), the imperative for architectures that preserve user privacy has never been more pronounced. This paper advances an existing retail CBDC framework developed at University College London. Utilizing the capabilities of the Comet research framework, our proposed design allows users to retain direct custody of their assets without the need for intermediary service providers, all while preserving transactional anonymity. The study unveils a novel technique to expedite the retrieval of Proof of Provenance, significantly accelerating the verification of transaction legitimacy through the refinement of Merkle Trie structures. In parallel, we introduce a streamlined Digital Ledger designed to offer fast, immutable, and decentralized transaction validation within a permissioned ecosystem. The ultimate objective of this research is to benchmark the performance of the legacy system formulated by the original Comet research team against the newly devised system elucidated in this paper. Our endeavour is to establish a foundational design for a scalable national infrastructure proficient in seamlessly processing thousands of transactions in real-time, without compromising consumer privacy or data integrity.
1707.08273
Noseong Park
Noseong Park, Ankesh Anand, Joel Ruben Antony Moniz, Kookjin Lee, Tanmoy Chakraborty, Jaegul Choo, Hongkyu Park, Youngmin Kim
MMGAN: Manifold Matching Generative Adversarial Network
the 24th International Conference on Pattern Recognition (ICPR), 2018
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is well-known that GANs are difficult to train, and several different techniques have been proposed in order to stabilize their training. In this paper, we propose a novel training method called manifold-matching, and a new GAN model called manifold-matching GAN (MMGAN). MMGAN finds two manifolds representing the vector representations of real and fake images. If these two manifolds match, it means that real and fake images are statistically identical. To assist the manifold-matching task, we also use i) kernel tricks to find better manifold structures, ii) moving-averaged manifolds across mini-batches, and iii) a regularizer based on a correlation matrix to suppress mode collapse. We conduct in-depth experiments with three image datasets and compare with several state-of-the-art GAN models. 32.4% of the images generated by the proposed MMGAN are recognized as fake images during our user study (a 16% enhancement compared to other state-of-the-art models). MMGAN achieved an unsupervised inception score of 7.8 on CIFAR-10.
[ { "created": "Wed, 26 Jul 2017 02:09:34 GMT", "version": "v1" }, { "created": "Sun, 30 Jul 2017 06:29:16 GMT", "version": "v2" }, { "created": "Thu, 21 Sep 2017 18:31:19 GMT", "version": "v3" }, { "created": "Thu, 12 Apr 2018 06:46:15 GMT", "version": "v4" } ]
2018-04-13
[ [ "Park", "Noseong", "" ], [ "Anand", "Ankesh", "" ], [ "Moniz", "Joel Ruben Antony", "" ], [ "Lee", "Kookjin", "" ], [ "Chakraborty", "Tanmoy", "" ], [ "Choo", "Jaegul", "" ], [ "Park", "Hongkyu", "" ], [ "Kim", "Youngmin", "" ] ]
It is well-known that GANs are difficult to train, and several different techniques have been proposed in order to stabilize their training. In this paper, we propose a novel training method called manifold-matching, and a new GAN model called manifold-matching GAN (MMGAN). MMGAN finds two manifolds representing the vector representations of real and fake images. If these two manifolds match, it means that real and fake images are statistically identical. To assist the manifold-matching task, we also use i) kernel tricks to find better manifold structures, ii) moving-averaged manifolds across mini-batches, and iii) a regularizer based on a correlation matrix to suppress mode collapse. We conduct in-depth experiments with three image datasets and compare with several state-of-the-art GAN models. 32.4% of the images generated by the proposed MMGAN are recognized as fake images during our user study (a 16% enhancement compared to other state-of-the-art models). MMGAN achieved an unsupervised inception score of 7.8 on CIFAR-10.
1811.06871
Sandor Kisfaludi-Bak
S\'andor Kisfaludi-Bak, Jesper Nederlof and Erik Jan van Leeuwen
Nearly ETH-Tight Algorithms for Planar Steiner Tree with Terminals on Few Faces
32 pages, 8 figures, accepted at SODA 2019
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Planar Steiner Tree problem is one of the most fundamental NP-complete problems as it models many network design problems. Recall that an instance of this problem consists of a graph with edge weights, and a subset of vertices (often called terminals); the goal is to find a subtree of the graph of minimum total weight that connects all terminals. A seminal paper by Erickson et al. [Math. Oper. Res., 1987] considers instances where the underlying graph is planar and all terminals can be covered by the boundary of $k$ faces. Erickson et al. show that the problem can be solved by an algorithm using $n^{O(k)}$ time and $n^{O(k)}$ space, where $n$ denotes the number of vertices of the input graph. In the past 30 years there has been no significant improvement of this algorithm, despite several efforts. In this work, we give an algorithm for Planar Steiner Tree with running time $2^{O(k)} n^{O(\sqrt{k})}$ using only polynomial space. Furthermore, we show that the running time of our algorithm is almost tight: we prove that there is no $f(k)n^{o(\sqrt{k})}$ algorithm for Planar Steiner Tree for any computable function $f$, unless the Exponential Time Hypothesis fails.
[ { "created": "Fri, 16 Nov 2018 15:47:38 GMT", "version": "v1" } ]
2018-11-19
[ [ "Kisfaludi-Bak", "Sándor", "" ], [ "Nederlof", "Jesper", "" ], [ "van Leeuwen", "Erik Jan", "" ] ]
The Planar Steiner Tree problem is one of the most fundamental NP-complete problems as it models many network design problems. Recall that an instance of this problem consists of a graph with edge weights, and a subset of vertices (often called terminals); the goal is to find a subtree of the graph of minimum total weight that connects all terminals. A seminal paper by Erickson et al. [Math. Oper. Res., 1987] considers instances where the underlying graph is planar and all terminals can be covered by the boundary of $k$ faces. Erickson et al. show that the problem can be solved by an algorithm using $n^{O(k)}$ time and $n^{O(k)}$ space, where $n$ denotes the number of vertices of the input graph. In the past 30 years there has been no significant improvement of this algorithm, despite several efforts. In this work, we give an algorithm for Planar Steiner Tree with running time $2^{O(k)} n^{O(\sqrt{k})}$ using only polynomial space. Furthermore, we show that the running time of our algorithm is almost tight: we prove that there is no $f(k)n^{o(\sqrt{k})}$ algorithm for Planar Steiner Tree for any computable function $f$, unless the Exponential Time Hypothesis fails.
2003.02638
Marcus Ebner Von Eschenbach
Marcus Ebner von Eschenbach, Binyamin Manela, Jan Peters, Armin Biess
Metric-Based Imitation Learning Between Two Dissimilar Anthropomorphic Robotic Arms
8 pages, 5 figures, submitted to IEEE Robotics and Automation Letters/IROS 2020
null
null
null
cs.RO cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
The development of autonomous robotic systems that can learn from human demonstrations to imitate a desired behavior - rather than being manually programmed - has huge technological potential. One major challenge in imitation learning is the correspondence problem: how to establish corresponding states and actions between expert and learner, when the embodiments of the agents are different (morphology, dynamics, degrees of freedom, etc.). Many existing approaches in imitation learning circumvent the correspondence problem, for example, kinesthetic teaching or teleoperation, which are performed on the robot. In this work we explicitly address the correspondence problem by introducing a distance measure between dissimilar embodiments. This measure is then used as a loss function for static pose imitation and as a feedback signal within a model-free deep reinforcement learning framework for dynamic movement imitation between two anthropomorphic robotic arms in simulation. We find that the measure is well suited for describing the similarity between embodiments and for learning imitation policies by distance minimization.
[ { "created": "Tue, 25 Feb 2020 19:47:19 GMT", "version": "v1" } ]
2020-03-06
[ [ "von Eschenbach", "Marcus Ebner", "" ], [ "Manela", "Binyamin", "" ], [ "Peters", "Jan", "" ], [ "Biess", "Armin", "" ] ]
The development of autonomous robotic systems that can learn from human demonstrations to imitate a desired behavior - rather than being manually programmed - has huge technological potential. One major challenge in imitation learning is the correspondence problem: how to establish corresponding states and actions between expert and learner, when the embodiments of the agents are different (morphology, dynamics, degrees of freedom, etc.). Many existing approaches in imitation learning circumvent the correspondence problem, for example, kinesthetic teaching or teleoperation, which are performed on the robot. In this work we explicitly address the correspondence problem by introducing a distance measure between dissimilar embodiments. This measure is then used as a loss function for static pose imitation and as a feedback signal within a model-free deep reinforcement learning framework for dynamic movement imitation between two anthropomorphic robotic arms in simulation. We find that the measure is well suited for describing the similarity between embodiments and for learning imitation policies by distance minimization.
0909.2455
Grenville Croll
Andrew McGeady, Joseph McGouran
End User Computing in AIB Capital Markets: A Management Summary
7 Pages. Referenced & submitted by GJC in Sept 2009
Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2008 25-31 ISBN 978-905617-69-2
null
null
cs.HC cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is a management summary of how the area of End User Computing (EUC) has been addressed by AIB Capital Markets. The development of an effective policy is described, as well as the process by which a register of critical EUC applications was assembled and how those applications were brought into a controlled environment. A number of findings are included as well as recommendations for others who would seek to run a similar project.
[ { "created": "Sun, 13 Sep 2009 23:51:59 GMT", "version": "v1" } ]
2009-09-15
[ [ "McGeady", "Andrew", "" ], [ "McGouran", "Joseph", "" ] ]
This paper is a management summary of how the area of End User Computing (EUC) has been addressed by AIB Capital Markets. The development of an effective policy is described, as well as the process by which a register of critical EUC applications was assembled and how those applications were brought into a controlled environment. A number of findings are included as well as recommendations for others who would seek to run a similar project.
2208.10552
Yahav Avigal
Yahav Avigal, Lars Berscheid, Tamim Asfour, Torsten Kr\"oger, Ken Goldberg
SpeedFolding: Learning Efficient Bimanual Folding of Garments
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2022
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Folding garments reliably and efficiently is a long-standing challenge in robotic manipulation due to the complex dynamics and high-dimensional configuration space of garments. An intuitive approach is to initially manipulate the garment to a canonical smooth configuration before folding. In this work, we develop SpeedFolding, a reliable and efficient bimanual system, which given user-defined instructions as folding lines, manipulates an initially crumpled garment to (1) a smoothed and (2) a folded configuration. Our primary contribution is a novel neural network architecture that is able to predict pairs of gripper poses to parameterize a diverse set of bimanual action primitives. After learning from 4300 human-annotated and self-supervised actions, the robot is able to fold garments from a random initial configuration in under 120s on average with a success rate of 93%. Real-world experiments show that the system is able to generalize to unseen garments of different color, shape, and stiffness. While prior work achieved 3-6 Folds Per Hour (FPH), SpeedFolding achieves 30-40 FPH.
[ { "created": "Mon, 22 Aug 2022 19:01:31 GMT", "version": "v1" }, { "created": "Fri, 9 Sep 2022 18:04:21 GMT", "version": "v2" } ]
2022-09-13
[ [ "Avigal", "Yahav", "" ], [ "Berscheid", "Lars", "" ], [ "Asfour", "Tamim", "" ], [ "Kröger", "Torsten", "" ], [ "Goldberg", "Ken", "" ] ]
Folding garments reliably and efficiently is a long-standing challenge in robotic manipulation due to the complex dynamics and high-dimensional configuration space of garments. An intuitive approach is to initially manipulate the garment to a canonical smooth configuration before folding. In this work, we develop SpeedFolding, a reliable and efficient bimanual system, which given user-defined instructions as folding lines, manipulates an initially crumpled garment to (1) a smoothed and (2) a folded configuration. Our primary contribution is a novel neural network architecture that is able to predict pairs of gripper poses to parameterize a diverse set of bimanual action primitives. After learning from 4300 human-annotated and self-supervised actions, the robot is able to fold garments from a random initial configuration in under 120s on average with a success rate of 93%. Real-world experiments show that the system is able to generalize to unseen garments of different color, shape, and stiffness. While prior work achieved 3-6 Folds Per Hour (FPH), SpeedFolding achieves 30-40 FPH.
1904.05754
Ping-En Lu
Yu-Hsien Peng, Ping-En Lu, Cheng-Shang Chang and Duan-Shin Lee
Percolation Threshold for Competitive Influence in Random Networks
11 pages, 9 figures, this article is the complete version (with proofs) of the IEEE Global Communications Conference 2019 review paper
in IEEE Transactions on Computational Social Systems, vol. 7, no. 4, pp. 991-1003, Aug. 2020
10.1109/TCSS.2020.2995740
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a new averaging model for modeling the competitive influence of $K$ candidates among $n$ voters in an election process. For such an influence propagation model, we address the question of how many seeded voters a candidate needs to place among undecided voters in order to win an election. We show that for a random network generated from the stochastic block model, there exists a percolation threshold for a candidate to win the election if the number of seeded voters placed by the candidate exceeds the threshold. By conducting extensive experiments, we show that our theoretical percolation thresholds are very close to those obtained from simulations for random networks and the errors are within $10\%$ for a real-world network.
[ { "created": "Thu, 11 Apr 2019 15:13:15 GMT", "version": "v1" }, { "created": "Sun, 14 Apr 2019 12:31:25 GMT", "version": "v2" }, { "created": "Sun, 21 Apr 2019 13:04:44 GMT", "version": "v3" } ]
2020-09-22
[ [ "Peng", "Yu-Hsien", "" ], [ "Lu", "Ping-En", "" ], [ "Chang", "Cheng-Shang", "" ], [ "Lee", "Duan-Shin", "" ] ]
In this paper, we propose a new averaging model for modeling the competitive influence of $K$ candidates among $n$ voters in an election process. For such an influence propagation model, we address the question of how many seeded voters a candidate needs to place among undecided voters in order to win an election. We show that for a random network generated from the stochastic block model, there exists a percolation threshold for a candidate to win the election if the number of seeded voters placed by the candidate exceeds the threshold. By conducting extensive experiments, we show that our theoretical percolation thresholds are very close to those obtained from simulations for random networks and the errors are within $10\%$ for a real-world network.
2104.00179
Chunhui Liu
Chunhui Liu, Xinyu Li, Hao Chen, Davide Modolo, Joseph Tighe
Selective Feature Compression for Efficient Activity Recognition Inference
Accepted by ICCV 2021
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Most action recognition solutions rely on dense sampling to precisely cover the informative temporal clip. Extensively searching the temporal region is expensive for a real-world application. In this work, we focus on improving the inference efficiency of current action recognition backbones on trimmed videos, and illustrate that one action model can also cover the informative region by dropping non-informative features. We present Selective Feature Compression (SFC), an action recognition inference strategy that greatly increases model inference efficiency without any accuracy compromise. Differently from previous works that compress kernel sizes and decrease the channel dimension, we propose to compress the feature flow in the spatio-temporal dimension without changing any backbone parameters. Our experiments on Kinetics-400, UCF101 and ActivityNet show that SFC is able to reduce inference time by 6-7x and memory usage by 5-6x compared with the commonly used 30-crop dense sampling procedure, while also slightly improving Top-1 accuracy. We thoroughly evaluate SFC and all its components, both quantitatively and qualitatively, and show how SFC learns to attend to important video regions and to drop temporal features that are uninformative for the task of action recognition.
[ { "created": "Thu, 1 Apr 2021 00:54:51 GMT", "version": "v1" }, { "created": "Thu, 29 Jul 2021 10:59:15 GMT", "version": "v2" } ]
2021-07-30
[ [ "Liu", "Chunhui", "" ], [ "Li", "Xinyu", "" ], [ "Chen", "Hao", "" ], [ "Modolo", "Davide", "" ], [ "Tighe", "Joseph", "" ] ]
Most action recognition solutions rely on dense sampling to precisely cover the informative temporal clip. Extensively searching the temporal region is expensive for a real-world application. In this work, we focus on improving the inference efficiency of current action recognition backbones on trimmed videos, and illustrate that one action model can also cover the informative region by dropping non-informative features. We present Selective Feature Compression (SFC), an action recognition inference strategy that greatly increases model inference efficiency without any accuracy compromise. Differently from previous works that compress kernel sizes and decrease the channel dimension, we propose to compress the feature flow in the spatio-temporal dimension without changing any backbone parameters. Our experiments on Kinetics-400, UCF101 and ActivityNet show that SFC is able to reduce inference time by 6-7x and memory usage by 5-6x compared with the commonly used 30-crop dense sampling procedure, while also slightly improving Top-1 accuracy. We thoroughly evaluate SFC and all its components, both quantitatively and qualitatively, and show how SFC learns to attend to important video regions and to drop temporal features that are uninformative for the task of action recognition.
1705.01501
Shankara Narayanan Krishna
Shankara Narayanan Krishna, Khushraj Madnani, P. K. Pandya
Making Metric Temporal Logic Rational
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study an extension of $\mtl$ in pointwise time with rational expression guarded modality $\reg_I(\re)$ where $\re$ is a rational expression over subformulae. We study the decidability and expressiveness of this extension ($\mtl$+$\varphi \ureg_{I, \re} \varphi$+$\reg_{I,\re}\varphi$), called $\regmtl$, as well as its fragment $\sfmtl$ where only star-free rational expressions are allowed. Using the technique of temporal projections, we show that $\regmtl$ has decidable satisfiability by giving an equisatisfiable reduction to $\mtl$. We also identify a subclass $\mitl+\ureg$ of $\regmtl$ for which our equisatisfiable reduction gives rise to formulae of $\mitl$, yielding elementary decidability. As our second main result, we show a tight automaton-logic connection between $\sfmtl$ and partially ordered (or very weak) 1-clock alternating timed automata.
[ { "created": "Sat, 29 Apr 2017 18:08:50 GMT", "version": "v1" } ]
2017-05-04
[ [ "Krishna", "Shankara Narayanan", "" ], [ "Madnani", "Khushraj", "" ], [ "Pandya", "P. K.", "" ] ]
We study an extension of $\mtl$ in pointwise time with rational expression guarded modality $\reg_I(\re)$ where $\re$ is a rational expression over subformulae. We study the decidability and expressiveness of this extension ($\mtl$+$\varphi \ureg_{I, \re} \varphi$+$\reg_{I,\re}\varphi$), called $\regmtl$, as well as its fragment $\sfmtl$ where only star-free rational expressions are allowed. Using the technique of temporal projections, we show that $\regmtl$ has decidable satisfiability by giving an equisatisfiable reduction to $\mtl$. We also identify a subclass $\mitl+\ureg$ of $\regmtl$ for which our equisatisfiable reduction gives rise to formulae of $\mitl$, yielding elementary decidability. As our second main result, we show a tight automaton-logic connection between $\sfmtl$ and partially ordered (or very weak) 1-clock alternating timed automata.
2305.15431
Zimu Wang
Zimu Wang, Jiashuo Liu, Hao Zou, Xingxuan Zhang, Yue He, Dongxu Liang, Peng Cui
Exploring and Exploiting Data Heterogeneity in Recommendation
14 pages, 14 figures
null
null
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Massive amounts of data are the foundation of data-driven recommendation models. As an inherent nature of big data, data heterogeneity widely exists in real-world recommendation systems. It reflects the differences in the properties among sub-populations. Ignoring the heterogeneity in recommendation data could limit the performance of recommendation models, hurt sub-populational robustness, and make the models misled by biases. However, data heterogeneity has not attracted substantial attention in the recommendation community. This inspires us to adequately explore and exploit heterogeneity for solving the above problems and assisting data analysis. In this work, we focus on exploring two representative categories of heterogeneity in recommendation data, namely the heterogeneity of prediction mechanism and of covariate distribution, and propose an algorithm that explores the heterogeneity through a bilevel clustering method. Furthermore, the uncovered heterogeneity is exploited for two purposes in recommendation scenarios: prediction with multiple sub-models and support for debiasing. Extensive experiments on real-world data validate the existence of heterogeneity in recommendation data and the effectiveness of exploring and exploiting data heterogeneity in recommendation.
[ { "created": "Sun, 21 May 2023 11:01:14 GMT", "version": "v1" } ]
2023-05-26
[ [ "Wang", "Zimu", "" ], [ "Liu", "Jiashuo", "" ], [ "Zou", "Hao", "" ], [ "Zhang", "Xingxuan", "" ], [ "He", "Yue", "" ], [ "Liang", "Dongxu", "" ], [ "Cui", "Peng", "" ] ]
Massive amounts of data are the foundation of data-driven recommendation models. As an inherent nature of big data, data heterogeneity widely exists in real-world recommendation systems. It reflects the differences in the properties among sub-populations. Ignoring the heterogeneity in recommendation data could limit the performance of recommendation models, hurt sub-populational robustness, and make the models misled by biases. However, data heterogeneity has not attracted substantial attention in the recommendation community. This inspires us to adequately explore and exploit heterogeneity for solving the above problems and assisting data analysis. In this work, we focus on exploring two representative categories of heterogeneity in recommendation data, namely the heterogeneity of prediction mechanism and of covariate distribution, and propose an algorithm that explores the heterogeneity through a bilevel clustering method. Furthermore, the uncovered heterogeneity is exploited for two purposes in recommendation scenarios: prediction with multiple sub-models and support for debiasing. Extensive experiments on real-world data validate the existence of heterogeneity in recommendation data and the effectiveness of exploring and exploiting data heterogeneity in recommendation.
1911.11357
Samaneh Azadi
Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic
Semantic Bottleneck Scene Generation
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coupling the high-fidelity generation capabilities of label-conditional image synthesis methods with the flexibility of unconditional generative models, we propose a semantic bottleneck GAN model for unconditional synthesis of complex scenes. We assume pixel-wise segmentation labels are available during training and use them to learn the scene structure. During inference, our model first synthesizes a realistic segmentation layout from scratch, then synthesizes a realistic scene conditioned on that layout. For the former, we use an unconditional progressive segmentation generation network that captures the distribution of realistic semantic scene layouts. For the latter, we use a conditional segmentation-to-image synthesis network that captures the distribution of photo-realistic images conditioned on the semantic layout. When trained end-to-end, the resulting model outperforms state-of-the-art generative models in unsupervised image synthesis on two challenging domains in terms of the Frechet Inception Distance and user-study evaluations. Moreover, we demonstrate the generated segmentation maps can be used as additional training data to strongly improve recent segmentation-to-image synthesis networks.
[ { "created": "Tue, 26 Nov 2019 06:01:09 GMT", "version": "v1" } ]
2019-11-27
[ [ "Azadi", "Samaneh", "" ], [ "Tschannen", "Michael", "" ], [ "Tzeng", "Eric", "" ], [ "Gelly", "Sylvain", "" ], [ "Darrell", "Trevor", "" ], [ "Lucic", "Mario", "" ] ]
Coupling the high-fidelity generation capabilities of label-conditional image synthesis methods with the flexibility of unconditional generative models, we propose a semantic bottleneck GAN model for unconditional synthesis of complex scenes. We assume pixel-wise segmentation labels are available during training and use them to learn the scene structure. During inference, our model first synthesizes a realistic segmentation layout from scratch, then synthesizes a realistic scene conditioned on that layout. For the former, we use an unconditional progressive segmentation generation network that captures the distribution of realistic semantic scene layouts. For the latter, we use a conditional segmentation-to-image synthesis network that captures the distribution of photo-realistic images conditioned on the semantic layout. When trained end-to-end, the resulting model outperforms state-of-the-art generative models in unsupervised image synthesis on two challenging domains in terms of the Frechet Inception Distance and user-study evaluations. Moreover, we demonstrate the generated segmentation maps can be used as additional training data to strongly improve recent segmentation-to-image synthesis networks.
1906.04964
Martianus Frederic Ezerman
Martianus Frederic Ezerman, San Ling, Buket \"Ozkaya, and Patrick Sol\'e
Good Stabilizer Codes from Quasi-Cyclic Codes over $\mathbb{F}_4$ and $\mathbb{F}_9$
Accepted ISIT 2019
null
10.1109/ISIT.2019.8849416
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We apply quantum Construction X on quasi-cyclic codes with large Hermitian hulls over $\mathbb{F}_4$ and $\mathbb{F}_9$ to derive good qubit and qutrit stabilizer codes, respectively. On several occasions we obtain quantum codes with strictly improved parameters over the current records. On numerous other occasions we obtain quantum codes with best-known performance. For the qutrit codes we supply a systematic construction to fill some gaps in the literature.
[ { "created": "Wed, 12 Jun 2019 06:45:51 GMT", "version": "v1" } ]
2020-04-28
[ [ "Ezerman", "Martianus Frederic", "" ], [ "Ling", "San", "" ], [ "Özkaya", "Buket", "" ], [ "Solé", "Patrick", "" ] ]
We apply quantum Construction X on quasi-cyclic codes with large Hermitian hulls over $\mathbb{F}_4$ and $\mathbb{F}_9$ to derive good qubit and qutrit stabilizer codes, respectively. On several occasions we obtain quantum codes with strictly improved parameters over the current records. On numerous other occasions we obtain quantum codes with best-known performance. For the qutrit codes we supply a systematic construction to fill some gaps in the literature.
2111.07129
Ajoy Mondal Dr.
Sachin Raja, Ajoy Mondal, and C V Jawahar
Visual Understanding of Complex Table Structures from Document Images
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Table structure recognition is necessary for a comprehensive understanding of documents. Tables in unstructured business documents are tough to parse due to the high diversity of layouts, varying alignments of contents, and the presence of empty cells. The problem is particularly difficult because of challenges in identifying individual cells using visual or linguistic contexts or both. Accurate detection of table cells (including empty cells) simplifies structure extraction and hence, it becomes the prime focus of our work. We propose a novel object-detection-based deep model that captures the inherent alignments of cells within tables and is fine-tuned for fast optimization. Despite accurate detection of cells, recognizing structures for dense tables may still be challenging because of difficulties in capturing long-range row/column dependencies in the presence of multi-row/column spanning cells. Therefore, we also aim to improve structure recognition by deducing a novel rectilinear graph-based formulation. From a semantics perspective, we highlight the significance of empty cells in a table. To take these cells into account, we suggest an enhancement to a popular evaluation criterion. Finally, we introduce a modestly sized evaluation dataset with an annotation style inspired by human cognition to encourage new approaches to the problem. Our framework improves the previous state-of-the-art performance by a 2.7% average F1-score on benchmark datasets.
[ { "created": "Sat, 13 Nov 2021 14:54:33 GMT", "version": "v1" } ]
2021-11-16
[ [ "Raja", "Sachin", "" ], [ "Mondal", "Ajoy", "" ], [ "Jawahar", "C V", "" ] ]
Table structure recognition is necessary for a comprehensive understanding of documents. Tables in unstructured business documents are tough to parse due to the high diversity of layouts, varying alignments of contents, and the presence of empty cells. The problem is particularly difficult because of challenges in identifying individual cells using visual or linguistic contexts or both. Accurate detection of table cells (including empty cells) simplifies structure extraction and hence, it becomes the prime focus of our work. We propose a novel object-detection-based deep model that captures the inherent alignments of cells within tables and is fine-tuned for fast optimization. Despite accurate detection of cells, recognizing structures for dense tables may still be challenging because of difficulties in capturing long-range row/column dependencies in the presence of multi-row/column spanning cells. Therefore, we also aim to improve structure recognition by deducing a novel rectilinear graph-based formulation. From a semantics perspective, we highlight the significance of empty cells in a table. To take these cells into account, we suggest an enhancement to a popular evaluation criterion. Finally, we introduce a modestly sized evaluation dataset with an annotation style inspired by human cognition to encourage new approaches to the problem. Our framework improves the previous state-of-the-art performance by a 2.7% average F1-score on benchmark datasets.
1302.2875
Frank Kschischang
Mansoor I. Yousefi and Frank R. Kschischang
Information Transmission using the Nonlinear Fourier Transform, Part III: Spectrum Modulation
Updated version of IEEE Transactions on Information Theory, vol. 60, no. 7, pp. 4346--4369, July, 2014
IEEE Transactions on Information Theory, vol. 60, no. 7, pp. 4346--4369, July, 2014
10.1109/TIT.2014.2321155
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the looming "capacity crunch" in fiber-optic networks, information transmission over such systems is revisited. Among numerous distortions, inter-channel interference in multiuser wavelength-division multiplexing (WDM) is identified as the seemingly intractable factor limiting the achievable rate at high launch power. However, this distortion and similar ones arising from nonlinearity are primarily due to the use of methods suited for linear systems, namely WDM and linear pulse-train transmission, for the nonlinear optical channel. Exploiting the integrability of the nonlinear Schr\"odinger (NLS) equation, a nonlinear frequency-division multiplexing (NFDM) scheme is presented, which directly modulates non-interacting signal degrees-of-freedom under NLS propagation. The main distinction between this and previous methods is that NFDM is able to cope with the nonlinearity, and thus, as the signal power or transmission distance is increased, the new method does not suffer from the deterministic cross-talk between signal components which has degraded the performance of previous approaches. In this paper, emphasis is placed on modulation of the discrete component of the nonlinear Fourier transform of the signal and some simple examples of achievable spectral efficiencies are provided.
[ { "created": "Tue, 12 Feb 2013 17:52:11 GMT", "version": "v1" }, { "created": "Tue, 7 Oct 2014 21:29:50 GMT", "version": "v2" } ]
2014-10-09
[ [ "Yousefi", "Mansoor I.", "" ], [ "Kschischang", "Frank R.", "" ] ]
Motivated by the looming "capacity crunch" in fiber-optic networks, information transmission over such systems is revisited. Among numerous distortions, inter-channel interference in multiuser wavelength-division multiplexing (WDM) is identified as the seemingly intractable factor limiting the achievable rate at high launch power. However, this distortion and similar ones arising from nonlinearity are primarily due to the use of methods suited for linear systems, namely WDM and linear pulse-train transmission, for the nonlinear optical channel. Exploiting the integrability of the nonlinear Schr\"odinger (NLS) equation, a nonlinear frequency-division multiplexing (NFDM) scheme is presented, which directly modulates non-interacting signal degrees-of-freedom under NLS propagation. The main distinction between this and previous methods is that NFDM is able to cope with the nonlinearity, and thus, as the signal power or transmission distance is increased, the new method does not suffer from the deterministic cross-talk between signal components which has degraded the performance of previous approaches. In this paper, emphasis is placed on modulation of the discrete component of the nonlinear Fourier transform of the signal and some simple examples of achievable spectral efficiencies are provided.
2206.06155
Felix Lanfermann
Felix Lanfermann and Sebastian Schmitt
Concept Identification for Complex Engineering Datasets
19 pages, 14 figures, accepted at Advanced Engineering Informatics
null
10.1016/j.aei.2022.101704
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Finding meaningful concepts in engineering application datasets which allow for a sensible grouping of designs is very helpful in many contexts. It allows for determining different groups of designs with similar properties and provides useful knowledge in the engineering decision making process. Also, it opens the route for further refinements of specific design candidates which exhibit certain characteristic features. In this work, an approach to define meaningful and consistent concepts in an existing engineering dataset is presented. The designs in the dataset are characterized by a multitude of features such as design parameters, geometrical properties or performance values of the design for various boundary conditions. In the proposed approach the complete feature set is partitioned into several subsets called description spaces. The definition of the concepts respects this partitioning which leads to several desired properties of the identified concepts. This cannot be achieved with state-of-the-art clustering or concept identification approaches. A novel concept quality measure is proposed, which provides an objective value for a given definition of concepts in a dataset. The usefulness of the measure is demonstrated by considering a realistic engineering dataset consisting of about 2500 airfoil profiles, for which the performance values (lift and drag) for three different operating conditions were obtained by a computational fluid dynamics simulation. A numerical optimization procedure is employed, which maximizes the concept quality measure and finds meaningful concepts for different setups of the description spaces, while also incorporating user preference. It is demonstrated how these concepts can be used to select archetypal representatives of the dataset which exhibit characteristic features of each concept.
[ { "created": "Thu, 9 Jun 2022 09:39:46 GMT", "version": "v1" }, { "created": "Fri, 22 Jul 2022 07:52:03 GMT", "version": "v2" } ]
2022-08-17
[ [ "Lanfermann", "Felix", "" ], [ "Schmitt", "Sebastian", "" ] ]
Finding meaningful concepts in engineering application datasets which allow for a sensible grouping of designs is very helpful in many contexts. It allows for determining different groups of designs with similar properties and provides useful knowledge in the engineering decision making process. Also, it opens the route for further refinements of specific design candidates which exhibit certain characteristic features. In this work, an approach to define meaningful and consistent concepts in an existing engineering dataset is presented. The designs in the dataset are characterized by a multitude of features such as design parameters, geometrical properties or performance values of the design for various boundary conditions. In the proposed approach the complete feature set is partitioned into several subsets called description spaces. The definition of the concepts respects this partitioning which leads to several desired properties of the identified concepts. This cannot be achieved with state-of-the-art clustering or concept identification approaches. A novel concept quality measure is proposed, which provides an objective value for a given definition of concepts in a dataset. The usefulness of the measure is demonstrated by considering a realistic engineering dataset consisting of about 2500 airfoil profiles, for which the performance values (lift and drag) for three different operating conditions were obtained by a computational fluid dynamics simulation. A numerical optimization procedure is employed, which maximizes the concept quality measure and finds meaningful concepts for different setups of the description spaces, while also incorporating user preference. It is demonstrated how these concepts can be used to select archetypal representatives of the dataset which exhibit characteristic features of each concept.
2308.10110
Yihua Zhang
Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, Sijia Liu
Robust Mixture-of-Expert Training for Convolutional Neural Networks
ICCV 2023
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Sparsely-gated Mixture of Expert (MoE), an emerging deep model architecture, has demonstrated great promise to enable high-accuracy and ultra-efficient model inference. Despite the growing popularity of MoE, little work has investigated its potential to advance convolutional neural networks (CNNs), especially in the plane of adversarial robustness. Since the lack of robustness has become one of the main hurdles for CNNs, in this paper we ask: How to adversarially robustify a CNN-based MoE model? Can we robustly train it like an ordinary CNN model? Our pilot study shows that the conventional adversarial training (AT) mechanism (developed for vanilla CNNs) no longer remains effective to robustify an MoE-CNN. To better understand this phenomenon, we dissect the robustness of an MoE-CNN into two dimensions: Robustness of routers (i.e., gating functions to select data-specific experts) and robustness of experts (i.e., the router-guided pathways defined by the subnetworks of the backbone CNN). Our analyses show that routers and experts are hard to adapt to each other in the vanilla AT. Thus, we propose a new router-expert alternating Adversarial training framework for MoE, termed AdvMoE. The effectiveness of our proposal is justified across 4 commonly-used CNN model architectures over 4 benchmark datasets. We find that AdvMoE achieves 1% ~ 4% adversarial robustness improvement over the original dense CNN, and enjoys the efficiency merit of sparsely-gated MoE, leading to more than 50% inference cost reduction. Codes are available at https://github.com/OPTML-Group/Robust-MoE-CNN.
[ { "created": "Sat, 19 Aug 2023 20:58:21 GMT", "version": "v1" } ]
2023-08-22
[ [ "Zhang", "Yihua", "" ], [ "Cai", "Ruisi", "" ], [ "Chen", "Tianlong", "" ], [ "Zhang", "Guanhua", "" ], [ "Zhang", "Huan", "" ], [ "Chen", "Pin-Yu", "" ], [ "Chang", "Shiyu", "" ], [ "Wang", "Zhangyang", "" ], [ "Liu", "Sijia", "" ] ]
Sparsely-gated Mixture of Expert (MoE), an emerging deep model architecture, has demonstrated great promise to enable high-accuracy and ultra-efficient model inference. Despite the growing popularity of MoE, little work has investigated its potential to advance convolutional neural networks (CNNs), especially in the plane of adversarial robustness. Since the lack of robustness has become one of the main hurdles for CNNs, in this paper we ask: How to adversarially robustify a CNN-based MoE model? Can we robustly train it like an ordinary CNN model? Our pilot study shows that the conventional adversarial training (AT) mechanism (developed for vanilla CNNs) no longer remains effective to robustify an MoE-CNN. To better understand this phenomenon, we dissect the robustness of an MoE-CNN into two dimensions: Robustness of routers (i.e., gating functions to select data-specific experts) and robustness of experts (i.e., the router-guided pathways defined by the subnetworks of the backbone CNN). Our analyses show that routers and experts are hard to adapt to each other in the vanilla AT. Thus, we propose a new router-expert alternating Adversarial training framework for MoE, termed AdvMoE. The effectiveness of our proposal is justified across 4 commonly-used CNN model architectures over 4 benchmark datasets. We find that AdvMoE achieves 1% ~ 4% adversarial robustness improvement over the original dense CNN, and enjoys the efficiency merit of sparsely-gated MoE, leading to more than 50% inference cost reduction. Codes are available at https://github.com/OPTML-Group/Robust-MoE-CNN.
2311.05863
Yuanmin Tang
Yuanmin Tang, Jing Yu, Keke Gai, Xiangyan Qu, Yue Hu, Gang Xiong, Qi Wu
Watermarking Vision-Language Pre-trained Models for Multi-modal Embedding as a Service
null
null
null
null
cs.CR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in vision-language pre-trained models (VLPs) have significantly increased visual understanding and cross-modal analysis capabilities. Companies have emerged to provide multi-modal Embedding as a Service (EaaS) based on VLPs (e.g., CLIP-based VLPs), which cost a large amount of training data and resources for high-performance service. However, existing studies indicate that EaaS is vulnerable to model extraction attacks that induce great loss for the owners of VLPs. Protecting the intellectual property and commercial ownership of VLPs is increasingly crucial yet challenging. A major watermarking solution for EaaS implants a backdoor in the model by inserting verifiable trigger embeddings into texts, but it is only applicable for large language models and is unrealistic due to data and model privacy. In this paper, we propose a safe and robust backdoor-based embedding watermarking method for VLPs called VLPMarker. VLPMarker utilizes embedding orthogonal transformation to effectively inject triggers into the VLPs without interfering with the model parameters, which achieves high-quality copyright verification and minimal impact on model performance. To enhance the watermark robustness, we further propose a collaborative copyright verification strategy based on both backdoor trigger and embedding distribution, enhancing resilience against various attacks. We increase the watermark practicality via an out-of-distribution trigger selection approach, removing access to the model training data and thus making it practical for many real-world scenarios. Our extensive experiments on various datasets indicate that the proposed watermarking approach is effective and safe for verifying the copyright of VLPs for multi-modal EaaS and robust against model extraction attacks. Our code is available at https://github.com/Pter61/vlpmarker.
[ { "created": "Fri, 10 Nov 2023 04:27:27 GMT", "version": "v1" } ]
2023-11-13
[ [ "Tang", "Yuanmin", "" ], [ "Yu", "Jing", "" ], [ "Gai", "Keke", "" ], [ "Qu", "Xiangyan", "" ], [ "Hu", "Yue", "" ], [ "Xiong", "Gang", "" ], [ "Wu", "Qi", "" ] ]
Recent advances in vision-language pre-trained models (VLPs) have significantly increased visual understanding and cross-modal analysis capabilities. Companies have emerged to provide multi-modal Embedding as a Service (EaaS) based on VLPs (e.g., CLIP-based VLPs), which cost a large amount of training data and resources for high-performance service. However, existing studies indicate that EaaS is vulnerable to model extraction attacks that induce great loss for the owners of VLPs. Protecting the intellectual property and commercial ownership of VLPs is increasingly crucial yet challenging. A major watermarking solution for EaaS implants a backdoor in the model by inserting verifiable trigger embeddings into texts, but it is only applicable for large language models and is unrealistic due to data and model privacy. In this paper, we propose a safe and robust backdoor-based embedding watermarking method for VLPs called VLPMarker. VLPMarker utilizes embedding orthogonal transformation to effectively inject triggers into the VLPs without interfering with the model parameters, which achieves high-quality copyright verification and minimal impact on model performance. To enhance the watermark robustness, we further propose a collaborative copyright verification strategy based on both backdoor trigger and embedding distribution, enhancing resilience against various attacks. We increase the watermark practicality via an out-of-distribution trigger selection approach, removing access to the model training data and thus making it practical for many real-world scenarios. Our extensive experiments on various datasets indicate that the proposed watermarking approach is effective and safe for verifying the copyright of VLPs for multi-modal EaaS and robust against model extraction attacks. Our code is available at https://github.com/Pter61/vlpmarker.
1808.02633
Liting Sun
Liting Sun, Wei Zhan, Masayoshi Tomizuka, and Anca D. Dragan
Courteous Autonomous Cars
International Conference on Intelligent Robots (IROS) 2018
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Typically, autonomous cars optimize for a combination of safety, efficiency, and driving quality. But as we get better at this optimization, we start seeing behavior go from too conservative to too aggressive. The car's behavior exposes the incentives we provide in its cost function. In this work, we argue for cars that are not optimizing a purely selfish cost, but also try to be courteous to other interactive drivers. We formalize courtesy as a term in the objective that measures the increase in another driver's cost induced by the autonomous car's behavior. Such a courtesy term enables the robot car to be aware of possible irrationality of the human behavior, and plan accordingly. We analyze the effect of courtesy in a variety of scenarios. We find, for example, that courteous robot cars leave more space when merging in front of a human driver. Moreover, we find that such a courtesy term can help explain real human driver behavior on the NGSIM dataset.
[ { "created": "Wed, 8 Aug 2018 05:45:24 GMT", "version": "v1" }, { "created": "Thu, 16 Aug 2018 02:47:58 GMT", "version": "v2" } ]
2018-08-17
[ [ "Sun", "Liting", "" ], [ "Zhan", "Wei", "" ], [ "Tomizuka", "Masayoshi", "" ], [ "Dragan", "Anca D.", "" ] ]
Typically, autonomous cars optimize for a combination of safety, efficiency, and driving quality. But as we get better at this optimization, we start seeing behavior go from too conservative to too aggressive. The car's behavior exposes the incentives we provide in its cost function. In this work, we argue for cars that are not optimizing a purely selfish cost, but also try to be courteous to other interactive drivers. We formalize courtesy as a term in the objective that measures the increase in another driver's cost induced by the autonomous car's behavior. Such a courtesy term enables the robot car to be aware of possible irrationality of the human behavior, and plan accordingly. We analyze the effect of courtesy in a variety of scenarios. We find, for example, that courteous robot cars leave more space when merging in front of a human driver. Moreover, we find that such a courtesy term can help explain real human driver behavior on the NGSIM dataset.
2407.05627
Surya Agustian Mr.
Surya Agustian, Muhammad Irfan Syah, Nurul Fatiara, and Rahmad Abdillah
New Directions in Text Classification Research: Maximizing The Performance of Sentiment Classification from Limited Data
9 pages, in Indonesian language. intro to a shared task in sentiment classification
null
null
null
cs.CL cs.IR cs.IT cs.LG cs.SI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stakeholders in sentiment analysis, whatever the issue and whether the sentiment is positive or negative, need both speed and accuracy. One new challenge in sentiment analysis tasks is the limited training data, which often leads to suboptimal machine learning models and poor performance on test data. This paper discusses the problem of text classification based on limited training data (300 to 600 samples) into three classes: positive, negative, and neutral. A benchmark dataset is provided for training and testing data on the issue of Kaesang Pangarep's appointment as Chairman of PSI. External data for aggregation and augmentation purposes are provided, consisting of two datasets: the topic of Covid Vaccination sentiment and an open topic. The official score used is the F1-score, which balances precision and recall among the three classes, positive, negative, and neutral. A baseline score is provided as a reference for researchers for unoptimized classification methods. The optimized score is provided as a reference for the target score to be achieved by any proposed method. Both scoring (baseline and optimized) use the SVM method, which is widely reported as the state-of-the-art in conventional machine learning methods. The F1-scores achieved by the baseline and optimized methods are 40.83% and 51.28%, respectively.
[ { "created": "Mon, 8 Jul 2024 05:42:29 GMT", "version": "v1" } ]
2024-07-09
[ [ "Agustian", "Surya", "" ], [ "Syah", "Muhammad Irfan", "" ], [ "Fatiara", "Nurul", "" ], [ "Abdillah", "Rahmad", "" ] ]
Stakeholders in sentiment analysis, whatever the issue and whether the sentiment is positive or negative, need both speed and accuracy. One new challenge in sentiment analysis tasks is the limited training data, which often leads to suboptimal machine learning models and poor performance on test data. This paper discusses the problem of text classification based on limited training data (300 to 600 samples) into three classes: positive, negative, and neutral. A benchmark dataset is provided for training and testing data on the issue of Kaesang Pangarep's appointment as Chairman of PSI. External data for aggregation and augmentation purposes are provided, consisting of two datasets: the topic of Covid Vaccination sentiment and an open topic. The official score used is the F1-score, which balances precision and recall among the three classes, positive, negative, and neutral. A baseline score is provided as a reference for researchers for unoptimized classification methods. The optimized score is provided as a reference for the target score to be achieved by any proposed method. Both scoring (baseline and optimized) use the SVM method, which is widely reported as the state-of-the-art in conventional machine learning methods. The F1-scores achieved by the baseline and optimized methods are 40.83% and 51.28%, respectively.
1808.03485
Arno Solin
Santiago Cort\'es, Arno Solin, Juho Kannala
Deep Learning Based Speed Estimation for Constraining Strapdown Inertial Navigation on Smartphones
To appear in IEEE International Workshop on Machine Learning for Signal Processing (MLSP) 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Strapdown inertial navigation systems are sensitive to the quality of the data provided by the accelerometer and gyroscope. Low-grade IMUs in handheld smart-devices pose a problem for inertial odometry on these devices. We propose a scheme for constraining the inertial odometry problem by complementing non-linear state estimation by a CNN-based deep-learning model for inferring the momentary speed based on a window of IMU samples. We show the feasibility of the model using a wide range of data from an iPhone, and present proof-of-concept results for how the model can be combined with an inertial navigation system for three-dimensional inertial navigation.
[ { "created": "Fri, 10 Aug 2018 11:03:58 GMT", "version": "v1" } ]
2018-08-13
[ [ "Cortés", "Santiago", "" ], [ "Solin", "Arno", "" ], [ "Kannala", "Juho", "" ] ]
Strapdown inertial navigation systems are sensitive to the quality of the data provided by the accelerometer and gyroscope. Low-grade IMUs in handheld smart-devices pose a problem for inertial odometry on these devices. We propose a scheme for constraining the inertial odometry problem by complementing non-linear state estimation by a CNN-based deep-learning model for inferring the momentary speed based on a window of IMU samples. We show the feasibility of the model using a wide range of data from an iPhone, and present proof-of-concept results for how the model can be combined with an inertial navigation system for three-dimensional inertial navigation.
2403.18305
Youngbin Lee
Minjoo Choi, Seonmi Kim, Yejin Kim, Youngbin Lee, Joohwan Hong, Yongjae Lee
A Recommender System for NFT Collectibles with Item Feature
Presented at the AAAI 2023 Bridge on AI for Financial Services (https://sites.google.com/view/aaai-ai-fin/home)
null
null
null
cs.IR cs.AI
http://creativecommons.org/licenses/by/4.0/
Recommender systems have been actively studied and applied in various domains to deal with information overload. Although there are numerous studies on recommender systems for movies, music, and e-commerce, comparatively less attention has been paid to the recommender system for NFTs despite the continuous growth of the NFT market. This paper presents a recommender system for NFTs that utilizes a variety of data sources, from NFT transaction records to external item features, to generate precise recommendations that cater to individual preferences. We develop a data-efficient graph-based recommender system to efficiently capture the complex relationship between each item and users and generate node (item) embeddings that incorporate both node feature information and graph structure. Furthermore, we exploit inputs beyond user-item interactions, such as image features, text features, and price features. Numerical experiments verify that the performance of the graph-based recommender system improves significantly after utilizing all types of item features as side information, thereby outperforming all other baselines.
[ { "created": "Wed, 27 Mar 2024 06:59:39 GMT", "version": "v1" }, { "created": "Wed, 3 Apr 2024 06:52:50 GMT", "version": "v2" } ]
2024-04-04
[ [ "Choi", "Minjoo", "" ], [ "Kim", "Seonmi", "" ], [ "Kim", "Yejin", "" ], [ "Lee", "Youngbin", "" ], [ "Hong", "Joohwan", "" ], [ "Lee", "Yongjae", "" ] ]
Recommender systems have been actively studied and applied in various domains to deal with information overload. Although there are numerous studies on recommender systems for movies, music, and e-commerce, comparatively less attention has been paid to the recommender system for NFTs despite the continuous growth of the NFT market. This paper presents a recommender system for NFTs that utilizes a variety of data sources, from NFT transaction records to external item features, to generate precise recommendations that cater to individual preferences. We develop a data-efficient graph-based recommender system to efficiently capture the complex relationship between each item and users and generate node (item) embeddings that incorporate both node feature information and graph structure. Furthermore, we exploit inputs beyond user-item interactions, such as image features, text features, and price features. Numerical experiments verify that the performance of the graph-based recommender system improves significantly after utilizing all types of item features as side information, thereby outperforming all other baselines.
1709.05865
Shubham Dham
Shubham Dham, Anirudh Sharma, Abhinav Dhall
Depression Scale Recognition from Audio, Visual and Text Analysis
null
null
null
null
cs.CV cs.LG cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Depression is a major mental health disorder that is rapidly affecting lives worldwide. Depression impacts not only the emotional but also the physical and psychological state of a person. Its symptoms include lack of interest in daily activities, feeling low, anxiety, frustration, loss of weight and even feeling of self-hatred. This report describes work done by us for the Audio Visual Emotion Challenge (AVEC) 2017 during our second year BTech summer internship. With the increase in demand to detect depression automatically with the help of machine learning algorithms, we present our multimodal feature extraction and decision-level fusion approach for the same. Features are extracted by processing on the provided Distress Analysis Interview Corpus-Wizard of Oz (DAIC-WOZ) database. Gaussian Mixture Model (GMM) clustering and Fisher vector approach were applied on the visual data; statistical descriptors on gaze, pose; low level audio features and head pose and text features were also extracted. Classification is done on fused as well as independent features using Support Vector Machine (SVM) and neural networks. The results obtained were able to cross the provided baseline on validation data set by 17% on audio features and 24.5% on video features.
[ { "created": "Mon, 18 Sep 2017 11:26:01 GMT", "version": "v1" } ]
2017-09-19
[ [ "Dham", "Shubham", "" ], [ "Sharma", "Anirudh", "" ], [ "Dhall", "Abhinav", "" ] ]
Depression is a major mental health disorder that is rapidly affecting lives worldwide. Depression impacts not only the emotional but also the physical and psychological state of a person. Its symptoms include lack of interest in daily activities, feeling low, anxiety, frustration, loss of weight, and even feelings of self-hatred. This report describes work done by us for the Audio Visual Emotion Challenge (AVEC) 2017 during our second-year BTech summer internship. With the increasing demand to detect depression automatically with the help of machine learning algorithms, we present our multimodal feature extraction and decision-level fusion approach for the same. Features are extracted by processing the provided Distress Analysis Interview Corpus-Wizard of Oz (DAIC-WOZ) database. Gaussian Mixture Model (GMM) clustering and the Fisher vector approach were applied to the visual data; statistical descriptors on gaze and pose, low-level audio features, head pose, and text features were also extracted. Classification is done on fused as well as independent features using Support Vector Machines (SVM) and neural networks. The results obtained were able to cross the provided baseline on the validation data set by 17% on audio features and 24.5% on video features.
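The decision-level (late) fusion described in the abstract above can be sketched as a weighted average of per-modality classifier scores. The function name, weights, and scores below are hypothetical illustrations, not taken from the paper:

```python
def decision_level_fusion(scores, weights=None):
    """Late fusion sketch: combine per-modality scores (e.g. from
    audio, video, and text classifiers) into one decision by a
    weighted average. Equal weights by default."""
    if weights is None:
        weights = [1.0] * len(scores)
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

# e.g. hypothetical audio, video, and text depression scores
fused = decision_level_fusion([0.8, 0.6, 0.7])
```

In a real pipeline the weights would be tuned on the validation split, which is also where the paper reports its baseline improvements.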
2311.04293
Tara Akhound-Sadegh
Tara Akhound-Sadegh, Laurence Perreault-Levasseur, Johannes Brandstetter, Max Welling, Siamak Ravanbakhsh
Lie Point Symmetry and Physics Informed Networks
NeurIPS 2023
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Symmetries have been leveraged to improve the generalization of neural networks through different mechanisms from data augmentation to equivariant architectures. However, despite their potential, their integration into neural solvers for partial differential equations (PDEs) remains largely unexplored. We explore the integration of PDE symmetries, known as Lie point symmetries, in a major family of neural solvers known as physics-informed neural networks (PINNs). We propose a loss function that informs the network about Lie point symmetries in the same way that PINN models try to enforce the underlying PDE through a loss function. Intuitively, our symmetry loss ensures that the infinitesimal generators of the Lie group conserve the PDE solutions. Effectively, this means that once the network learns a solution, it also learns the neighbouring solutions generated by Lie point symmetries. Empirical evaluations indicate that the inductive bias introduced by the Lie point symmetries of the PDEs greatly boosts the sample efficiency of PINNs.
[ { "created": "Tue, 7 Nov 2023 19:07:16 GMT", "version": "v1" } ]
2023-11-09
[ [ "Akhound-Sadegh", "Tara", "" ], [ "Perreault-Levasseur", "Laurence", "" ], [ "Brandstetter", "Johannes", "" ], [ "Welling", "Max", "" ], [ "Ravanbakhsh", "Siamak", "" ] ]
Symmetries have been leveraged to improve the generalization of neural networks through different mechanisms from data augmentation to equivariant architectures. However, despite their potential, their integration into neural solvers for partial differential equations (PDEs) remains largely unexplored. We explore the integration of PDE symmetries, known as Lie point symmetries, in a major family of neural solvers known as physics-informed neural networks (PINNs). We propose a loss function that informs the network about Lie point symmetries in the same way that PINN models try to enforce the underlying PDE through a loss function. Intuitively, our symmetry loss ensures that the infinitesimal generators of the Lie group conserve the PDE solutions. Effectively, this means that once the network learns a solution, it also learns the neighbouring solutions generated by Lie point symmetries. Empirical evaluations indicate that the inductive bias introduced by the Lie point symmetries of the PDEs greatly boosts the sample efficiency of PINNs.
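The property the symmetry loss above enforces, that a Lie point symmetry maps PDE solutions to solutions, can be checked numerically on a toy transport equation. This is a minimal sketch of the underlying invariance, not the paper's actual training loss; the equation, solution, and step size are assumptions:

```python
import math

C = 1.0  # wave speed in the transport equation u_t + C u_x = 0

def u(x, t):
    """An exact travelling-wave solution of the transport equation."""
    return math.sin(x - C * t)

def pde_residual(f, x, t, h=1e-5):
    """Central finite-difference residual of u_t + C u_x at (x, t)."""
    ut = (f(x, t + h) - f(x, t - h)) / (2 * h)
    ux = (f(x + h, t) - f(x - h, t)) / (2 * h)
    return ut + C * ux

def shifted(f, eps):
    """Space translation: a Lie point symmetry of the transport
    equation, so it carries solutions to solutions."""
    return lambda x, t: f(x + eps, t)

r0 = abs(pde_residual(u, 0.3, 0.7))             # residual of a solution
r1 = abs(pde_residual(shifted(u, 0.5), 0.3, 0.7))  # residual after the symmetry
```

Both residuals vanish (up to discretization error), which is exactly what a symmetry loss would penalize a PINN for violating.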
2108.12375
Ali Karimoddini
Muhammad Mobaidul Islam, Abdullah Al Redwan Newaz, and Ali Karimoddini
A Pedestrian Detection and Tracking Framework for Autonomous Cars: Efficient Fusion of Camera and LiDAR Data
null
null
null
null
cs.CV cs.AI cs.LG cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents a novel method for pedestrian detection and tracking by fusing camera and LiDAR sensor data. To deal with the challenges associated with the autonomous driving scenarios, an integrated tracking and detection framework is proposed. The detection phase is performed by converting LiDAR streams to computationally tractable depth images, and then, a deep neural network is developed to identify pedestrian candidates both in RGB and depth images. To provide accurate information, the detection phase is further enhanced by fusing multi-modal sensor information using the Kalman filter. The tracking phase is a combination of the Kalman filter prediction and an optical flow algorithm to track multiple pedestrians in a scene. We evaluate our framework on a real public driving dataset. Experimental results demonstrate that the proposed method achieves significant performance improvement over a baseline method that solely uses image-based pedestrian detection.
[ { "created": "Fri, 27 Aug 2021 16:16:01 GMT", "version": "v1" } ]
2021-08-30
[ [ "Islam", "Muhammad Mobaidul", "" ], [ "Newaz", "Abdullah Al Redwan", "" ], [ "Karimoddini", "Ali", "" ] ]
This paper presents a novel method for pedestrian detection and tracking by fusing camera and LiDAR sensor data. To deal with the challenges associated with the autonomous driving scenarios, an integrated tracking and detection framework is proposed. The detection phase is performed by converting LiDAR streams to computationally tractable depth images, and then, a deep neural network is developed to identify pedestrian candidates both in RGB and depth images. To provide accurate information, the detection phase is further enhanced by fusing multi-modal sensor information using the Kalman filter. The tracking phase is a combination of the Kalman filter prediction and an optical flow algorithm to track multiple pedestrians in a scene. We evaluate our framework on a real public driving dataset. Experimental results demonstrate that the proposed method achieves significant performance improvement over a baseline method that solely uses image-based pedestrian detection.
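The Kalman-filter fusion step in the abstract above can be illustrated in one dimension: each sensor's position measurement is folded in with its own noise variance, so the fused estimate is pulled toward the more precise LiDAR reading. All numeric values below are hypothetical:

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: prior state x with variance p,
    measurement z with noise variance r."""
    k = p / (p + r)          # Kalman gain
    x_new = x + k * (z - x)  # pull estimate toward the measurement
    p_new = (1 - k) * p      # uncertainty shrinks after each update
    return x_new, p_new

# Fuse a camera and a LiDAR detection of the same pedestrian's position.
x, p = 0.0, 1e6                        # uninformative prior
x, p = kalman_update(x, p, 4.2, 0.5)   # camera estimate (noisier)
x, p = kalman_update(x, p, 4.0, 0.1)   # LiDAR estimate (more precise)
```

The fused position ends up closer to the low-variance LiDAR measurement, which is the intent of the multi-modal enhancement step.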
2104.07228
Wenhao Yu
Wenhao Yu, Chenguang Zhu, Tong Zhao, Zhichun Guo, Meng Jiang
Sentence-Permuted Paragraph Generation
EMNLP 2021 (long paper)
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Generating paragraphs of diverse contents is important in many applications. Existing generation models produce similar contents from homogenized contexts due to the fixed left-to-right sentence order. Our idea is to permute the sentence order to improve the content diversity of multi-sentence paragraphs. We propose a novel framework, PermGen, whose objective is to maximize the expected log-likelihood of output paragraph distributions with respect to all possible sentence orders. PermGen uses hierarchical positional embeddings and designs new procedures for training, decoding, and candidate ranking in sentence-permuted generation. Experiments on three paragraph generation benchmarks demonstrate that PermGen generates more diverse outputs of higher quality than existing models.
[ { "created": "Thu, 15 Apr 2021 04:17:03 GMT", "version": "v1" }, { "created": "Tue, 7 Sep 2021 05:41:59 GMT", "version": "v2" } ]
2021-09-08
[ [ "Yu", "Wenhao", "" ], [ "Zhu", "Chenguang", "" ], [ "Zhao", "Tong", "" ], [ "Guo", "Zhichun", "" ], [ "Jiang", "Meng", "" ] ]
Generating paragraphs of diverse contents is important in many applications. Existing generation models produce similar contents from homogenized contexts due to the fixed left-to-right sentence order. Our idea is to permute the sentence order to improve the content diversity of multi-sentence paragraphs. We propose a novel framework, PermGen, whose objective is to maximize the expected log-likelihood of output paragraph distributions with respect to all possible sentence orders. PermGen uses hierarchical positional embeddings and designs new procedures for training, decoding, and candidate ranking in sentence-permuted generation. Experiments on three paragraph generation benchmarks demonstrate that PermGen generates more diverse outputs of higher quality than existing models.
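The objective described above, an expectation over all sentence orderings, can be sketched by enumerating permutations under a uniform prior. The toy scoring function below is a stand-in for the model's log-likelihood, purely for illustration:

```python
import itertools

def expected_log_likelihood(sentences, log_prob):
    """PermGen-style objective sketch: average a paragraph-level
    log-likelihood over every ordering of the sentences.
    `log_prob` stands in for the model's scorer."""
    perms = list(itertools.permutations(sentences))
    return sum(log_prob(p) for p in perms) / len(perms)

# Hypothetical scorer: paragraphs that open with a short sentence
# score higher (purely a toy, not the paper's model).
toy = lambda p: -len(p[0])
score = expected_log_likelihood(["a short one", "a much longer sentence"], toy)
```

Enumerating all n! orders is only feasible for tiny n; in practice one would sample permutations rather than enumerate them.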
2102.03234
David Hafner
Vladlen Koltun, David Hafner
The h-index is no longer an effective correlate of scientific reputation
An interactive visualization of our work can be found at https://h-frac.org
null
10.1371/journal.pone.0253397
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The impact of individual scientists is commonly quantified using citation-based measures. The most common such measure is the h-index. A scientist's h-index affects hiring, promotion, and funding decisions, and thus shapes the progress of science. Here we report a large-scale study of scientometric measures, analyzing millions of articles and hundreds of millions of citations across four scientific fields and two data platforms. We find that the correlation of the h-index with awards that indicate recognition by the scientific community has substantially declined. These trends are associated with changing authorship patterns. We show that these declines can be mitigated by fractional allocation of citations among authors, which has been discussed in the literature but not implemented at scale. We find that a fractional analogue of the h-index outperforms other measures as a correlate and predictor of scientific awards. Our results suggest that the use of the h-index in ranking scientists should be reconsidered, and that fractional allocation measures such as h-frac provide more robust alternatives. An interactive visualization of our work can be found at https://h-frac.org
[ { "created": "Fri, 5 Feb 2021 15:28:39 GMT", "version": "v1" } ]
2021-09-15
[ [ "Koltun", "Vladlen", "" ], [ "Hafner", "David", "" ] ]
The impact of individual scientists is commonly quantified using citation-based measures. The most common such measure is the h-index. A scientist's h-index affects hiring, promotion, and funding decisions, and thus shapes the progress of science. Here we report a large-scale study of scientometric measures, analyzing millions of articles and hundreds of millions of citations across four scientific fields and two data platforms. We find that the correlation of the h-index with awards that indicate recognition by the scientific community has substantially declined. These trends are associated with changing authorship patterns. We show that these declines can be mitigated by fractional allocation of citations among authors, which has been discussed in the literature but not implemented at scale. We find that a fractional analogue of the h-index outperforms other measures as a correlate and predictor of scientific awards. Our results suggest that the use of the h-index in ranking scientists should be reconsidered, and that fractional allocation measures such as h-frac provide more robust alternatives. An interactive visualization of our work can be found at https://h-frac.org
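The fractional allocation idea above can be made concrete with a small sketch: the classic h-index, plus a fractional variant in which each paper's citations are divided evenly among its authors before the h-index is computed. This is an illustrative reading of fractional allocation; the paper's exact h-frac definition may differ:

```python
def h_index(citations):
    """Classic h-index: the largest h such that h papers each
    have at least h citations."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

def h_frac(papers):
    """Fractional sketch: each paper is a (citations, n_authors) pair;
    citations are split evenly among authors before ranking."""
    fractional = [c / n_authors for c, n_authors in papers]
    return h_index(fractional)
```

Because multi-author papers contribute less per author, the fractional measure is less inflated by the growing author lists the study associates with the h-index's decline.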
2303.11866
Zaid Khan
Zaid Khan and Yun Fu
Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning
Accepted to ICLR 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contrastive vision-language models (e.g. CLIP) are typically created by updating all the parameters of a vision model and language model through contrastive training. Can such models be created by a small number of parameter updates to an already-trained language model and vision model? The literature describes techniques that can create vision-language models by updating a small number of parameters in a language model, but these require already aligned visual representations and are non-contrastive, hence unusable for latency-sensitive applications such as neural search. We explore the feasibility and benefits of parameter-efficient contrastive vision-language alignment through transfer learning: creating a model such as CLIP by minimally updating an already-trained vision and language model. We find that a minimal set of parameter updates ($<$7%) can achieve the same performance as full-model training, and updating specific components ($<$1% of parameters) can match 75% of full-model training. We describe a series of experiments: we show that existing knowledge is conserved more strongly in parameter-efficient training and that parameter-efficient scaling scales with model and dataset size. Where paired-image text data is scarce but strong multilingual language models exist (e.g. low resource languages), parameter-efficient training is even preferable to full-model training. Given a fixed compute budget, parameter-efficient training allows training larger models on the same hardware, achieving equivalent performance in less time. Parameter-efficient training hence constitutes an energy-efficient and effective training strategy for contrastive vision-language models that may be preferable to the full-model training paradigm for common use cases. Code and weights at https://github.com/codezakh/LilT.
[ { "created": "Tue, 21 Mar 2023 14:12:08 GMT", "version": "v1" } ]
2023-03-22
[ [ "Khan", "Zaid", "" ], [ "Fu", "Yun", "" ] ]
Contrastive vision-language models (e.g. CLIP) are typically created by updating all the parameters of a vision model and language model through contrastive training. Can such models be created by a small number of parameter updates to an already-trained language model and vision model? The literature describes techniques that can create vision-language models by updating a small number of parameters in a language model, but these require already aligned visual representations and are non-contrastive, hence unusable for latency-sensitive applications such as neural search. We explore the feasibility and benefits of parameter-efficient contrastive vision-language alignment through transfer learning: creating a model such as CLIP by minimally updating an already-trained vision and language model. We find that a minimal set of parameter updates ($<$7%) can achieve the same performance as full-model training, and updating specific components ($<$1% of parameters) can match 75% of full-model training. We describe a series of experiments: we show that existing knowledge is conserved more strongly in parameter-efficient training and that parameter-efficient scaling scales with model and dataset size. Where paired-image text data is scarce but strong multilingual language models exist (e.g. low resource languages), parameter-efficient training is even preferable to full-model training. Given a fixed compute budget, parameter-efficient training allows training larger models on the same hardware, achieving equivalent performance in less time. Parameter-efficient training hence constitutes an energy-efficient and effective training strategy for contrastive vision-language models that may be preferable to the full-model training paradigm for common use cases. Code and weights at https://github.com/codezakh/LilT.
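The parameter budgets quoted above ($<$7% and $<$1%) can be illustrated by a freezing-policy sketch: count the parameters of the components left trainable against the whole model. The component names and parameter counts below are hypothetical, not the paper's architecture:

```python
# Hypothetical parameter counts for a small vision and language tower.
model = {
    "vision.blocks":      85_000_000,
    "vision.layernorms":      40_000,
    "text.blocks":        63_000_000,
    "text.layernorms":        30_000,
    "vision.projection":     390_000,
    "text.projection":       390_000,
}

def trainable_fraction(model, trainable_keys):
    """Fraction of parameters updated when only the named
    components are unfrozen; everything else stays frozen."""
    total = sum(model.values())
    trainable = sum(v for k, v in model.items() if k in trainable_keys)
    return trainable / total

# Unfreeze only normalization layers and the contrastive projections.
frac = trainable_fraction(
    model, {"vision.layernorms", "text.layernorms",
            "vision.projection", "text.projection"})
```

Under these assumed counts, well under 1% of the parameters receive gradient updates, in the spirit of the paper's most parameter-efficient regime.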
2302.12689
Silvan Mertes
Tobias Huber, Maximilian Demmler, Silvan Mertes, Matthew L. Olson, Elisabeth Andr\'e
GANterfactual-RL: Understanding Reinforcement Learning Agents' Strategies through Visual Counterfactual Explanations
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Counterfactual explanations are a common tool to explain artificial intelligence models. For Reinforcement Learning (RL) agents, they answer "Why not?" or "What if?" questions by illustrating what minimal change to a state is needed such that an agent chooses a different action. Generating counterfactual explanations for RL agents with visual input is especially challenging because of their large state spaces and because their decisions are part of an overarching policy, which includes long-term decision-making. However, research focusing on counterfactual explanations, specifically for RL agents with visual input, is scarce and does not go beyond identifying defective agents. It is unclear whether counterfactual explanations are still helpful for more complex tasks like analyzing the learned strategies of different agents or choosing a fitting agent for a specific task. We propose a novel but simple method to generate counterfactual explanations for RL agents by formulating the problem as a domain transfer problem which allows the use of adversarial learning techniques like StarGAN. Our method is fully model-agnostic and we demonstrate that it outperforms the only previous method in several computational metrics. Furthermore, we show in a user study that our method performs best when analyzing which strategies different agents pursue.
[ { "created": "Fri, 24 Feb 2023 15:29:43 GMT", "version": "v1" } ]
2023-02-27
[ [ "Huber", "Tobias", "" ], [ "Demmler", "Maximilian", "" ], [ "Mertes", "Silvan", "" ], [ "Olson", "Matthew L.", "" ], [ "André", "Elisabeth", "" ] ]
Counterfactual explanations are a common tool to explain artificial intelligence models. For Reinforcement Learning (RL) agents, they answer "Why not?" or "What if?" questions by illustrating what minimal change to a state is needed such that an agent chooses a different action. Generating counterfactual explanations for RL agents with visual input is especially challenging because of their large state spaces and because their decisions are part of an overarching policy, which includes long-term decision-making. However, research focusing on counterfactual explanations, specifically for RL agents with visual input, is scarce and does not go beyond identifying defective agents. It is unclear whether counterfactual explanations are still helpful for more complex tasks like analyzing the learned strategies of different agents or choosing a fitting agent for a specific task. We propose a novel but simple method to generate counterfactual explanations for RL agents by formulating the problem as a domain transfer problem which allows the use of adversarial learning techniques like StarGAN. Our method is fully model-agnostic and we demonstrate that it outperforms the only previous method in several computational metrics. Furthermore, we show in a user study that our method performs best when analyzing which strategies different agents pursue.
1805.11272
Cecilia Summers
Cecilia Summers, Michael J. Dinneen
Improved Mixed-Example Data Augmentation
9 pages
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to reduce overfitting, neural networks are typically trained with data augmentation, the practice of artificially generating additional training data via label-preserving transformations of existing training examples. While these types of transformations make intuitive sense, recent work has demonstrated that even non-label-preserving data augmentation can be surprisingly effective, examining this type of data augmentation through linear combinations of pairs of examples. Despite their effectiveness, little is known about why such methods work. In this work, we aim to explore a new, more generalized form of this type of data augmentation in order to determine whether such linearity is necessary. By considering this broader scope of "mixed-example data augmentation", we find a much larger space of practical augmentation techniques, including methods that improve upon previous state-of-the-art. This generalization has benefits beyond the promise of improved performance, revealing a number of types of mixed-example data augmentation that are radically different from those considered in prior work, which provides evidence that current theories for the effectiveness of such methods are incomplete and suggests that any such theory must explain a much broader phenomenon. Code is available at https://github.com/ceciliaresearch/MixedExample.
[ { "created": "Tue, 29 May 2018 07:06:58 GMT", "version": "v1" }, { "created": "Fri, 1 Jun 2018 06:50:22 GMT", "version": "v2" }, { "created": "Thu, 18 Oct 2018 06:10:23 GMT", "version": "v3" }, { "created": "Sat, 19 Jan 2019 07:04:35 GMT", "version": "v4" } ]
2019-01-23
[ [ "Summers", "Cecilia", "" ], [ "Dinneen", "Michael J.", "" ] ]
In order to reduce overfitting, neural networks are typically trained with data augmentation, the practice of artificially generating additional training data via label-preserving transformations of existing training examples. While these types of transformations make intuitive sense, recent work has demonstrated that even non-label-preserving data augmentation can be surprisingly effective, examining this type of data augmentation through linear combinations of pairs of examples. Despite their effectiveness, little is known about why such methods work. In this work, we aim to explore a new, more generalized form of this type of data augmentation in order to determine whether such linearity is necessary. By considering this broader scope of "mixed-example data augmentation", we find a much larger space of practical augmentation techniques, including methods that improve upon previous state-of-the-art. This generalization has benefits beyond the promise of improved performance, revealing a number of types of mixed-example data augmentation that are radically different from those considered in prior work, which provides evidence that current theories for the effectiveness of such methods are incomplete and suggests that any such theory must explain a much broader phenomenon. Code is available at https://github.com/ceciliaresearch/MixedExample.
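The contrast the abstract draws, linear combinations versus the broader mixed-example space, can be sketched with two augmentations: a mixup-style convex combination, and a non-linear "vertical concatenation" of two examples. Both are illustrative instances of the general family, not the paper's exact formulations:

```python
def mixup(x1, y1, x2, y2, lam=0.5):
    """Linear mixed-example augmentation: a convex combination of two
    flattened examples and their (one-hot) labels."""
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

def vertical_concat(x1, y1, x2, y2):
    """A non-linear mixed example: the first half of one example
    stitched to the second half of another, labels weighted by the
    fraction of content kept from each (here 50/50)."""
    half = len(x1) // 2
    x = x1[:half] + x2[half:]
    y = [0.5 * a + 0.5 * b for a, b in zip(y1, y2)]
    return x, y
```

The second function is not a linear combination of pixel values, yet it still produces a valid mixed example, which is exactly the kind of generalization the paper explores.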
2306.00188
Guangyao Zheng
Guangyao Zheng, Shuhao Lai, Vladimir Braverman, Michael A. Jacobs, Vishwa S. Parekh
Multi-environment lifelong deep reinforcement learning for medical imaging
null
null
null
null
cs.LG cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Deep reinforcement learning (DRL) is increasingly being explored in medical imaging. However, the environments for medical imaging tasks are constantly evolving in terms of imaging orientations, imaging sequences, and pathologies. To that end, we developed a lifelong DRL framework, SERIL, to continually learn new tasks in changing imaging environments without catastrophic forgetting. SERIL was developed using a selective experience replay-based lifelong learning technique for the localization of five anatomical landmarks in brain MRI on a sequence of twenty-four different imaging environments. SERIL, when compared to two baseline setups, MERT (multi-environment, best case) and SERT (single-environment, worst case), demonstrated excellent performance, with an average distance of $9.90\pm7.35$ pixels from the desired landmark across all 120 tasks, compared to $10.29\pm9.07$ for MERT and $36.37\pm22.41$ for SERT ($p<0.05$), demonstrating excellent potential for continuously learning multiple tasks across dynamically changing imaging environments.
[ { "created": "Wed, 31 May 2023 21:06:42 GMT", "version": "v1" } ]
2023-06-02
[ [ "Zheng", "Guangyao", "" ], [ "Lai", "Shuhao", "" ], [ "Braverman", "Vladimir", "" ], [ "Jacobs", "Michael A.", "" ], [ "Parekh", "Vishwa S.", "" ] ]
Deep reinforcement learning (DRL) is increasingly being explored in medical imaging. However, the environments for medical imaging tasks are constantly evolving in terms of imaging orientations, imaging sequences, and pathologies. To that end, we developed a lifelong DRL framework, SERIL, to continually learn new tasks in changing imaging environments without catastrophic forgetting. SERIL was developed using a selective experience replay-based lifelong learning technique for the localization of five anatomical landmarks in brain MRI on a sequence of twenty-four different imaging environments. SERIL, when compared to two baseline setups, MERT (multi-environment, best case) and SERT (single-environment, worst case), demonstrated excellent performance, with an average distance of $9.90\pm7.35$ pixels from the desired landmark across all 120 tasks, compared to $10.29\pm9.07$ for MERT and $36.37\pm22.41$ for SERT ($p<0.05$), demonstrating excellent potential for continuously learning multiple tasks across dynamically changing imaging environments.
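Selective experience replay, the mechanism named above, can be sketched as a bounded buffer that evicts the stored experience most similar to an incoming one, so coverage of earlier environments is preserved. The similarity rule here (scalar distance) is an illustrative heuristic; SERIL's actual selection criterion may differ:

```python
class SelectiveReplayBuffer:
    """Minimal sketch of selective experience replay: when the buffer
    is full, overwrite the stored experience closest to the newcomer
    instead of the oldest one, keeping the buffer diverse."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []

    def add(self, experience):
        if len(self.buffer) < self.capacity:
            self.buffer.append(experience)
        else:
            # Evict the most redundant stored item (scalar distance here;
            # a real agent would compare full transitions or embeddings).
            nearest = min(range(len(self.buffer)),
                          key=lambda i: abs(self.buffer[i] - experience))
            self.buffer[nearest] = experience

buf = SelectiveReplayBuffer(3)
for x in [0.0, 10.0, 20.0, 10.5]:
    buf.add(x)
```

After the fourth insertion the buffer still spans the whole range of experiences, rather than drifting toward the most recent environment.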
2112.06374
Yunhai Han
Yunhai Han, Kelin Yu, Rahul Batra, Nathan Boyd, Chaitanya Mehta, Tuo Zhao, Yu She, Seth Hutchinson, and Ye Zhao
Learning Generalizable Vision-Tactile Robotic Grasping Strategy for Deformable Objects via Transformer
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reliable robotic grasping, especially with deformable objects such as fruits, remains a challenging task due to underactuated contact interactions with a gripper and unknown object dynamics and geometries. In this study, we propose a Transformer-based robotic grasping framework for rigid grippers that leverages tactile and visual information for safe object grasping. Specifically, the Transformer models learn physical feature embeddings from sensor feedback by performing two pre-defined explorative actions (pinching and sliding) and predict a grasping outcome through a multilayer perceptron (MLP) for a given grasping strength. Using these predictions, the gripper infers a safe grasping strength. Compared with convolutional recurrent networks, the Transformer models can capture long-term dependencies across image sequences and process spatial-temporal features simultaneously. We first benchmark the Transformer models on a public dataset for slip detection. Following that, we show that the Transformer models outperform a CNN+LSTM model in terms of grasping accuracy and computational efficiency. We also collect a new fruit grasping dataset and conduct online grasping experiments using the proposed framework for both seen and unseen fruits. In addition, we extend our model to objects with different shapes and demonstrate the effectiveness of our pre-trained model trained on our large-scale fruit dataset. Our code and dataset are public on GitHub.
[ { "created": "Mon, 13 Dec 2021 02:07:21 GMT", "version": "v1" }, { "created": "Mon, 20 Dec 2021 03:42:03 GMT", "version": "v2" }, { "created": "Tue, 8 Mar 2022 14:36:14 GMT", "version": "v3" }, { "created": "Wed, 4 Jan 2023 03:07:44 GMT", "version": "v4" }, { "created": "Wed, 17 May 2023 02:54:45 GMT", "version": "v5" }, { "created": "Sun, 23 Jul 2023 13:37:34 GMT", "version": "v6" } ]
2023-07-25
[ [ "Han", "Yunhai", "" ], [ "Yu", "Kelin", "" ], [ "Batra", "Rahul", "" ], [ "Boyd", "Nathan", "" ], [ "Mehta", "Chaitanya", "" ], [ "Zhao", "Tuo", "" ], [ "She", "Yu", "" ], [ "Hutchinson", "Seth", "" ], [ "Zhao", "Ye", "" ] ]
Reliable robotic grasping, especially with deformable objects such as fruits, remains a challenging task due to underactuated contact interactions with a gripper and unknown object dynamics and geometries. In this study, we propose a Transformer-based robotic grasping framework for rigid grippers that leverages tactile and visual information for safe object grasping. Specifically, the Transformer models learn physical feature embeddings from sensor feedback by performing two pre-defined explorative actions (pinching and sliding) and predict a grasping outcome through a multilayer perceptron (MLP) for a given grasping strength. Using these predictions, the gripper infers a safe grasping strength. Compared with convolutional recurrent networks, the Transformer models can capture long-term dependencies across image sequences and process spatial-temporal features simultaneously. We first benchmark the Transformer models on a public dataset for slip detection. Following that, we show that the Transformer models outperform a CNN+LSTM model in terms of grasping accuracy and computational efficiency. We also collect a new fruit grasping dataset and conduct online grasping experiments using the proposed framework for both seen and unseen fruits. In addition, we extend our model to objects with different shapes and demonstrate the effectiveness of our pre-trained model trained on our large-scale fruit dataset. Our code and dataset are public on GitHub.
2110.12334
Jingyuan Yang
Jingyuan Yang, Xinbo Gao, Leida Li, Xiumei Wang, and Jinshan Ding
SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network
Accepted by TIP
in IEEE Transactions on Image Processing, vol. 30, pp. 8686-8701, 2021
10.1109/TIP.2021.3118983
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual Emotion Analysis (VEA) aims at finding out how people feel emotionally towards different visual stimuli, and it has attracted great attention recently with the prevalence of sharing images on social networks. Since human emotion involves a highly complex and abstract cognitive process, it is difficult to infer visual emotions directly from holistic or regional features in affective images. It has been demonstrated in psychology that visual emotions are evoked by the interactions between objects as well as the interactions between objects and scenes within an image. Inspired by this, we propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images. To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features. Then, we conduct reasoning on the Emotion Graph using a Graph Convolutional Network (GCN), yielding emotion-enhanced object features. We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion of object features with the proposed scene-based attention mechanism. Extensive experiments and comparisons are conducted on eight public visual emotion datasets, and the results demonstrate that the proposed SOLVER consistently outperforms the state-of-the-art methods by a large margin. Ablation studies verify the effectiveness of our method, and visualizations prove its interpretability, which also brings new insight into exploring the mysteries of VEA. Notably, we further discuss SOLVER on three other potential datasets with extended experiments, where we validate the robustness of our method and note some of its limitations.
[ { "created": "Sun, 24 Oct 2021 02:41:41 GMT", "version": "v1" } ]
2021-10-26
[ [ "Yang", "Jingyuan", "" ], [ "Gao", "Xinbo", "" ], [ "Li", "Leida", "" ], [ "Wang", "Xiumei", "" ], [ "Ding", "Jinshan", "" ] ]
Visual Emotion Analysis (VEA) aims at finding out how people feel emotionally towards different visual stimuli, and it has attracted great attention recently with the prevalence of sharing images on social networks. Since human emotion involves a highly complex and abstract cognitive process, it is difficult to infer visual emotions directly from holistic or regional features in affective images. It has been demonstrated in psychology that visual emotions are evoked by the interactions between objects as well as the interactions between objects and scenes within an image. Inspired by this, we propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images. To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features. Then, we conduct reasoning on the Emotion Graph using a Graph Convolutional Network (GCN), yielding emotion-enhanced object features. We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion of object features with the proposed scene-based attention mechanism. Extensive experiments and comparisons are conducted on eight public visual emotion datasets, and the results demonstrate that the proposed SOLVER consistently outperforms the state-of-the-art methods by a large margin. Ablation studies verify the effectiveness of our method, and visualizations prove its interpretability, which also brings new insight into exploring the mysteries of VEA. Notably, we further discuss SOLVER on three other potential datasets with extended experiments, where we validate the robustness of our method and note some of its limitations.
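The GCN reasoning step over the Emotion Graph can be sketched as one round of neighbourhood aggregation: each node (object) averages its own feature with its neighbours'. This omits the learned weights and attention of the actual SOLVER network and is only the bare propagation rule:

```python
def gcn_layer(adj, feats):
    """One graph-convolution step (sketch): each node's output is the
    mean of its own scalar feature and its neighbours' features.
    `adj` is an adjacency matrix of 0/1 entries; self-loops are added."""
    n = len(feats)
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]] + [i]  # add self-loop
        out.append(sum(feats[j] for j in neigh) / len(neigh))
    return out

# A 3-node path graph: node 1 is connected to nodes 0 and 2.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
smoothed = gcn_layer(adj, [0.0, 3.0, 6.0])
```

After one step, each object's feature is "emotion-enhanced" by its graph neighbours, which is the intuition behind the paper's reasoning stage (here with scalar features instead of learned embeddings).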
1904.02348
Yanchao Wang
Yan-Chao Wang and Feng Lin and Hock-Soon Seah
Orthogonal Voronoi Diagram and Treemap
null
null
null
null
cs.DS cs.GR cs.HC cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel space partitioning strategy for implicit hierarchy visualization such that the new plot not only has a tidy layout similar to the treemap, but is also flexible to data changes similar to the Voronoi treemap. To achieve this, we define a new distance function and neighborhood relationship between sites so that space is divided by axis-aligned segments. A sweepline+skyline based heuristic algorithm is then proposed to allocate the partitioned spaces to form an orthogonal Voronoi diagram with orthogonal rectangles. To the best of our knowledge, this is the first use of a sweepline-based strategy for the Voronoi treemap. Moreover, we design a novel strategy to initialize the diagram status and modify the status update procedure so that the generation of our plot is more effective and efficient. We show that the proposed algorithm has O(n log(n)) complexity, the same as the state-of-the-art Voronoi treemap. We then show, via experiments on artificial and real-world datasets, the performance of our algorithm in terms of computation time, convergence rate, and aspect ratio. Finally, we discuss the pros and cons of our method and conclude.
[ { "created": "Thu, 4 Apr 2019 05:05:49 GMT", "version": "v1" } ]
2020-09-17
[ [ "Wang", "Yan-Chao", "" ], [ "Lin", "Feng", "" ], [ "Seah", "Hock-Soon", "" ] ]
In this paper, we propose a novel space partitioning strategy for implicit hierarchy visualization such that the new plot not only has a tidy layout similar to the treemap, but is also flexible to data changes similar to the Voronoi treemap. To achieve this, we define a new distance function and neighborhood relationship between sites so that space is divided by axis-aligned segments. A sweepline+skyline based heuristic algorithm is then proposed to allocate the partitioned spaces to form an orthogonal Voronoi diagram with orthogonal rectangles. To the best of our knowledge, this is the first use of a sweepline-based strategy for the Voronoi treemap. Moreover, we design a novel strategy to initialize the diagram status and modify the status update procedure so that the generation of our plot is more effective and efficient. We show that the proposed algorithm has O(n log(n)) complexity, the same as the state-of-the-art Voronoi treemap. We then show, via experiments on artificial and real-world datasets, the performance of our algorithm in terms of computation time, convergence rate, and aspect ratio. Finally, we discuss the pros and cons of our method and conclude.
1805.05132
Chunbiao Zhu
Chunbiao Zhu, Wenhao Zhang, Thomas H. Li, Ge Li
Exploiting the Value of the Center-dark Channel Prior for Salient Object Detection
Project website: https://chunbiaozhu.github.io/ACVR2017/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Saliency detection aims to detect the most attractive objects in images and is widely used as a foundation for various applications. In this paper, we propose a novel salient object detection algorithm for RGB-D images using center-dark channel priors. First, we generate an initial saliency map based on a color saliency map and a depth saliency map of a given RGB-D image. Then, we generate a center-dark channel map based on center saliency and dark channel priors. Finally, we fuse the initial saliency map with the center dark channel map to generate the final saliency map. Extensive evaluations over four benchmark datasets demonstrate that our proposed method performs favorably against most of the state-of-the-art approaches. Besides, we further discuss the application of the proposed algorithm in small target detection and demonstrate the universal value of center-dark channel priors in the field of object detection.
[ { "created": "Mon, 14 May 2018 12:02:20 GMT", "version": "v1" } ]
2018-05-15
[ [ "Zhu", "Chunbiao", "" ], [ "Zhang", "Wenhao", "" ], [ "Li", "Thomas H.", "" ], [ "Li", "Ge", "" ] ]
Saliency detection aims to detect the most attractive objects in images and is widely used as a foundation for various applications. In this paper, we propose a novel salient object detection algorithm for RGB-D images using center-dark channel priors. First, we generate an initial saliency map based on a color saliency map and a depth saliency map of a given RGB-D image. Then, we generate a center-dark channel map based on center saliency and dark channel priors. Finally, we fuse the initial saliency map with the center dark channel map to generate the final saliency map. Extensive evaluations over four benchmark datasets demonstrate that our proposed method performs favorably against most of the state-of-the-art approaches. Besides, we further discuss the application of the proposed algorithm in small target detection and demonstrate the universal value of center-dark channel priors in the field of object detection.
2210.05895
Haodong Duan
Haodong Duan, Jiaqi Wang, Kai Chen, Dahua Lin
DG-STGCN: Dynamic Spatial-Temporal Modeling for Skeleton-based Action Recognition
Codes will be released in https://github.com/kennymckormick/pyskl
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph convolution networks (GCN) have been widely used in skeleton-based action recognition. We note that existing GCN-based approaches primarily rely on prescribed graphical structures (i.e., a manually defined topology of skeleton joints), which limits their flexibility to capture complicated correlations between joints. To move beyond this limitation, we propose a new framework for skeleton-based action recognition, namely Dynamic Group Spatio-Temporal GCN (DG-STGCN). It consists of two modules, DG-GCN and DG-TCN, respectively, for spatial and temporal modeling. In particular, DG-GCN uses learned affinity matrices to capture dynamic graphical structures instead of relying on a prescribed one, while DG-TCN performs group-wise temporal convolutions with varying receptive fields and incorporates a dynamic joint-skeleton fusion module for adaptive multi-level temporal modeling. On a wide range of benchmarks, including NTURGB+D, Kinetics-Skeleton, BABEL, and Toyota SmartHome, DG-STGCN consistently outperforms state-of-the-art methods, often by a notable margin.
[ { "created": "Wed, 12 Oct 2022 03:17:37 GMT", "version": "v1" } ]
2022-10-13
[ [ "Duan", "Haodong", "" ], [ "Wang", "Jiaqi", "" ], [ "Chen", "Kai", "" ], [ "Lin", "Dahua", "" ] ]
Graph convolution networks (GCN) have been widely used in skeleton-based action recognition. We note that existing GCN-based approaches primarily rely on prescribed graphical structures (i.e., a manually defined topology of skeleton joints), which limits their flexibility to capture complicated correlations between joints. To move beyond this limitation, we propose a new framework for skeleton-based action recognition, namely Dynamic Group Spatio-Temporal GCN (DG-STGCN). It consists of two modules, DG-GCN and DG-TCN, respectively, for spatial and temporal modeling. In particular, DG-GCN uses learned affinity matrices to capture dynamic graphical structures instead of relying on a prescribed one, while DG-TCN performs group-wise temporal convolutions with varying receptive fields and incorporates a dynamic joint-skeleton fusion module for adaptive multi-level temporal modeling. On a wide range of benchmarks, including NTURGB+D, Kinetics-Skeleton, BABEL, and Toyota SmartHome, DG-STGCN consistently outperforms state-of-the-art methods, often by a notable margin.
2407.12999
Mihai Christodorescu
Mihai Christodorescu, Ryan Craven, Soheil Feizi, Neil Gong, Mia Hoffmann, Somesh Jha, Zhengyuan Jiang, Mehrdad Saberi Kamarposhti, John Mitchell, Jessica Newman, Emelia Probasco, Yanjun Qi, Khawaja Shams, Matthew Turek
Securing the Future of GenAI: Policy and Technology
null
null
null
null
cs.CY cs.AI cs.CR
http://creativecommons.org/licenses/by/4.0/
The rise of Generative AI (GenAI) brings about transformative potential across sectors, but its dual-use nature also amplifies risks. Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety. China, the United States (US), and the European Union (EU) are at the forefront with initiatives like the Management of Algorithmic Recommendations, the Executive Order, and the AI Act, respectively. However, the rapid evolution of GenAI capabilities often outpaces the development of comprehensive safety measures, creating a gap between regulatory needs and technical advancements. A workshop co-organized by Google, the University of Wisconsin-Madison (UW-Madison), and Stanford University aimed to bridge this gap between GenAI policy and technology. The diverse stakeholders of the GenAI space -- from the public and governments to academia and industry -- make any safety measures under consideration more complex, as both technical feasibility and regulatory guidance must be realized. This paper summarizes the discussions during the workshop, which addressed questions such as: How can regulation be designed without hindering technological progress? How can technology evolve to meet regulatory standards? The interplay between legislation and technology is a vast topic, and we do not claim that this paper is a comprehensive treatment of it. Rather, this paper is meant to capture findings from the workshop and, hopefully, to guide discussion on this topic.
[ { "created": "Tue, 21 May 2024 20:30:01 GMT", "version": "v1" } ]
2024-07-19
[ [ "Christodorescu", "Mihai", "" ], [ "Craven", "Ryan", "" ], [ "Feizi", "Soheil", "" ], [ "Gong", "Neil", "" ], [ "Hoffmann", "Mia", "" ], [ "Jha", "Somesh", "" ], [ "Jiang", "Zhengyuan", "" ], [ "Kamarposhti", "Mehrdad Saberi", "" ], [ "Mitchell", "John", "" ], [ "Newman", "Jessica", "" ], [ "Probasco", "Emelia", "" ], [ "Qi", "Yanjun", "" ], [ "Shams", "Khawaja", "" ], [ "Turek", "Matthew", "" ] ]
The rise of Generative AI (GenAI) brings about transformative potential across sectors, but its dual-use nature also amplifies risks. Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety. China, the United States (US), and the European Union (EU) are at the forefront with initiatives like the Management of Algorithmic Recommendations, the Executive Order, and the AI Act, respectively. However, the rapid evolution of GenAI capabilities often outpaces the development of comprehensive safety measures, creating a gap between regulatory needs and technical advancements. A workshop co-organized by Google, the University of Wisconsin-Madison (UW-Madison), and Stanford University aimed to bridge this gap between GenAI policy and technology. The diverse stakeholders of the GenAI space -- from the public and governments to academia and industry -- make any safety measures under consideration more complex, as both technical feasibility and regulatory guidance must be realized. This paper summarizes the discussions during the workshop, which addressed questions such as: How can regulation be designed without hindering technological progress? How can technology evolve to meet regulatory standards? The interplay between legislation and technology is a vast topic, and we do not claim that this paper is a comprehensive treatment of it. Rather, this paper is meant to capture findings from the workshop and, hopefully, to guide discussion on this topic.
2403.06173
Johann Huber
Johann Huber, Fran\c{c}ois H\'el\'enon, Mathilde Kappel, Elie Chelly, Mahdi Khoramshahi, Fa\"iz Ben Amar, St\'ephane Doncieux
Speeding up 6-DoF Grasp Sampling with Quality-Diversity
7 pages, 8 figures. Preprint version
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in AI have led to significant results in robotic learning, including natural language-conditioned planning and efficient optimization of controllers using generative models. However, interaction data remains the bottleneck for generalization. Getting data for grasping is a critical challenge, as this skill is required to complete many manipulation tasks. Quality-Diversity (QD) algorithms optimize a set of solutions to get diverse, high-performing solutions to a given problem. This paper investigates how QD can be combined with priors to speed up the generation of diverse grasp poses in simulation compared to standard 6-DoF grasp sampling schemes. Experiments conducted on 4 grippers with 2-to-5 fingers on standard objects show that QD outperforms commonly used methods by a large margin. Further experiments show that QD optimization automatically finds some efficient priors that are usually hard-coded. The deployment of generated grasps on a 2-finger gripper and an Allegro hand shows that the diversity produced maintains sim-to-real transferability. We believe these results to be a significant step toward the generation of large datasets that can lead to robust and generalizing robotic grasping policies.
[ { "created": "Sun, 10 Mar 2024 10:58:54 GMT", "version": "v1" } ]
2024-03-12
[ [ "Huber", "Johann", "" ], [ "Hélénon", "François", "" ], [ "Kappel", "Mathilde", "" ], [ "Chelly", "Elie", "" ], [ "Khoramshahi", "Mahdi", "" ], [ "Amar", "Faïz Ben", "" ], [ "Doncieux", "Stéphane", "" ] ]
Recent advances in AI have led to significant results in robotic learning, including natural language-conditioned planning and efficient optimization of controllers using generative models. However, interaction data remains the bottleneck for generalization. Getting data for grasping is a critical challenge, as this skill is required to complete many manipulation tasks. Quality-Diversity (QD) algorithms optimize a set of solutions to get diverse, high-performing solutions to a given problem. This paper investigates how QD can be combined with priors to speed up the generation of diverse grasp poses in simulation compared to standard 6-DoF grasp sampling schemes. Experiments conducted on 4 grippers with 2-to-5 fingers on standard objects show that QD outperforms commonly used methods by a large margin. Further experiments show that QD optimization automatically finds some efficient priors that are usually hard-coded. The deployment of generated grasps on a 2-finger gripper and an Allegro hand shows that the diversity produced maintains sim-to-real transferability. We believe these results to be a significant step toward the generation of large datasets that can lead to robust and generalizing robotic grasping policies.
1904.01569
Saining Xie
Saining Xie, Alexander Kirillov, Ross Girshick, Kaiming He
Exploring Randomly Wired Neural Networks for Image Recognition
Technical report
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural networks for image recognition have evolved through extensive manual design from simple chain-like models to structures with multiple wiring paths. The success of ResNets and DenseNets is due in large part to their innovative wiring plans. Now, neural architecture search (NAS) studies are exploring the joint optimization of wiring and operation types; however, the space of possible wirings is constrained and still driven by manual design despite being searched. In this paper, we explore a more diverse set of connectivity patterns through the lens of randomly wired neural networks. To do this, we first define the concept of a stochastic network generator that encapsulates the entire network generation process. Encapsulation provides a unified view of NAS and randomly wired networks. Then, we use three classical random graph models to generate randomly wired graphs for networks. The results are surprising: several variants of these random generators yield network instances that have competitive accuracy on the ImageNet benchmark. These results suggest that new efforts focusing on designing better network generators may lead to new breakthroughs by exploring less constrained search spaces with more room for novel design.
[ { "created": "Tue, 2 Apr 2019 17:57:16 GMT", "version": "v1" }, { "created": "Mon, 8 Apr 2019 17:50:26 GMT", "version": "v2" } ]
2019-04-09
[ [ "Xie", "Saining", "" ], [ "Kirillov", "Alexander", "" ], [ "Girshick", "Ross", "" ], [ "He", "Kaiming", "" ] ]
Neural networks for image recognition have evolved through extensive manual design from simple chain-like models to structures with multiple wiring paths. The success of ResNets and DenseNets is due in large part to their innovative wiring plans. Now, neural architecture search (NAS) studies are exploring the joint optimization of wiring and operation types; however, the space of possible wirings is constrained and still driven by manual design despite being searched. In this paper, we explore a more diverse set of connectivity patterns through the lens of randomly wired neural networks. To do this, we first define the concept of a stochastic network generator that encapsulates the entire network generation process. Encapsulation provides a unified view of NAS and randomly wired networks. Then, we use three classical random graph models to generate randomly wired graphs for networks. The results are surprising: several variants of these random generators yield network instances that have competitive accuracy on the ImageNet benchmark. These results suggest that new efforts focusing on designing better network generators may lead to new breakthroughs by exploring less constrained search spaces with more room for novel design.
1912.10702
Bin Dai
Bin Dai, Ziyu Wang, David Wipf
The Usual Suspects? Reassessing Blame for VAE Posterior Collapse
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In narrow asymptotic settings, Gaussian VAE models of continuous data have been shown to possess global optima aligned with ground-truth distributions. Even so, it is well known that poor solutions whereby the latent posterior collapses to an uninformative prior are sometimes obtained in practice. However, contrary to conventional wisdom that largely assigns blame for this phenomenon to the undue influence of KL-divergence regularization, we argue that posterior collapse is, at least in part, a direct consequence of bad local minima inherent to the loss surface of deep autoencoder networks. In particular, we prove that even small nonlinear perturbations of affine VAE decoder models can produce such minima, and in deeper models, analogous minima can force the VAE to behave like an aggressive truncation operator, provably discarding information along all latent dimensions in certain circumstances. Regardless, the underlying message here is not meant to undercut valuable existing explanations of posterior collapse, but rather to refine the discussion and elucidate alternative risk factors that may have been previously underappreciated.
[ { "created": "Mon, 23 Dec 2019 09:40:30 GMT", "version": "v1" } ]
2019-12-24
[ [ "Dai", "Bin", "" ], [ "Wang", "Ziyu", "" ], [ "Wipf", "David", "" ] ]
In narrow asymptotic settings, Gaussian VAE models of continuous data have been shown to possess global optima aligned with ground-truth distributions. Even so, it is well known that poor solutions whereby the latent posterior collapses to an uninformative prior are sometimes obtained in practice. However, contrary to conventional wisdom that largely assigns blame for this phenomenon to the undue influence of KL-divergence regularization, we argue that posterior collapse is, at least in part, a direct consequence of bad local minima inherent to the loss surface of deep autoencoder networks. In particular, we prove that even small nonlinear perturbations of affine VAE decoder models can produce such minima, and in deeper models, analogous minima can force the VAE to behave like an aggressive truncation operator, provably discarding information along all latent dimensions in certain circumstances. Regardless, the underlying message here is not meant to undercut valuable existing explanations of posterior collapse, but rather to refine the discussion and elucidate alternative risk factors that may have been previously underappreciated.
1910.09704
Vamsi Amalladinne
Vamsi K. Amalladinne, Jean-Francois Chamberland, Krishna R. Narayanan
An enhanced decoding algorithm for coded compressed sensing
Submitted to ICASSP2020
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coded compressed sensing is an algorithmic framework tailored to sparse recovery in very high-dimensional spaces. This framework was originally envisioned for the unsourced multiple access channel, a wireless paradigm attuned to machine-type communications. Coded compressed sensing uses a divide-and-conquer approach to break the sparse recovery task into sub-components whose dimensions are amenable to conventional compressed sensing solvers. The recovered fragments are then stitched together using a low-complexity decoder. This article introduces an enhanced decoding algorithm for coded compressed sensing in which fragment recovery and the stitching process are executed in tandem, passing information between them. This novel scheme leads to gains in performance and a significant reduction in computational complexity. This algorithmic opportunity stems from the realization that the parity structure inherent to coded compressed sensing can be used to dynamically restrict the search space of the subsequent recovery algorithm.
[ { "created": "Tue, 22 Oct 2019 00:17:37 GMT", "version": "v1" } ]
2019-10-23
[ [ "Amalladinne", "Vamsi K.", "" ], [ "Chamberland", "Jean-Francois", "" ], [ "Narayanan", "Krishna R.", "" ] ]
Coded compressed sensing is an algorithmic framework tailored to sparse recovery in very high-dimensional spaces. This framework was originally envisioned for the unsourced multiple access channel, a wireless paradigm attuned to machine-type communications. Coded compressed sensing uses a divide-and-conquer approach to break the sparse recovery task into sub-components whose dimensions are amenable to conventional compressed sensing solvers. The recovered fragments are then stitched together using a low-complexity decoder. This article introduces an enhanced decoding algorithm for coded compressed sensing in which fragment recovery and the stitching process are executed in tandem, passing information between them. This novel scheme leads to gains in performance and a significant reduction in computational complexity. This algorithmic opportunity stems from the realization that the parity structure inherent to coded compressed sensing can be used to dynamically restrict the search space of the subsequent recovery algorithm.
2006.10909
Yatin Chaudhary
Pankaj Gupta and Yatin Chaudhary and Thomas Runkler and Hinrich Sch\"utze
Neural Topic Modeling with Continual Lifelong Learning
Accepted at ICML2020 (13 pages, 11 figures, 9 tables)
null
null
null
cs.CL cs.IR cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lifelong learning has recently attracted attention in building machine learning systems that continually accumulate and transfer knowledge to help future learning. Unsupervised topic modeling has been popularly used to discover topics from document collections. However, the application of topic modeling is challenging due to data sparsity, e.g., in a small collection of (short) documents, and it thus generates incoherent topics and sub-optimal document representations. To address the problem, we propose a lifelong learning framework for neural topic modeling that can continuously process streams of document collections, accumulate topics, and guide future topic modeling tasks by knowledge transfer from several sources to better deal with sparse data. In the lifelong process, we particularly investigate jointly: (1) sharing generative homologies (latent topics) over a lifetime to transfer prior knowledge, and (2) minimizing catastrophic forgetting to retain past learning via novel selective data augmentation, co-training, and topic regularization approaches. Given a stream of document collections, we apply the proposed Lifelong Neural Topic Modeling (LNTM) framework in modeling three sparse document collections as future tasks and demonstrate improved performance quantified by perplexity, topic coherence, and information retrieval tasks.
[ { "created": "Fri, 19 Jun 2020 00:43:23 GMT", "version": "v1" }, { "created": "Tue, 27 Jun 2023 05:32:12 GMT", "version": "v2" } ]
2023-06-28
[ [ "Gupta", "Pankaj", "" ], [ "Chaudhary", "Yatin", "" ], [ "Runkler", "Thomas", "" ], [ "Schütze", "Hinrich", "" ] ]
Lifelong learning has recently attracted attention in building machine learning systems that continually accumulate and transfer knowledge to help future learning. Unsupervised topic modeling has been popularly used to discover topics from document collections. However, the application of topic modeling is challenging due to data sparsity, e.g., in a small collection of (short) documents, and it thus generates incoherent topics and sub-optimal document representations. To address the problem, we propose a lifelong learning framework for neural topic modeling that can continuously process streams of document collections, accumulate topics, and guide future topic modeling tasks by knowledge transfer from several sources to better deal with sparse data. In the lifelong process, we particularly investigate jointly: (1) sharing generative homologies (latent topics) over a lifetime to transfer prior knowledge, and (2) minimizing catastrophic forgetting to retain past learning via novel selective data augmentation, co-training, and topic regularization approaches. Given a stream of document collections, we apply the proposed Lifelong Neural Topic Modeling (LNTM) framework in modeling three sparse document collections as future tasks and demonstrate improved performance quantified by perplexity, topic coherence, and information retrieval tasks.
1806.07226
Wei Jiang
Wei Jiang, Yan Wu
DFNet: Semantic Segmentation on Panoramic Images with Dynamic Loss Weights and Residual Fusion Block
6 pages,3 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For self-driving and automatic parking, perception is a basic and critical technique; moreover, the detection of lane markings and parking slots is an important part of visual perception. In this paper, we use a semantic segmentation method to segment the areas and classify the classes of lane markings and parking slots on the panoramic surround view (PSV) dataset. We propose DFNet and make two main contributions: one is dynamic loss weights, and the other is a residual fusion block (RFB). The dynamic loss weights vary across classes and are calculated according to the pixel count of each class in a batch. The RFB is composed of two convolutional layers, one pooling layer, and a fusion layer that combines the feature maps by pixel-wise multiplication. We evaluate our method on the PSV dataset and achieve an advanced result.
[ { "created": "Mon, 11 Jun 2018 05:09:25 GMT", "version": "v1" } ]
2018-06-20
[ [ "Jiang", "Wei", "" ], [ "Wu", "Yan", "" ] ]
For self-driving and automatic parking, perception is a basic and critical technique; moreover, the detection of lane markings and parking slots is an important part of visual perception. In this paper, we use a semantic segmentation method to segment the areas and classify the classes of lane markings and parking slots on the panoramic surround view (PSV) dataset. We propose DFNet and make two main contributions: one is dynamic loss weights, and the other is a residual fusion block (RFB). The dynamic loss weights vary across classes and are calculated according to the pixel count of each class in a batch. The RFB is composed of two convolutional layers, one pooling layer, and a fusion layer that combines the feature maps by pixel-wise multiplication. We evaluate our method on the PSV dataset and achieve an advanced result.
2109.12008
Bruno Taill\'e
Bruno Taill\'e, Vincent Guigue, Geoffrey Scoutheeten and Patrick Gallinari
Separating Retention from Extraction in the Evaluation of End-to-end Relation Extraction
Accepted at EMNLP 2021
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art NLP models can adopt shallow heuristics that limit their generalization capability (McCoy et al., 2019). Such heuristics include lexical overlap with the training set in Named-Entity Recognition (Taill\'e et al., 2020) and Event or Type heuristics in Relation Extraction (Rosenman et al., 2020). In the more realistic end-to-end RE setting, we can expect yet another heuristic: the mere retention of training relation triples. In this paper, we propose several experiments confirming that retention of known facts is a key factor of performance on standard benchmarks. Furthermore, one experiment suggests that a pipeline model able to use intermediate type representations is less prone to over-rely on retention.
[ { "created": "Fri, 24 Sep 2021 15:04:39 GMT", "version": "v1" } ]
2021-09-27
[ [ "Taillé", "Bruno", "" ], [ "Guigue", "Vincent", "" ], [ "Scoutheeten", "Geoffrey", "" ], [ "Gallinari", "Patrick", "" ] ]
State-of-the-art NLP models can adopt shallow heuristics that limit their generalization capability (McCoy et al., 2019). Such heuristics include lexical overlap with the training set in Named-Entity Recognition (Taill\'e et al., 2020) and Event or Type heuristics in Relation Extraction (Rosenman et al., 2020). In the more realistic end-to-end RE setting, we can expect yet another heuristic: the mere retention of training relation triples. In this paper, we propose several experiments confirming that retention of known facts is a key factor of performance on standard benchmarks. Furthermore, one experiment suggests that a pipeline model able to use intermediate type representations is less prone to over-rely on retention.
2402.15174
Pablo Donato
Pablo Donato (PARTOUT)
The Flower Calculus
null
Leibniz International Proceedings in Informatics , 2024, 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024) (299), pp.5:1-5:24
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the flower calculus, a deep inference proof system for intuitionistic first-order logic inspired by Peirce's existential graphs. It works as a rewriting system over inductive objects called ''flowers'', that enjoy both a graphical interpretation as topological diagrams, and a textual presentation as nested sequents akin to coherent formulas. Importantly, the calculus dispenses completely with the traditional notion of symbolic connective, operating solely on nested flowers containing atomic predicates. We prove both the soundness of the full calculus and the completeness of an analytic fragment with respect to Kripke semantics. This provides to our knowledge the first analyticity result for a proof system based on existential graphs, adapting semantic cut-elimination techniques to a deep inference setting. Furthermore, the kernel of rules targeted by completeness is fully invertible, a desirable property for both automated and interactive proof search.
[ { "created": "Fri, 23 Feb 2024 08:13:22 GMT", "version": "v1" }, { "created": "Thu, 11 Jul 2024 08:45:34 GMT", "version": "v2" }, { "created": "Mon, 15 Jul 2024 08:29:07 GMT", "version": "v3" } ]
2024-07-16
[ [ "Donato", "Pablo", "", "PARTOUT" ] ]
We introduce the flower calculus, a deep inference proof system for intuitionistic first-order logic inspired by Peirce's existential graphs. It works as a rewriting system over inductive objects called ''flowers'', that enjoy both a graphical interpretation as topological diagrams, and a textual presentation as nested sequents akin to coherent formulas. Importantly, the calculus dispenses completely with the traditional notion of symbolic connective, operating solely on nested flowers containing atomic predicates. We prove both the soundness of the full calculus and the completeness of an analytic fragment with respect to Kripke semantics. This provides to our knowledge the first analyticity result for a proof system based on existential graphs, adapting semantic cut-elimination techniques to a deep inference setting. Furthermore, the kernel of rules targeted by completeness is fully invertible, a desirable property for both automated and interactive proof search.
1101.0698
Gerhard de Koning Gans
Gerhard de Koning Gans and Eric R. Verheul
Best Effort and Practice Activation Codes
15 pages, 3 figures, TrustBus 2011
null
null
null
cs.CR cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Activation Codes are used in many different digital services and known by many different names, including voucher, e-coupon and discount code. In this paper we focus on a specific class of Activation Codes (ACs) that are short, human-readable, fixed-length and represent value. Even though this class of codes is extensively used, there are no general guidelines for the design of Activation Code schemes. We discuss different methods that are used in practice and propose BEPAC, a new Activation Code scheme that provides both authenticity and confidentiality. The small message space of activation codes introduces some problems that are illustrated by an adaptive chosen-plaintext attack (CPA-2) on a general 3-round Feistel network of size 2^(2n). This attack recovers the complete permutation from at most 2^(n+2) plaintext-ciphertext pairs. For this reason, BEPAC is designed in such a way that authenticity and confidentiality are independent properties, i.e. loss of confidentiality does not imply loss of authenticity.
[ { "created": "Tue, 4 Jan 2011 10:41:27 GMT", "version": "v1" }, { "created": "Thu, 23 Jun 2011 11:26:51 GMT", "version": "v2" } ]
2011-06-24
[ [ "Gans", "Gerhard de Koning", "" ], [ "Verheul", "Eric R.", "" ] ]
Activation Codes are used in many different digital services and known by many different names, including voucher, e-coupon and discount code. In this paper we focus on a specific class of Activation Codes (ACs) that are short, human-readable, fixed-length and represent value. Even though this class of codes is extensively used, there are no general guidelines for the design of Activation Code schemes. We discuss different methods that are used in practice and propose BEPAC, a new Activation Code scheme that provides both authenticity and confidentiality. The small message space of activation codes introduces some problems that are illustrated by an adaptive chosen-plaintext attack (CPA-2) on a general 3-round Feistel network of size 2^(2n). This attack recovers the complete permutation from at most 2^(n+2) plaintext-ciphertext pairs. For this reason, BEPAC is designed in such a way that authenticity and confidentiality are independent properties, i.e. loss of confidentiality does not imply loss of authenticity.
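The 3-round Feistel network attacked in the abstract above can be sketched in a few lines. This is a minimal illustration of the construction only: the SHA-256-based round function and the key format are assumptions for demonstration, not BEPAC's actual design.

```python
# Toy 3-round Feistel network over 2n-bit blocks. Each round swaps the
# halves and XORs a keyed round function of the right half into the left.
import hashlib

def round_fn(key: bytes, half: int, n_bits: int) -> int:
    # Illustrative keyed round function (not from the paper).
    digest = hashlib.sha256(key + half.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % (1 << n_bits)

def feistel_encrypt(keys, block, n_bits):
    # Split the 2n-bit block into left/right n-bit halves.
    left = block >> n_bits
    right = block & ((1 << n_bits) - 1)
    for k in keys:  # one key per round; three keys give a 3-round network
        left, right = right, left ^ round_fn(k, right, n_bits)
    return (left << n_bits) | right

def feistel_decrypt(keys, block, n_bits):
    # Undo the rounds in reverse order.
    left = block >> n_bits
    right = block & ((1 << n_bits) - 1)
    for k in reversed(keys):
        left, right = right ^ round_fn(k, left, n_bits), left
    return (left << n_bits) | right
```

Because each round is invertible regardless of the round function, the network is always a permutation of the 2^(2n) block space, which is exactly the object the CPA-2 attack reconstructs.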
1504.07384
Andreas Pavlogiannis
Krishnendu Chatterjee and Rasmus Ibsen-Jensen and Andreas Pavlogiannis
Faster Algorithms for Quantitative Verification in Constant Treewidth Graphs
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the core algorithmic problems related to verification of systems with respect to three classical quantitative properties, namely, the mean-payoff property, the ratio property, and the minimum initial credit for energy property. The algorithmic problem, given a graph and a quantitative property, asks to compute the optimal value (the infimum value over all traces) from every node of the graph. We consider graphs with constant treewidth, and it is well-known that the control-flow graphs of most programs have constant treewidth. Let $n$ denote the number of nodes of a graph, $m$ the number of edges (for constant treewidth graphs $m=O(n)$) and $W$ the largest absolute value of the weights. Our main theoretical results are as follows. First, for constant treewidth graphs we present an algorithm that approximates the mean-payoff value within a multiplicative factor of $\epsilon$ in time $O(n \cdot \log (n/\epsilon))$ and linear space, as compared to the classical algorithms that require quadratic time. Second, for the ratio property we present an algorithm that for constant treewidth graphs works in time $O(n \cdot \log (|a\cdot b|))=O(n\cdot\log (n\cdot W))$, when the output is $\frac{a}{b}$, as compared to the previously best known algorithm with running time $O(n^2 \cdot \log (n\cdot W))$. Third, for the minimum initial credit problem we show that (i) for general graphs the problem can be solved in $O(n^2\cdot m)$ time and the associated decision problem can be solved in $O(n\cdot m)$ time, improving the previously known $O(n^3\cdot m\cdot \log (n\cdot W))$ and $O(n^2 \cdot m)$ bounds, respectively; and (ii) for constant treewidth graphs we present an algorithm that requires $O(n\cdot \log n)$ time, improving the previously known $O(n^4 \cdot \log (n \cdot W))$ bound.
[ { "created": "Tue, 28 Apr 2015 08:53:53 GMT", "version": "v1" } ]
2015-04-29
[ [ "Chatterjee", "Krishnendu", "" ], [ "Ibsen-Jensen", "Rasmus", "" ], [ "Pavlogiannis", "Andreas", "" ] ]
We consider the core algorithmic problems related to verification of systems with respect to three classical quantitative properties, namely, the mean-payoff property, the ratio property, and the minimum initial credit for energy property. The algorithmic problem, given a graph and a quantitative property, asks to compute the optimal value (the infimum value over all traces) from every node of the graph. We consider graphs with constant treewidth, and it is well-known that the control-flow graphs of most programs have constant treewidth. Let $n$ denote the number of nodes of a graph, $m$ the number of edges (for constant treewidth graphs $m=O(n)$) and $W$ the largest absolute value of the weights. Our main theoretical results are as follows. First, for constant treewidth graphs we present an algorithm that approximates the mean-payoff value within a multiplicative factor of $\epsilon$ in time $O(n \cdot \log (n/\epsilon))$ and linear space, as compared to the classical algorithms that require quadratic time. Second, for the ratio property we present an algorithm that for constant treewidth graphs works in time $O(n \cdot \log (|a\cdot b|))=O(n\cdot\log (n\cdot W))$, when the output is $\frac{a}{b}$, as compared to the previously best known algorithm with running time $O(n^2 \cdot \log (n\cdot W))$. Third, for the minimum initial credit problem we show that (i) for general graphs the problem can be solved in $O(n^2\cdot m)$ time and the associated decision problem can be solved in $O(n\cdot m)$ time, improving the previously known $O(n^3\cdot m\cdot \log (n\cdot W))$ and $O(n^2 \cdot m)$ bounds, respectively; and (ii) for constant treewidth graphs we present an algorithm that requires $O(n\cdot \log n)$ time, improving the previously known $O(n^4 \cdot \log (n \cdot W))$ bound.
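For intuition on the mean-payoff value mentioned above: on a strongly connected graph it reduces to the minimum cycle mean, computable by Karp's classical $O(n \cdot m)$ dynamic program. This is only the textbook baseline that the paper's constant-treewidth algorithms improve upon, sketched here under the assumption of an edge-list input.

```python
# Karp's algorithm: d[k][v] is the minimum weight of a walk of length
# exactly k ending at v (any start node allowed). The minimum cycle mean
# is then min over v of max over k of (d[n][v] - d[k][v]) / (n - k).
def min_cycle_mean(n, edges):
    """edges: list of (u, v, w) with 0 <= u, v < n.
    Returns the minimum mean weight over all cycles, or None if acyclic."""
    INF = float("inf")
    d = [[INF] * n for _ in range(n + 1)]
    for v in range(n):
        d[0][v] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = None
    for v in range(n):
        if d[n][v] == INF:
            continue  # no walk of length n ends here: v lies on no cycle
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        best = worst if best is None else min(best, worst)
    return best
```

The infimum over all traces from a node is attained by pumping the cheapest reachable cycle, which is why the cycle mean is the quantity of interest.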
2310.05597
Molly Petersen
Molly R. Petersen, Lonneke van der Plas
Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
While analogies are a common way to evaluate word embeddings in NLP, it is also of interest to investigate whether or not analogical reasoning is a task in itself that can be learned. In this paper, we test several ways to learn basic analogical reasoning, specifically focusing on analogies that are more typical of what is used to evaluate analogical reasoning in humans than those in commonly used NLP benchmarks. Our experiments find that models are able to learn analogical reasoning, even with a small amount of data. We additionally compare our models to a dataset with a human baseline, and find that after training, models approach human performance.
[ { "created": "Mon, 9 Oct 2023 10:34:38 GMT", "version": "v1" }, { "created": "Fri, 13 Oct 2023 15:07:28 GMT", "version": "v2" }, { "created": "Sun, 22 Oct 2023 09:17:30 GMT", "version": "v3" }, { "created": "Fri, 3 May 2024 10:22:13 GMT", "version": "v4" } ]
2024-05-06
[ [ "Petersen", "Molly R.", "" ], [ "van der Plas", "Lonneke", "" ] ]
While analogies are a common way to evaluate word embeddings in NLP, it is also of interest to investigate whether or not analogical reasoning is a task in itself that can be learned. In this paper, we test several ways to learn basic analogical reasoning, specifically focusing on analogies that are more typical of what is used to evaluate analogical reasoning in humans than those in commonly used NLP benchmarks. Our experiments find that models are able to learn analogical reasoning, even with a small amount of data. We additionally compare our models to a dataset with a human baseline, and find that after training, models approach human performance.
2206.06975
Min Li
Zhengyuan Shi, Min Li, Sadaf Khan, Liuzheng Wang, Naixing Wang, Yu Huang, Qiang Xu
DeepTPI: Test Point Insertion with Deep Reinforcement Learning
Accepted by ITC 2022
null
null
null
cs.LG cs.AI cs.AR
http://creativecommons.org/licenses/by/4.0/
Test point insertion (TPI) is a widely used technique for testability enhancement, especially for logic built-in self-test (LBIST) due to its relatively low fault coverage. In this paper, we propose a novel TPI approach based on deep reinforcement learning (DRL), named DeepTPI. Unlike previous learning-based solutions that formulate the TPI task as a supervised-learning problem, we train a novel DRL agent, instantiated as the combination of a graph neural network (GNN) and a Deep Q-Learning network (DQN), to maximize the test coverage improvement. Specifically, we model circuits as directed graphs and design a graph-based value network to estimate the action values for inserting different test points. The policy of the DRL agent is defined as selecting the action with the maximum value. Moreover, we apply the general node embeddings from a pre-trained model to enhance node features, and propose a dedicated testability-aware attention mechanism for the value network. Experimental results on circuits with various scales show that DeepTPI significantly improves test coverage compared to the commercial DFT tool. The code of this work is available at https://github.com/cure-lab/DeepTPI.
[ { "created": "Tue, 7 Jun 2022 14:13:42 GMT", "version": "v1" }, { "created": "Mon, 27 Jun 2022 13:56:05 GMT", "version": "v2" } ]
2022-06-29
[ [ "Shi", "Zhengyuan", "" ], [ "Li", "Min", "" ], [ "Khan", "Sadaf", "" ], [ "Wang", "Liuzheng", "" ], [ "Wang", "Naixing", "" ], [ "Huang", "Yu", "" ], [ "Xu", "Qiang", "" ] ]
Test point insertion (TPI) is a widely used technique for testability enhancement, especially for logic built-in self-test (LBIST) due to its relatively low fault coverage. In this paper, we propose a novel TPI approach based on deep reinforcement learning (DRL), named DeepTPI. Unlike previous learning-based solutions that formulate the TPI task as a supervised-learning problem, we train a novel DRL agent, instantiated as the combination of a graph neural network (GNN) and a Deep Q-Learning network (DQN), to maximize the test coverage improvement. Specifically, we model circuits as directed graphs and design a graph-based value network to estimate the action values for inserting different test points. The policy of the DRL agent is defined as selecting the action with the maximum value. Moreover, we apply the general node embeddings from a pre-trained model to enhance node features, and propose a dedicated testability-aware attention mechanism for the value network. Experimental results on circuits with various scales show that DeepTPI significantly improves test coverage compared to the commercial DFT tool. The code of this work is available at https://github.com/cure-lab/DeepTPI.
2405.01734
Jai Singhal
Ankush Jain, Rinav Gupta, Jai Singhal
Diabetic Retinopathy Detection Using Quantum Transfer Learning
14 pages, 12 figures and 5 tables
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Diabetic Retinopathy (DR), a prevalent complication in diabetes patients, can lead to vision impairment due to lesions formed on the retina. Detecting DR at an advanced stage often results in irreversible blindness. The traditional process of diagnosing DR through retina fundus images by ophthalmologists is not only time-intensive but also expensive. While classical transfer learning models have been widely adopted for computer-aided detection of DR, their high maintenance costs can hinder their detection efficiency. In contrast, Quantum Transfer Learning offers a more effective solution to this challenge. This approach is notably advantageous because it operates on heuristic principles, making it highly optimized for the task. Our proposed methodology leverages this hybrid quantum transfer learning technique to detect DR. To construct our model, we utilize the APTOS 2019 Blindness Detection dataset, available on Kaggle. We employ the pre-trained classical neural networks ResNet-18, ResNet-34, ResNet-50, ResNet-101, ResNet-152 and Inception V3 for the initial feature extraction. For the classification stage, we use a Variational Quantum Classifier. Our hybrid quantum model has shown remarkable results, achieving an accuracy of 97% for ResNet-18. This demonstrates that quantum computing, when integrated with quantum machine learning, can perform tasks with a level of power and efficiency unattainable by classical computers alone. By harnessing these advanced technologies, we can significantly improve the detection and diagnosis of Diabetic Retinopathy, potentially saving many from the risk of blindness. Keywords: Diabetic Retinopathy, Quantum Transfer Learning, Deep Learning
[ { "created": "Thu, 2 May 2024 21:09:39 GMT", "version": "v1" } ]
2024-05-06
[ [ "Jain", "Ankush", "" ], [ "Gupta", "Rinav", "" ], [ "Singhal", "Jai", "" ] ]
Diabetic Retinopathy (DR), a prevalent complication in diabetes patients, can lead to vision impairment due to lesions formed on the retina. Detecting DR at an advanced stage often results in irreversible blindness. The traditional process of diagnosing DR through retina fundus images by ophthalmologists is not only time-intensive but also expensive. While classical transfer learning models have been widely adopted for computer-aided detection of DR, their high maintenance costs can hinder their detection efficiency. In contrast, Quantum Transfer Learning offers a more effective solution to this challenge. This approach is notably advantageous because it operates on heuristic principles, making it highly optimized for the task. Our proposed methodology leverages this hybrid quantum transfer learning technique to detect DR. To construct our model, we utilize the APTOS 2019 Blindness Detection dataset, available on Kaggle. We employ the pre-trained classical neural networks ResNet-18, ResNet-34, ResNet-50, ResNet-101, ResNet-152 and Inception V3 for the initial feature extraction. For the classification stage, we use a Variational Quantum Classifier. Our hybrid quantum model has shown remarkable results, achieving an accuracy of 97% for ResNet-18. This demonstrates that quantum computing, when integrated with quantum machine learning, can perform tasks with a level of power and efficiency unattainable by classical computers alone. By harnessing these advanced technologies, we can significantly improve the detection and diagnosis of Diabetic Retinopathy, potentially saving many from the risk of blindness. Keywords: Diabetic Retinopathy, Quantum Transfer Learning, Deep Learning
1904.00948
Maxwell Scale Uwadia Osagie
Maxwell Scale Uwadia Osagie, Osatohanmwen Enagbonma and Amanda Iriagbonse Inyang
The Historical Perspective of Botnet tools
null
null
10.9734/CJAST/2019/v32i630040
null
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Bot, as it is popularly called, is an inherent attribute of a botnet tool; a botnet is a group of malicious tools acting as a single entity. History has it that botnets arose from the idea of simplifying message exchange within networking platforms. However, this has led to several botnet tools ravaging server environments in recent times. The working principle of these botnet tools is to find vulnerable client systems and thereafter steal valuable credentials. This work is part of a comprehensive research effort into botnet detection mechanisms, but this paper primarily looks at how the botnet began as a threat tool, its trend since inception, as well as a few approaches that have been used to curb it.
[ { "created": "Sat, 2 Mar 2019 22:49:20 GMT", "version": "v1" } ]
2019-04-02
[ [ "Osagie", "Maxwell Scale Uwadia", "" ], [ "Enagbonma", "Osatohanmwen", "" ], [ "Inyang", "Amanda Iriagbonse", "" ] ]
Bot, as it is popularly called, is an inherent attribute of a botnet tool; a botnet is a group of malicious tools acting as a single entity. History has it that botnets arose from the idea of simplifying message exchange within networking platforms. However, this has led to several botnet tools ravaging server environments in recent times. The working principle of these botnet tools is to find vulnerable client systems and thereafter steal valuable credentials. This work is part of a comprehensive research effort into botnet detection mechanisms, but this paper primarily looks at how the botnet began as a threat tool, its trend since inception, as well as a few approaches that have been used to curb it.
2407.18571
Mahmoud Salhab
Mahmoud Salhab and Haidar Harmanani
Speech Bandwidth Expansion Via High Fidelity Generative Adversarial Networks
null
null
null
null
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by/4.0/
Speech bandwidth expansion is crucial for expanding the frequency range of low-bandwidth speech signals, thereby improving audio quality, clarity and perceptibility in digital applications. Its applications span telephony, compression, text-to-speech synthesis, and speech recognition. This paper presents a novel approach using a high-fidelity generative adversarial network; unlike cascaded systems, our system is trained end-to-end on paired narrowband and wideband speech signals. Our method integrates various bandwidth upsampling ratios into a single unified model specifically designed for speech bandwidth expansion applications. Our approach exhibits robust performance across various bandwidth expansion factors, including those not encountered during training, demonstrating zero-shot capability. To the best of our knowledge, this is the first work to showcase this capability. The experimental results demonstrate that our method outperforms previous end-to-end approaches, as well as interpolation and traditional techniques, showcasing its effectiveness in practical speech enhancement applications.
[ { "created": "Fri, 26 Jul 2024 07:54:47 GMT", "version": "v1" }, { "created": "Mon, 29 Jul 2024 07:29:17 GMT", "version": "v2" } ]
2024-07-30
[ [ "Salhab", "Mahmoud", "" ], [ "Harmanani", "Haidar", "" ] ]
Speech bandwidth expansion is crucial for expanding the frequency range of low-bandwidth speech signals, thereby improving audio quality, clarity and perceptibility in digital applications. Its applications span telephony, compression, text-to-speech synthesis, and speech recognition. This paper presents a novel approach using a high-fidelity generative adversarial network; unlike cascaded systems, our system is trained end-to-end on paired narrowband and wideband speech signals. Our method integrates various bandwidth upsampling ratios into a single unified model specifically designed for speech bandwidth expansion applications. Our approach exhibits robust performance across various bandwidth expansion factors, including those not encountered during training, demonstrating zero-shot capability. To the best of our knowledge, this is the first work to showcase this capability. The experimental results demonstrate that our method outperforms previous end-to-end approaches, as well as interpolation and traditional techniques, showcasing its effectiveness in practical speech enhancement applications.
2009.13957
Jinting Wu
Jinting Wu, Yujia Zhang and Xiaoguang Zhao
A Prototype-Based Generalized Zero-Shot Learning Framework for Hand Gesture Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hand gesture recognition plays a significant role in human-computer interaction for understanding various human gestures and their intent. However, most prior works can only recognize gestures of limited labeled classes and fail to adapt to new categories. The task of Generalized Zero-Shot Learning (GZSL) for hand gesture recognition aims to address the above issue by leveraging semantic representations and detecting both seen and unseen class samples. In this paper, we propose an end-to-end prototype-based GZSL framework for hand gesture recognition which consists of two branches. The first branch is a prototype-based detector that learns gesture representations and determines whether an input sample belongs to a seen or unseen category. The second branch is a zero-shot label predictor which takes the features of unseen classes as input and outputs predictions through a learned mapping mechanism between the feature and the semantic space. We further establish a hand gesture dataset that specifically targets this GZSL task, and comprehensive experiments on this dataset demonstrate the effectiveness of our proposed approach on recognizing both seen and unseen gestures.
[ { "created": "Tue, 29 Sep 2020 12:18:35 GMT", "version": "v1" } ]
2020-09-30
[ [ "Wu", "Jinting", "" ], [ "Zhang", "Yujia", "" ], [ "Zhao", "Xiaoguang", "" ] ]
Hand gesture recognition plays a significant role in human-computer interaction for understanding various human gestures and their intent. However, most prior works can only recognize gestures of limited labeled classes and fail to adapt to new categories. The task of Generalized Zero-Shot Learning (GZSL) for hand gesture recognition aims to address the above issue by leveraging semantic representations and detecting both seen and unseen class samples. In this paper, we propose an end-to-end prototype-based GZSL framework for hand gesture recognition which consists of two branches. The first branch is a prototype-based detector that learns gesture representations and determines whether an input sample belongs to a seen or unseen category. The second branch is a zero-shot label predictor which takes the features of unseen classes as input and outputs predictions through a learned mapping mechanism between the feature and the semantic space. We further establish a hand gesture dataset that specifically targets this GZSL task, and comprehensive experiments on this dataset demonstrate the effectiveness of our proposed approach on recognizing both seen and unseen gestures.
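The first branch described above (a prototype-based detector that flags unseen-class samples) can be illustrated with a minimal nearest-prototype sketch. Here prototypes are simply class means of assumed feature vectors and the threshold is fixed; the paper instead learns both end-to-end, so this only conveys the decision rule.

```python
# Prototype-plus-threshold seen/unseen detection: a sample whose feature
# is farther than `threshold` from every class prototype is treated as
# belonging to an unseen category.
import numpy as np

def fit_prototypes(features, labels):
    # One prototype per seen class: the mean feature of that class.
    classes = sorted(set(labels))
    labels = np.asarray(labels)
    return {c: features[labels == c].mean(axis=0) for c in classes}

def detect(feature, prototypes, threshold):
    dists = {c: np.linalg.norm(feature - p) for c, p in prototypes.items()}
    c_best = min(dists, key=dists.get)
    if dists[c_best] > threshold:
        return None       # unseen-class sample: defer to the zero-shot branch
    return c_best         # seen-class prediction
```

In the full framework, samples rejected here would be routed to the second branch, the zero-shot label predictor.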
2005.06803
Limin Wang
Zhaoyang Liu, Limin Wang, Wayne Wu, Chen Qian, Tong Lu
TAM: Temporal Adaptive Module for Video Recognition
ICCV 2021 camera-ready version. Code is available at https://github.com/liu-zhy/temporal-adaptive-module
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video data exhibits complex temporal dynamics due to various factors such as camera motion, speed variation, and different activities. To effectively capture this diverse motion pattern, this paper presents a new temporal adaptive module ({\bf TAM}) to generate video-specific temporal kernels based on its own feature map. TAM proposes a unique two-level adaptive modeling scheme by decoupling the dynamic kernel into a location sensitive importance map and a location invariant aggregation weight. The importance map is learned in a local temporal window to capture short-term information, while the aggregation weight is generated from a global view with a focus on long-term structure. TAM is a modular block and could be integrated into 2D CNNs to yield a powerful video architecture (TANet) with a very small extra computational cost. The extensive experiments on Kinetics-400 and Something-Something datasets demonstrate that our TAM outperforms other temporal modeling methods consistently, and achieves state-of-the-art performance at similar complexity. The code is available at \url{ https://github.com/liu-zhy/temporal-adaptive-module}.
[ { "created": "Thu, 14 May 2020 08:22:45 GMT", "version": "v1" }, { "created": "Wed, 14 Oct 2020 02:00:40 GMT", "version": "v2" }, { "created": "Wed, 18 Aug 2021 12:19:06 GMT", "version": "v3" } ]
2021-08-19
[ [ "Liu", "Zhaoyang", "" ], [ "Wang", "Limin", "" ], [ "Wu", "Wayne", "" ], [ "Qian", "Chen", "" ], [ "Lu", "Tong", "" ] ]
Video data exhibits complex temporal dynamics due to various factors such as camera motion, speed variation, and different activities. To effectively capture this diverse motion pattern, this paper presents a new temporal adaptive module ({\bf TAM}) to generate video-specific temporal kernels based on its own feature map. TAM proposes a unique two-level adaptive modeling scheme by decoupling the dynamic kernel into a location sensitive importance map and a location invariant aggregation weight. The importance map is learned in a local temporal window to capture short-term information, while the aggregation weight is generated from a global view with a focus on long-term structure. TAM is a modular block and could be integrated into 2D CNNs to yield a powerful video architecture (TANet) with a very small extra computational cost. The extensive experiments on Kinetics-400 and Something-Something datasets demonstrate that our TAM outperforms other temporal modeling methods consistently, and achieves state-of-the-art performance at similar complexity. The code is available at \url{ https://github.com/liu-zhy/temporal-adaptive-module}.
2404.04405
Haiguang Li
Haiguang Li, Usama Pervaiz, Micha{\l} Matuszak, Robert Kamara, Gilles Roux, Trausti Thormundsson, Joseph Antognini
Dynamic Switch Layers For Unsupervised Learning
Initial Submission
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
On-device machine learning (ODML) enables intelligent applications on resource-constrained devices. However, power consumption poses a major challenge, forcing a trade-off between model accuracy and power efficiency that often limits model complexity. The previously established Gated Compression (GC) layers offer a solution, enabling power efficiency without sacrificing model performance by selectively gating samples that lack signals of interest. However, their reliance on ground truth labels limits GC layers to supervised tasks. This work introduces the Dynamic Switch Layer (DSL), extending the benefits of GC layers to unsupervised learning scenarios, and maintaining power efficiency without the need for labeled data. The DSL builds upon the GC architecture, leveraging a dynamic pathway selection, and adapting model complexity in response to the innate structure of the data. We integrate the DSL into the SoundStream architecture and demonstrate that by routing up to 80% of samples through a lightweight pass we achieve a 12.3x reduction in the amount of computation performed and a 20.9x reduction in model size. This reduces the on-device inference latency by up to 26.5% and improves power efficiency by up to 21.4% without impacting model performance.
[ { "created": "Fri, 5 Apr 2024 21:03:11 GMT", "version": "v1" } ]
2024-04-09
[ [ "Li", "Haiguang", "" ], [ "Pervaiz", "Usama", "" ], [ "Matuszak", "Michał", "" ], [ "Kamara", "Robert", "" ], [ "Roux", "Gilles", "" ], [ "Thormundsson", "Trausti", "" ], [ "Antognini", "Joseph", "" ] ]
On-device machine learning (ODML) enables intelligent applications on resource-constrained devices. However, power consumption poses a major challenge, forcing a trade-off between model accuracy and power efficiency that often limits model complexity. The previously established Gated Compression (GC) layers offer a solution, enabling power efficiency without sacrificing model performance by selectively gating samples that lack signals of interest. However, their reliance on ground truth labels limits GC layers to supervised tasks. This work introduces the Dynamic Switch Layer (DSL), extending the benefits of GC layers to unsupervised learning scenarios, and maintaining power efficiency without the need for labeled data. The DSL builds upon the GC architecture, leveraging a dynamic pathway selection, and adapting model complexity in response to the innate structure of the data. We integrate the DSL into the SoundStream architecture and demonstrate that by routing up to 80% of samples through a lightweight pass we achieve a 12.3x reduction in the amount of computation performed and a 20.9x reduction in model size. This reduces the on-device inference latency by up to 26.5% and improves power efficiency by up to 21.4% without impacting model performance.
2406.08188
Bruno Roy
Bruno Roy
Attention-Based Learning for Fluid State Interpolation and Editing in a Time-Continuous Framework
5 pages, 3 figures, submitted and accepted to SIGGRAPH
null
10.1145/3641234.3671085
null
cs.LG cs.GR
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this work, we introduce FluidsFormer: a transformer-based approach for fluid interpolation within a continuous-time framework. By combining the capabilities of PITT and a residual neural network (RNN), we analytically predict the physical properties of the fluid state. This enables us to interpolate substep frames between simulated keyframes, enhancing the temporal smoothness and sharpness of animations. We demonstrate promising results for smoke interpolation and conduct initial experiments on liquids.
[ { "created": "Wed, 12 Jun 2024 13:19:42 GMT", "version": "v1" } ]
2024-06-13
[ [ "Roy", "Bruno", "" ] ]
In this work, we introduce FluidsFormer: a transformer-based approach for fluid interpolation within a continuous-time framework. By combining the capabilities of PITT and a residual neural network (RNN), we analytically predict the physical properties of the fluid state. This enables us to interpolate substep frames between simulated keyframes, enhancing the temporal smoothness and sharpness of animations. We demonstrate promising results for smoke interpolation and conduct initial experiments on liquids.
2310.08659
Yixiao Li
Yixiao Li, Yifan Yu, Chen Liang, Pengcheng He, Nikos Karampatziakis, Weizhu Chen, Tuo Zhao
LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantization is an indispensable technique for serving Large Language Models (LLMs) and has recently found its way into LoRA fine-tuning. In this work we focus on the scenario where quantization and LoRA fine-tuning are applied together on a pre-trained model. In such cases it is common to observe a consistent gap in the performance on downstream tasks between full fine-tuning and the quantization-plus-LoRA fine-tuning approach. In response, we propose LoftQ (LoRA-Fine-Tuning-aware Quantization), a novel quantization framework that simultaneously quantizes an LLM and finds a proper low-rank initialization for LoRA fine-tuning. Such an initialization alleviates the discrepancy between the quantized and full-precision model and significantly improves generalization in downstream tasks. We evaluate our method on natural language understanding, question answering, summarization, and natural language generation tasks. Experiments show that our method is highly effective and outperforms existing quantization methods, especially in the challenging 2-bit and 2/4-bit mixed precision regimes. The code is available at https://github.com/yxli2123/LoftQ.
[ { "created": "Thu, 12 Oct 2023 18:34:08 GMT", "version": "v1" }, { "created": "Tue, 17 Oct 2023 01:35:10 GMT", "version": "v2" }, { "created": "Mon, 23 Oct 2023 02:49:42 GMT", "version": "v3" }, { "created": "Tue, 28 Nov 2023 16:06:59 GMT", "version": "v4" } ]
2023-11-29
[ [ "Li", "Yixiao", "" ], [ "Yu", "Yifan", "" ], [ "Liang", "Chen", "" ], [ "He", "Pengcheng", "" ], [ "Karampatziakis", "Nikos", "" ], [ "Chen", "Weizhu", "" ], [ "Zhao", "Tuo", "" ] ]
Quantization is an indispensable technique for serving Large Language Models (LLMs) and has recently found its way into LoRA fine-tuning. In this work we focus on the scenario where quantization and LoRA fine-tuning are applied together on a pre-trained model. In such cases it is common to observe a consistent gap in the performance on downstream tasks between full fine-tuning and the quantization-plus-LoRA fine-tuning approach. In response, we propose LoftQ (LoRA-Fine-Tuning-aware Quantization), a novel quantization framework that simultaneously quantizes an LLM and finds a proper low-rank initialization for LoRA fine-tuning. Such an initialization alleviates the discrepancy between the quantized and full-precision model and significantly improves generalization in downstream tasks. We evaluate our method on natural language understanding, question answering, summarization, and natural language generation tasks. Experiments show that our method is highly effective and outperforms existing quantization methods, especially in the challenging 2-bit and 2/4-bit mixed precision regimes. The code is available at https://github.com/yxli2123/LoftQ.
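The joint quantization plus low-rank initialization described above can be conveyed as an alternating loop: quantize the current residual target, then refit a rank-r correction by SVD so that Q + AB tracks W. The uniform quantizer, the rank, and the iteration count below are illustrative assumptions, not the paper's exact recipe.

```python
# Alternating quantize-then-SVD initialization in the spirit of LoftQ:
# find Q (quantized) and low-rank A, B such that Q + A @ B ~= W, which
# gives LoRA a better starting point than A = B = 0 on a quantized model.
import numpy as np

def uniform_quantize(w, bits=2):
    # Toy uniform quantizer over the matrix's own range (an assumption;
    # real deployments use e.g. NF4 or per-channel scales).
    levels = 2 ** bits
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / (levels - 1) if hi > lo else 1.0
    return np.round((w - lo) / scale) * scale + lo

def loftq_init(w, rank=4, bits=2, iters=5):
    a = np.zeros((w.shape[0], rank))
    b = np.zeros((rank, w.shape[1]))
    for _ in range(iters):
        q = uniform_quantize(w - a @ b, bits)        # quantize current residual target
        u, s, vt = np.linalg.svd(w - q, full_matrices=False)
        a = u[:, :rank] * s[:rank]                   # best rank-r factors of the error
        b = vt[:rank]
    return q, a, b
```

The point of the loop is that Q + A @ B approximates the full-precision weight more closely than quantization alone, shrinking the gap the abstract describes.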
2208.13714
Qingsong Yan
Qingsong Yan, Qiang Wang, Kaiyong Zhao, Bo Li, Xiaowen Chu, Fei Deng
SphereDepth: Panorama Depth Estimation from Spherical Domain
Conference accept at 3DV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A panorama image can simultaneously capture complete information about the surrounding environment and has many advantages in virtual tourism, games, robotics, etc. However, current panorama depth estimation methods cannot completely solve the problems of distortion and discontinuity caused by the commonly used projection methods. This paper proposes SphereDepth, a novel panorama depth estimation method that predicts the depth directly on the spherical mesh without projection preprocessing. The core idea is to establish the relationship between the panorama image and the spherical mesh and then use a deep neural network to extract features on the spherical domain to predict depth. To address the efficiency challenges brought by the high-resolution panorama data, we introduce two hyper-parameters for the proposed spherical mesh processing framework to balance the inference speed and accuracy. Validated on three public panorama datasets, SphereDepth achieves comparable results with the state-of-the-art methods of panorama depth estimation. Benefiting from the spherical domain setting, SphereDepth can generate a high-quality point cloud and significantly alleviate the issues of distortion and discontinuity.
[ { "created": "Mon, 29 Aug 2022 16:50:19 GMT", "version": "v1" }, { "created": "Tue, 30 Aug 2022 03:01:52 GMT", "version": "v2" }, { "created": "Sun, 4 Dec 2022 16:51:00 GMT", "version": "v3" } ]
2022-12-06
[ [ "Yan", "Qingsong", "" ], [ "Wang", "Qiang", "" ], [ "Zhao", "Kaiyong", "" ], [ "Li", "Bo", "" ], [ "Chu", "Xiaowen", "" ], [ "Deng", "Fei", "" ] ]
A panorama image can simultaneously capture complete information about the surrounding environment and has many advantages in virtual tourism, games, robotics, etc. However, current panorama depth estimation methods cannot completely solve the problems of distortion and discontinuity caused by the commonly used projection methods. This paper proposes SphereDepth, a novel panorama depth estimation method that predicts the depth directly on the spherical mesh without projection preprocessing. The core idea is to establish the relationship between the panorama image and the spherical mesh and then use a deep neural network to extract features on the spherical domain to predict depth. To address the efficiency challenges brought by the high-resolution panorama data, we introduce two hyper-parameters for the proposed spherical mesh processing framework to balance the inference speed and accuracy. Validated on three public panorama datasets, SphereDepth achieves comparable results with the state-of-the-art methods of panorama depth estimation. Benefiting from the spherical domain setting, SphereDepth can generate a high-quality point cloud and significantly alleviate the issues of distortion and discontinuity.
2312.03897
Tiago Pimentel
Tiago Pimentel, Clara Meister, Ethan Gotlieb Wilcox, Kyle Mahowald, Ryan Cotterell
Revisiting the Optimality of Word Lengths
Published at EMNLP 2023
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Zipf (1935) posited that wordforms are optimized to minimize utterances' communicative costs. Under the assumption that cost is given by an utterance's length, he supported this claim by showing that words' lengths are inversely correlated with their frequencies. Communicative cost, however, can be operationalized in different ways. Piantadosi et al. (2011) claim that cost should be measured as the distance between an utterance's information rate and channel capacity, which we dub the channel capacity hypothesis (CCH) here. Following this logic, they then proposed that a word's length should be proportional to the expected value of its surprisal (negative log-probability in context). In this work, we show that Piantadosi et al.'s derivation does not minimize CCH's cost, but rather a lower bound, which we term CCH-lower. We propose a novel derivation, suggesting an improved way to minimize CCH's cost. Under this method, we find that a language's word lengths should instead be proportional to the surprisal's expectation plus its variance-to-mean ratio. Experimentally, we compare these three communicative cost functions: Zipf's, CCH-lower, and CCH. Across 13 languages and several experimental settings, we find that length is better predicted by frequency than either of the other hypotheses. In fact, when surprisal's expectation, or expectation plus variance-to-mean ratio, is estimated using better language models, it leads to worse word length predictions. We take these results as evidence that Zipf's longstanding hypothesis holds.
[ { "created": "Wed, 6 Dec 2023 20:41:47 GMT", "version": "v1" } ]
2023-12-08
[ [ "Pimentel", "Tiago", "" ], [ "Meister", "Clara", "" ], [ "Wilcox", "Ethan Gotlieb", "" ], [ "Mahowald", "Kyle", "" ], [ "Cotterell", "Ryan", "" ] ]
Zipf (1935) posited that wordforms are optimized to minimize utterances' communicative costs. Under the assumption that cost is given by an utterance's length, he supported this claim by showing that words' lengths are inversely correlated with their frequencies. Communicative cost, however, can be operationalized in different ways. Piantadosi et al. (2011) claim that cost should be measured as the distance between an utterance's information rate and channel capacity, which we dub the channel capacity hypothesis (CCH) here. Following this logic, they then proposed that a word's length should be proportional to the expected value of its surprisal (negative log-probability in context). In this work, we show that Piantadosi et al.'s derivation does not minimize CCH's cost, but rather a lower bound, which we term CCH-lower. We propose a novel derivation, suggesting an improved way to minimize CCH's cost. Under this method, we find that a language's word lengths should instead be proportional to the surprisal's expectation plus its variance-to-mean ratio. Experimentally, we compare these three communicative cost functions: Zipf's, CCH-lower, and CCH. Across 13 languages and several experimental settings, we find that length is better predicted by frequency than either of the other hypotheses. In fact, when surprisal's expectation, or expectation plus variance-to-mean ratio, is estimated using better language models, it leads to worse word length predictions. We take these results as evidence that Zipf's longstanding hypothesis holds.
2202.04696
Amir Masoud Jafarpisheh
Amir Masoud Jafarpisheh, Mahtab Mirmohseni, and Mohammad Ali Maddah-Ali
Distributed Attribute-based Private Access Control
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In attribute-based access control, users with certain verified attributes will gain access to some particular data. Concerned with the privacy of the users' attributes, we study the problem of distributed attribute-based private access control (DAPAC) with multiple authorities, where each authority will learn and verify only one of the attributes. To investigate its fundamental limits, we introduce an information theoretic DAPAC framework, with $N \in \mathbb{N}$, $N\geq 2$, replicated non-colluding servers (authorities) and some users. Each user has an attribute vector $\mathbf{v^*}=(v_1^*, ..., v_N^*)$ of dimension $N$ and is eligible to retrieve a message $W^{\mathbf{v}^*}$, available in all servers. Each server $n\in [N]$ is able to only observe and verify the $n$'th attribute of a user. In response, it sends a function of its data to the user. The system must satisfy the following conditions: (1) Correctness: the user with attribute vector $\mathbf{v^*}$ is able to retrieve his intended message $W^{\mathbf{v}^*}$ from the servers' responses, (2) Data Secrecy: the user will not learn anything about the other messages, (3) Attribute Privacy: each Server~$n$ learns nothing beyond attribute $n$ of the user. The capacity of the DAPAC is defined as the ratio of the file size and the aggregated size of the responses, maximized over all feasible schemes. We obtain a lower bound on the capacity of this problem by proposing an achievable algorithm with rate $\frac{1}{2K}$, where $K$ is the size of the alphabet of each attribute.
[ { "created": "Wed, 9 Feb 2022 19:44:53 GMT", "version": "v1" } ]
2022-02-11
[ [ "Jafarpisheh", "Amir Masoud", "" ], [ "Mirmohseni", "Mahtab", "" ], [ "Maddah-Ali", "Mohammad Ali", "" ] ]
In attribute-based access control, users with certain verified attributes will gain access to some particular data. Concerned with the privacy of the users' attributes, we study the problem of distributed attribute-based private access control (DAPAC) with multiple authorities, where each authority will learn and verify only one of the attributes. To investigate its fundamental limits, we introduce an information theoretic DAPAC framework, with $N \in \mathbb{N}$, $N\geq 2$, replicated non-colluding servers (authorities) and some users. Each user has an attribute vector $\mathbf{v^*}=(v_1^*, ..., v_N^*)$ of dimension $N$ and is eligible to retrieve a message $W^{\mathbf{v}^*}$, available in all servers. Each server $n\in [N]$ is able to only observe and verify the $n$'th attribute of a user. In response, it sends a function of its data to the user. The system must satisfy the following conditions: (1) Correctness: the user with attribute vector $\mathbf{v^*}$ is able to retrieve his intended message $W^{\mathbf{v}^*}$ from the servers' responses, (2) Data Secrecy: the user will not learn anything about the other messages, (3) Attribute Privacy: each Server~$n$ learns nothing beyond attribute $n$ of the user. The capacity of the DAPAC is defined as the ratio of the file size and the aggregated size of the responses, maximized over all feasible schemes. We obtain a lower bound on the capacity of this problem by proposing an achievable algorithm with rate $\frac{1}{2K}$, where $K$ is the size of the alphabet of each attribute.
2104.07414
AnChen Li
Anchen Li, Bo Yang, Hongxu Chen, Guandong Xu
Hyperbolic Neural Collaborative Recommender
arXiv admin note: substantial text overlap with arXiv:2102.09389
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper explores the use of hyperbolic geometry and deep learning techniques for recommendation. We present Hyperbolic Neural Collaborative Recommender (HNCR), a deep hyperbolic representation learning method that exploits mutual semantic relations among users/items for collaborative filtering (CF) tasks. HNCR contains two major phases: neighbor construction and recommendation framework. The first phase introduces a neighbor construction strategy to construct a semantic neighbor set for each user and item according to the user-item historical interaction. In the second phase, we develop a deep framework based on hyperbolic geometry to integrate constructed neighbor sets into recommendation. Via a series of extensive experiments, we show that HNCR outperforms its Euclidean counterpart and state-of-the-art baselines.
[ { "created": "Thu, 15 Apr 2021 12:28:09 GMT", "version": "v1" } ]
2021-04-16
[ [ "Li", "Anchen", "" ], [ "Yang", "Bo", "" ], [ "Chen", "Hongxu", "" ], [ "Xu", "Guandong", "" ] ]
This paper explores the use of hyperbolic geometry and deep learning techniques for recommendation. We present Hyperbolic Neural Collaborative Recommender (HNCR), a deep hyperbolic representation learning method that exploits mutual semantic relations among users/items for collaborative filtering (CF) tasks. HNCR contains two major phases: neighbor construction and recommendation framework. The first phase introduces a neighbor construction strategy to construct a semantic neighbor set for each user and item according to the user-item historical interaction. In the second phase, we develop a deep framework based on hyperbolic geometry to integrate constructed neighbor sets into recommendation. Via a series of extensive experiments, we show that HNCR outperforms its Euclidean counterpart and state-of-the-art baselines.
2404.15882
Eunsu Baek
Eunsu Baek, Keondo Park, Jiyoon Kim and Hyung-Sin Kim
Unexplored Faces of Robustness and Out-of-Distribution: Covariate Shifts in Environment and Sensor Domains
Published as a conference paper at CVPR 2024
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computer vision applications predict on digital images acquired by a camera from physical scenes through light. However, conventional robustness benchmarks rely on perturbations in digitized images, diverging from distribution shifts occurring in the image acquisition process. To bridge this gap, we introduce a new distribution shift dataset, ImageNet-ES, comprising variations in environmental and camera sensor factors by directly capturing 202k images with a real camera in a controllable testbed. With the new dataset, we evaluate out-of-distribution (OOD) detection and model robustness. We find that existing OOD detection methods do not cope with the covariate shifts in ImageNet-ES, implying that the definition and detection of OOD should be revisited to embrace real-world distribution shifts. We also observe that the model becomes more robust in both ImageNet-C and -ES by learning environment and sensor variations in addition to existing digital augmentations. Lastly, our results suggest that effective shift mitigation via camera sensor control can significantly improve performance without increasing model size. With these findings, our benchmark may aid future research on robustness, OOD, and camera sensor control for computer vision. Our code and dataset are available at https://github.com/Edw2n/ImageNet-ES.
[ { "created": "Wed, 24 Apr 2024 13:59:19 GMT", "version": "v1" }, { "created": "Thu, 25 Apr 2024 05:38:52 GMT", "version": "v2" } ]
2024-04-26
[ [ "Baek", "Eunsu", "" ], [ "Park", "Keondo", "" ], [ "Kim", "Jiyoon", "" ], [ "Kim", "Hyung-Sin", "" ] ]
Computer vision applications predict on digital images acquired by a camera from physical scenes through light. However, conventional robustness benchmarks rely on perturbations in digitized images, diverging from distribution shifts occurring in the image acquisition process. To bridge this gap, we introduce a new distribution shift dataset, ImageNet-ES, comprising variations in environmental and camera sensor factors by directly capturing 202k images with a real camera in a controllable testbed. With the new dataset, we evaluate out-of-distribution (OOD) detection and model robustness. We find that existing OOD detection methods do not cope with the covariate shifts in ImageNet-ES, implying that the definition and detection of OOD should be revisited to embrace real-world distribution shifts. We also observe that the model becomes more robust in both ImageNet-C and -ES by learning environment and sensor variations in addition to existing digital augmentations. Lastly, our results suggest that effective shift mitigation via camera sensor control can significantly improve performance without increasing model size. With these findings, our benchmark may aid future research on robustness, OOD, and camera sensor control for computer vision. Our code and dataset are available at https://github.com/Edw2n/ImageNet-ES.
2304.05368
Yun Zhao
Yuqing Wang, Yun Zhao, Linda Petzold
Are Large Language Models Ready for Healthcare? A Comparative Study on Clinical Language Understanding
24 pages
Machine Learning for Healthcare Conference, PMLR, 2023
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Large language models (LLMs) have made significant progress in various domains, including healthcare. However, the specialized nature of clinical language understanding tasks presents unique challenges and limitations that warrant further investigation. In this study, we conduct a comprehensive evaluation of state-of-the-art LLMs, namely GPT-3.5, GPT-4, and Bard, within the realm of clinical language understanding tasks. These tasks span a diverse range, including named entity recognition, relation extraction, natural language inference, semantic textual similarity, document classification, and question-answering. We also introduce a novel prompting strategy, self-questioning prompting (SQP), tailored to enhance LLMs' performance by eliciting informative questions and answers pertinent to the clinical scenarios at hand. Our evaluation underscores the significance of task-specific learning strategies and prompting techniques for improving LLMs' effectiveness in healthcare-related tasks. Additionally, our in-depth error analysis on the challenging relation extraction task offers valuable insights into error distribution and potential avenues for improvement using SQP. Our study sheds light on the practical implications of employing LLMs in the specialized domain of healthcare, serving as a foundation for future research and the development of potential applications in healthcare settings.
[ { "created": "Sun, 9 Apr 2023 16:31:47 GMT", "version": "v1" }, { "created": "Thu, 13 Apr 2023 05:32:44 GMT", "version": "v2" }, { "created": "Sun, 30 Jul 2023 19:09:02 GMT", "version": "v3" } ]
2023-08-01
[ [ "Wang", "Yuqing", "" ], [ "Zhao", "Yun", "" ], [ "Petzold", "Linda", "" ] ]
Large language models (LLMs) have made significant progress in various domains, including healthcare. However, the specialized nature of clinical language understanding tasks presents unique challenges and limitations that warrant further investigation. In this study, we conduct a comprehensive evaluation of state-of-the-art LLMs, namely GPT-3.5, GPT-4, and Bard, within the realm of clinical language understanding tasks. These tasks span a diverse range, including named entity recognition, relation extraction, natural language inference, semantic textual similarity, document classification, and question-answering. We also introduce a novel prompting strategy, self-questioning prompting (SQP), tailored to enhance LLMs' performance by eliciting informative questions and answers pertinent to the clinical scenarios at hand. Our evaluation underscores the significance of task-specific learning strategies and prompting techniques for improving LLMs' effectiveness in healthcare-related tasks. Additionally, our in-depth error analysis on the challenging relation extraction task offers valuable insights into error distribution and potential avenues for improvement using SQP. Our study sheds light on the practical implications of employing LLMs in the specialized domain of healthcare, serving as a foundation for future research and the development of potential applications in healthcare settings.
1901.03728
Valsamis Ntouskos
Fiora Pirri, Lorenzo Mauro, Edoardo Alati, Valsamis Ntouskos, Mahdieh Izadpanahkakhk, Elham Omrani
Anticipation and next action forecasting in video: an end-to-end model with memory
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Action anticipation and forecasting in videos do not require a hat-trick, as long as there are signs in the context from which to foresee how actions are going to be deployed. Capturing these signs is hard because the context includes the past. We propose an end-to-end network for action anticipation and forecasting with memory, to both anticipate the current action and foresee the next one. Experiments on action sequence datasets show excellent results indicating that training on histories with a dynamic memory can significantly improve forecasting performance.
[ { "created": "Fri, 11 Jan 2019 19:47:53 GMT", "version": "v1" } ]
2019-01-15
[ [ "Pirri", "Fiora", "" ], [ "Mauro", "Lorenzo", "" ], [ "Alati", "Edoardo", "" ], [ "Ntouskos", "Valsamis", "" ], [ "Izadpanahkakhk", "Mahdieh", "" ], [ "Omrani", "Elham", "" ] ]
Action anticipation and forecasting in videos do not require a hat-trick, as long as there are signs in the context from which to foresee how actions are going to be deployed. Capturing these signs is hard because the context includes the past. We propose an end-to-end network for action anticipation and forecasting with memory, to both anticipate the current action and foresee the next one. Experiments on action sequence datasets show excellent results indicating that training on histories with a dynamic memory can significantly improve forecasting performance.
2405.16494
Hao Hao
Hao Hao, Xiaoqun Zhang, Bingdong Li and Aimin Zhou
A First Look at Kolmogorov-Arnold Networks in Surrogate-assisted Evolutionary Algorithms
null
null
null
null
cs.NE
http://creativecommons.org/licenses/by/4.0/
Surrogate-assisted Evolutionary Algorithm (SAEA) is an essential method for solving expensive optimization problems. Utilizing surrogate models to substitute the optimization function can significantly reduce reliance on function evaluations during the search process, thereby lowering the optimization costs. The construction of surrogate models is a critical component in SAEAs, with numerous machine learning algorithms playing a pivotal role in the model-building phase. This paper introduces Kolmogorov-Arnold Networks (KANs) as surrogate models within SAEAs, examining their application and effectiveness. We employ KANs for regression and classification tasks, focusing on the selection of promising solutions during the search process, which consequently reduces the number of expensive function evaluations. Experimental results indicate that KANs demonstrate commendable performance within SAEAs, effectively decreasing the number of function calls and enhancing the optimization efficiency. The relevant code is publicly accessible and can be found in the GitHub repository.
[ { "created": "Sun, 26 May 2024 09:12:44 GMT", "version": "v1" } ]
2024-05-28
[ [ "Hao", "Hao", "" ], [ "Zhang", "Xiaoqun", "" ], [ "Li", "Bingdong", "" ], [ "Zhou", "Aimin", "" ] ]
Surrogate-assisted Evolutionary Algorithm (SAEA) is an essential method for solving expensive optimization problems. Utilizing surrogate models to substitute the optimization function can significantly reduce reliance on function evaluations during the search process, thereby lowering the optimization costs. The construction of surrogate models is a critical component in SAEAs, with numerous machine learning algorithms playing a pivotal role in the model-building phase. This paper introduces Kolmogorov-Arnold Networks (KANs) as surrogate models within SAEAs, examining their application and effectiveness. We employ KANs for regression and classification tasks, focusing on the selection of promising solutions during the search process, which consequently reduces the number of expensive function evaluations. Experimental results indicate that KANs demonstrate commendable performance within SAEAs, effectively decreasing the number of function calls and enhancing the optimization efficiency. The relevant code is publicly accessible and can be found in the GitHub repository.
1810.03611
Marc-Etienne Brunet
Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, Richard Zemel
Understanding the Origins of Bias in Word Embeddings
null
null
null
null
cs.LG cs.CY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The power of machine learning systems not only promises great technical progress, but risks societal harm. As a recent example, researchers have shown that popular word embedding algorithms exhibit stereotypical biases, such as gender bias. The widespread use of these algorithms in machine learning systems, from automated translation services to curriculum vitae scanners, can amplify stereotypes in important contexts. Although methods have been developed to measure these biases and alter word embeddings to mitigate their biased representations, there is a lack of understanding in how word embedding bias depends on the training data. In this work, we develop a technique for understanding the origins of bias in word embeddings. Given a word embedding trained on a corpus, our method identifies how perturbing the corpus will affect the bias of the resulting embedding. This can be used to trace the origins of word embedding bias back to the original training documents. Using our method, one can investigate trends in the bias of the underlying corpus and identify subsets of documents whose removal would most reduce bias. We demonstrate our techniques on both a New York Times and Wikipedia corpus and find that our influence function-based approximations are very accurate.
[ { "created": "Mon, 8 Oct 2018 18:00:00 GMT", "version": "v1" }, { "created": "Fri, 7 Jun 2019 18:26:54 GMT", "version": "v2" } ]
2019-06-11
[ [ "Brunet", "Marc-Etienne", "" ], [ "Alkalay-Houlihan", "Colleen", "" ], [ "Anderson", "Ashton", "" ], [ "Zemel", "Richard", "" ] ]
The power of machine learning systems not only promises great technical progress, but risks societal harm. As a recent example, researchers have shown that popular word embedding algorithms exhibit stereotypical biases, such as gender bias. The widespread use of these algorithms in machine learning systems, from automated translation services to curriculum vitae scanners, can amplify stereotypes in important contexts. Although methods have been developed to measure these biases and alter word embeddings to mitigate their biased representations, there is a lack of understanding in how word embedding bias depends on the training data. In this work, we develop a technique for understanding the origins of bias in word embeddings. Given a word embedding trained on a corpus, our method identifies how perturbing the corpus will affect the bias of the resulting embedding. This can be used to trace the origins of word embedding bias back to the original training documents. Using our method, one can investigate trends in the bias of the underlying corpus and identify subsets of documents whose removal would most reduce bias. We demonstrate our techniques on both a New York Times and Wikipedia corpus and find that our influence function-based approximations are very accurate.
1601.01597
Jonathan S Turner
Jonathan Turner
Grafalgo - A Library of Graph Algorithms and Supporting Data Structures (revised)
null
null
null
WUCSE-2016-01
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This report provides an (updated) overview of {\sl Grafalgo}, an open-source library of graph algorithms and the data structures used to implement them. The programs in this library were originally written to support a graduate class in advanced data structures and algorithms at Washington University. Because the code's primary purpose was pedagogical, it was written to be as straightforward as possible, while still being highly efficient. Grafalgo is implemented in C++ and incorporates some features of C++11. The library is available on an open-source basis and may be downloaded from https://code.google.com/p/grafalgo/. Source code documentation is at www.arl.wustl.edu/\textasciitilde jst/doc/grafalgo. While not designed as production code, the library is suitable for use in larger systems, so long as its limitations are understood. The readability of the code also makes it relatively straightforward to extend it for other purposes.
[ { "created": "Thu, 7 Jan 2016 16:57:17 GMT", "version": "v1" } ]
2016-01-08
[ [ "Turner", "Jonathan", "" ] ]
This report provides an (updated) overview of {\sl Grafalgo}, an open-source library of graph algorithms and the data structures used to implement them. The programs in this library were originally written to support a graduate class in advanced data structures and algorithms at Washington University. Because the code's primary purpose was pedagogical, it was written to be as straightforward as possible, while still being highly efficient. Grafalgo is implemented in C++ and incorporates some features of C++11. The library is available on an open-source basis and may be downloaded from https://code.google.com/p/grafalgo/. Source code documentation is at www.arl.wustl.edu/\textasciitilde jst/doc/grafalgo. While not designed as production code, the library is suitable for use in larger systems, so long as its limitations are understood. The readability of the code also makes it relatively straightforward to extend it for other purposes.
0911.4329
Ki-Hoon Lee
Ki-Hoon Lee, Kyu-Young Whang, Wook-Shin Han, and Min-Soo Kim
Structural Consistency: Enabling XML Keyword Search to Eliminate Spurious Results Consistently
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
XML keyword search is a user-friendly way to query XML data using only keywords. In XML keyword search, to achieve high precision without sacrificing recall, it is important to remove spurious results not intended by the user. Efforts to eliminate spurious results have enjoyed some success by using the concepts of LCA or its variants, SLCA and MLCA. However, existing methods still could find many spurious results. The fundamental cause for the occurrence of spurious results is that the existing methods try to eliminate spurious results locally without global examination of all the query results and, accordingly, some spurious results are not consistently eliminated. In this paper, we propose a novel keyword search method that removes spurious results consistently by exploiting the new concept of structural consistency.
[ { "created": "Mon, 23 Nov 2009 06:45:37 GMT", "version": "v1" }, { "created": "Tue, 24 Nov 2009 01:00:10 GMT", "version": "v2" } ]
2009-11-24
[ [ "Lee", "Ki-Hoon", "" ], [ "Whang", "Kyu-Young", "" ], [ "Han", "Wook-Shin", "" ], [ "Kim", "Min-Soo", "" ] ]
XML keyword search is a user-friendly way to query XML data using only keywords. In XML keyword search, to achieve high precision without sacrificing recall, it is important to remove spurious results not intended by the user. Efforts to eliminate spurious results have enjoyed some success by using the concepts of LCA or its variants, SLCA and MLCA. However, existing methods still could find many spurious results. The fundamental cause for the occurrence of spurious results is that the existing methods try to eliminate spurious results locally without global examination of all the query results and, accordingly, some spurious results are not consistently eliminated. In this paper, we propose a novel keyword search method that removes spurious results consistently by exploiting the new concept of structural consistency.
2303.15663
Ankita Agarwal
Ankita Agarwal (1), Tanvi Banerjee (1), Joy Gockel (2), Saniya LeBlanc (3), Joe Walker (4), John Middendorf (4) ((1) Wright State University, (2) Colorado School of Mines, (3) The George Washington University, (4) Open Additive, LLC)
Predicting Thermoelectric Power Factor of Bismuth Telluride During Laser Powder Bed Fusion Additive Manufacturing
8 pages, 2 figures, 2 tables, accepted at Data Science for Smart Manufacturing and Healthcare workshop (DS2-MH) at SIAM International Conference on Data Mining (SDM23) conference
null
null
null
cs.LG cs.CE
http://creativecommons.org/licenses/by/4.0/
An additive manufacturing (AM) process, like laser powder bed fusion, allows for the fabrication of objects by spreading and melting powder in layers until a freeform part shape is created. In order to improve the properties of the material involved in the AM process, it is important to predict the material characterization property as a function of the processing conditions. In thermoelectric materials, the power factor is a measure of how efficiently the material can convert heat to electricity. While earlier works have predicted the material characterization properties of different thermoelectric materials using various techniques, implementation of machine learning models to predict the power factor of bismuth telluride (Bi2Te3) during the AM process has not been explored. This is important as Bi2Te3 is a standard material for low temperature applications. Thus, we used data about manufacturing processing parameters involved and in-situ sensor monitoring data collected during AM of Bi2Te3, to train different machine learning models in order to predict its thermoelectric power factor. We implemented supervised machine learning techniques using 80% training and 20% test data and further used the permutation feature importance method to identify important processing parameters and in-situ sensor features which were best at predicting power factor of the material. Ensemble-based methods like random forest, AdaBoost classifier, and bagging classifier performed the best in predicting power factor with the highest accuracy of 90% achieved by the bagging classifier model. Additionally, we found the top 15 processing parameters and in-situ sensor features to characterize the material manufacturing property like power factor. These features could further be optimized to maximize power factor of the thermoelectric material and improve the quality of the products built using this material.
[ { "created": "Tue, 28 Mar 2023 01:09:15 GMT", "version": "v1" } ]
2023-03-29
[ [ "Agarwal", "Ankita", "" ], [ "Banerjee", "Tanvi", "" ], [ "Gockel", "Joy", "" ], [ "LeBlanc", "Saniya", "" ], [ "Walker", "Joe", "" ], [ "Middendorf", "John", "" ] ]
An additive manufacturing (AM) process, like laser powder bed fusion, allows for the fabrication of objects by spreading and melting powder in layers until a freeform part shape is created. In order to improve the properties of the material involved in the AM process, it is important to predict material characterization properties as a function of the processing conditions. In thermoelectric materials, the power factor is a measure of how efficiently the material can convert heat to electricity. While earlier works have predicted the material characterization properties of different thermoelectric materials using various techniques, the implementation of machine learning models to predict the power factor of bismuth telluride (Bi2Te3) during the AM process has not been explored. This is important as Bi2Te3 is a standard material for low-temperature applications. Thus, we used data on the manufacturing processing parameters and in-situ sensor monitoring data collected during AM of Bi2Te3 to train different machine learning models to predict its thermoelectric power factor. We implemented supervised machine learning techniques using an 80% training and 20% test split and further used the permutation feature importance method to identify the processing parameters and in-situ sensor features that were best at predicting the power factor of the material. Ensemble-based methods like random forest, the AdaBoost classifier, and the bagging classifier performed best in predicting the power factor, with the highest accuracy of 90% achieved by the bagging classifier model. Additionally, we identified the top 15 processing parameters and in-situ sensor features that best characterize material manufacturing properties like the power factor. These features could further be optimized to maximize the power factor of the thermoelectric material and improve the quality of the products built using this material.
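The pipeline this abstract describes (an 80/20 split, a bagging classifier, permutation feature importance, a top-15 feature ranking) can be sketched on synthetic stand-in data; the dataset, feature count, and hyperparameters below are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch of the abstract's pipeline on synthetic stand-in data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for processing-parameter / in-situ sensor features.
X, y = make_classification(n_samples=500, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)  # the 80%/20% split from the paper

clf = BaggingClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
acc = clf.score(X_test, y_test)

# Rank features by permutation importance and keep the top 15.
imp = permutation_importance(clf, X_test, y_test, n_repeats=5, random_state=0)
top15 = np.argsort(imp.importances_mean)[::-1][:15]
```

Permutation importance measures how much held-out accuracy drops when one feature column is shuffled, which is why it is computed on the test split here.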
2103.12195
Shayan Zargari
Shayan Zargari, Ata Khalili, Qingqing Wu, Mohammad Robat Mili, and Derrick Wing Kwan Ng
Max-Min Fair Energy-Efficient Beamforming Design for Intelligent Reflecting Surface-Aided SWIPT Systems with Non-linear Energy Harvesting Model
Minor Revision by IEEE TVT
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper considers an intelligent reflecting sur-face (IRS)-aided simultaneous wireless information and power transfer (SWIPT) network, where multiple users decode data and harvest energy from the transmitted signal of a transmit-ter. The proposed design framework exploits the cost-effective IRS to establish favorable communication environment to improve the fair energy efficient. In particular, we study the max-min energy efficiency (EE) of the system by jointly designing the transmit information and energy beamforming at the base station (BS), phase shifts at the IRS, as well as the power splitting (PS) ratio at all users subject to the minimum rate, minimum harvested energy, and transmit power constraints. The formulated problem is non-convex and thus challenging to be solved. We propose two algorithms namely penalty-based and inner approximation (IA)-based to handle the non-convexity of the optimization problem. As such, we divide the original problem into two sub-problems and apply the alternating optimization (AO) algorithm for both proposed algorithms to handle it iteratively. In particular, in the penalty-based algorithm for the first sub-problem, the semi-definite relaxation (SDR) technique, difference of convex functions (DC) programming, majorization-minimization (MM) approach, and fractional programming theory are exploited to transform the non-convex optimization problem into a convex form that can be addressed efficiently. For the second sub-problem, a penalty-based approach is proposed to handle the optimization on the phase shifts introduced by the IRS with the proposed algorithms. For the IA-based method, we optimize jointly beamforming vectors and phase shifts while the PS ratio is solved optimally in the first sub-problem...
[ { "created": "Mon, 22 Mar 2021 21:57:51 GMT", "version": "v1" } ]
2021-03-24
[ [ "Zargari", "Shayan", "" ], [ "Khalili", "Ata", "" ], [ "Wu", "Qingqing", "" ], [ "Mili", "Mohammad Robat", "" ], [ "Ng", "Derrick Wing Kwan", "" ] ]
This paper considers an intelligent reflecting surface (IRS)-aided simultaneous wireless information and power transfer (SWIPT) network, where multiple users decode data and harvest energy from the transmitted signal of a transmitter. The proposed design framework exploits the cost-effective IRS to establish a favorable communication environment and improve fair energy efficiency. In particular, we study the max-min energy efficiency (EE) of the system by jointly designing the transmit information and energy beamforming at the base station (BS), the phase shifts at the IRS, and the power splitting (PS) ratios at all users, subject to minimum rate, minimum harvested energy, and transmit power constraints. The formulated problem is non-convex and thus challenging to solve. We propose two algorithms, namely a penalty-based and an inner approximation (IA)-based algorithm, to handle the non-convexity of the optimization problem. As such, we divide the original problem into two sub-problems and apply the alternating optimization (AO) algorithm in both proposed algorithms to solve them iteratively. In particular, in the penalty-based algorithm for the first sub-problem, the semi-definite relaxation (SDR) technique, difference of convex functions (DC) programming, the majorization-minimization (MM) approach, and fractional programming theory are exploited to transform the non-convex optimization problem into a convex form that can be addressed efficiently. For the second sub-problem, a penalty-based approach is proposed to handle the optimization of the phase shifts introduced by the IRS. For the IA-based method, we jointly optimize the beamforming vectors and phase shifts, while the PS ratio is solved optimally in the first sub-problem...
2406.03793
Yue Xu
Yue Xu, Zhilin Lin, Yusong Qiu, Cewu Lu, Yong-Lu Li
Low-Rank Similarity Mining for Multimodal Dataset Distillation
Accepted at ICML 2024
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Though dataset distillation has witnessed rapid development in recent years, the distillation of multimodal data, e.g., image-text pairs, poses unique and under-explored challenges. Unlike unimodal data, image-text contrastive learning (ITC) data lack inherent categorization and should instead place greater emphasis on modality correspondence. In this work, we propose Low-Rank Similarity Mining (LoRS) for multimodal dataset distillation, that concurrently distills a ground truth similarity matrix with image-text pairs, and leverages low-rank factorization for efficiency and scalability. The proposed approach brings significant improvement to the existing algorithms, marking a significant contribution to the field of visual-language dataset distillation. We advocate adopting LoRS as a foundational synthetic data setup for image-text dataset distillation. Our code is available at https://github.com/silicx/LoRS_Distill.
[ { "created": "Thu, 6 Jun 2024 07:05:20 GMT", "version": "v1" } ]
2024-06-07
[ [ "Xu", "Yue", "" ], [ "Lin", "Zhilin", "" ], [ "Qiu", "Yusong", "" ], [ "Lu", "Cewu", "" ], [ "Li", "Yong-Lu", "" ] ]
Though dataset distillation has witnessed rapid development in recent years, the distillation of multimodal data, e.g., image-text pairs, poses unique and under-explored challenges. Unlike unimodal data, image-text contrastive learning (ITC) data lack inherent categorization and should instead place greater emphasis on modality correspondence. In this work, we propose Low-Rank Similarity Mining (LoRS) for multimodal dataset distillation, which concurrently distills a ground-truth similarity matrix with image-text pairs and leverages low-rank factorization for efficiency and scalability. The proposed approach brings significant improvement over existing algorithms, marking a notable contribution to the field of visual-language dataset distillation. We advocate adopting LoRS as a foundational synthetic data setup for image-text dataset distillation. Our code is available at https://github.com/silicx/LoRS_Distill.
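The low-rank idea can be made concrete: instead of the identity similarity target implicit in standard ITC training, a dense target of the form alpha*I + U V^T is learned through its factors. A minimal numpy sketch of the parameter saving follows; the dimensions, rank, and variable names are our illustrative choices, not the paper's.

```python
import numpy as np

# Similarity target S = alpha*I + U @ V.T with rank r << n; learning U and V
# costs O(n*r) parameters instead of the O(n^2) of a full dense matrix.
n, r = 8, 2
rng = np.random.default_rng(0)
U = rng.normal(size=(n, r))
V = rng.normal(size=(n, r))
alpha = 1.0

S = alpha * np.eye(n) + U @ V.T   # dense n x n target, materialized on demand
params = U.size + V.size + 1      # 2*n*r + 1 learnable parameters
```

Even at this toy size the factorized form stores 33 numbers instead of 64; the gap widens quadratically as n grows.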
1907.05231
Shuai Ma
Shuai Ma and Jia Yuan Yu
Variance-Based Risk Estimations in Markov Processes via Transformation with State Lumping
7 pages, 7 figures, SMC 2019 accepted. arXiv admin note: text overlap with arXiv:1907.04269
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Variance plays a crucial role in risk-sensitive reinforcement learning, and most risk measures can be analyzed via variance. In this paper, we consider two law-invariant risks as examples: mean-variance risk and exponential utility risk. With the aid of the state-augmentation transformation (SAT), we show that, the two risks can be estimated in Markov decision processes (MDPs) with a stochastic transition-based reward and a randomized policy. To relieve the enlarged state space, a novel definition of isotopic states is proposed for state lumping, considering the special structure of the transformed transition probability. In the numerical experiment, we illustrate state lumping in the SAT, errors from a naive reward simplification, and the validity of the SAT for the two risk estimations.
[ { "created": "Tue, 9 Jul 2019 16:04:33 GMT", "version": "v1" } ]
2019-07-12
[ [ "Ma", "Shuai", "" ], [ "Yu", "Jia Yuan", "" ] ]
Variance plays a crucial role in risk-sensitive reinforcement learning, and most risk measures can be analyzed via variance. In this paper, we consider two law-invariant risks as examples: mean-variance risk and exponential utility risk. With the aid of the state-augmentation transformation (SAT), we show that the two risks can be estimated in Markov decision processes (MDPs) with a stochastic transition-based reward and a randomized policy. To mitigate the enlarged state space, a novel definition of isotopic states is proposed for state lumping, considering the special structure of the transformed transition probability. In the numerical experiment, we illustrate state lumping in the SAT, errors from a naive reward simplification, and the validity of the SAT for the two risk estimations.
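For intuition, both law-invariant risks can be estimated by Monte Carlo on a toy chain with a stochastic transition-based reward. The chain, reward model, and parameters below are illustrative assumptions; the paper's contribution is handling this exactly via the SAT rather than by simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],      # toy 2-state transition matrix (ours)
              [0.2, 0.8]])

def episode(T=50):
    """Cumulative reward of one T-step trajectory with a
    stochastic transition-based reward."""
    s, total = 0, 0.0
    for _ in range(T):
        s_next = rng.choice(2, p=P[s])
        total += rng.normal(loc=float(s_next), scale=0.5)
        s = s_next
    return total

R = np.array([episode() for _ in range(2000)])
lam, beta = 0.5, 0.1
mean_variance = R.mean() - lam * R.var()                   # mean-variance risk
exp_utility = -np.log(np.mean(np.exp(-beta * R))) / beta   # exponential utility
```

By Jensen's inequality the exponential-utility certainty equivalent never exceeds the plain mean, which is a quick sanity check on any such estimator.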
2406.14442
Elisa G\'omez De Lope
Elisa G\'omez de Lope, Saurabh Deshpande, Ram\'on Vi\~nas Torn\'e, Pietro Li\`o, Enrico Glaab (on behalf of the NCER-PD consortium) and St\'ephane P. A. Bordas
Graph Representation Learning Strategies for Omics Data: A Case Study on Parkinson's Disease
Submitted to Machine Learning in Computational Biology 2024 as an extended abstract, 2 pages + 1 appendix
null
null
null
cs.LG cs.AI cs.CE q-bio.BM q-bio.MN
http://creativecommons.org/licenses/by-nc-nd/4.0/
Omics data analysis is crucial for studying complex diseases, but its high dimensionality and heterogeneity challenge classical statistical and machine learning methods. Graph neural networks have emerged as promising alternatives, yet the optimal strategies for their design and optimization in real-world biomedical challenges remain unclear. This study evaluates various graph representation learning models for case-control classification using high-throughput biological data from Parkinson's disease and control samples. We compare topologies derived from sample similarity networks and molecular interaction networks, including protein-protein and metabolite-metabolite interactions (PPI, MMI). Graph Convolutional Network (GCNs), Chebyshev spectral graph convolution (ChebyNet), and Graph Attention Network (GAT), are evaluated alongside advanced architectures like graph transformers, the graph U-net, and simpler models like multilayer perceptron (MLP). These models are systematically applied to transcriptomics and metabolomics data independently. Our comparative analysis highlights the benefits and limitations of various architectures in extracting patterns from omics data, paving the way for more accurate and interpretable models in biomedical research.
[ { "created": "Thu, 20 Jun 2024 16:06:39 GMT", "version": "v1" } ]
2024-06-24
[ [ "de Lope", "Elisa Gómez", "", "on behalf of the NCER-PD consortium" ], [ "Deshpande", "Saurabh", "", "on behalf of the NCER-PD consortium" ], [ "Torné", "Ramón Viñas", "", "on behalf of the NCER-PD consortium" ], [ "Liò", "Pietro", "", "on behalf of the NCER-PD consortium" ], [ "Glaab", "Enrico", "", "on behalf of the NCER-PD consortium" ], [ "Bordas", "Stéphane P. A.", "" ] ]
Omics data analysis is crucial for studying complex diseases, but its high dimensionality and heterogeneity challenge classical statistical and machine learning methods. Graph neural networks have emerged as promising alternatives, yet the optimal strategies for their design and optimization in real-world biomedical challenges remain unclear. This study evaluates various graph representation learning models for case-control classification using high-throughput biological data from Parkinson's disease and control samples. We compare topologies derived from sample similarity networks and molecular interaction networks, including protein-protein and metabolite-metabolite interactions (PPI, MMI). Graph Convolutional Networks (GCN), Chebyshev spectral graph convolutions (ChebyNet), and Graph Attention Networks (GAT) are evaluated alongside advanced architectures like graph transformers and the graph U-net, and simpler models like the multilayer perceptron (MLP). These models are systematically applied to transcriptomics and metabolomics data independently. Our comparative analysis highlights the benefits and limitations of various architectures in extracting patterns from omics data, paving the way for more accurate and interpretable models in biomedical research.
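The simplest model in the comparison, a GCN layer, reduces to one normalized-adjacency multiplication. A minimal numpy sketch of the propagation rule H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W); the toy graph and dimensions are our stand-ins, not the study's networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n, f_in, f_out = 5, 3, 2

# Toy path graph standing in for a sample-similarity / interaction network.
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

A_hat = A + np.eye(n)                                   # add self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))  # D^{-1/2}
H = rng.normal(size=(n, f_in))                          # node features (omics)
W = rng.normal(size=(f_in, f_out))                      # learnable weights

H_next = np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)  # ReLU
```

ChebyNet generalizes this to higher-order polynomials of the graph Laplacian, and GAT replaces the fixed normalization with learned attention coefficients.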
2106.02435
Shaokun Zhang
Shaokun Zhang, Xiawu Zheng, Chenyi Yang, Yuchao Li, Yan Wang, Fei Chao, Mengdi Wang, Shen Li, Jun Yang, Rongrong Ji
You Only Compress Once: Towards Effective and Elastic BERT Compression via Exploit-Explore Stochastic Nature Gradient
12 pages, 3 figures
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Despite superior performance on various natural language processing tasks, pre-trained models such as BERT are challenged by deploying on resource-constraint devices. Most existing model compression approaches require re-compression or fine-tuning across diverse constraints to accommodate various hardware deployments. This practically limits the further application of model compression. Moreover, the ineffective training and searching process of existing elastic compression paradigms[4,27] prevents the direct migration to BERT compression. Motivated by the necessity of efficient inference across various constraints on BERT, we propose a novel approach, YOCO-BERT, to achieve compress once and deploy everywhere. Specifically, we first construct a huge search space with 10^13 architectures, which covers nearly all configurations in BERT model. Then, we propose a novel stochastic nature gradient optimization method to guide the generation of optimal candidate architecture which could keep a balanced trade-off between explorations and exploitation. When a certain resource constraint is given, a lightweight distribution optimization approach is utilized to obtain the optimal network for target deployment without fine-tuning. Compared with state-of-the-art algorithms, YOCO-BERT provides more compact models, yet achieving 2.1%-4.5% average accuracy improvement on the GLUE benchmark. Besides, YOCO-BERT is also more effective, e.g.,the training complexity is O(1)for N different devices. Code is availablehttps://github.com/MAC-AutoML/YOCO-BERT.
[ { "created": "Fri, 4 Jun 2021 12:17:44 GMT", "version": "v1" } ]
2021-06-07
[ [ "Zhang", "Shaokun", "" ], [ "Zheng", "Xiawu", "" ], [ "Yang", "Chenyi", "" ], [ "Li", "Yuchao", "" ], [ "Wang", "Yan", "" ], [ "Chao", "Fei", "" ], [ "Wang", "Mengdi", "" ], [ "Li", "Shen", "" ], [ "Yang", "Jun", "" ], [ "Ji", "Rongrong", "" ] ]
Despite superior performance on various natural language processing tasks, pre-trained models such as BERT are challenged by deployment on resource-constrained devices. Most existing model compression approaches require re-compression or fine-tuning across diverse constraints to accommodate various hardware deployments. This practically limits the further application of model compression. Moreover, the ineffective training and searching process of existing elastic compression paradigms [4,27] prevents their direct migration to BERT compression. Motivated by the necessity of efficient inference across various constraints on BERT, we propose a novel approach, YOCO-BERT, to achieve compress once and deploy everywhere. Specifically, we first construct a huge search space with 10^13 architectures, which covers nearly all configurations in the BERT model. Then, we propose a novel stochastic nature gradient optimization method to guide the generation of the optimal candidate architecture, which keeps a balanced trade-off between exploration and exploitation. When a certain resource constraint is given, a lightweight distribution optimization approach is utilized to obtain the optimal network for the target deployment without fine-tuning. Compared with state-of-the-art algorithms, YOCO-BERT provides more compact models, yet achieves a 2.1%-4.5% average accuracy improvement on the GLUE benchmark. Besides, YOCO-BERT is also more effective, e.g., the training complexity is O(1) for N different devices. Code is available at https://github.com/MAC-AutoML/YOCO-BERT.
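The stochastic natural-gradient idea can be illustrated on a single categorical architecture knob: for a categorical distribution in expectation parameters, the natural-gradient ascent step takes the simple form theta += lr * reward * (e_k - theta). Everything below (scores, sizes, learning rate) is a toy stand-in, not YOCO-BERT's actual search space or update schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4                              # candidate values for one architecture knob
theta = np.full(K, 1.0 / K)        # categorical distribution over candidates
true_score = np.array([0.1, 0.5, 0.9, 0.3])   # toy (unknown) fitness

lr = 0.1
for _ in range(300):
    k = rng.choice(K, p=theta)                     # explore: sample a candidate
    reward = true_score[k] + rng.normal(scale=0.05)
    e_k = np.zeros(K); e_k[k] = 1.0
    theta += lr * reward * (e_k - theta)           # natural-gradient step
    theta = np.clip(theta, 1e-6, None)
    theta /= theta.sum()                           # keep a valid distribution

best = int(np.argmax(theta))                       # exploit: most likely choice
```

Because the update (e_k - theta) sums to zero, probability mass is merely redistributed toward candidates that return higher rewards, which is the exploit-explore balance the abstract refers to.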
1410.1639
Yichen Jiang
Yichen Jiang, Yi Ji, Tianhua Liu
An Anonymous Communication Scheme based on Ring Signature in VANETs
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vehicular ad hoc networks allow vehicles to connect themselves as networks so that cars could communicate with each other. This paper introduces an anonymous communication scheme providing integrity protection, multi-level privacy and auditability. The scheme is based on a certificateless ring signature proposed in this paper, which is contributed to reduce the length of the signature and simplify the key management. In our scheme, vehicles can compose the anonymous group without the help of road-side infrastructure or central authority. The computation overhead is close to a normal signature scheme, so it is efficient in most application scenarios. We also present a small-scale implementation to show the availability of the prototype system.
[ { "created": "Tue, 7 Oct 2014 08:31:34 GMT", "version": "v1" } ]
2014-10-08
[ [ "Jiang", "Yichen", "" ], [ "Ji", "Yi", "" ], [ "Liu", "Tianhua", "" ] ]
Vehicular ad hoc networks allow vehicles to connect themselves into networks so that cars can communicate with each other. This paper introduces an anonymous communication scheme providing integrity protection, multi-level privacy, and auditability. The scheme is based on a certificateless ring signature proposed in this paper, which helps reduce the length of the signature and simplify key management. In our scheme, vehicles can compose the anonymous group without the help of road-side infrastructure or a central authority. The computation overhead is close to that of a normal signature scheme, so it is efficient in most application scenarios. We also present a small-scale implementation to show the availability of the prototype system.
2311.07822
Pei Zhang
Pei Zhang, Zhaobo Hua, Jinliang Ding
A Central Motor System Inspired Pre-training Reinforcement Learning for Robotic Control
null
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Designing controllers to achieve natural motor capabilities for multi-joint robots is a significant challenge. However, animals in nature are naturally with basic motor abilities and can master various complex motor skills through acquired learning. On the basis of analyzing the mechanism of the central motor system in mammals, we propose a novel pre-training reinforcement learning algorithm that enables robots to learn rich motor skills and apply them to complex task environments without relying on external data. We first design a skill based network similar to the cerebellum by utilizing the selection mechanism of voluntary movements in the basal ganglia and the basic motor regulation ability of the cerebellum. Subsequently, by imitating the structure of advanced centers in the central motor system, we propose a high-level policy to generate different skill combinations, thereby enabling the robot to acquire natural motor abilities. We conduct experiments on 4 types of robots and 22 task environments, and the results show that the proposed method can enable different types of robots to achieve flexible motor skills. Overall, our research provides a promising framework for the design of neural network motor controllers.
[ { "created": "Tue, 14 Nov 2023 00:49:12 GMT", "version": "v1" }, { "created": "Tue, 5 Dec 2023 00:47:30 GMT", "version": "v2" }, { "created": "Tue, 16 Jul 2024 06:57:18 GMT", "version": "v3" } ]
2024-07-17
[ [ "Zhang", "Pei", "" ], [ "Hua", "Zhaobo", "" ], [ "Ding", "Jinliang", "" ] ]
Designing controllers that achieve natural motor capabilities for multi-joint robots is a significant challenge. Animals in nature, however, are naturally endowed with basic motor abilities and can master various complex motor skills through learning. Based on an analysis of the mechanism of the central motor system in mammals, we propose a novel pre-training reinforcement learning algorithm that enables robots to learn rich motor skills and apply them to complex task environments without relying on external data. We first design a skill-based network similar to the cerebellum by utilizing the selection mechanism of voluntary movements in the basal ganglia and the basic motor regulation ability of the cerebellum. Subsequently, by imitating the structure of the higher centers in the central motor system, we propose a high-level policy that generates different skill combinations, thereby enabling the robot to acquire natural motor abilities. We conduct experiments on 4 types of robots and 22 task environments, and the results show that the proposed method enables different types of robots to achieve flexible motor skills. Overall, our research provides a promising framework for the design of neural network motor controllers.
2008.06069
Changjae Oh
Ali Shahin Shamsabadi, Changjae Oh, Andrea Cavallaro
Semantically Adversarial Learnable Filters
13 pages
IEEE Transactions on Image Processing, 2021
10.1109/TIP.2021.3112290
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an adversarial framework to craft perturbations that mislead classifiers by accounting for the image content and the semantics of the labels. The proposed framework combines a structure loss and a semantic adversarial loss in a multi-task objective function to train a fully convolutional neural network. The structure loss helps generate perturbations whose type and magnitude are defined by a target image processing filter. The semantic adversarial loss considers groups of (semantic) labels to craft perturbations that prevent the filtered image {from} being classified with a label in the same group. We validate our framework with three different target filters, namely detail enhancement, log transformation and gamma correction filters; and evaluate the adversarially filtered images against three classifiers, ResNet50, ResNet18 and AlexNet, pre-trained on ImageNet. We show that the proposed framework generates filtered images with a high success rate, robustness, and transferability to unseen classifiers. We also discuss objective and subjective evaluations of the adversarial perturbations.
[ { "created": "Thu, 13 Aug 2020 18:12:40 GMT", "version": "v1" }, { "created": "Sun, 2 May 2021 18:12:06 GMT", "version": "v2" }, { "created": "Tue, 5 Apr 2022 21:03:21 GMT", "version": "v3" } ]
2022-04-07
[ [ "Shamsabadi", "Ali Shahin", "" ], [ "Oh", "Changjae", "" ], [ "Cavallaro", "Andrea", "" ] ]
We present an adversarial framework to craft perturbations that mislead classifiers by accounting for the image content and the semantics of the labels. The proposed framework combines a structure loss and a semantic adversarial loss in a multi-task objective function to train a fully convolutional neural network. The structure loss helps generate perturbations whose type and magnitude are defined by a target image processing filter. The semantic adversarial loss considers groups of (semantic) labels to craft perturbations that prevent the filtered image from being classified with a label in the same group. We validate our framework with three different target filters, namely detail enhancement, log transformation and gamma correction filters; and evaluate the adversarially filtered images against three classifiers, ResNet50, ResNet18 and AlexNet, pre-trained on ImageNet. We show that the proposed framework generates filtered images with a high success rate, robustness, and transferability to unseen classifiers. We also discuss objective and subjective evaluations of the adversarial perturbations.
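The structure-loss half of the objective is easy to sketch: push the perturbed image toward the output of the target filter (here gamma correction, one of the three filters evaluated). The toy image, step size, and plain L2 loss below are our simplifications of the paper's trained-network setup.

```python
import numpy as np

def gamma_filter(img, gamma=0.8):
    """Target image-processing filter (gamma correction)."""
    return np.clip(img, 0.0, 1.0) ** gamma

rng = np.random.default_rng(0)
img = rng.uniform(size=(8, 8))        # toy grayscale image in [0, 1]
perturbed = img.copy()
target = gamma_filter(img)

for _ in range(100):                  # gradient descent on the L2 structure loss
    grad = 2.0 * (perturbed - target)
    perturbed -= 0.05 * grad

structure_loss = float(np.mean((perturbed - target) ** 2))
```

In the full framework this term is balanced against the semantic adversarial loss, so the perturbation both resembles the filter's output and pushes the prediction out of the label's semantic group.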
2307.09819
Dimitrios Panteleimon Giakatos
Ilias Dimitriadis, Dimitrios P. Giakatos, Stelios Karamanidis, Pavlos Sermpezis, Kelly Kiki, Athena Vakali
Analyzing large scale political discussions on Twitter: the use case of the Greek wiretapping scandal (#ypoklopes)
null
null
null
null
cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we study the Greek wiretappings scandal, which has been revealed in 2022 and attracted a lot of attention by press and citizens. Specifically, we propose a methodology for collecting data and analyzing patterns of online public discussions on Twitter. We apply our methodology to the Greek wiretappings use case, and present findings related to the evolution of the discussion over time, its polarization, and the role of the media. The methodology can be of wider use and replicated to other topics. Finally, we provide publicly an open dataset, and online resources with the results.
[ { "created": "Wed, 19 Jul 2023 08:08:00 GMT", "version": "v1" } ]
2023-07-20
[ [ "Dimitriadis", "Ilias", "" ], [ "Giakatos", "Dimitrios P.", "" ], [ "Karamanidis", "Stelios", "" ], [ "Sermpezis", "Pavlos", "" ], [ "Kiki", "Kelly", "" ], [ "Vakali", "Athena", "" ] ]
In this paper, we study the Greek wiretapping scandal, which was revealed in 2022 and attracted considerable attention from the press and citizens. Specifically, we propose a methodology for collecting data and analyzing patterns of online public discussions on Twitter. We apply our methodology to the Greek wiretapping use case and present findings related to the evolution of the discussion over time, its polarization, and the role of the media. The methodology is of wider use and can be replicated for other topics. Finally, we publicly provide an open dataset and online resources with the results.
2312.03806
Xuanchi Ren
Xuanchi Ren, Jiahui Huang, Xiaohui Zeng, Ken Museth, Sanja Fidler, Francis Williams
XCube: Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies
CVPR 2024 Highlight. Code: https://github.com/nv-tlabs/XCube/ Website: https://research.nvidia.com/labs/toronto-ai/xcube/
null
null
null
cs.CV cs.GR cs.LG
http://creativecommons.org/licenses/by/4.0/
We present XCube (abbreviated as $\mathcal{X}^3$), a novel generative model for high-resolution sparse 3D voxel grids with arbitrary attributes. Our model can generate millions of voxels with a finest effective resolution of up to $1024^3$ in a feed-forward fashion without time-consuming test-time optimization. To achieve this, we employ a hierarchical voxel latent diffusion model which generates progressively higher resolution grids in a coarse-to-fine manner using a custom framework built on the highly efficient VDB data structure. Apart from generating high-resolution objects, we demonstrate the effectiveness of XCube on large outdoor scenes at scales of 100m$\times$100m with a voxel size as small as 10cm. We observe clear qualitative and quantitative improvements over past approaches. In addition to unconditional generation, we show that our model can be used to solve a variety of tasks such as user-guided editing, scene completion from a single scan, and text-to-3D. The source code and more results can be found at https://research.nvidia.com/labs/toronto-ai/xcube/.
[ { "created": "Wed, 6 Dec 2023 16:23:26 GMT", "version": "v1" }, { "created": "Tue, 25 Jun 2024 17:01:54 GMT", "version": "v2" } ]
2024-06-26
[ [ "Ren", "Xuanchi", "" ], [ "Huang", "Jiahui", "" ], [ "Zeng", "Xiaohui", "" ], [ "Museth", "Ken", "" ], [ "Fidler", "Sanja", "" ], [ "Williams", "Francis", "" ] ]
We present XCube (abbreviated as $\mathcal{X}^3$), a novel generative model for high-resolution sparse 3D voxel grids with arbitrary attributes. Our model can generate millions of voxels with a finest effective resolution of up to $1024^3$ in a feed-forward fashion without time-consuming test-time optimization. To achieve this, we employ a hierarchical voxel latent diffusion model which generates progressively higher resolution grids in a coarse-to-fine manner using a custom framework built on the highly efficient VDB data structure. Apart from generating high-resolution objects, we demonstrate the effectiveness of XCube on large outdoor scenes at scales of 100m$\times$100m with a voxel size as small as 10cm. We observe clear qualitative and quantitative improvements over past approaches. In addition to unconditional generation, we show that our model can be used to solve a variety of tasks such as user-guided editing, scene completion from a single scan, and text-to-3D. The source code and more results can be found at https://research.nvidia.com/labs/toronto-ai/xcube/.
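The coarse-to-fine hierarchy can be pictured without any learning machinery: a coarse occupancy grid gates which voxels exist at the next resolution, so storage scales with the occupied region rather than the full dense grid. A structural toy follows (the real model generates these levels with latent diffusion over VDB grids; the sizes here are illustrative):

```python
import numpy as np

# Coarse 4^3 occupancy; only occupied cells are subdivided at the next level.
coarse = np.zeros((4, 4, 4), dtype=bool)
coarse[1:3, 1:3, 1:3] = True          # toy object occupying 8 coarse cells

# Each occupied coarse cell spawns a 2x2x2 block of fine voxels.
fine = np.repeat(np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1), 2, axis=2)

active_fine = int(fine.sum())         # voxels the fine level actually allocates
dense_fine = fine.size                # what a dense 8^3 grid would store
```

Here the fine level allocates 64 of 512 possible voxels; iterating this gating up to $1024^3$ is what keeps feed-forward generation at that resolution tractable.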
2207.09564
Thomas G Kelly
Thomas G. Kelly, Mohammad Divband Soorati, Klaus-Peter Zauner, Sarvapali D. Ramchurn, Danesh Tarapore
Collective Decision Making in Communication-Constrained Environments
6 pages, 7 figures, accepted to the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)
null
null
null
cs.RO cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the main tasks for autonomous robot swarms is to collectively decide on the best available option. Achieving that requires a high quality communication between the agents that may not be always available in a real world environment. In this paper we introduce the communication-constrained collective decision-making problem where some areas of the environment limit the agents' ability to communicate, either by reducing success rate or blocking the communication channels. We propose a decentralised algorithm for mapping environmental features for robot swarms as well as improving collective decision making in communication-limited environments without prior knowledge of the communication landscape. Our results show that making a collective aware of the communication environment can improve the speed of convergence in the presence of communication limitations, at least 3 times faster, without sacrificing accuracy.
[ { "created": "Tue, 19 Jul 2022 21:48:15 GMT", "version": "v1" } ]
2022-07-21
[ [ "Kelly", "Thomas G.", "" ], [ "Soorati", "Mohammad Divband", "" ], [ "Zauner", "Klaus-Peter", "" ], [ "Ramchurn", "Sarvapali D.", "" ], [ "Tarapore", "Danesh", "" ] ]
One of the main tasks for autonomous robot swarms is to collectively decide on the best available option. Achieving this requires high-quality communication between the agents, which may not always be available in a real-world environment. In this paper we introduce the communication-constrained collective decision-making problem, in which some areas of the environment limit the agents' ability to communicate, either by reducing the success rate or by blocking the communication channels. We propose a decentralised algorithm for mapping environmental features for robot swarms as well as for improving collective decision making in communication-limited environments without prior knowledge of the communication landscape. Our results show that making a collective aware of the communication environment can improve the speed of convergence in the presence of communication limitations, making it at least 3 times faster, without sacrificing accuracy.
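A minimal stand-in for the setting: agents run a quality-weighted voter model, but a message is only delivered with probability p_comm, modeling a communication-limited zone. All parameters and the update rule are our illustrative choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, p_comm, steps = 50, 0.3, 2000
opinions = rng.integers(0, 2, size=n_agents)   # each agent holds option 0 or 1
quality = np.array([0.4, 0.6])                 # option 1 is the better option

for _ in range(steps):
    i, j = rng.choice(n_agents, size=2, replace=False)
    if rng.random() < p_comm:                  # message survives the limited zone
        # adopt the neighbour's opinion with probability set by its quality
        if rng.random() < quality[opinions[j]]:
            opinions[i] = opinions[j]

frac_best = opinions.mean()                    # share of agents on option 1
```

Lowering p_comm slows convergence without changing which option eventually dominates, which is the trade-off the paper's communication-aware algorithm targets.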
1705.04839
Firoj Alam
Firoj Alam, Morena Danieli and Giuseppe Riccardi
Annotating and Modeling Empathy in Spoken Conversations
Journal of Computer Speech and Language
null
10.1016/j.csl.2017.12.003
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Empathy, as defined in behavioral sciences, expresses the ability of human beings to recognize, understand and react to emotions, attitudes and beliefs of others. The lack of an operational definition of empathy makes it difficult to measure. In this paper, we address two related problems in automatic affective behavior analysis: the design of the annotation protocol and the automatic recognition of empathy from spoken conversations. We propose and evaluate an annotation scheme for empathy inspired by the modal model of emotions. The annotation scheme was evaluated on a corpus of real-life, dyadic spoken conversations. In the context of behavioral analysis, we designed an automatic segmentation and classification system for empathy. Given the different speech and language levels of representation where empathy may be communicated, we investigated features derived from the lexical and acoustic spaces. The feature development process was designed to support both the fusion and automatic selection of relevant features from high dimensional space. The automatic classification system was evaluated on call center conversations where it showed significantly better performance than the baseline.
[ { "created": "Sat, 13 May 2017 14:49:08 GMT", "version": "v1" }, { "created": "Thu, 21 Dec 2017 20:13:29 GMT", "version": "v2" }, { "created": "Fri, 29 Dec 2017 12:52:49 GMT", "version": "v3" } ]
2018-01-01
[ [ "Alam", "Firoj", "" ], [ "Danieli", "Morena", "" ], [ "Riccardi", "Giuseppe", "" ] ]
Empathy, as defined in behavioral sciences, expresses the ability of human beings to recognize, understand and react to emotions, attitudes and beliefs of others. The lack of an operational definition of empathy makes it difficult to measure. In this paper, we address two related problems in automatic affective behavior analysis: the design of the annotation protocol and the automatic recognition of empathy from spoken conversations. We propose and evaluate an annotation scheme for empathy inspired by the modal model of emotions. The annotation scheme was evaluated on a corpus of real-life, dyadic spoken conversations. In the context of behavioral analysis, we designed an automatic segmentation and classification system for empathy. Given the different speech and language levels of representation where empathy may be communicated, we investigated features derived from the lexical and acoustic spaces. The feature development process was designed to support both the fusion and automatic selection of relevant features from high dimensional space. The automatic classification system was evaluated on call center conversations where it showed significantly better performance than the baseline.
1707.06307
Vincent Knight Dr
Marc Harper and Vincent Knight and Martin Jones and Georgios Koutsovoulos and Nikoleta E. Glynatsi and Owen Campbell
Reinforcement Learning Produces Dominant Strategies for the Iterated Prisoner's Dilemma
null
null
10.1371/journal.pone.0188046
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present tournament results and several powerful strategies for the Iterated Prisoner's Dilemma created using reinforcement learning techniques (evolutionary and particle swarm algorithms). These strategies are trained to perform well against a corpus of over 170 distinct opponents, including many well-known and classic strategies. All the trained strategies win standard tournaments against the total collection of other opponents. The trained strategies and one particular human-designed strategy are also the top performers in noisy tournaments.
[ { "created": "Wed, 19 Jul 2017 21:47:19 GMT", "version": "v1" } ]
2018-02-07
[ [ "Harper", "Marc", "" ], [ "Knight", "Vincent", "" ], [ "Jones", "Martin", "" ], [ "Koutsovoulos", "Georgios", "" ], [ "Glynatsi", "Nikoleta E.", "" ], [ "Campbell", "Owen", "" ] ]
We present tournament results and several powerful strategies for the Iterated Prisoner's Dilemma created using reinforcement learning techniques (evolutionary and particle swarm algorithms). These strategies are trained to perform well against a corpus of over 170 distinct opponents, including many well-known and classic strategies. All the trained strategies win standard tournaments against the total collection of other opponents. The trained strategies and one particular human-designed strategy are also the top performers in noisy tournaments.
2312.01151
Zilong Liu
Zilong Liu, Krzysztof Janowicz, Kitty Currier, Meilin Shi, Jinmeng Rao, Song Gao, Ling Cai, and Anita Graser
Here Is Not There: Measuring Entailment-Based Trajectory Similarity for Location-Privacy Protection and Beyond
null
null
10.5281/zenodo.8286277
null
cs.CY cs.CL cs.SC
http://creativecommons.org/licenses/by/4.0/
While the paths humans take play out in social as well as physical space, measures to describe and compare their trajectories are carried out in abstract, typically Euclidean, space. When these measures are applied to trajectories of actual individuals in an application area, alterations that are inconsequential in abstract space may suddenly become problematic once overlaid with geographic reality. In this work, we present a different view on trajectory similarity by introducing a measure that utilizes logical entailment. This is an inferential perspective that considers facts as triple statements deduced from the social and environmental context in which the travel takes place, and their practical implications. We suggest a formalization of entailment-based trajectory similarity, measured as the overlapping proportion of facts, which are spatial relation statements in our case study. With the proposed measure, we evaluate LSTM-TrajGAN, a privacy-preserving trajectory-generation model. The entailment-based model evaluation reveals potential consequences of disregarding the rich structure of geographic space (e.g., miscalculated insurance risk due to regional shifts in our toy example). Our work highlights the advantage of applying logical entailment to trajectory-similarity reasoning for location-privacy protection and beyond.
[ { "created": "Sat, 2 Dec 2023 14:41:01 GMT", "version": "v1" } ]
2023-12-05
[ [ "Liu", "Zilong", "" ], [ "Janowicz", "Krzysztof", "" ], [ "Currier", "Kitty", "" ], [ "Shi", "Meilin", "" ], [ "Rao", "Jinmeng", "" ], [ "Gao", "Song", "" ], [ "Cai", "Ling", "" ], [ "Graser", "Anita", "" ] ]
While the paths humans take play out in social as well as physical space, measures to describe and compare their trajectories are carried out in abstract, typically Euclidean, space. When these measures are applied to trajectories of actual individuals in an application area, alterations that are inconsequential in abstract space may suddenly become problematic once overlaid with geographic reality. In this work, we present a different view on trajectory similarity by introducing a measure that utilizes logical entailment. This is an inferential perspective that considers facts as triple statements deduced from the social and environmental context in which the travel takes place, and their practical implications. We suggest a formalization of entailment-based trajectory similarity, measured as the overlapping proportion of facts, which are spatial relation statements in our case study. With the proposed measure, we evaluate LSTM-TrajGAN, a privacy-preserving trajectory-generation model. The entailment-based model evaluation reveals potential consequences of disregarding the rich structure of geographic space (e.g., miscalculated insurance risk due to regional shifts in our toy example). Our work highlights the advantage of applying logical entailment to trajectory-similarity reasoning for location-privacy protection and beyond.
2309.13907
Dake Guo
Dake Guo, Xinfa Zhu, Liumeng Xue, Tao Li, Yuanjun Lv, Yuepeng Jiang, Lei Xie
HiGNN-TTS: Hierarchical Prosody Modeling with Graph Neural Networks for Expressive Long-form TTS
Accepted by ASRU2023
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in text-to-speech, particularly those based on Graph Neural Networks (GNNs), have significantly improved the expressiveness of short-form synthetic speech. However, generating human-parity long-form speech with high dynamic prosodic variations is still challenging. To address this problem, we expand the capabilities of GNNs with a hierarchical prosody modeling approach, named HiGNN-TTS. Specifically, we add a virtual global node in the graph to strengthen the interconnection of word nodes and introduce a contextual attention mechanism to broaden the prosody modeling scope of GNNs from intra-sentence to inter-sentence. Additionally, we perform hierarchical supervision from acoustic prosody on each node of the graph to capture the prosodic variations with a high dynamic range. Ablation studies show the effectiveness of HiGNN-TTS in learning hierarchical prosody. Both objective and subjective evaluations demonstrate that HiGNN-TTS significantly improves the naturalness and expressiveness of long-form synthetic speech.
[ { "created": "Mon, 25 Sep 2023 07:07:02 GMT", "version": "v1" }, { "created": "Sat, 7 Oct 2023 01:56:04 GMT", "version": "v2" } ]
2023-10-10
[ [ "Guo", "Dake", "" ], [ "Zhu", "Xinfa", "" ], [ "Xue", "Liumeng", "" ], [ "Li", "Tao", "" ], [ "Lv", "Yuanjun", "" ], [ "Jiang", "Yuepeng", "" ], [ "Xie", "Lei", "" ] ]
Recent advances in text-to-speech, particularly those based on Graph Neural Networks (GNNs), have significantly improved the expressiveness of short-form synthetic speech. However, generating human-parity long-form speech with high dynamic prosodic variations is still challenging. To address this problem, we expand the capabilities of GNNs with a hierarchical prosody modeling approach, named HiGNN-TTS. Specifically, we add a virtual global node in the graph to strengthen the interconnection of word nodes and introduce a contextual attention mechanism to broaden the prosody modeling scope of GNNs from intra-sentence to inter-sentence. Additionally, we perform hierarchical supervision from acoustic prosody on each node of the graph to capture the prosodic variations with a high dynamic range. Ablation studies show the effectiveness of HiGNN-TTS in learning hierarchical prosody. Both objective and subjective evaluations demonstrate that HiGNN-TTS significantly improves the naturalness and expressiveness of long-form synthetic speech.
2312.04183
Amin Radbord
Amin Radbord, Italo Atzeni, Antti Tolli
Enhanced Data Detection for Massive MIMO with 1-Bit ADCs
Presented at the IEEE Asilomar Conference on Signals, Systems, and Computers 2023. arXiv admin note: text overlap with arXiv:2303.18061
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
We present new insightful results on the uplink data detection for massive multiple-input multiple-output systems with 1-bit analog-to-digital converters. The expected values of the soft-estimated symbols (i.e., after the linear combining and prior to the data detection) have been recently characterized for multiple user equipments (UEs) and a maximum ratio combining (MRC) receiver at the base station. In this paper, we first provide a numerical evaluation of the expected value of the soft-estimated symbols with zero-forcing (ZF) and minimum mean squared error (MMSE) receivers for a multi-UE setting with correlated Rayleigh fading. Then, we propose a joint data detection (JD) strategy, which exploits the interdependence among the soft-estimated symbols of the interfering UEs, along with its low-complexity variant. These strategies are compared with a naive approach that adapts the maximum-likelihood data detection to the 1-bit quantization. Numerical results show that ZF and MMSE provide considerable gains over MRC in terms of symbol error rate. Moreover, the proposed JD and its low-complexity variant provide a significant boost in comparison with the single-UE data detection.
[ { "created": "Thu, 7 Dec 2023 10:11:20 GMT", "version": "v1" } ]
2023-12-08
[ [ "Radbord", "Amin", "" ], [ "Atzeni", "Italo", "" ], [ "Tolli", "Antti", "" ] ]
We present new insightful results on the uplink data detection for massive multiple-input multiple-output systems with 1-bit analog-to-digital converters. The expected values of the soft-estimated symbols (i.e., after the linear combining and prior to the data detection) have been recently characterized for multiple user equipments (UEs) and a maximum ratio combining (MRC) receiver at the base station. In this paper, we first provide a numerical evaluation of the expected value of the soft-estimated symbols with zero-forcing (ZF) and minimum mean squared error (MMSE) receivers for a multi-UE setting with correlated Rayleigh fading. Then, we propose a joint data detection (JD) strategy, which exploits the interdependence among the soft-estimated symbols of the interfering UEs, along with its low-complexity variant. These strategies are compared with a naive approach that adapts the maximum-likelihood data detection to the 1-bit quantization. Numerical results show that ZF and MMSE provide considerable gains over MRC in terms of symbol error rate. Moreover, the proposed JD and its low-complexity variant provide a significant boost in comparison with the single-UE data detection.
1904.04195
Hao Tan
Hao Tan, Licheng Yu, Mohit Bansal
Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout
NAACL 2019 (12 pages)
null
null
null
cs.CL cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A grand goal in AI is to build a robot that can accurately navigate based on natural language instructions, which requires the agent to perceive the scene, understand and ground language, and act in the real-world environment. One key challenge here is to learn to navigate in new environments that are unseen during training. Most of the existing approaches perform dramatically worse in unseen environments as compared to seen ones. In this paper, we present a generalizable navigational agent. Our agent is trained in two stages. The first stage is training via mixed imitation and reinforcement learning, combining the benefits from both off-policy and on-policy optimization. The second stage is fine-tuning via newly-introduced 'unseen' triplets (environment, path, instruction). To generate these unseen triplets, we propose a simple but effective 'environmental dropout' method to mimic unseen environments, which overcomes the problem of limited seen environment variability. Next, we apply semi-supervised learning (via back-translation) on these dropped-out environments to generate new paths and instructions. Empirically, we show that our agent is substantially better at generalizability when fine-tuned with these triplets, outperforming the state-of-the-art approaches by a large margin on the private unseen test set of the Room-to-Room task, and achieving the top rank on the leaderboard.
[ { "created": "Mon, 8 Apr 2019 17:14:52 GMT", "version": "v1" } ]
2019-04-09
[ [ "Tan", "Hao", "" ], [ "Yu", "Licheng", "" ], [ "Bansal", "Mohit", "" ] ]
A grand goal in AI is to build a robot that can accurately navigate based on natural language instructions, which requires the agent to perceive the scene, understand and ground language, and act in the real-world environment. One key challenge here is to learn to navigate in new environments that are unseen during training. Most of the existing approaches perform dramatically worse in unseen environments as compared to seen ones. In this paper, we present a generalizable navigational agent. Our agent is trained in two stages. The first stage is training via mixed imitation and reinforcement learning, combining the benefits from both off-policy and on-policy optimization. The second stage is fine-tuning via newly-introduced 'unseen' triplets (environment, path, instruction). To generate these unseen triplets, we propose a simple but effective 'environmental dropout' method to mimic unseen environments, which overcomes the problem of limited seen environment variability. Next, we apply semi-supervised learning (via back-translation) on these dropped-out environments to generate new paths and instructions. Empirically, we show that our agent is substantially better at generalizability when fine-tuned with these triplets, outperforming the state-of-the-art approaches by a large margin on the private unseen test set of the Room-to-Room task, and achieving the top rank on the leaderboard.
2207.00622
Zurab Khasidashvili
Zurab Khasidashvili
Accelerating System-Level Debug Using Rule Learning and Subgroup Discovery Techniques
33 pages, 6 figures
null
null
null
cs.SE cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a root-causing procedure for accelerating system-level debug using rule-based techniques. We describe the procedure and how it provides high-quality debug hints for reducing the debug effort. This includes the heuristics for engineering features from logs of many tests, and the data analytics techniques for generating powerful debug hints. As a case study, we used these techniques for root-causing failures of the Power Management (PM) design feature Package-C8 and showed their effectiveness. Furthermore, we propose an approach for mining the root-causing experience and results for reuse, to accelerate future debug activities and reduce dependency on validation experts. We believe that these techniques are also beneficial for other validation activities at different levels of abstraction, for complex hardware, software and firmware systems, both pre-silicon and post-silicon.
[ { "created": "Sat, 2 Jul 2022 22:00:30 GMT", "version": "v1" }, { "created": "Sat, 1 Jun 2024 21:57:06 GMT", "version": "v2" } ]
2024-06-04
[ [ "Khasidashvili", "Zurab", "" ] ]
We propose a root-causing procedure for accelerating system-level debug using rule-based techniques. We describe the procedure and how it provides high-quality debug hints for reducing the debug effort. This includes the heuristics for engineering features from logs of many tests, and the data analytics techniques for generating powerful debug hints. As a case study, we used these techniques for root-causing failures of the Power Management (PM) design feature Package-C8 and showed their effectiveness. Furthermore, we propose an approach for mining the root-causing experience and results for reuse, to accelerate future debug activities and reduce dependency on validation experts. We believe that these techniques are also beneficial for other validation activities at different levels of abstraction, for complex hardware, software and firmware systems, both pre-silicon and post-silicon.
1911.01509
Karthikeyan Natesan Ramamurthy
Moninder Singh and Karthikeyan Natesan Ramamurthy
Understanding racial bias in health using the Medical Expenditure Panel Survey data
8 pages, 8 tables
null
null
null
cs.LG cs.CY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the years, several studies have demonstrated that there exist significant disparities in health indicators in the United States population across various groups. Healthcare expense is used as a proxy for health in algorithms that drive healthcare systems and this exacerbates the existing bias. In this work, we focus on the presence of racial bias in health indicators in the publicly available, and nationally representative Medical Expenditure Panel Survey (MEPS) data. We show that predictive models for care management trained using this data inherit this bias. Finally, we demonstrate that this inherited bias can be reduced significantly using simple mitigation techniques.
[ { "created": "Mon, 4 Nov 2019 22:14:52 GMT", "version": "v1" } ]
2019-11-06
[ [ "Singh", "Moninder", "" ], [ "Ramamurthy", "Karthikeyan Natesan", "" ] ]
Over the years, several studies have demonstrated that there exist significant disparities in health indicators in the United States population across various groups. Healthcare expense is used as a proxy for health in algorithms that drive healthcare systems and this exacerbates the existing bias. In this work, we focus on the presence of racial bias in health indicators in the publicly available, and nationally representative Medical Expenditure Panel Survey (MEPS) data. We show that predictive models for care management trained using this data inherit this bias. Finally, we demonstrate that this inherited bias can be reduced significantly using simple mitigation techniques.