Dataset schema (column / type / range):

id              stringlengths   9 .. 14
submitter       stringlengths   1 .. 64
authors         stringlengths   4 .. 9.62k
title           stringlengths   4 .. 343
comments        stringlengths   1 .. 609
journal-ref     stringlengths   4 .. 404
doi             stringlengths   12 .. 153
report-no       stringlengths   2 .. 254
categories      stringlengths   5 .. 112
license         stringclasses   9 values
orig_abstract   stringlengths   14 .. 3.76k
versions        listlengths     1 .. 60
update_date     stringlengths   10 .. 10
authors_parsed  listlengths     1 .. 535
abstract        stringlengths   11 .. 3.75k
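The column summary above reads like a dataset-viewer header. As a small sketch (column names and statistics transcribed from the header; abbreviated lengths such as 9.62k expanded to integers, which is an assumption about the viewer's rounding), it can be captured in machine-readable form:

```python
# Column statistics transcribed from the dataset preview header.
# For "stringlengths"/"listlengths" the tuple is (type, min, max);
# for "stringclasses" it is (type, number_of_distinct_values, None).
SCHEMA = {
    "id":             ("stringlengths", 9, 14),
    "submitter":      ("stringlengths", 1, 64),
    "authors":        ("stringlengths", 4, 9620),
    "title":          ("stringlengths", 4, 343),
    "comments":       ("stringlengths", 1, 609),
    "journal-ref":    ("stringlengths", 4, 404),
    "doi":            ("stringlengths", 12, 153),
    "report-no":      ("stringlengths", 2, 254),
    "categories":     ("stringlengths", 5, 112),
    "license":        ("stringclasses", 9, None),
    "orig_abstract":  ("stringlengths", 14, 3760),
    "versions":       ("listlengths", 1, 60),
    "update_date":    ("stringlengths", 10, 10),
    "authors_parsed": ("listlengths", 1, 535),
    "abstract":       ("stringlengths", 11, 3750),
}

def max_len(column: str):
    """Return the recorded maximum length for a column (None for stringclasses)."""
    return SCHEMA[column][2]
```

For example, `max_len("update_date")` returns 10, consistent with the fixed-width `YYYY-MM-DD` dates in the records below.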

id: 2011.09533
submitter: Christian Schroeder de Witt
authors: Christian Schroeder de Witt, Tarun Gupta, Denys Makoviichuk, Viktor Makoviychuk, Philip H.S. Torr, Mingfei Sun, Shimon Whiteson
title: Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge?
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.AI
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Most recently developed approaches to cooperative multi-agent reinforcement learning in the \emph{centralized training with decentralized execution} setting involve estimating a centralized, joint value function. In this paper, we demonstrate that, despite its various theoretical shortcomings, Independent PPO (IPPO), a form of independent learning in which each agent simply estimates its local value function, can perform just as well as or better than state-of-the-art joint learning approaches on popular multi-agent benchmark suite SMAC with little hyperparameter tuning. We also compare IPPO to several variants; the results suggest that IPPO's strong performance may be due to its robustness to some forms of environment non-stationarity.
versions: [ { "created": "Wed, 18 Nov 2020 20:29:59 GMT", "version": "v1" } ]
update_date: 2020-11-20
authors_parsed: [ [ "de Witt", "Christian Schroeder", "" ], [ "Gupta", "Tarun", "" ], [ "Makoviichuk", "Denys", "" ], [ "Makoviychuk", "Viktor", "" ], [ "Torr", "Philip H. S.", "" ], [ "Sun", "Mingfei", "" ], [ "Whiteson", "Shimon...
abstract: Most recently developed approaches to cooperative multi-agent reinforcement learning in the \emph{centralized training with decentralized execution} setting involve estimating a centralized, joint value function. In this paper, we demonstrate that, despite its various theoretical shortcomings, Independent PPO (IPPO), a form of independent learning in which each agent simply estimates its local value function, can perform just as well as or better than state-of-the-art joint learning approaches on the popular multi-agent benchmark suite SMAC with little hyperparameter tuning. We also compare IPPO to several variants; the results suggest that IPPO's strong performance may be due to its robustness to some forms of environment non-stationarity.

id: 1807.09464
submitter: Colas Le Guernic
authors: Julien Duchene (CALID, LAAS-TSF), Eric Alata (LAAS-TSF), Vincent Nicomette (LAAS-TSF), Mohamed Ka\^aniche (LAAS-TSF), Colas Le Guernic (DGA.MI, TAMIS)
title: Specification-Based Protocol Obfuscation
comments: null
journal-ref: 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), Jun 2018, Luxembourg City, France. IEEE, 2018
doi: 10.1109/DSN.2018.00056
report-no: null
categories: cs.CR cs.NI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: This paper proposes a new obfuscation technique of a communication protocol that is aimed at making the reverse engineering of the protocol more complex. The obfuscation is based on the transformation of protocol message format specification. The obfuscating transformations are applied to the Abstract Syntax Tree (AST) representation of the messages and mainly concern the ordering or aggregation of the AST nodes. The paper also presents the design of a framework that implements the proposed obfuscation technique by automatically generating, from the specification of the message format, a library performing the corresponding transformations. Finally, our framework is applied to two real application protocols (Modbus and HTTP) to illustrate the relevance and efficiency of the proposed approach. Various metrics recorded from the experiments show the significant increase of the complexity of the obfuscated protocol binary compared to the non-obfuscated code. It is also shown that the execution time and memory overheads remain acceptable for a practical deployment of the approach in operation.
versions: [ { "created": "Wed, 25 Jul 2018 07:49:25 GMT", "version": "v1" } ]
update_date: 2018-07-26
authors_parsed: [ [ "Duchene", "Julien", "", "CALID, LAAS-TSF" ], [ "Alata", "Eric", "", "LAAS-TSF" ], [ "Nicomette", "Vincent", "", "LAAS-TSF" ], [ "Kaâniche", "Mohamed", "", "LAAS-TSF" ], [ "Guernic", "Colas Le", "", "DGA.MI, TAMIS" ]...
abstract: This paper proposes a new obfuscation technique of a communication protocol that is aimed at making the reverse engineering of the protocol more complex. The obfuscation is based on the transformation of protocol message format specification. The obfuscating transformations are applied to the Abstract Syntax Tree (AST) representation of the messages and mainly concern the ordering or aggregation of the AST nodes. The paper also presents the design of a framework that implements the proposed obfuscation technique by automatically generating, from the specification of the message format, a library performing the corresponding transformations. Finally, our framework is applied to two real application protocols (Modbus and HTTP) to illustrate the relevance and efficiency of the proposed approach. Various metrics recorded from the experiments show the significant increase of the complexity of the obfuscated protocol binary compared to the non-obfuscated code. It is also shown that the execution time and memory overheads remain acceptable for a practical deployment of the approach in operation.

id: 1706.01269
submitter: Alex Jourjine
authors: Alex Jourjine
title: Extended Gauge Theory, Bi-Spinors, and Scalar Supersymmetry
comments: Typos cleanup. 9 pages
journal-ref: null
doi: null
report-no: null
categories: hep-th hep-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Within the context of the extended bi-spinor gauge theory we describe a new off-shell realization of scalar supersymmetry (s-susy) of massless interacting fields with U(1), U(1) x SU(N) and U(1) x SU(N_1) x SU(N_2) gauge groups. S-susy acts in the space of graded differential forms. The realization is non-linear in the non-abelian case. S-susy would not require the doubling of the SM particle spectrum. Instead, essentially only the forth generation of quarks and leptons would be needed as extra field content. The theory is by construction globally U(2,2) invariant and is an example of a supersymmetric CFT.
versions: [ { "created": "Mon, 5 Jun 2017 10:46:20 GMT", "version": "v1" }, { "created": "Tue, 26 Sep 2017 16:49:19 GMT", "version": "v2" }, { "created": "Mon, 16 Oct 2017 18:11:58 GMT", "version": "v3" } ]
update_date: 2017-10-18
authors_parsed: [ [ "Jourjine", "Alex", "" ] ]
abstract: Within the context of the extended bi-spinor gauge theory we describe a new off-shell realization of scalar supersymmetry (s-susy) of massless interacting fields with U(1), U(1) x SU(N) and U(1) x SU(N_1) x SU(N_2) gauge groups. S-susy acts in the space of graded differential forms. The realization is non-linear in the non-abelian case. S-susy would not require the doubling of the SM particle spectrum. Instead, essentially only the fourth generation of quarks and leptons would be needed as extra field content. The theory is by construction globally U(2,2) invariant and is an example of a supersymmetric CFT.

id: 2401.13213
submitter: Miao Zhang
authors: Miao Zhang, Zee fryer, Ben Colman, Ali Shahriyari, Gaurav Bharaj
title: Common-Sense Bias Discovery and Mitigation for Classification Tasks
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Machine learning model bias can arise from dataset composition: sensitive features correlated to the learning target disturb the model decision rule and lead to performance differences along the features. Existing de-biasing work captures prominent and delicate image features which are traceable in model latent space, like colors of digits or background of animals. However, using the latent space is not sufficient to understand all dataset feature correlations. In this work, we propose a framework to extract feature clusters in a dataset based on image descriptions, allowing us to capture both subtle and coarse features of the images. The feature co-occurrence pattern is formulated and correlation is measured, utilizing a human-in-the-loop for examination. The analyzed features and correlations are human-interpretable, so we name the method Common-Sense Bias Discovery (CSBD). Having exposed sensitive correlations in a dataset, we demonstrate that downstream model bias can be mitigated by adjusting image sampling weights, without requiring a sensitive group label supervision. Experiments show that our method discovers novel biases on multiple classification tasks for two benchmark image datasets, and the intervention outperforms state-of-the-art unsupervised bias mitigation methods.
versions: [ { "created": "Wed, 24 Jan 2024 03:56:07 GMT", "version": "v1" }, { "created": "Thu, 8 Feb 2024 05:38:54 GMT", "version": "v2" } ]
update_date: 2024-02-09
authors_parsed: [ [ "Zhang", "Miao", "" ], [ "fryer", "Zee", "" ], [ "Colman", "Ben", "" ], [ "Shahriyari", "Ali", "" ], [ "Bharaj", "Gaurav", "" ] ]
abstract: Machine learning model bias can arise from dataset composition: sensitive features correlated to the learning target disturb the model decision rule and lead to performance differences along the features. Existing de-biasing work captures prominent and delicate image features which are traceable in model latent space, like colors of digits or background of animals. However, using the latent space is not sufficient to understand all dataset feature correlations. In this work, we propose a framework to extract feature clusters in a dataset based on image descriptions, allowing us to capture both subtle and coarse features of the images. The feature co-occurrence pattern is formulated and correlation is measured, utilizing a human-in-the-loop for examination. The analyzed features and correlations are human-interpretable, so we name the method Common-Sense Bias Discovery (CSBD). Having exposed sensitive correlations in a dataset, we demonstrate that downstream model bias can be mitigated by adjusting image sampling weights, without requiring a sensitive group label supervision. Experiments show that our method discovers novel biases on multiple classification tasks for two benchmark image datasets, and the intervention outperforms state-of-the-art unsupervised bias mitigation methods.

id: hep-th/0703257
submitter: Masakazu Sano
authors: Masakazu Sano, Hisao Suzuki
title: Integrable Cosmological Models From Higher Dimensional Einstein Equations
comments: 10 pages, 2 figures, added reference, corrected typos(v2), explanation improved and references and acknowledgments added, accepted for publication in PRD(v3)
journal-ref: Phys.Rev.D76:064006,2007
doi: 10.1103/PhysRevD.76.064006
report-no: EPHOU 07-003
categories: hep-th gr-qc
license: null
orig_abstract: We consider the cosmological models for the higher dimensional spacetime which includes the curvatures of our space as well as the curvatures of the internal space. We find that the condition for the integrability of the cosmological equations is that the total space-time dimensions are D=10 or D=11 which is exactly the conditions for superstrings or M-theory. We obtain analytic solutions with generic initial conditions in the four dimensional Einstein frame and study the accelerating universe when both our space and the internal space have negative curvatures.
versions: [ { "created": "Wed, 28 Mar 2007 12:01:45 GMT", "version": "v1" }, { "created": "Thu, 5 Apr 2007 05:04:55 GMT", "version": "v2" }, { "created": "Thu, 19 Jul 2007 02:34:21 GMT", "version": "v3" } ]
update_date: 2008-11-26
authors_parsed: [ [ "Sano", "Masakazu", "" ], [ "Suzuki", "Hisao", "" ] ]
abstract: We consider the cosmological models for the higher dimensional spacetime which includes the curvatures of our space as well as the curvatures of the internal space. We find that the condition for the integrability of the cosmological equations is that the total space-time dimensions are D=10 or D=11, which are exactly the conditions for superstrings or M-theory. We obtain analytic solutions with generic initial conditions in the four dimensional Einstein frame and study the accelerating universe when both our space and the internal space have negative curvatures.

id: 2310.12124
submitter: Luca Smaldone Ph.D
authors: Giuseppe Gaetano Luciano and Luca Smaldone
title: Time-energy uncertainty relation for neutrino oscillations: historical development, applications and future prospects
comments: 20 pages, published version
journal-ref: Symmetry 15(11), 2032 (2023)
doi: 10.3390/sym15112032
report-no: null
categories: hep-th gr-qc quant-ph
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Time-energy uncertainty relation (TEUR) plays a fundamental role in quantum mechanics, as it allows to grasp peculiar aspects of a variety of phenomena based on very general principles and symmetries of the theory. Using the Mandelstam-Tamm method, TEUR has been recently derived for neutrino oscillations by connecting the uncertainty on neutrino energy with the characteristic time-scale of oscillations. Interestingly enough, the suggestive interpretation of neutrinos as unstable-like particles has proved to naturally emerge in this context. Further aspects have been later discussed in semiclassical gravity by computing corrections to the neutrino energy uncertainty in a generic stationary curved spacetime, and in quantum field theory, where the clock observable turns out to be identified with the non-conserved flavor charge operator. In the present work, we give an overview on the above achievements. In particular, we analyze the implications of TEUR and explore the impact of gravitational and non-relativistic effects on the standard condition for neutrino oscillations.
versions: [ { "created": "Wed, 18 Oct 2023 17:31:14 GMT", "version": "v1" }, { "created": "Thu, 9 Nov 2023 10:53:41 GMT", "version": "v2" } ]
update_date: 2023-11-10
authors_parsed: [ [ "Luciano", "Giuseppe Gaetano", "" ], [ "Smaldone", "Luca", "" ] ]
abstract: The time-energy uncertainty relation (TEUR) plays a fundamental role in quantum mechanics, as it allows one to grasp peculiar aspects of a variety of phenomena based on very general principles and symmetries of the theory. Using the Mandelstam-Tamm method, TEUR has recently been derived for neutrino oscillations by connecting the uncertainty on neutrino energy with the characteristic time-scale of oscillations. Interestingly enough, the suggestive interpretation of neutrinos as unstable-like particles has proved to naturally emerge in this context. Further aspects have later been discussed in semiclassical gravity, by computing corrections to the neutrino energy uncertainty in a generic stationary curved spacetime, and in quantum field theory, where the clock observable turns out to be identified with the non-conserved flavor charge operator. In the present work, we give an overview of the above achievements. In particular, we analyze the implications of TEUR and explore the impact of gravitational and non-relativistic effects on the standard condition for neutrino oscillations.

id: 2405.09155
submitter: Manoj Gulati
authors: Lim Chang Quan Thaddeus, C. Rajashekar Reddy, Yuvraj Singh Bhadauria, Dhairya Shah, Manoj Gulati, Ambuj Varshney
title: TunnelSense: Low-power, Non-Contact Sensing using Tunnel Diodes
comments: This work is accepted at IEEE RFID 2024
journal-ref: null
doi: null
report-no: null
categories: cs.ET
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
orig_abstract: Sensing the motion of physical objects in an environment enables numerous applications, from tracking occupancy in buildings and monitoring vital signs to diagnosing faults in machines. Typically, these application scenarios involve attaching a sensor, such as an accelerometer, to the object of interest, like a wearable device that tracks our steps. However, many of these scenarios require tracking motion in a noncontact manner where the sensor is not in touch with the object. A sensor in such a scenario observes variations in radio, light, acoustic, and infrared fields disturbed by the object's motion. Current noncontact sensing mechanisms often require substantial energy and involve complex processing on sophisticated hardware. We present TunnelSense, a novel mechanism that rethinks noncontact sensing using tunnel diode oscillators. They are highly sensitive to changes in their electromagnetic environments. The motion of an object near a tunnel diode oscillator induces corresponding changes in its resonant frequency and thus in the generated radio waves. Additionally, the low-power characteristics of the tunnel diode allow tags designed using them to operate on less than 100microwatt of power consumption and with a biasing voltage starting at 70 millivolt. This enables prolonged tag operation on a small battery or energy harvested from the environment. Among numerous applications enabled by the TunnelSense system, this work demonstrates its ability to detect breathing at distances up to 30 centimeter between the subject and the TunnelSense tag.
versions: [ { "created": "Wed, 15 May 2024 07:39:13 GMT", "version": "v1" } ]
update_date: 2024-05-16
authors_parsed: [ [ "Thaddeus", "Lim Chang Quan", "" ], [ "Reddy", "C. Rajashekar", "" ], [ "Bhadauria", "Yuvraj Singh", "" ], [ "Shah", "Dhairya", "" ], [ "Gulati", "Manoj", "" ], [ "Varshney", "Ambuj", "" ] ]
abstract: Sensing the motion of physical objects in an environment enables numerous applications, from tracking occupancy in buildings and monitoring vital signs to diagnosing faults in machines. Typically, these application scenarios involve attaching a sensor, such as an accelerometer, to the object of interest, like a wearable device that tracks our steps. However, many of these scenarios require tracking motion in a noncontact manner where the sensor is not in touch with the object. A sensor in such a scenario observes variations in radio, light, acoustic, and infrared fields disturbed by the object's motion. Current noncontact sensing mechanisms often require substantial energy and involve complex processing on sophisticated hardware. We present TunnelSense, a novel mechanism that rethinks noncontact sensing using tunnel diode oscillators. They are highly sensitive to changes in their electromagnetic environments. The motion of an object near a tunnel diode oscillator induces corresponding changes in its resonant frequency and thus in the generated radio waves. Additionally, the low-power characteristics of the tunnel diode allow tags designed using them to operate on less than 100 microwatts of power consumption and with a biasing voltage starting at 70 millivolts. This enables prolonged tag operation on a small battery or energy harvested from the environment. Among numerous applications enabled by the TunnelSense system, this work demonstrates its ability to detect breathing at distances up to 30 centimeters between the subject and the TunnelSense tag.

id: hep-th/0104060
submitter: Li Yu Qi
authors: Han-Ying Guo, Xiao-mei Ji, Yu-Qi Li, and Ke Wu
title: A Note on Symplectic, Multisymplectic Scheme in Finite Element Method
comments: 7 pages, 3 figures
journal-ref: null
doi: 10.1088/0253-6102/36/3/259
report-no: null
categories: hep-th
license: null
orig_abstract: We find that with uniform mesh, the numerical schemes derived from finite element method can keep a preserved symplectic structure in one-dimensional case and a preserved multisymplectic structure in two-dimentional case in certain discrete version respectively. These results are in fact the intrinsic reason that the numerical experiments indicate that such finite element algorithms are accurate in practice.
versions: [ { "created": "Fri, 6 Apr 2001 09:03:42 GMT", "version": "v1" } ]
update_date: 2018-01-17
authors_parsed: [ [ "Guo", "Han-Ying", "" ], [ "Ji", "Xiao-mei", "" ], [ "Li", "Yu-Qi", "" ], [ "Wu", "Ke", "" ] ]
abstract: We find that, with a uniform mesh, the numerical schemes derived from the finite element method can keep a preserved symplectic structure in the one-dimensional case and a preserved multisymplectic structure in the two-dimensional case, in certain discrete versions respectively. These results are in fact the intrinsic reason that the numerical experiments indicate that such finite element algorithms are accurate in practice.

id: 2207.07597
submitter: Minsang Kim
authors: Minsang Kim, Sang-hyun Je, Eunjoo Park
title: OASYS: Domain-Agnostic Automated System for Constructing Knowledge Base from Unstructured Text
comments: ACM SIGKDD Workshop on Mining and Learning with Graphs 2022, Accepted
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.AI cs.LG
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
orig_abstract: In recent years, creating and managing knowledge bases have become crucial to the retail product and enterprise domains. We present an automatic knowledge base construction system that mines data from documents. This system can generate training data during the training process without human intervention. Therefore, it is domain-agnostic trainable using only the target domain text corpus and a pre-defined knowledge base. This system is called OASYS and is the first system built with the Korean language in mind. In addition, we also have constructed a new human-annotated benchmark dataset of the Korean Wikipedia corpus paired with a Korean DBpedia to aid system evaluation. The system performance results on human-annotated benchmark test dataset are meaningful and show that the generated knowledge base from OASYS trained on only auto-generated data is useful. We provide both a human-annotated test dataset and an auto-generated dataset.
versions: [ { "created": "Wed, 29 Jun 2022 22:03:38 GMT", "version": "v1" } ]
update_date: 2022-07-18
authors_parsed: [ [ "Kim", "Minsang", "" ], [ "Je", "Sang-hyun", "" ], [ "Park", "Eunjoo", "" ] ]
abstract: In recent years, creating and managing knowledge bases has become crucial to the retail product and enterprise domains. We present an automatic knowledge base construction system that mines data from documents. This system can generate training data during the training process without human intervention. Therefore, it is domain-agnostic and trainable using only the target domain text corpus and a pre-defined knowledge base. This system is called OASYS and is the first system built with the Korean language in mind. In addition, we have also constructed a new human-annotated benchmark dataset of the Korean Wikipedia corpus paired with a Korean DBpedia to aid system evaluation. The system performance results on the human-annotated benchmark test dataset are meaningful and show that the knowledge base generated by OASYS, trained on only auto-generated data, is useful. We provide both a human-annotated test dataset and an auto-generated dataset.

id: hep-th/0202142
submitter: Robert Brandenberger
authors: Robert H. Brandenberger (CERN & Brown Univ.), Jerome Martin (IAP)
title: On Signatures of Short Distance Physics in the Cosmic Microwave Background
comments: 11 pages, 4 figures
journal-ref: Int.J.Mod.Phys. A17 (2002) 3663-3680
doi: 10.1142/S0217751X02010765
report-no: BROWN-HET-1302
categories: hep-th astro-ph gr-qc hep-ph
license: null
orig_abstract: Following a self-contained review of the basics of the theory of cosmological perturbations, we discuss why the conclusions reached in the recent paper by Kaloper et al are too pessimistic estimates of the amplitude of possible imprints of trans-Planckian (string) physics on the spectrum of cosmic microwave anisotropies in an inflationary Universe. It is shown that the likely origin of large trans-Planckian effects on late time cosmological fluctuations comes from nonadiabatic evolution of the state of fluctuations while the wavelength is smaller than the Planck (string) scale, resulting in an excited state at the time that the wavelength crosses the Hubble radius during inflation.
versions: [ { "created": "Thu, 21 Feb 2002 11:35:00 GMT", "version": "v1" } ]
update_date: 2016-09-06
authors_parsed: [ [ "Brandenberger", "Robert H.", "", "CERN & Brown Univ." ], [ "Martin", "Jerome", "", "IAP" ] ]
abstract: Following a self-contained review of the basics of the theory of cosmological perturbations, we discuss why the conclusions reached in the recent paper by Kaloper et al. are too pessimistic estimates of the amplitude of possible imprints of trans-Planckian (string) physics on the spectrum of cosmic microwave anisotropies in an inflationary Universe. It is shown that the likely origin of large trans-Planckian effects on late time cosmological fluctuations comes from nonadiabatic evolution of the state of fluctuations while the wavelength is smaller than the Planck (string) scale, resulting in an excited state at the time that the wavelength crosses the Hubble radius during inflation.

id: hep-th/0212226
submitter: Andrei Ivanov
authors: M. Faber, A. N. Ivanov
title: On the ground state of a free massless (pseudo)scalar field in two dimensions
comments: 20 pages, Latex, no figures
journal-ref: null
doi: null
report-no: null
categories: hep-th
license: null
orig_abstract: We investigate the ground state of a free massless (pseudo)scalar field in 1+1-dimensional space-time. We argue that in the quantum field theory of a free massless (pseudo)scalar field without infrared divergences (Eur. Phys. J. C24, 653 (2002)) the ground state can be represented by a tensor product of wave functions of the fiducial vacuum and of the collective zero-mode, describing the motion of the ``center of mass'' of a free massless (pseudo)scalar field. We show that the bosonized version of the BCS wave function of the ground state of the massless Thirring model obtained in (Phys.Lett. B563, 231 (2003)) describes the ground state of the free massless (pseudo)scalar field.
versions: [ { "created": "Wed, 18 Dec 2002 22:07:01 GMT", "version": "v1" }, { "created": "Sun, 29 Jun 2003 09:08:29 GMT", "version": "v2" } ]
update_date: 2007-05-23
authors_parsed: [ [ "Faber", "M.", "" ], [ "Ivanov", "A. N.", "" ] ]
abstract: We investigate the ground state of a free massless (pseudo)scalar field in 1+1-dimensional space-time. We argue that in the quantum field theory of a free massless (pseudo)scalar field without infrared divergences (Eur. Phys. J. C24, 653 (2002)) the ground state can be represented by a tensor product of wave functions of the fiducial vacuum and of the collective zero-mode, describing the motion of the ``center of mass'' of a free massless (pseudo)scalar field. We show that the bosonized version of the BCS wave function of the ground state of the massless Thirring model obtained in (Phys.Lett. B563, 231 (2003)) describes the ground state of the free massless (pseudo)scalar field.

id: 2111.07492
submitter: Chen Ma
authors: Chen Ma, Xiangyu Guo, Li Chen, Jun-Hai Yong, Yisen Wang
title: Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks
comments: Accepted at NeurIPS 2021. The missing square term in Eqn.(13), as well as many other mistakes of the previous version, have been fixed in the current version
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.CR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: One major problem in black-box adversarial attacks is the high query complexity in the hard-label attack setting, where only the top-1 predicted label is available. In this paper, we propose a novel geometric-based approach called Tangent Attack (TA), which identifies an optimal tangent point of a virtual hemisphere located on the decision boundary to reduce the distortion of the attack. Assuming the decision boundary is locally flat, we theoretically prove that the minimum $\ell_2$ distortion can be obtained by reaching the decision boundary along the tangent line passing through such tangent point in each iteration. To improve the robustness of our method, we further propose a generalized method which replaces the hemisphere with a semi-ellipsoid to adapt to curved decision boundaries. Our approach is free of pre-training. Extensive experiments conducted on the ImageNet and CIFAR-10 datasets demonstrate that our approach can consume only a small number of queries to achieve the low-magnitude distortion. The implementation source code is released online at https://github.com/machanic/TangentAttack.
versions: [ { "created": "Mon, 15 Nov 2021 01:51:37 GMT", "version": "v1" }, { "created": "Thu, 18 Nov 2021 05:21:57 GMT", "version": "v2" }, { "created": "Thu, 16 Dec 2021 13:20:41 GMT", "version": "v3" }, { "created": "Sun, 16 Jan 2022 09:41:09 GMT", "version": "v4" }, { "c...
update_date: 2022-03-01
authors_parsed: [ [ "Ma", "Chen", "" ], [ "Guo", "Xiangyu", "" ], [ "Chen", "Li", "" ], [ "Yong", "Jun-Hai", "" ], [ "Wang", "Yisen", "" ] ]
abstract: One major problem in black-box adversarial attacks is the high query complexity in the hard-label attack setting, where only the top-1 predicted label is available. In this paper, we propose a novel geometric-based approach called Tangent Attack (TA), which identifies an optimal tangent point of a virtual hemisphere located on the decision boundary to reduce the distortion of the attack. Assuming the decision boundary is locally flat, we theoretically prove that the minimum $\ell_2$ distortion can be obtained by reaching the decision boundary along the tangent line passing through such tangent point in each iteration. To improve the robustness of our method, we further propose a generalized method which replaces the hemisphere with a semi-ellipsoid to adapt to curved decision boundaries. Our approach is free of pre-training. Extensive experiments conducted on the ImageNet and CIFAR-10 datasets demonstrate that our approach can consume only a small number of queries to achieve the low-magnitude distortion. The implementation source code is released online at https://github.com/machanic/TangentAttack.

id: 2403.20312
submitter: Jaisidh Singh
authors: Jaisidh Singh, Ishaan Shrivastava, Mayank Vatsa, Richa Singh, Aparna Bharati
title: Learn "No" to Say "Yes" Better: Improving Vision-Language Models via Negations
comments: 14 pages + 6 figures in main manuscript (excluding references)
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Existing vision-language models (VLMs) treat text descriptions as a unit, confusing individual concepts in a prompt and impairing visual semantic matching and reasoning. An important aspect of reasoning in logic and language is negations. This paper highlights the limitations of popular VLMs such as CLIP, at understanding the implications of negations, i.e., the effect of the word "not" in a given prompt. To enable evaluation of VLMs on fluent prompts with negations, we present CC-Neg, a dataset containing 228,246 images, true captions and their corresponding negated captions. Using CC-Neg along with modifications to the contrastive loss of CLIP, our proposed CoN-CLIP framework, has an improved understanding of negations. This training paradigm improves CoN-CLIP's ability to encode semantics reliably, resulting in 3.85% average gain in top-1 accuracy for zero-shot image classification across 8 datasets. Further, CoN-CLIP outperforms CLIP on challenging compositionality benchmarks such as SugarCREPE by 4.4%, showcasing emergent compositional understanding of objects, relations, and attributes in text. Overall, our work addresses a crucial limitation of VLMs by introducing a dataset and framework that strengthens semantic associations between images and text, demonstrating improved large-scale foundation models with significantly reduced computational cost, promoting efficiency and accessibility.
versions: [ { "created": "Fri, 29 Mar 2024 17:33:42 GMT", "version": "v1" } ]
update_date: 2024-04-01
authors_parsed: [ [ "Singh", "Jaisidh", "" ], [ "Shrivastava", "Ishaan", "" ], [ "Vatsa", "Mayank", "" ], [ "Singh", "Richa", "" ], [ "Bharati", "Aparna", "" ] ]
abstract: Existing vision-language models (VLMs) treat text descriptions as a unit, confusing individual concepts in a prompt and impairing visual semantic matching and reasoning. An important aspect of reasoning in logic and language is negations. This paper highlights the limitations of popular VLMs such as CLIP at understanding the implications of negations, i.e., the effect of the word "not" in a given prompt. To enable evaluation of VLMs on fluent prompts with negations, we present CC-Neg, a dataset containing 228,246 images, true captions and their corresponding negated captions. Using CC-Neg along with modifications to the contrastive loss of CLIP, our proposed CoN-CLIP framework has an improved understanding of negations. This training paradigm improves CoN-CLIP's ability to encode semantics reliably, resulting in a 3.85% average gain in top-1 accuracy for zero-shot image classification across 8 datasets. Further, CoN-CLIP outperforms CLIP on challenging compositionality benchmarks such as SugarCREPE by 4.4%, showcasing emergent compositional understanding of objects, relations, and attributes in text. Overall, our work addresses a crucial limitation of VLMs by introducing a dataset and framework that strengthens semantic associations between images and text, demonstrating improved large-scale foundation models with significantly reduced computational cost, promoting efficiency and accessibility.
1804.03242
Junyu Liu
Ning Bao, Junyu Liu
Quantum complexity and the virial theorem
v2: add references and a footnote. v3: published version, with concrete examples
JHEP 1808 (2018) 144
10.1007/JHEP08(2018)144
CALT-TH-2018-016
hep-th gr-qc quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is conjectured that in the geometric formulation of quantum computing, one can study quantum complexity through classical entropy of statistical ensembles established non-relativistically in the group manifold of unitary operators. The kinetic and positional decompositions of statistical entropy are conjectured to correspond to the Kolmogorov complexity and computational complexity, respectively, of corresponding quantum circuits. In this paper, we claim that by applying the virial theorem to the group manifold, one can derive a generic relation between Kolmogorov complexity and computational complexity in the thermal equilibrium.
[ { "created": "Mon, 9 Apr 2018 21:20:38 GMT", "version": "v1" }, { "created": "Tue, 15 May 2018 22:50:20 GMT", "version": "v2" }, { "created": "Tue, 21 Aug 2018 19:33:10 GMT", "version": "v3" } ]
2018-08-27
[ [ "Bao", "Ning", "" ], [ "Liu", "Junyu", "" ] ]
It is conjectured that in the geometric formulation of quantum computing, one can study quantum complexity through classical entropy of statistical ensembles established non-relativistically in the group manifold of unitary operators. The kinetic and positional decompositions of statistical entropy are conjectured to correspond to the Kolmogorov complexity and computational complexity, respectively, of corresponding quantum circuits. In this paper, we claim that by applying the virial theorem to the group manifold, one can derive a generic relation between Kolmogorov complexity and computational complexity in the thermal equilibrium.
hep-th/0607017
Han-Ying Guo
Han-Ying Guo
The Beltrami Model of De Sitter Space: From Snyder's quantized space-time to de Sitter invariant relativity
15 pages. Invited talk given at `International workshop on noncommutative geometry and physics', Beijing, Nov. 7-10, 2005. To appear in the proceedings
null
null
null
hep-th
null
In terms of the Beltrami model of de Sitter space we show that there is an interchangeable relation between Snyder's quantized space-time model in dS-space of momenta at the Planck length $\ell_P=(G\hbar c^{-3})^{1/2}$ and the dS-invariant special relativity in dS-spacetime of radius $R\simeq(3\Lambda^{-1})^{1/2}$, which is another fundamental length related to the cosmological constant. Here, the cosmological constant $\Lambda$ is regarded as a fundamental constant together with the speed of light $c$, Newton constant $G$ and Planck constant $\hbar$. Furthermore, the physics at two fundamental scales of length, the dS-radius $R$ and the Planck length $\ell_P$, should be dual to each other and linked via the gravity with local dS-invariance characterized by a dimensionless coupling constant $g= \sqrt{3} \ell_P/R\simeq(G\hbar c^{-3}\Lambda)^{1/2}\sim 10^{-61}$.
[ { "created": "Tue, 4 Jul 2006 04:09:19 GMT", "version": "v1" } ]
2007-05-23
[ [ "Guo", "Han-Ying", "" ] ]
In terms of the Beltrami model of de Sitter space we show that there is an interchangeable relation between Snyder's quantized space-time model in dS-space of momenta at the Planck length $\ell_P=(G\hbar c^{-3})^{1/2}$ and the dS-invariant special relativity in dS-spacetime of radius $R\simeq(3\Lambda^{-1})^{1/2}$, which is another fundamental length related to the cosmological constant. Here, the cosmological constant $\Lambda$ is regarded as a fundamental constant together with the speed of light $c$, Newton constant $G$ and Planck constant $\hbar$. Furthermore, the physics at two fundamental scales of length, the dS-radius $R$ and the Planck length $\ell_P$, should be dual to each other and linked via the gravity with local dS-invariance characterized by a dimensionless coupling constant $g= \sqrt{3} \ell_P/R\simeq(G\hbar c^{-3}\Lambda)^{1/2}\sim 10^{-61}$.
2403.17369
ZiYang Gong
Ziyang Gong, Fuhao Li, Yupeng Deng, Deblina Bhattacharjee, Xianzheng Ma, Xiangwei Zhu, Zhenming Ji
CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised Domain Adaptation (UDA) aims to adapt models from labeled source domains to unlabeled target domains. When adapting to adverse scenes, existing UDA methods fail to perform well due to the lack of instructions, leading their models to overlook discrepancies within all adverse scenes. To tackle this, we propose CoDA which instructs models to distinguish, focus, and learn from these discrepancies at scene and image levels. Specifically, CoDA consists of a Chain-of-Domain (CoD) strategy and a Severity-Aware Visual Prompt Tuning (SAVPT) mechanism. CoD focuses on scene-level instructions to divide all adverse scenes into easy and hard scenes, guiding models to adapt from source to easy domains with easy scene images, and then to hard domains with hard scene images, thereby laying a solid foundation for whole adaptations. Building upon this foundation, we employ SAVPT to dive into more detailed image-level instructions to boost performance. SAVPT features a novel metric Severity that divides all adverse scene images into low-severity and high-severity images. Then Severity directs visual prompts and adapters, instructing models to concentrate on unified severity features instead of scene-specific features, without adding complexity to the model architecture. CoDA achieves SOTA performances on widely-used benchmarks under all adverse scenes. Notably, CoDA outperforms the existing ones by 4.6%, and 10.3% mIoU on the Foggy Driving, and Foggy Zurich benchmarks, respectively. Our code is available at https://github.com/Cuzyoung/CoDA
[ { "created": "Tue, 26 Mar 2024 04:09:08 GMT", "version": "v1" }, { "created": "Thu, 4 Apr 2024 08:05:06 GMT", "version": "v2" }, { "created": "Mon, 15 Jul 2024 06:34:03 GMT", "version": "v3" } ]
2024-07-16
[ [ "Gong", "Ziyang", "" ], [ "Li", "Fuhao", "" ], [ "Deng", "Yupeng", "" ], [ "Bhattacharjee", "Deblina", "" ], [ "Ma", "Xianzheng", "" ], [ "Zhu", "Xiangwei", "" ], [ "Ji", "Zhenming", "" ] ]
Unsupervised Domain Adaptation (UDA) aims to adapt models from labeled source domains to unlabeled target domains. When adapting to adverse scenes, existing UDA methods fail to perform well due to the lack of instructions, leading their models to overlook discrepancies within all adverse scenes. To tackle this, we propose CoDA which instructs models to distinguish, focus, and learn from these discrepancies at scene and image levels. Specifically, CoDA consists of a Chain-of-Domain (CoD) strategy and a Severity-Aware Visual Prompt Tuning (SAVPT) mechanism. CoD focuses on scene-level instructions to divide all adverse scenes into easy and hard scenes, guiding models to adapt from source to easy domains with easy scene images, and then to hard domains with hard scene images, thereby laying a solid foundation for whole adaptations. Building upon this foundation, we employ SAVPT to dive into more detailed image-level instructions to boost performance. SAVPT features a novel metric Severity that divides all adverse scene images into low-severity and high-severity images. Then Severity directs visual prompts and adapters, instructing models to concentrate on unified severity features instead of scene-specific features, without adding complexity to the model architecture. CoDA achieves SOTA performances on widely-used benchmarks under all adverse scenes. Notably, CoDA outperforms the existing ones by 4.6%, and 10.3% mIoU on the Foggy Driving, and Foggy Zurich benchmarks, respectively. Our code is available at https://github.com/Cuzyoung/CoDA
1704.04268
John Rhodes
Elizabeth S. Allman and James H. Degnan and John A. Rhodes
Split probabilities and species tree inference under the multispecies coalescent model
43 pages
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using topological summaries of gene trees as a basis for species tree inference is a promising approach to obtain acceptable speed on genomic-scale datasets, and to avoid some undesirable modeling assumptions. Here we study the probabilities of splits on gene trees under the multispecies coalescent model, and how their features might inform species tree inference. After investigating the behavior of split consensus methods, we investigate split invariants --- that is, polynomial relationships between split probabilities. These invariants are then used to show that, even though a split is an unrooted notion, split probabilities retain enough information to identify the rooted species tree topology for trees of more than 5 taxa, with one possible 6-taxon exception.
[ { "created": "Thu, 13 Apr 2017 19:45:39 GMT", "version": "v1" } ]
2017-04-17
[ [ "Allman", "Elizabeth S.", "" ], [ "Degnan", "James H.", "" ], [ "Rhodes", "John A.", "" ] ]
Using topological summaries of gene trees as a basis for species tree inference is a promising approach to obtain acceptable speed on genomic-scale datasets, and to avoid some undesirable modeling assumptions. Here we study the probabilities of splits on gene trees under the multispecies coalescent model, and how their features might inform species tree inference. After investigating the behavior of split consensus methods, we investigate split invariants --- that is, polynomial relationships between split probabilities. These invariants are then used to show that, even though a split is an unrooted notion, split probabilities retain enough information to identify the rooted species tree topology for trees of more than 5 taxa, with one possible 6-taxon exception.
hep-th/0104194
Leonardo Castellani
L. Castellani and L. Sommovigo
Supersymmetric domain wall x G/H solutions of IIB supergravity
8 pages, latex
null
null
DFTT-11/2001
hep-th
null
1-brane nonmaximally supersymmetric solutions of D=10 chiral supergravity are discussed. In the dual frame, their near brane geometry is the product of a 3-dimensional domain wall spacetime and a 7-dimensional homogeneous Einstein space G/H.
[ { "created": "Mon, 23 Apr 2001 16:00:44 GMT", "version": "v1" } ]
2007-05-23
[ [ "Castellani", "L.", "" ], [ "Sommovigo", "L.", "" ] ]
1-brane nonmaximally supersymmetric solutions of D=10 chiral supergravity are discussed. In the dual frame, their near brane geometry is the product of a 3-dimensional domain wall spacetime and a 7-dimensional homogeneous Einstein space G/H.
hep-th/9506197
Rainer Dick
Rainer Dick
Remarks on chiral symmetry breaking with massless fermions
LaTex, 7 pages, one misleading remark corrected and a comment added
null
null
null
hep-th hep-ph
null
In this talk I present recent results on Lorentz covariant correlation functions $\langle q(p_1)\overline{q}(p_2)\rangle$ on the cone $p^2=0$. In particular, chiral symmetry breaking terms are constructed which resemble fermionic 2--point functions of 2--D CFT up to a scalar factor.
[ { "created": "Thu, 29 Jun 1995 18:45:35 GMT", "version": "v1" }, { "created": "Thu, 6 Jul 1995 02:49:58 GMT", "version": "v2" } ]
2008-02-03
[ [ "Dick", "Rainer", "" ] ]
In this talk I present recent results on Lorentz covariant correlation functions $\langle q(p_1)\overline{q}(p_2)\rangle$ on the cone $p^2=0$. In particular, chiral symmetry breaking terms are constructed which resemble fermionic 2--point functions of 2--D CFT up to a scalar factor.
hep-th/0506001
Valeri Frolov
Valeri P. Frolov, Werner Israel, and Andrei Zelnikov
Gravitational field of relativistic gyratons
11 pages
Phys.Rev. D72 (2005) 084031
10.1103/PhysRevD.72.084031
Alberta-Thy-08-05
hep-th gr-qc
null
The metric ansatz is used to describe the gravitational field of a beam-pulse of spinning radiation (gyraton) in an arbitrary number of spacetime dimensions D. First we demonstrate that this metric belongs to the class of metrics for which all scalar invariants constructed from the curvature and its covariant derivatives vanish. Next, it is shown that the vacuum Einstein equations reduce to two linear problems in (D-2)-dimensional Euclidean space. The first is to find the static magnetic potential created by a point-like source. The second requires finding the electric potential created by a point-like source surrounded by given distribution of the electric charge. To obtain a generic gyraton-type solution of the vacuum Einstein equations it is sufficient to allow the coefficients in the corresponding harmonic decompositions of solutions of the linear problems to depend arbitrarily on retarded time and substitute the obtained expressions in the metric ansatz. We discuss properties of the solutions for relativistic gyratons and consider special examples.
[ { "created": "Wed, 1 Jun 2005 00:06:46 GMT", "version": "v1" }, { "created": "Thu, 1 Sep 2005 21:11:45 GMT", "version": "v2" }, { "created": "Tue, 8 Nov 2005 20:57:24 GMT", "version": "v3" } ]
2009-11-11
[ [ "Frolov", "Valeri P.", "" ], [ "Israel", "Werner", "" ], [ "Zelnikov", "Andrei", "" ] ]
The metric ansatz is used to describe the gravitational field of a beam-pulse of spinning radiation (gyraton) in an arbitrary number of spacetime dimensions D. First we demonstrate that this metric belongs to the class of metrics for which all scalar invariants constructed from the curvature and its covariant derivatives vanish. Next, it is shown that the vacuum Einstein equations reduce to two linear problems in (D-2)-dimensional Euclidean space. The first is to find the static magnetic potential created by a point-like source. The second requires finding the electric potential created by a point-like source surrounded by given distribution of the electric charge. To obtain a generic gyraton-type solution of the vacuum Einstein equations it is sufficient to allow the coefficients in the corresponding harmonic decompositions of solutions of the linear problems to depend arbitrarily on retarded time and substitute the obtained expressions in the metric ansatz. We discuss properties of the solutions for relativistic gyratons and consider special examples.
1609.02318
Alex Alvarado
Nikita A. Shevchenko, Stanislav A. Derevyanko, Jaroslaw E. Prilepsky, Alex Alvarado, Polina Bayvel, and Sergei K. Turitsyn
Capacity Lower Bounds of the Noncentral Chi-Channel with Applications to Soliton Amplitude Modulation
null
null
10.1109/TCOMM.2018.2808286
null
cs.IT math.IT physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The channel law for amplitude-modulated solitons transmitted through a nonlinear optical fibre with ideal distributed amplification and a receiver based on the nonlinear Fourier transform is a noncentral chi-distribution with $2n$ degrees of freedom, where $n=2$ and $n=3$ correspond to the single- and dual-polarisation cases, respectively. In this paper, we study capacity lower bounds of this channel under an average power constraint in bits per channel use. We develop an asymptotic semi-analytic approximation for a capacity lower bound for arbitrary $n$ and a Rayleigh input distribution. It is shown that this lower bound grows logarithmically with signal-to-noise ratio (SNR), independently of the value of $n$. Numerical results for other continuous input distributions are also provided. A half-Gaussian input distribution is shown to give larger rates than a Rayleigh input distribution for $n=1,2,3$. At an SNR of $25$ dB, the best lower bounds we developed are approximately $3.68$ bit per channel use. The practically relevant case of amplitude shift-keying (ASK) constellations is also numerically analysed. For the same SNR of $25$ dB, a $16$-ASK constellation yields a rate of approximately $3.45$ bit per channel use.
[ { "created": "Thu, 8 Sep 2016 08:20:23 GMT", "version": "v1" }, { "created": "Sat, 1 Apr 2017 16:46:21 GMT", "version": "v2" }, { "created": "Sun, 10 Sep 2017 08:21:15 GMT", "version": "v3" }, { "created": "Fri, 16 Feb 2018 16:04:17 GMT", "version": "v4" } ]
2020-06-05
[ [ "Shevchenko", "Nikita A.", "" ], [ "Derevyanko", "Stanislav A.", "" ], [ "Prilepsky", "Jaroslaw E.", "" ], [ "Alvarado", "Alex", "" ], [ "Bayvel", "Polina", "" ], [ "Turitsyn", "Sergei K.", "" ] ]
The channel law for amplitude-modulated solitons transmitted through a nonlinear optical fibre with ideal distributed amplification and a receiver based on the nonlinear Fourier transform is a noncentral chi-distribution with $2n$ degrees of freedom, where $n=2$ and $n=3$ correspond to the single- and dual-polarisation cases, respectively. In this paper, we study capacity lower bounds of this channel under an average power constraint in bits per channel use. We develop an asymptotic semi-analytic approximation for a capacity lower bound for arbitrary $n$ and a Rayleigh input distribution. It is shown that this lower bound grows logarithmically with signal-to-noise ratio (SNR), independently of the value of $n$. Numerical results for other continuous input distributions are also provided. A half-Gaussian input distribution is shown to give larger rates than a Rayleigh input distribution for $n=1,2,3$. At an SNR of $25$ dB, the best lower bounds we developed are approximately $3.68$ bit per channel use. The practically relevant case of amplitude shift-keying (ASK) constellations is also numerically analysed. For the same SNR of $25$ dB, a $16$-ASK constellation yields a rate of approximately $3.45$ bit per channel use.
1301.3106
Syed Jafar
Syed A. Jafar
Topological Interference Management through Index Coding
Revised for the IEEE Transactions on Information Theory
null
10.1109/TIT.2013.2285151
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work studies linear interference networks, both wired and wireless, with no channel state information at the transmitters (CSIT) except a coarse knowledge of the end-to-end one-hop topology of the network that only allows a distinction between weak (zero) and significant (non-zero) channels and no further knowledge of the channel coefficients' realizations. The network capacity (wired) and DoF (wireless) are found to be bounded above by the capacity of an index coding problem for which the antidote graph is the complement of the given interference graph. The problems are shown to be equivalent under linear solutions. An interference alignment perspective is then used to translate the existing index coding solutions into the wired network capacity and wireless network DoF solutions, as well as to find new and unified solutions to different classes of all three problems.
[ { "created": "Mon, 14 Jan 2013 19:55:43 GMT", "version": "v1" }, { "created": "Mon, 30 Sep 2013 01:12:25 GMT", "version": "v2" } ]
2016-11-17
[ [ "Jafar", "Syed A.", "" ] ]
This work studies linear interference networks, both wired and wireless, with no channel state information at the transmitters (CSIT) except a coarse knowledge of the end-to-end one-hop topology of the network that only allows a distinction between weak (zero) and significant (non-zero) channels and no further knowledge of the channel coefficients' realizations. The network capacity (wired) and DoF (wireless) are found to be bounded above by the capacity of an index coding problem for which the antidote graph is the complement of the given interference graph. The problems are shown to be equivalent under linear solutions. An interference alignment perspective is then used to translate the existing index coding solutions into the wired network capacity and wireless network DoF solutions, as well as to find new and unified solutions to different classes of all three problems.
2208.14827
Niloofar Vardian
Niloofar Vardian
Entanglement Renormalization of the class of Continuous Matrix Product States
6 pages, 1 figure
null
10.1103/PhysRevD.108.094029
null
hep-th cond-mat.stat-mech cond-mat.str-el
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continuous tensor network gives a variational ansatz for the ground state of the quantum field theories (QFTs). The notable examples are the continuous matrix product state (cMPS) and the continuous multiscale entanglement renormalization ansatz (cMERA). While cMPS is just adapted to the non-relativistic QFTs, only the Gaussian cMERA is well-understood which we can not use to approximate the ground state of the interacting relativistic QFTs. But instead, cMERA also corresponds to a real-space renormalization group flow in the context of the wave functions. In this letter, we investigate the backward Gaussian cMERA renormalization group flow of the class of cMPS by putting the standard cMPS at the IR scale. At the UV scale, for the bosonic systems in the thermodynamic limit, we achieve the variational class of states that has been proposed recently as the relativistic cMPS (RCMPS) is adapted to the relativistic QFTs without requiring to introduce of any additional IR or UV cut-off. We also extend the RCMPS to fermionic systems and theories on a finite circle.
[ { "created": "Wed, 31 Aug 2022 12:56:15 GMT", "version": "v1" } ]
2023-11-23
[ [ "Vardian", "Niloofar", "" ] ]
Continuous tensor network gives a variational ansatz for the ground state of the quantum field theories (QFTs). The notable examples are the continuous matrix product state (cMPS) and the continuous multiscale entanglement renormalization ansatz (cMERA). While cMPS is just adapted to the non-relativistic QFTs, only the Gaussian cMERA is well-understood which we can not use to approximate the ground state of the interacting relativistic QFTs. But instead, cMERA also corresponds to a real-space renormalization group flow in the context of the wave functions. In this letter, we investigate the backward Gaussian cMERA renormalization group flow of the class of cMPS by putting the standard cMPS at the IR scale. At the UV scale, for the bosonic systems in the thermodynamic limit, we achieve the variational class of states that has been proposed recently as the relativistic cMPS (RCMPS) is adapted to the relativistic QFTs without requiring to introduce of any additional IR or UV cut-off. We also extend the RCMPS to fermionic systems and theories on a finite circle.
1410.2090
Amirpasha Shirazinia Dr.
Amirpasha Shirazinia, Subhrakanti Dey
Power-Constrained Sparse Gaussian Linear Dimensionality Reduction over Noisy Channels
Accepted for publication in IEEE Transactions on Signal Processing (16 pages)
null
10.1109/TSP.2015.2455521
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate power-constrained sensing matrix design in a sparse Gaussian linear dimensionality reduction framework. Our study is carried out in a single--terminal setup as well as in a multi--terminal setup consisting of orthogonal or coherent multiple access channels (MAC). We adopt the mean square error (MSE) performance criterion for sparse source reconstruction in a system where source-to-sensor channel(s) and sensor-to-decoder communication channel(s) are noisy. Our proposed sensing matrix design procedure relies upon minimizing a lower-bound on the MSE in single-- and multiple--terminal setups. We propose a three-stage sensing matrix optimization scheme that combines semi-definite relaxation (SDR) programming, a low-rank approximation problem and power-rescaling. Under certain conditions, we derive closed-form solutions to the proposed optimization procedure. Through numerical experiments, by applying practical sparse reconstruction algorithms, we show the superiority of the proposed scheme by comparing it with other relevant methods. This performance improvement is achieved at the price of higher computational complexity. Hence, in order to address the complexity burden, we present an equivalent stochastic optimization method to the problem of interest that can be solved approximately, while still providing a superior performance over the popular methods.
[ { "created": "Wed, 8 Oct 2014 13:01:25 GMT", "version": "v1" }, { "created": "Thu, 9 Oct 2014 13:10:37 GMT", "version": "v2" }, { "created": "Mon, 27 Jul 2015 10:25:35 GMT", "version": "v3" } ]
2015-10-28
[ [ "Shirazinia", "Amirpasha", "" ], [ "Dey", "Subhrakanti", "" ] ]
In this paper, we investigate power-constrained sensing matrix design in a sparse Gaussian linear dimensionality reduction framework. Our study is carried out in a single--terminal setup as well as in a multi--terminal setup consisting of orthogonal or coherent multiple access channels (MAC). We adopt the mean square error (MSE) performance criterion for sparse source reconstruction in a system where source-to-sensor channel(s) and sensor-to-decoder communication channel(s) are noisy. Our proposed sensing matrix design procedure relies upon minimizing a lower-bound on the MSE in single-- and multiple--terminal setups. We propose a three-stage sensing matrix optimization scheme that combines semi-definite relaxation (SDR) programming, a low-rank approximation problem and power-rescaling. Under certain conditions, we derive closed-form solutions to the proposed optimization procedure. Through numerical experiments, by applying practical sparse reconstruction algorithms, we show the superiority of the proposed scheme by comparing it with other relevant methods. This performance improvement is achieved at the price of higher computational complexity. Hence, in order to address the complexity burden, we present an equivalent stochastic optimization method to the problem of interest that can be solved approximately, while still providing a superior performance over the popular methods.
0906.3368
Skenderis Kostas
Joost Hoogeveen and Kostas Skenderis
Decoupling of unphysical states in the minimal pure spinor formalism I
77 pages (51 pages + appendices), added hyperrefs
JHEP01(2010)041
10.1007/JHEP01(2010)041
NSF-KITP-09-102, ITF-2009-15
hep-th
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is the first of a series of two papers where decoupling of unphysical states in the minimal pure spinor formalism is investigated. The multi-loop amplitude prescription for the minimal pure spinor superstring formulated in hep-th/0406055 involves the insertion of picture changing operators in the path integral. These operators are BRST closed in a distributional sense and depend on a number of constant tensors. One can trace the origin of these insertions to gauge fixing, so the amplitudes are formally independent of the constant tensors. We show however by explicit tree-level and one-loop computations that the picture changing operators are not BRST closed inside correlators and the amplitudes do depend on these constant tensors. This is due to the fact that the gauge fixing condition implicit in the existing minimal amplitude prescription is singular and this can lead to Lorentz violation and non-decoupling of BRST exact states. As discussed in hep-th/0406055, a manifestly Lorentz invariant prescription can be obtained by integrating over the constant tensors and in the sequel to this paper, it is shown that when one includes these integrations unphysical states do decouple to all orders despite the fact that the PCO's are not BRST closed inside correlators.
[ { "created": "Thu, 18 Jun 2009 08:36:32 GMT", "version": "v1" }, { "created": "Fri, 19 Jun 2009 11:42:15 GMT", "version": "v2" } ]
2010-01-16
[ [ "Hoogeveen", "Joost", "" ], [ "Skenderis", "Kostas", "" ] ]
This is the first of a series of two papers where decoupling of unphysical states in the minimal pure spinor formalism is investigated. The multi-loop amplitude prescription for the minimal pure spinor superstring formulated in hep-th/0406055 involves the insertion of picture changing operators in the path integral. These operators are BRST closed in a distributional sense and depend on a number of constant tensors. One can trace the origin of these insertions to gauge fixing, so the amplitudes are formally independent of the constant tensors. We show however by explicit tree-level and one-loop computations that the picture changing operators are not BRST closed inside correlators and the amplitudes do depend on these constant tensors. This is due to the fact that the gauge fixing condition implicit in the existing minimal amplitude prescription is singular and this can lead to Lorentz violation and non-decoupling of BRST exact states. As discussed in hep-th/0406055, a manifestly Lorentz invariant prescription can be obtained by integrating over the constant tensors and in the sequel to this paper, it is shown that when one includes these integrations unphysical states do decouple to all orders despite the fact that the PCO's are not BRST closed inside correlators.
1210.4211
Wei Lu
Wei Lu, Laks V.S. Lakshmanan
Profit Maximization over Social Networks
19 pages, 8 figures. An abbreviated version appears in 2012 IEEE International Conference on Data Mining (ICDM'12). The second version includes some minor fixes
null
10.1109/ICDM.2012.145
null
cs.SI cs.GT physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Influence maximization is the problem of finding a set of influential users in a social network such that the expected spread of influence under a certain propagation model is maximized. Much of the previous work has neglected the important distinction between social influence and actual product adoption. However, as recognized in the management science literature, an individual who gets influenced by social acquaintances may not necessarily adopt a product (or technology), due, e.g., to monetary concerns. In this work, we distinguish between influence and adoption by explicitly modeling the states of being influenced and of adopting a product. We extend the classical Linear Threshold (LT) model to incorporate prices and valuations, and factor them into users' decision-making process of adopting a product. We show that the expected profit function under our proposed model maintains submodularity under certain conditions, but no longer exhibits monotonicity, unlike the expected influence spread function. To maximize the expected profit under our extended LT model, we employ an unbudgeted greedy framework to propose three profit maximization algorithms. The results of our detailed experimental study on three real-world datasets demonstrate that of the three algorithms, \textsf{PAGE}, which assigns prices dynamically based on the profit potential of each candidate seed, has the best performance both in the expected profit achieved and in running time.
[ { "created": "Mon, 15 Oct 2012 22:32:37 GMT", "version": "v1" }, { "created": "Wed, 5 Jun 2013 16:41:04 GMT", "version": "v2" } ]
2016-11-18
[ [ "Lu", "Wei", "" ], [ "Lakshmanan", "Laks V. S.", "" ] ]
Influence maximization is the problem of finding a set of influential users in a social network such that the expected spread of influence under a certain propagation model is maximized. Much of the previous work has neglected the important distinction between social influence and actual product adoption. However, as recognized in the management science literature, an individual who gets influenced by social acquaintances may not necessarily adopt a product (or technology), due, e.g., to monetary concerns. In this work, we distinguish between influence and adoption by explicitly modeling the states of being influenced and of adopting a product. We extend the classical Linear Threshold (LT) model to incorporate prices and valuations, and factor them into users' decision-making process of adopting a product. We show that the expected profit function under our proposed model maintains submodularity under certain conditions, but no longer exhibits monotonicity, unlike the expected influence spread function. To maximize the expected profit under our extended LT model, we employ an unbudgeted greedy framework to propose three profit maximization algorithms. The results of our detailed experimental study on three real-world datasets demonstrate that of the three algorithms, \textsf{PAGE}, which assigns prices dynamically based on the profit potential of each candidate seed, has the best performance both in the expected profit achieved and in running time.
0710.5051
Nikolaos Tetradis
G. Dvali, H. B. Nielsen, N. Tetradis
Localization of Gauge Fields and Monopole Tunnelling
11 pages, 3 figures, improvements in the presentation, version to appear in Physical Review D
Phys.Rev.D77:085005,2008
10.1103/PhysRevD.77.085005
null
hep-th hep-ph
null
We study the dynamical localization of a massless gauge field on a lower-dimensional surface (2-brane). In flat space, the necessary and sufficient condition for this phenomenon is the existence of confinement in the bulk. The resulting configuration is equivalent to a dual Josephson junction. This duality leads to an interesting puzzle, as it implies that a localized massless theory, even in the Abelian case, must become confining at exponentially large distances. Through the use of topological arguments we clarify the physics behind this large-distance confinement and identify the instantons of the brane world-volume theory that are responsible for its appearance. We show that they correspond to the (condensed) bulk magnetic charges (monopoles), that occasionally tunnel through the brane and induce weak confinement of the brane theory. We consider the possible generalization of this effect to higher dimensions and discuss phenomenological bounds on the confinement of electric charges at exponentially large distances within our Universe.
[ { "created": "Fri, 26 Oct 2007 10:51:34 GMT", "version": "v1" }, { "created": "Thu, 28 Feb 2008 11:29:54 GMT", "version": "v2" } ]
2008-11-26
[ [ "Dvali", "G.", "" ], [ "Nielsen", "H. B.", "" ], [ "Tetradis", "N.", "" ] ]
We study the dynamical localization of a massless gauge field on a lower-dimensional surface (2-brane). In flat space, the necessary and sufficient condition for this phenomenon is the existence of confinement in the bulk. The resulting configuration is equivalent to a dual Josephson junction. This duality leads to an interesting puzzle, as it implies that a localized massless theory, even in the Abelian case, must become confining at exponentially large distances. Through the use of topological arguments we clarify the physics behind this large-distance confinement and identify the instantons of the brane world-volume theory that are responsible for its appearance. We show that they correspond to the (condensed) bulk magnetic charges (monopoles), that occasionally tunnel through the brane and induce weak confinement of the brane theory. We consider the possible generalization of this effect to higher dimensions and discuss phenomenological bounds on the confinement of electric charges at exponentially large distances within our Universe.
1012.1909
Manar Mohaisen
Manar Mohaisen, KyungHi Chang
On Transmit Antenna Selection for Multiuser MIMO Systems with Dirty Paper Coding
5 pages, 6 figures, 1 table, [The 20th Personal, Indoor and Mobile Radio Communications Symposium 2009 (PIMRC-09)]
The 20th Personal, Indoor and Mobile Radio Communications Symposium 2009 (PIMRC-09)
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we address transmit antenna selection in multi-user MIMO systems with precoding. The optimum and reduced-complexity sub-optimum antenna selection algorithms are introduced. QR-decomposition (QRD) based antenna selection is investigated and the reason behind its sub-optimality is analytically derived. We introduce the conventional QRD-based algorithm and propose an efficient QRD-based transmit antenna selection scheme (maxR) that is efficient in both implementation and performance. Moreover, we derive explicit formulae for the computational complexities of the aforementioned algorithms. Simulation results and analysis demonstrate that the proposed maxR algorithm requires only 1% of the computational effort required by the optimal algorithm for a degradation of 1dB and 0.1dB in the case of linear zero-forcing and Tomlinson-Harashima precoding schemes, respectively.
[ { "created": "Thu, 9 Dec 2010 02:03:17 GMT", "version": "v1" } ]
2010-12-10
[ [ "Mohaisen", "Manar", "" ], [ "Chang", "KyungHi", "" ] ]
In this paper, we address transmit antenna selection in multi-user MIMO systems with precoding. The optimum and reduced-complexity sub-optimum antenna selection algorithms are introduced. QR-decomposition (QRD) based antenna selection is investigated and the reason behind its sub-optimality is analytically derived. We introduce the conventional QRD-based algorithm and propose an efficient QRD-based transmit antenna selection scheme (maxR) that is efficient in both implementation and performance. Moreover, we derive explicit formulae for the computational complexities of the aforementioned algorithms. Simulation results and analysis demonstrate that the proposed maxR algorithm requires only 1% of the computational effort required by the optimal algorithm for a degradation of 1dB and 0.1dB in the case of linear zero-forcing and Tomlinson-Harashima precoding schemes, respectively.
1902.06066
Varshaneya V
Varshaneya V, Balasubramanian S and Darshan Gera
RES-SE-NET: Boosting Performance of Resnets by Enhancing Bridge-connections
null
null
null
null
cs.LG cs.CV stat.ML
http://creativecommons.org/licenses/by-sa/4.0/
One of the ways to train deep neural networks effectively is to use residual connections. Residual connections can be classified as being either identity connections or bridge-connections with a reshaping convolution. Empirical observations on CIFAR-10 and CIFAR-100 datasets using a baseline Resnet model, with bridge-connections removed, have shown a significant reduction in accuracy. This reduction is due to the lack of contribution, in the form of feature maps, by the bridge-connections. Hence, bridge-connections are vital for Resnet. However, all feature maps in the bridge-connections are considered to be equally important. In this work, an upgraded architecture "Res-SE-Net" is proposed to further strengthen the contribution from the bridge-connections by quantifying the importance of each feature map and weighting them accordingly using a Squeeze-and-Excitation (SE) block. It is demonstrated that Res-SE-Net generalizes much better than Resnet and SE-Resnet on the benchmark CIFAR-10 and CIFAR-100 datasets.
[ { "created": "Sat, 16 Feb 2019 08:25:16 GMT", "version": "v1" } ]
2019-02-19
[ [ "V", "Varshaneya", "" ], [ "S", "Balasubramanian", "" ], [ "Gera", "Darshan", "" ] ]
One of the ways to train deep neural networks effectively is to use residual connections. Residual connections can be classified as being either identity connections or bridge-connections with a reshaping convolution. Empirical observations on CIFAR-10 and CIFAR-100 datasets using a baseline Resnet model, with bridge-connections removed, have shown a significant reduction in accuracy. This reduction is due to the lack of contribution, in the form of feature maps, by the bridge-connections. Hence, bridge-connections are vital for Resnet. However, all feature maps in the bridge-connections are considered to be equally important. In this work, an upgraded architecture "Res-SE-Net" is proposed to further strengthen the contribution from the bridge-connections by quantifying the importance of each feature map and weighting them accordingly using a Squeeze-and-Excitation (SE) block. It is demonstrated that Res-SE-Net generalizes much better than Resnet and SE-Resnet on the benchmark CIFAR-10 and CIFAR-100 datasets.
1501.02855
Luis Sentis
Donghyun Kim, Ye Zhao, Gray Thomas, and Luis Sentis
Assessing Whole-Body Operational Space Control in a Point-Foot Series Elastic Biped: Balance on Split Terrain and Undirected Walking
17 pages, 9 figures, 4 tables
null
null
null
cs.RO cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present advancements in control and trajectory generation for agile behavior in bipedal robots. We demonstrate that Whole-Body Operational Space Control (WBOSC), developed a few years ago, is well suited for achieving two types of agile behaviors, namely, balancing on a high-pitch split terrain and achieving undirected walking on flat terrain. The work presented here is the first implementation of WBOSC on a biped robot, and more specifically a biped robot with series elastic actuators. We present and analyze a new algorithm that dynamically balances point-foot robots by choosing footstep placements. Dealing with the naturally unstable dynamics of these types of systems is a difficult problem that requires both the controller and the trajectory generation algorithm to operate quickly and efficiently. We put forth a comprehensive development and integration effort: the design and construction of the biped system and experimental infrastructure, a customization of WBOSC for the agile behaviors, and new trajectory generation algorithms. Using this custom-built controller, we conduct, for the first time, an experiment in which a biped robot balances on a high-pitch split terrain, demonstrating our ability to precisely regulate internal forces using force-sensing feedback techniques. Finally, we demonstrate the stabilizing capabilities of our online trajectory generation algorithm in a physics-based simulator and through physical experiments with a planarized locomotion setup.
[ { "created": "Tue, 13 Jan 2015 00:17:39 GMT", "version": "v1" } ]
2015-01-14
[ [ "Kim", "Donghyun", "" ], [ "Zhao", "Ye", "" ], [ "Thomas", "Gray", "" ], [ "Sentis", "Luis", "" ] ]
In this paper, we present advancements in control and trajectory generation for agile behavior in bipedal robots. We demonstrate that Whole-Body Operational Space Control (WBOSC), developed a few years ago, is well suited for achieving two types of agile behaviors, namely, balancing on a high-pitch split terrain and achieving undirected walking on flat terrain. The work presented here is the first implementation of WBOSC on a biped robot, and more specifically a biped robot with series elastic actuators. We present and analyze a new algorithm that dynamically balances point-foot robots by choosing footstep placements. Dealing with the naturally unstable dynamics of these types of systems is a difficult problem that requires both the controller and the trajectory generation algorithm to operate quickly and efficiently. We put forth a comprehensive development and integration effort: the design and construction of the biped system and experimental infrastructure, a customization of WBOSC for the agile behaviors, and new trajectory generation algorithms. Using this custom-built controller, we conduct, for the first time, an experiment in which a biped robot balances on a high-pitch split terrain, demonstrating our ability to precisely regulate internal forces using force-sensing feedback techniques. Finally, we demonstrate the stabilizing capabilities of our online trajectory generation algorithm in a physics-based simulator and through physical experiments with a planarized locomotion setup.
1009.5705
Grenville Croll
Angus Dunn
Spreadsheets - the Good, the Bad and the Downright Ugly
8 Pages
Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2010 157-164 ISBN 978-1-905404-50-6
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spreadsheets are ubiquitous, heavily relied on throughout vast swathes of finance, commerce, industry, academia and Government. They are also acknowledged to be extraordinarily and unacceptably prone to error. If these two points are accepted, it has to follow that their uncontrolled use has the potential to inflict considerable damage. One approach to controlling such error should be to define as "good practice" a set of characteristics that a spreadsheet must possess and as "bad practice" another set that it must avoid. Defining such characteristics should, in principle, be perfectly doable. However, being able to say with authority at a definite moment that any particular spreadsheet complies with these characteristics is very much more difficult. The author asserts that the use of automated spreadsheet development could markedly help in ensuring and demonstrating such compliance.
[ { "created": "Tue, 28 Sep 2010 21:46:50 GMT", "version": "v1" } ]
2010-09-30
[ [ "Dunn", "Angus", "" ] ]
Spreadsheets are ubiquitous, heavily relied on throughout vast swathes of finance, commerce, industry, academia and Government. They are also acknowledged to be extraordinarily and unacceptably prone to error. If these two points are accepted, it has to follow that their uncontrolled use has the potential to inflict considerable damage. One approach to controlling such error should be to define as "good practice" a set of characteristics that a spreadsheet must possess and as "bad practice" another set that it must avoid. Defining such characteristics should, in principle, be perfectly doable. However, being able to say with authority at a definite moment that any particular spreadsheet complies with these characteristics is very much more difficult. The author asserts that the use of automated spreadsheet development could markedly help in ensuring and demonstrating such compliance.
2207.14417
Alexandros Evangelidis
Muqsit Azeem, Alexandros Evangelidis, Jan K\v{r}et\'insk\'y, Alexander Slivinskiy, and Maximilian Weininger
Optimistic and Topological Value Iteration for Simple Stochastic Games
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While value iteration (VI) is a standard solution approach to simple stochastic games (SSGs), it has long suffered from the lack of a stopping criterion. Recently, several solutions have appeared, among them also "optimistic" VI (OVI). However, OVI is applicable only to one-player SSGs with no end components. We lift these two assumptions, making it available to general SSGs. Further, we utilize the idea in the context of topological VI, where we provide an efficient, precise solution. In order to compare the new algorithms with the state of the art, we use not only the standard benchmarks, but we also design a random generator of SSGs, which can be biased towards various types of models, aiding in understanding the advantages of different algorithms on SSGs.
[ { "created": "Fri, 29 Jul 2022 00:34:47 GMT", "version": "v1" } ]
2022-08-01
[ [ "Azeem", "Muqsit", "" ], [ "Evangelidis", "Alexandros", "" ], [ "Křetínský", "Jan", "" ], [ "Slivinskiy", "Alexander", "" ], [ "Weininger", "Maximilian", "" ] ]
While value iteration (VI) is a standard solution approach to simple stochastic games (SSGs), it has long suffered from the lack of a stopping criterion. Recently, several solutions have appeared, among them also "optimistic" VI (OVI). However, OVI is applicable only to one-player SSGs with no end components. We lift these two assumptions, making it available to general SSGs. Further, we utilize the idea in the context of topological VI, where we provide an efficient, precise solution. In order to compare the new algorithms with the state of the art, we use not only the standard benchmarks, but we also design a random generator of SSGs, which can be biased towards various types of models, aiding in understanding the advantages of different algorithms on SSGs.
1208.5350
Man Yi Yim
Man Yi Yim, Ad Aertsen, Stefan Rotter
Impact of intrinsic biophysical diversity on the activity of spiking neurons
4 pages, 5 figures
Phys. Rev. E 87, 032710 (2013)
10.1103/PhysRevE.87.032710
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the effect of intrinsic heterogeneity on the activity of a population of leaky integrate-and-fire neurons. By rescaling the dynamical equation, we derive mathematical relations between multiple neuronal parameters and a fluctuating input noise. To this end, common input to heterogeneous neurons is conceived as an identical noise with neuron-specific mean and variance. As a consequence, the neuronal output rates can differ considerably, and their relative spike timing becomes desynchronized. This theory can quantitatively explain some recent experimental findings.
[ { "created": "Mon, 27 Aug 2012 10:06:12 GMT", "version": "v1" }, { "created": "Wed, 16 Jan 2013 10:27:55 GMT", "version": "v2" }, { "created": "Thu, 31 Jan 2013 14:42:54 GMT", "version": "v3" }, { "created": "Tue, 19 Feb 2013 17:08:00 GMT", "version": "v4" } ]
2013-08-21
[ [ "Yim", "Man Yi", "" ], [ "Aertsen", "Ad", "" ], [ "Rotter", "Stefan", "" ] ]
We study the effect of intrinsic heterogeneity on the activity of a population of leaky integrate-and-fire neurons. By rescaling the dynamical equation, we derive mathematical relations between multiple neuronal parameters and a fluctuating input noise. To this end, common input to heterogeneous neurons is conceived as an identical noise with neuron-specific mean and variance. As a consequence, the neuronal output rates can differ considerably, and their relative spike timing becomes desynchronized. This theory can quantitatively explain some recent experimental findings.
cs/0703053
Nicolas Lomenie
Guray Erus (CRIP5), Nicolas Lom\'enie (CRIP5)
Extraction of cartographic objects in high resolution satellite images for object model generation
null
4th Workshop on pattern Recognition in Remote Sensing in conjunction with ICPR2006 (08/2006) 00-00
null
null
cs.CV
null
The aim of this study is to detect man-made cartographic objects in high-resolution satellite images. New generation satellites offer a sub-metric spatial resolution, at which it is possible (and necessary) to develop methods at object level rather than at pixel level, and to exploit structural features of objects. With this aim, a method to generate structural object models from manually segmented images has been developed. To generate the model from non-segmented images, extraction of the objects from the sample images is required. A hybrid method of extraction (both in terms of input sources and segmentation algorithms) is proposed: A region-based segmentation is applied on a 10 meter resolution multi-spectral image. The result is used as a marker in a "marker-controlled watershed method using edges" on a 2.5 meter resolution panchromatic image. Very promising results have been obtained even on images where the limits of the target objects are not apparent.
[ { "created": "Mon, 12 Mar 2007 15:57:23 GMT", "version": "v1" } ]
2016-08-14
[ [ "Erus", "Guray", "", "CRIP5" ], [ "Loménie", "Nicolas", "", "CRIP5" ] ]
The aim of this study is to detect man-made cartographic objects in high-resolution satellite images. New generation satellites offer a sub-metric spatial resolution, at which it is possible (and necessary) to develop methods at object level rather than at pixel level, and to exploit structural features of objects. With this aim, a method to generate structural object models from manually segmented images has been developed. To generate the model from non-segmented images, extraction of the objects from the sample images is required. A hybrid method of extraction (both in terms of input sources and segmentation algorithms) is proposed: A region-based segmentation is applied on a 10 meter resolution multi-spectral image. The result is used as a marker in a "marker-controlled watershed method using edges" on a 2.5 meter resolution panchromatic image. Very promising results have been obtained even on images where the limits of the target objects are not apparent.
2208.14153
Yuhang Liu
Yuhang Liu, Zhen Zhang, Dong Gong, Mingming Gong, Biwei Huang, Anton van den Hengel, Kun Zhang, Javen Qinfeng Shi
Identifying Weight-Variant Latent Causal Models
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The task of causal representation learning aims to uncover latent higher-level causal representations that affect lower-level observations. Identifying true latent causal representations from observed data, while allowing instantaneous causal relations among latent variables, remains a challenge, however. To this end, we start from the analysis of three intrinsic properties in identifying latent space from observations: transitivity, permutation indeterminacy, and scaling indeterminacy. We find that transitivity plays a key role in impeding the identifiability of latent causal representations. To address the unidentifiability issue due to transitivity, we introduce a novel identifiability condition where the underlying latent causal model satisfies a linear-Gaussian model, in which the causal coefficients and the distribution of Gaussian noise are modulated by an additional observed variable. Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling. Furthermore, based on this theoretical result, we propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them, together with the mapping from the latent causal variables to the observed ones. We show that the proposed method learns the true parameters asymptotically. Experimental results on synthetic and real data demonstrate the identifiability and consistency results and the efficacy of the proposed method in learning latent causal representations.
[ { "created": "Tue, 30 Aug 2022 11:12:59 GMT", "version": "v1" }, { "created": "Fri, 30 Sep 2022 07:14:54 GMT", "version": "v2" }, { "created": "Wed, 16 Nov 2022 10:36:08 GMT", "version": "v3" }, { "created": "Tue, 6 Dec 2022 06:15:03 GMT", "version": "v4" }, { "cr...
2023-02-21
[ [ "Liu", "Yuhang", "" ], [ "Zhang", "Zhen", "" ], [ "Gong", "Dong", "" ], [ "Gong", "Mingming", "" ], [ "Huang", "Biwei", "" ], [ "Hengel", "Anton van den", "" ], [ "Zhang", "Kun", "" ], [ "Shi", ...
The task of causal representation learning aims to uncover latent higher-level causal representations that affect lower-level observations. Identifying true latent causal representations from observed data, while allowing instantaneous causal relations among latent variables, remains a challenge, however. To this end, we start from the analysis of three intrinsic properties in identifying latent space from observations: transitivity, permutation indeterminacy, and scaling indeterminacy. We find that transitivity plays a key role in impeding the identifiability of latent causal representations. To address the unidentifiability issue due to transitivity, we introduce a novel identifiability condition where the underlying latent causal model satisfies a linear-Gaussian model, in which the causal coefficients and the distribution of Gaussian noise are modulated by an additional observed variable. Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling. Furthermore, based on this theoretical result, we propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them, together with the mapping from the latent causal variables to the observed ones. We show that the proposed method learns the true parameters asymptotically. Experimental results on synthetic and real data demonstrate the identifiability and consistency results and the efficacy of the proposed method in learning latent causal representations.
0806.4959
Troels Harmark
Gianluca Grignani, Troels Harmark and Marta Orselli
The SU(2) x SU(2) sector in the string dual of N=6 superconformal Chern-Simons theory
19 pages; typos fixed, Sec. 6 improved
Nucl.Phys.B810:115-134,2009
10.1016/j.nuclphysb.2008.10.019
null
hep-th
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine the string dual of the recently constructed $\mathcal{N}=6$ superconformal Chern-Simons theory of Aharony, Bergman, Jafferis and Maldacena (ABJM theory). We focus in particular on the $SU(2)\times SU(2)$ sector. We find a sigma-model limit in which the resulting sigma-model is two Landau-Lifshitz models added together. We consider a Penrose limit for which we can approach the $SU(2)\times SU(2)$ sector. Finally, we find a new Giant Magnon solution in the $SU(2)\times SU(2)$ sector corresponding to one magnon in each $SU(2)$. We put these results together to find the full magnon dispersion relation and we compare this to recently found results for ABJM theory at weak coupling.
[ { "created": "Mon, 30 Jun 2008 17:45:13 GMT", "version": "v1" }, { "created": "Tue, 1 Jul 2008 17:26:39 GMT", "version": "v2" }, { "created": "Mon, 14 Jul 2008 14:18:33 GMT", "version": "v3" }, { "created": "Thu, 17 Jul 2008 16:54:23 GMT", "version": "v4" } ]
2017-09-07
[ [ "Grignani", "Gianluca", "" ], [ "Harmark", "Troels", "" ], [ "Orselli", "Marta", "" ] ]
We examine the string dual of the recently constructed $\mathcal{N}=6$ superconformal Chern-Simons theory of Aharony, Bergman, Jafferis and Maldacena (ABJM theory). We focus in particular on the $SU(2)\times SU(2)$ sector. We find a sigma-model limit in which the resulting sigma-model is two Landau-Lifshitz models added together. We consider a Penrose limit for which we can approach the $SU(2)\times SU(2)$ sector. Finally, we find a new Giant Magnon solution in the $SU(2)\times SU(2)$ sector corresponding to one magnon in each $SU(2)$. We put these results together to find the full magnon dispersion relation and we compare this to recently found results for ABJM theory at weak coupling.
1908.10004
Jia Tian
Jia Tian, Jue Hou, Bin Chen
Asymmetric $\lambda$-deformed cosets
23 pages; v2: references added
null
10.1016/j.nuclphysb.2020.114944
null
hep-th
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the integrable asymmetric $\lambda$-deformations of the $SO(n+1)/SO(n)$ coset models, following the prescription proposed in \cite{AsyLambda}. We construct all corresponding deformed geometries in an inductive way. Remarkably, we find a $Z_2$ transformation which maps the asymmetric $\lambda$-deformed models to the symmetric $\lambda$-deformed models.
[ { "created": "Tue, 27 Aug 2019 03:17:17 GMT", "version": "v1" }, { "created": "Tue, 3 Sep 2019 05:37:36 GMT", "version": "v2" } ]
2020-03-18
[ [ "Tian", "Jia", "" ], [ "Hou", "Jue", "" ], [ "Chen", "Bin", "" ] ]
We study the integrable asymmetric $\lambda$-deformations of the $SO(n+1)/SO(n)$ coset models, following the prescription proposed in \cite{AsyLambda}. We construct all corresponding deformed geometries in an inductive way. Remarkably, we find a $Z_2$ transformation which maps the asymmetric $\lambda$-deformed models to the symmetric $\lambda$-deformed models.
1202.5255
Avihay Kadosh
Avihay Kadosh, Aharon Davidson and Elisabetta Pallante
Slinky evolution of domain wall brane cosmology
24 pages, 4 figures, extended discussion of slinky evolution, minor revisions, conclusions unchanged
Phys. Rev. D 86, 124015 (2012)
10.1103/PhysRevD.86.124015
null
hep-th gr-qc hep-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Invoking an initial symmetry between the time $t$ and some extra spatial dimension $y$, we discuss a novel scenario where the dynamical formation of the 4-dim brane and its cosmological evolution are induced simultaneously by a common $t \leftrightarrow y$ symmetry breaking mechanism. The local maximum of the underlying scalar potential is mapped onto a 'watershed' curve in the $(t,y)$ plane; the direction tangent to this curve is identified as the cosmic time, whereas the perpendicular direction serves locally as the extra spatial dimension. Special attention is devoted to the so-called slinky configurations, whose brane cosmology is characterized by a decaying cosmological constant along the watershed curve. Such a slinky solution is first constructed within a simplified case where the watershed is constrained by $y = 0$. The physical requirements for a slinky configuration to generate a realistic model of cosmological evolution are then discussed in a more elaborated framework.
[ { "created": "Thu, 23 Feb 2012 18:19:06 GMT", "version": "v1" }, { "created": "Thu, 9 Aug 2012 10:18:03 GMT", "version": "v2" } ]
2012-12-06
[ [ "Kadosh", "Avihay", "" ], [ "Davidson", "Aharon", "" ], [ "Pallante", "Elisabetta", "" ] ]
Invoking an initial symmetry between the time $t$ and some extra spatial dimension $y$, we discuss a novel scenario where the dynamical formation of the 4-dim brane and its cosmological evolution are induced simultaneously by a common $t \leftrightarrow y$ symmetry breaking mechanism. The local maximum of the underlying scalar potential is mapped onto a 'watershed' curve in the $(t,y)$ plane; the direction tangent to this curve is identified as the cosmic time, whereas the perpendicular direction serves locally as the extra spatial dimension. Special attention is devoted to the so-called slinky configurations, whose brane cosmology is characterized by a decaying cosmological constant along the watershed curve. Such a slinky solution is first constructed within a simplified case where the watershed is constrained by $y = 0$. The physical requirements for a slinky configuration to generate a realistic model of cosmological evolution are then discussed in a more elaborated framework.
2007.01520
Alexander Mitchell Mr
Alexander L. Mitchell, Martin Engelcke, Oiwi Parker Jones, David Surovik, Siddhant Gangapurwala, Oliwier Melon, Ioannis Havoutis, and Ingmar Posner
First Steps: Latent-Space Control with Semantic Constraints for Quadruped Locomotion
8 pages, 7 figures, accepted at IROS 2020
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional approaches to quadruped control frequently employ simplified, hand-derived models. This significantly reduces the capability of the robot since its effective kinematic range is curtailed. In addition, kinodynamic constraints are often non-differentiable and difficult to implement in an optimisation approach. In this work, these challenges are addressed by framing quadruped control as optimisation in a structured latent space. A deep generative model captures a statistical representation of feasible joint configurations, whilst complex dynamic and terminal constraints are expressed via high-level, semantic indicators and represented by learned classifiers operating upon the latent space. As a consequence, complex constraints are rendered differentiable and evaluated an order of magnitude faster than analytical approaches. We validate the feasibility of locomotion trajectories optimised using our approach both in simulation and on a real-world ANYmal quadruped. Our results demonstrate that this approach is capable of generating smooth and realisable trajectories. To the best of our knowledge, this is the first time latent space control has been successfully applied to a complex, real robot platform.
[ { "created": "Fri, 3 Jul 2020 07:04:18 GMT", "version": "v1" }, { "created": "Fri, 20 Nov 2020 16:31:46 GMT", "version": "v2" } ]
2020-11-23
[ [ "Mitchell", "Alexander L.", "" ], [ "Engelcke", "Martin", "" ], [ "Jones", "Oiwi Parker", "" ], [ "Surovik", "David", "" ], [ "Gangapurwala", "Siddhant", "" ], [ "Melon", "Oliwier", "" ], [ "Havoutis", "Ioannis...
Traditional approaches to quadruped control frequently employ simplified, hand-derived models. This significantly reduces the capability of the robot since its effective kinematic range is curtailed. In addition, kinodynamic constraints are often non-differentiable and difficult to implement in an optimisation approach. In this work, these challenges are addressed by framing quadruped control as optimisation in a structured latent space. A deep generative model captures a statistical representation of feasible joint configurations, whilst complex dynamic and terminal constraints are expressed via high-level, semantic indicators and represented by learned classifiers operating upon the latent space. As a consequence, complex constraints are rendered differentiable and evaluated an order of magnitude faster than analytical approaches. We validate the feasibility of locomotion trajectories optimised using our approach both in simulation and on a real-world ANYmal quadruped. Our results demonstrate that this approach is capable of generating smooth and realisable trajectories. To the best of our knowledge, this is the first time latent space control has been successfully applied to a complex, real robot platform.
2203.00770
Kan Yu
Ming Zhan (1), Zhibo Pang (2 and 3), Dacfey Dzung (2), Kan Yu (4), Ming Xiao (3) ((1) Southwest University, (2) ABB Corporate Research, (3) KTH Royal Institute of Technology, (4) La Trobe University)
Short-Packet Interleaver against Impulse Interference in Practical Industrial Environments
14 pages, 12 figures, submitted to IEEE Transactions on Wireless Communications
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The most common cause of transmission failure in Wireless High Performance (WirelessHP) target industry environments is impulse interference. As interleavers are commonly used to improve the reliability on the Orthogonal Frequency Division Multiplexing (OFDM) symbol level for long packet transmission, this paper considers the feasibility of applying short-packet bit interleaving to enhance the impulse/burst interference resisting capability on both OFDM symbol and frame level. Using the Universal Software Radio Peripherals (USRP) and PC hardware platform, the Packet Error Rate (PER) performance of interleaved coded short-packet transmission with Convolutional Codes (CC), Reed-Solomon codes (RS) and RS+CC concatenated codes are tested and analyzed. Applying the IEEE 1613 standard for impulse interference generation, extensive PER tests of CC(1/2) and RS(31, 21)+CC(1/2) concatenated codes are performed. With practical experiments, we prove the effectiveness of bit interleaved coded short-packet transmission in real factory environments. We also investigate how PER performance depends on the interleavers, codes and impulse interference power and frequency.
[ { "created": "Tue, 1 Mar 2022 22:24:37 GMT", "version": "v1" } ]
2022-03-03
[ [ "Zhan", "Ming", "", "1" ], [ "Pang", "Zhibo", "", "2 and 3" ], [ "Dzung", "Dacfey", "" ], [ "Yu", "Kan", "" ], [ "Xiao", "Ming", "" ] ]
The most common cause of transmission failure in Wireless High Performance (WirelessHP) target industry environments is impulse interference. As interleavers are commonly used to improve the reliability on the Orthogonal Frequency Division Multiplexing (OFDM) symbol level for long packet transmission, this paper considers the feasibility of applying short-packet bit interleaving to enhance the impulse/burst interference resisting capability on both OFDM symbol and frame level. Using the Universal Software Radio Peripherals (USRP) and PC hardware platform, the Packet Error Rate (PER) performance of interleaved coded short-packet transmission with Convolutional Codes (CC), Reed-Solomon codes (RS) and RS+CC concatenated codes are tested and analyzed. Applying the IEEE 1613 standard for impulse interference generation, extensive PER tests of CC(1/2) and RS(31, 21)+CC(1/2) concatenated codes are performed. With practical experiments, we prove the effectiveness of bit interleaved coded short-packet transmission in real factory environments. We also investigate how PER performance depends on the interleavers, codes and impulse interference power and frequency.
0802.1520
Ophir Flomenbom
O. Flomenbom, and R. J. Silbey
Toolbox for analyzing finite two-state trajectories
null
Phys. Rev. E 78, 066105 (2008)
10.1103/PhysRevE.78.066105
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many experiments, the aim is to deduce an underlying multi-substate on-off kinetic scheme (KS) from the statistical properties of a two-state trajectory. However, the mapping of a KS into a two-state trajectory leads to the loss of information about the KS, and so, in many cases, more than one KS can be associated with the data. We recently showed that the optimal way to solve this problem is to use canonical forms of reduced dimensions (RD). RD forms are on-off networks with connections only between substates of different states, where the connections can have non-exponential waiting time probability density functions (WT-PDFs). In theory, only a single RD form can be associated with the data. To utilize RD forms in the analysis of the data, a RD form should be associated with the data. Here, we give a toolbox for building a RD form from a finite two-state trajectory. The methods in the toolbox are based on known statistical methods in data analysis, combined with statistical methods and numerical algorithms designed specifically for the current problem. Our toolbox is self-contained - it builds a mechanism based only on the information it extracts from the data, and its implementation on the data is fast (analyzing a 10^6 cycle trajectory from a thirty-parameter mechanism takes a couple of hours on a PC with a 2.66 GHz processor). The toolbox is automated and is freely available for academic research upon electronic request.
[ { "created": "Mon, 11 Feb 2008 20:07:26 GMT", "version": "v1" }, { "created": "Wed, 8 Oct 2008 23:31:58 GMT", "version": "v2" }, { "created": "Thu, 25 Dec 2008 03:07:42 GMT", "version": "v3" } ]
2010-08-16
[ [ "Flomenbom", "O.", "" ], [ "Silbey", "R. J.", "" ] ]
In many experiments, the aim is to deduce an underlying multi-substate on-off kinetic scheme (KS) from the statistical properties of a two-state trajectory. However, the mapping of a KS into a two-state trajectory leads to the loss of information about the KS, and so, in many cases, more than one KS can be associated with the data. We recently showed that the optimal way to solve this problem is to use canonical forms of reduced dimensions (RD). RD forms are on-off networks with connections only between substates of different states, where the connections can have non-exponential waiting time probability density functions (WT-PDFs). In theory, only a single RD form can be associated with the data. To utilize RD forms in the analysis of the data, a RD form should be associated with the data. Here, we give a toolbox for building a RD form from a finite two-state trajectory. The methods in the toolbox are based on known statistical methods in data analysis, combined with statistical methods and numerical algorithms designed specifically for the current problem. Our toolbox is self-contained - it builds a mechanism based only on the information it extracts from the data, and its implementation on the data is fast (analyzing a 10^6 cycle trajectory from a thirty-parameter mechanism takes a couple of hours on a PC with a 2.66 GHz processor). The toolbox is automated and is freely available for academic research upon electronic request.
2102.11448
DiJia Su
DiJia Su, Jason D. Lee, John M. Mulvey, H. Vincent Poor
MUSBO: Model-based Uncertainty Regularized and Sample Efficient Batch Optimization for Deployment Constrained Reinforcement Learning
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many contemporary applications such as healthcare, finance, robotics, and recommendation systems, continuous deployment of new policies for data collection and online learning is either cost ineffective or impractical. We consider a setting that lies between pure offline reinforcement learning (RL) and pure online RL called deployment constrained RL in which the number of policy deployments for data sampling is limited. To solve this challenging task, we propose a new algorithmic learning framework called Model-based Uncertainty regularized and Sample Efficient Batch Optimization (MUSBO). Our framework discovers novel and high quality samples for each deployment to enable efficient data collection. During each offline training session, we bootstrap the policy update by quantifying the amount of uncertainty within our collected data. In the high support region (low uncertainty), we encourage our policy by taking an aggressive update. In the low support region (high uncertainty) when the policy bootstraps into the out-of-distribution region, we downweight it by our estimated uncertainty quantification. Experimental results show that MUSBO achieves state-of-the-art performance in the deployment constrained RL setting.
[ { "created": "Tue, 23 Feb 2021 01:30:55 GMT", "version": "v1" }, { "created": "Thu, 3 Jun 2021 23:59:52 GMT", "version": "v2" } ]
2021-06-07
[ [ "Su", "DiJia", "" ], [ "Lee", "Jason D.", "" ], [ "Mulvey", "John M.", "" ], [ "Poor", "H. Vincent", "" ] ]
In many contemporary applications such as healthcare, finance, robotics, and recommendation systems, continuous deployment of new policies for data collection and online learning is either cost ineffective or impractical. We consider a setting that lies between pure offline reinforcement learning (RL) and pure online RL called deployment constrained RL in which the number of policy deployments for data sampling is limited. To solve this challenging task, we propose a new algorithmic learning framework called Model-based Uncertainty regularized and Sample Efficient Batch Optimization (MUSBO). Our framework discovers novel and high quality samples for each deployment to enable efficient data collection. During each offline training session, we bootstrap the policy update by quantifying the amount of uncertainty within our collected data. In the high support region (low uncertainty), we encourage our policy by taking an aggressive update. In the low support region (high uncertainty) when the policy bootstraps into the out-of-distribution region, we downweight it by our estimated uncertainty quantification. Experimental results show that MUSBO achieves state-of-the-art performance in the deployment constrained RL setting.
2005.04864
Xingyu Chen
Xingyu Chen and Zijie Liu
The Fairness of Leximin in Allocation of Indivisible Chores
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The leximin solution -- which selects an allocation that maximizes the minimum utility, then the second minimum utility, and so forth -- is known to provide an EFX (envy-free up to any good) fairness guarantee in some contexts when allocating indivisible goods. However, it remains unknown how fair the leximin solution is when used to allocate indivisible chores. In this paper, we demonstrate that the leximin solution can be modified to also provide compelling fairness guarantees for the allocation of indivisible chores. First, we generalize the definition of the leximin solution. Then, we show that the leximin solution finds a PROP1 (proportional up to one good) and PO (Pareto-optimal) allocation for 3 or 4 agents in the context of chores allocation with additive distinct valuations. Additionally, we prove that the leximin solution is EFX for combinations of goods and chores for agents with general but identical valuations.
[ { "created": "Mon, 11 May 2020 05:15:43 GMT", "version": "v1" } ]
2020-05-12
[ [ "Chen", "Xingyu", "" ], [ "Liu", "Zijie", "" ] ]
The leximin solution -- which selects an allocation that maximizes the minimum utility, then the second minimum utility, and so forth -- is known to provide an EFX (envy-free up to any good) fairness guarantee in some contexts when allocating indivisible goods. However, it remains unknown how fair the leximin solution is when used to allocate indivisible chores. In this paper, we demonstrate that the leximin solution can be modified to also provide compelling fairness guarantees for the allocation of indivisible chores. First, we generalize the definition of the leximin solution. Then, we show that the leximin solution finds a PROP1 (proportional up to one good) and PO (Pareto-optimal) allocation for 3 or 4 agents in the context of chores allocation with additive distinct valuations. Additionally, we prove that the leximin solution is EFX for combinations of goods and chores for agents with general but identical valuations.
1611.06065
Maude Pupin
Qassim Esmaeel, Maude Pupin (CRIStAL, BONSAI), Nam Phuong Kieu, Gabrielle Chataign\'e, Max B\'echet, Jovana Deravel, Fran\c{c}ois Krier, Monica H\"ofte, Philippe Jacques, Val\'erie Lecl\`ere (CRIStAL, BONSAI)
Burkholderia genome mining for nonribosomal peptide synthetases reveals a great potential for novel siderophores and lipopeptides synthesis
null
MicrobiologyOpen, 2016, 5 (3), pp.512 - 526
10.1002/mbo3.347
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Burkholderia is an important genus encompassing a variety of species, including pathogenic strains as well as strains that promote plant growth. We have carried out a global strategy, which combined two complementary approaches. The first one is genome guided with deep analysis of genome sequences and the second one is assay guided with experiments to support the predictions obtained in silico. This efficient screening for new secondary metabolites, performed on 48 gapless genomes of Burkholderia species, revealed a total of 161 clusters containing nonribosomal peptide synthetases (NRPSs), with the potential to synthesize at least 11 novel products. Most of them are siderophores or lipopeptides, two classes of products with potential application in biocontrol. The strategy led to the identification, for the first time, of the cluster for cepaciachelin biosynthesis in the genome of Burkholderia ambifaria AMMD and a cluster corresponding to a new malleobactin-like siderophore, called phymabactin, was identified in Burkholderia phymatum STM815 genome. In both cases, the siderophore was produced when the strain was grown in iron-limited conditions. Elsewhere, the cluster for the antifungal burkholdin was detected in the genome of B. ambifaria AMMD and also Burkholderia sp. KJ006. Burkholderia pseudomallei strains harbor the genetic potential to produce a novel lipopeptide called burkhomycin, containing a peptidyl moiety of 12 monomers. A mixture of lipopeptides produced by Burkholderia rhizoxinica lowered the surface tension of the supernatant from 70 to 27 mN/m. The production of nonribosomal secondary metabolites seems related to the three phylogenetic groups obtained from 16S rRNA sequences. Moreover, the genome-mining approach gave new insights into the nonribosomal synthesis exemplified by the identification of dual C/E domains in lipopeptide NRPSs, up to now essentially found in Pseudomonas strains.
[ { "created": "Fri, 18 Nov 2016 13:28:43 GMT", "version": "v1" } ]
2016-11-21
[ [ "Esmaeel", "Qassim", "", "CRIStAL, BONSAI" ], [ "Pupin", "Maude", "", "CRIStAL, BONSAI" ], [ "Kieu", "Nam Phuong", "", "CRIStAL, BONSAI" ], [ "Chataigné", "Gabrielle", "", "CRIStAL, BONSAI" ], [ "Béchet", "Max", "", "C...
Burkholderia is an important genus encompassing a variety of species, including pathogenic strains as well as strains that promote plant growth. We have carried out a global strategy, which combined two complementary approaches. The first one is genome guided with deep analysis of genome sequences and the second one is assay guided with experiments to support the predictions obtained in silico. This efficient screening for new secondary metabolites, performed on 48 gapless genomes of Burkholderia species, revealed a total of 161 clusters containing nonribosomal peptide synthetases (NRPSs), with the potential to synthesize at least 11 novel products. Most of them are siderophores or lipopeptides, two classes of products with potential application in biocontrol. The strategy led to the identification, for the first time, of the cluster for cepaciachelin biosynthesis in the genome of Burkholderia ambifaria AMMD and a cluster corresponding to a new malleobactin-like siderophore, called phymabactin, was identified in Burkholderia phymatum STM815 genome. In both cases, the siderophore was produced when the strain was grown in iron-limited conditions. Elsewhere, the cluster for the antifungal burkholdin was detected in the genome of B. ambifaria AMMD and also Burkholderia sp. KJ006. Burkholderia pseudomallei strains harbor the genetic potential to produce a novel lipopeptide called burkhomycin, containing a peptidyl moiety of 12 monomers. A mixture of lipopeptides produced by Burkholderia rhizoxinica lowered the surface tension of the supernatant from 70 to 27 mN/m. The production of nonribosomal secondary metabolites seems related to the three phylogenetic groups obtained from 16S rRNA sequences. Moreover, the genome-mining approach gave new insights into the nonribosomal synthesis exemplified by the identification of dual C/E domains in lipopeptide NRPSs, up to now essentially found in Pseudomonas strains.
1108.5184
Hong Lu
Yi-Xin Chen, H. Lu and Kai-Nan Shao
Linearized Modes in Extended and Critical Gravities
24 pages, 2 figures
null
10.1088/0264-9381/29/8/085017
CAS-KITPC/ITP-277
hep-th gr-qc
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We construct explicit solutions for the linearized massive and massless spin-2, vector and scalar modes around the AdS spacetimes in diverse dimensions. These modes may arise in extended (super)gravities with higher curvature terms in general dimensions. Log modes in critical gravities can also be straightforwardly deduced. We analyze the properties of these modes and obtain the tachyon-free condition, which allows negative mass squared for these modes. However, such modes may not satisfy the standard AdS boundary condition and can be truncated out from the spectrum.
[ { "created": "Thu, 25 Aug 2011 20:05:42 GMT", "version": "v1" } ]
2015-05-30
[ [ "Chen", "Yi-Xin", "" ], [ "Lu", "H.", "" ], [ "Shao", "Kai-Nan", "" ] ]
We construct explicit solutions for the linearized massive and massless spin-2, vector and scalar modes around the AdS spacetimes in diverse dimensions. These modes may arise in extended (super)gravities with higher curvature terms in general dimensions. Log modes in critical gravities can also be straightforwardly deduced. We analyze the properties of these modes and obtain the tachyon-free condition, which allows negative mass squared for these modes. However, such modes may not satisfy the standard AdS boundary condition and can be truncated out from the spectrum.
hep-th/9504095
Paul Townsend
P.K. Townsend
String-Membrane Duality in Seven Dimensions
The original version of this paper dealt mostly with one side of string-membrane duality: the solitonic interpretation of the heterotic string as a $K_3$ compactified D=11 superfivebrane. The revised version includes a discussion of the converse prediction: that the supermembrane has a solitonic interpretation as a $T^3$ compactified heterotic fivebrane. It also includes a discussion of D=8 membrane-membrane duality, and various changes to the references
Phys.Lett.B354:247-255,1995
10.1016/0370-2693(95)00649-6
DAMTP, R/95/15
hep-th
null
The conjectured equivalence of the heterotic string to a $K_3$ compactified type IIA superstring is combined with the conjectured equivalence of the latter to a compactified 11-dimensional supermembrane to derive a string-membrane duality in seven dimensions; the membrane is a soliton of the string theory and vice versa. A prediction of this duality is that the heterotic string is a $K_3$ compactification of the solitonic 11-dimensional fivebrane. It is verified that the worldsheet action of the D=10 heterotic string is indeed obtainable by $K_3$ compactification of the worldvolume action of the 11-dimensional fivebrane, and it is suggested how the worldvolume action of the D=11 supermembrane may be similarly obtained by $T^3$ compactification of the worldvolume action of a D=10 heterotic fivebrane. Generalizations to $D=8$ string-threebrane and membrane-membrane duality are also discussed.
[ { "created": "Tue, 18 Apr 1995 12:22:24 GMT", "version": "v1" }, { "created": "Wed, 10 May 1995 16:41:11 GMT", "version": "v2" } ]
2010-11-01
[ [ "Townsend", "P. K.", "" ] ]
The conjectured equivalence of the heterotic string to a $K_3$ compactified type IIA superstring is combined with the conjectured equivalence of the latter to a compactified 11-dimensional supermembrane to derive a string-membrane duality in seven dimensions; the membrane is a soliton of the string theory and vice versa. A prediction of this duality is that the heterotic string is a $K_3$ compactification of the solitonic 11-dimensional fivebrane. It is verified that the worldsheet action of the D=10 heterotic string is indeed obtainable by $K_3$ compactification of the worldvolume action of the 11-dimensional fivebrane, and it is suggested how the worldvolume action of the D=11 supermembrane may be similarly obtained by $T^3$ compactification of the worldvolume action of a D=10 heterotic fivebrane. Generalizations to $D=8$ string-threebrane and membrane-membrane duality are also discussed.
2307.04815
Raul De Palma Aristides
R. P. Aristides and A. J. Pons and H. A. Cerdeira and C. Masoller and G. Tirabassi
Parameter and coupling estimation in small groups of Izhikevich neurons
null
Chaos, vol. 33, n. 4, 2023
10.1063/5.0144499
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Nowadays, experimental techniques allow scientists to have access to large amounts of data. In order to obtain reliable information from the complex systems which produce these data, appropriate analysis tools are needed. The Kalman filter is a frequently used technique to infer, assuming a model of the system, the parameters of the model from uncertain observations. A well-known implementation of the Kalman filter, the Unscented Kalman filter (UKF), was recently shown to be able to infer the connectivity of a set of coupled chaotic oscillators. In this work, we test whether the UKF can also reconstruct the connectivity of small groups of coupled neurons when their links are either electrical or chemical synapses. In particular, we consider Izhikevich neurons, and aim to infer which neurons influence each other, considering simulated spike trains as the experimental observations used by the UKF. First, we verify that the UKF can recover the parameters of a single neuron, even when the parameters vary in time. Second, we analyze small neural ensembles and demonstrate that the UKF allows inferring the connectivity between the neurons, even for heterogeneous, directed, and temporally evolving networks. Our results show that time-dependent parameter and coupling estimation is possible in this nonlinearly coupled system.
[ { "created": "Fri, 16 Jun 2023 10:27:50 GMT", "version": "v1" } ]
2023-07-12
[ [ "Aristides", "R. P.", "" ], [ "Pons", "A. J.", "" ], [ "Cerdeira", "H. A.", "" ], [ "Masoller", "C.", "" ], [ "Tirabassi", "G.", "" ] ]
Nowadays, experimental techniques allow scientists to have access to large amounts of data. In order to obtain reliable information from the complex systems which produce these data, appropriate analysis tools are needed. The Kalman filter is a frequently used technique to infer, assuming a model of the system, the parameters of the model from uncertain observations. A well-known implementation of the Kalman filter, the Unscented Kalman filter (UKF), was recently shown to be able to infer the connectivity of a set of coupled chaotic oscillators. In this work, we test whether the UKF can also reconstruct the connectivity of small groups of coupled neurons when their links are either electrical or chemical synapses. In particular, we consider Izhikevich neurons, and aim to infer which neurons influence each other, considering simulated spike trains as the experimental observations used by the UKF. First, we verify that the UKF can recover the parameters of a single neuron, even when the parameters vary in time. Second, we analyze small neural ensembles and demonstrate that the UKF allows inferring the connectivity between the neurons, even for heterogeneous, directed, and temporally evolving networks. Our results show that time-dependent parameter and coupling estimation is possible in this nonlinearly coupled system.
hep-th/9408139
Hu Zhan-ning
Zhan-Ning Hu, Bo-Yu Hou
Remarks on the Star-Triangle Relation in the Baxter-Bazhanov Model
6 pages, latex file, AS-ITP-94-39
null
10.1007/BF02184882
null
hep-th
null
In this letter we show that the restricted star-triangle relation introduced by Bazhanov and Baxter can be obtained either from the star-triangle relation of the chiral Potts model or from the star-square relation proposed by Kashaev et al., and we respond to the conjecture put forward by Bazhanov and Baxter in Ref. \cite{b2}.
[ { "created": "Thu, 25 Aug 1994 23:08:28 GMT", "version": "v1" } ]
2009-10-28
[ [ "Hu", "Zhan-Ning", "" ], [ "Hou", "Bo-Yu", "" ] ]
In this letter we show that the restricted star-triangle relation introduced by Bazhanov and Baxter can be obtained either from the star-triangle relation of the chiral Potts model or from the star-square relation proposed by Kashaev et al., and we respond to the conjecture put forward by Bazhanov and Baxter in Ref. \cite{b2}.
1107.5827
Girma Hailu
Girma Hailu
Linear Confinement of Quarks from Supergravity
8 pages, PDFLaTeX
Phys.Rev. D84 (2011) 106008
10.1103/PhysRevD.84.106008
null
hep-th hep-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A supergravity background that produces linear confinement of quarks in four dimensions is presented.
[ { "created": "Thu, 28 Jul 2011 20:44:25 GMT", "version": "v1" } ]
2015-05-30
[ [ "Hailu", "Girma", "" ] ]
A supergravity background that produces linear confinement of quarks in four dimensions is presented.
hep-th/9306115
null
Jan Sladkowski
Does noncommutative geometry predict nonlinear Higgs mechanism?
12 pages, LaTeX file, BI-TP 93/26
Int.J.Theor.Phys. 33 (1994) 2381-2388
10.1007/BF00673963
null
hep-th hep-ph
null
It is argued that the noncommutative geometry construction of the standard model predicts a nonlinear symmetry breaking mechanism rather than the orthodox Higgs mechanism. Such models have experimentally verifiable consequences.
[ { "created": "Tue, 22 Jun 1993 15:11:14 GMT", "version": "v1" } ]
2009-10-22
[ [ "Sladkowski", "Jan", "" ] ]
It is argued that the noncommutative geometry construction of the standard model predicts a nonlinear symmetry breaking mechanism rather than the orthodox Higgs mechanism. Such models have experimentally verifiable consequences.
1208.0811
Guanhong Pei
Guanhong Pei and Anil Kumar S. Vullikanti
Efficient Algorithms for Maximum Link Scheduling in Distributed Computing Models with SINR Constraints
null
null
null
null
cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental problem in wireless networks is the maximum link scheduling problem: given a set $L$ of links, compute the largest possible subset $L'\subseteq L$ of links that can be scheduled simultaneously without interference. This problem is particularly challenging in the physical interference model based on SINR constraints (referred to as the SINR model), which has gained a lot of interest in recent years. Constant factor approximation algorithms have been developed for this problem, but low complexity distributed algorithms that give the same approximation guarantee in the SINR model are not known. Distributed algorithms are especially challenging in this model, because of its non-locality. In this paper, we develop a set of fast distributed algorithms in the SINR model, providing constant approximation for the maximum link scheduling problem under uniform power assignment. We find that different aspects of available technology, such as full/half-duplex communication, and non-adaptive/adaptive power control, have a significant impact on the performance of the algorithm; these issues have not been explored in the context of distributed algorithms in the SINR model before. Our algorithms' running time is $O(g(L) \log^c m)$, where $c=1,2,3$ for different problem instances, and $g(L)$ is the "link diversity" determined by the logarithmic scale of a communication link length. Since $g(L)$ is small and remains in a constant range in most cases, our algorithms serve as the first set of "sublinear" time distributed solutions. The algorithms are randomized and crucially use physical carrier sensing in distributed communication steps.
[ { "created": "Fri, 3 Aug 2012 18:26:06 GMT", "version": "v1" }, { "created": "Fri, 16 Nov 2012 15:46:26 GMT", "version": "v2" } ]
2012-11-19
[ [ "Pei", "Guanhong", "" ], [ "Vullikanti", "Anil Kumar S.", "" ] ]
A fundamental problem in wireless networks is the maximum link scheduling problem: given a set $L$ of links, compute the largest possible subset $L'\subseteq L$ of links that can be scheduled simultaneously without interference. This problem is particularly challenging in the physical interference model based on SINR constraints (referred to as the SINR model), which has gained a lot of interest in recent years. Constant factor approximation algorithms have been developed for this problem, but low complexity distributed algorithms that give the same approximation guarantee in the SINR model are not known. Distributed algorithms are especially challenging in this model, because of its non-locality. In this paper, we develop a set of fast distributed algorithms in the SINR model, providing constant approximation for the maximum link scheduling problem under uniform power assignment. We find that different aspects of available technology, such as full/half-duplex communication, and non-adaptive/adaptive power control, have a significant impact on the performance of the algorithm; these issues have not been explored in the context of distributed algorithms in the SINR model before. Our algorithms' running time is $O(g(L) \log^c m)$, where $c=1,2,3$ for different problem instances, and $g(L)$ is the "link diversity" determined by the logarithmic scale of a communication link length. Since $g(L)$ is small and remains in a constant range in most cases, our algorithms serve as the first set of "sublinear" time distributed solutions. The algorithms are randomized and crucially use physical carrier sensing in distributed communication steps.
2002.05712
Zhuliang Yao
Zhuliang Yao, Yue Cao, Shuxin Zheng, Gao Huang, Stephen Lin
Cross-Iteration Batch Normalization
Accepted to CVPR 2021
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A well-known issue of Batch Normalization is its significantly reduced effectiveness in the case of small mini-batch sizes. When a mini-batch contains few examples, the statistics upon which the normalization is defined cannot be reliably estimated from it during a training iteration. To address this problem, we present Cross-Iteration Batch Normalization (CBN), in which examples from multiple recent iterations are jointly utilized to enhance estimation quality. A challenge of computing statistics over multiple iterations is that the network activations from different iterations are not comparable to each other due to changes in network weights. We thus compensate for the network weight changes via a proposed technique based on Taylor polynomials, so that the statistics can be accurately estimated and batch normalization can be effectively applied. On object detection and image classification with small mini-batch sizes, CBN is found to outperform the original batch normalization and a direct calculation of statistics over previous iterations without the proposed compensation technique. Code is available at https://github.com/Howal/Cross-iterationBatchNorm .
[ { "created": "Thu, 13 Feb 2020 18:52:57 GMT", "version": "v1" }, { "created": "Fri, 14 Feb 2020 11:10:04 GMT", "version": "v2" }, { "created": "Thu, 25 Mar 2021 06:57:36 GMT", "version": "v3" } ]
2021-03-26
[ [ "Yao", "Zhuliang", "" ], [ "Cao", "Yue", "" ], [ "Zheng", "Shuxin", "" ], [ "Huang", "Gao", "" ], [ "Lin", "Stephen", "" ] ]
A well-known issue of Batch Normalization is its significantly reduced effectiveness in the case of small mini-batch sizes. When a mini-batch contains few examples, the statistics upon which the normalization is defined cannot be reliably estimated from it during a training iteration. To address this problem, we present Cross-Iteration Batch Normalization (CBN), in which examples from multiple recent iterations are jointly utilized to enhance estimation quality. A challenge of computing statistics over multiple iterations is that the network activations from different iterations are not comparable to each other due to changes in network weights. We thus compensate for the network weight changes via a proposed technique based on Taylor polynomials, so that the statistics can be accurately estimated and batch normalization can be effectively applied. On object detection and image classification with small mini-batch sizes, CBN is found to outperform the original batch normalization and a direct calculation of statistics over previous iterations without the proposed compensation technique. Code is available at https://github.com/Howal/Cross-iterationBatchNorm .
2405.13182
Zachary Kilpatrick PhD
Heather L Cihak and Zachary P Kilpatrick
Robustly encoding certainty in a metastable neural circuit model
15 pages, 10 figures
null
null
null
q-bio.NC nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Localized persistent neural activity can encode delayed estimates of continuous variables. Common experiments require that subjects store and report the feature value (e.g., orientation) of a particular cue (e.g., oriented bar on a screen) after a delay. Visualizing recorded activity of neurons along their feature tuning reveals activity bumps whose centers wander stochastically, degrading the estimate over time. Bump position therefore represents the remembered estimate. Recent work suggests bump amplitude may represent estimate certainty reflecting a probabilistic population code for a Bayesian posterior. Idealized models of this type are fragile due to the fine tuning common to constructed continuum attractors in dynamical systems. Here we propose an alternative metastable model for robustly supporting multiple bump amplitudes by extending neural circuit models to include quantized nonlinearities. Asymptotic projections of circuit activity produce low-dimensional evolution equations for the amplitude and position of bump solutions in response to external stimuli and noise perturbations. Analysis of reduced equations accurately characterizes phase variance and the dynamics of amplitude transitions between stable discrete values. More salient cues generate bumps of higher amplitude which wander less, consistent with the experimental finding that greater certainty correlates with more accurate memories.
[ { "created": "Tue, 21 May 2024 20:13:35 GMT", "version": "v1" }, { "created": "Tue, 30 Jul 2024 19:15:50 GMT", "version": "v2" } ]
2024-08-01
[ [ "Cihak", "Heather L", "" ], [ "Kilpatrick", "Zachary P", "" ] ]
Localized persistent neural activity can encode delayed estimates of continuous variables. Common experiments require that subjects store and report the feature value (e.g., orientation) of a particular cue (e.g., oriented bar on a screen) after a delay. Visualizing recorded activity of neurons along their feature tuning reveals activity bumps whose centers wander stochastically, degrading the estimate over time. Bump position therefore represents the remembered estimate. Recent work suggests bump amplitude may represent estimate certainty reflecting a probabilistic population code for a Bayesian posterior. Idealized models of this type are fragile due to the fine tuning common to constructed continuum attractors in dynamical systems. Here we propose an alternative metastable model for robustly supporting multiple bump amplitudes by extending neural circuit models to include quantized nonlinearities. Asymptotic projections of circuit activity produce low-dimensional evolution equations for the amplitude and position of bump solutions in response to external stimuli and noise perturbations. Analysis of reduced equations accurately characterizes phase variance and the dynamics of amplitude transitions between stable discrete values. More salient cues generate bumps of higher amplitude which wander less, consistent with the experimental finding that greater certainty correlates with more accurate memories.
2011.02574
Andrei Cramariuc
Le Chen, Yunke Ao, Florian Tschopp, Andrei Cramariuc, Michel Breyer, Jen Jen Chung, Roland Siegwart, Cesar Cadena
Learning Trajectories for Visual-Inertial System Calibration via Model-based Heuristic Deep Reinforcement Learning
null
Proceedings of the 4th Conference on Robot Learning (CoRL) 2020
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual-inertial systems rely on precise calibrations of both camera intrinsics and inter-sensor extrinsics, which typically require manually performing complex motions in front of a calibration target. In this work we present a novel approach to obtain favorable trajectories for visual-inertial system calibration, using model-based deep reinforcement learning. Our key contribution is to model the calibration process as a Markov decision process and then use model-based deep reinforcement learning with particle swarm optimization to establish a sequence of calibration trajectories to be performed by a robot arm. Our experiments show that while maintaining similar or shorter path lengths, the trajectories generated by our learned policy result in lower calibration errors compared to random or handcrafted trajectories.
[ { "created": "Wed, 4 Nov 2020 23:20:15 GMT", "version": "v1" } ]
2021-02-17
[ [ "Chen", "Le", "" ], [ "Ao", "Yunke", "" ], [ "Tschopp", "Florian", "" ], [ "Cramariuc", "Andrei", "" ], [ "Breyer", "Michel", "" ], [ "Chung", "Jen Jen", "" ], [ "Siegwart", "Roland", "" ], [ "Caden...
Visual-inertial systems rely on precise calibrations of both camera intrinsics and inter-sensor extrinsics, which typically require manually performing complex motions in front of a calibration target. In this work we present a novel approach to obtain favorable trajectories for visual-inertial system calibration, using model-based deep reinforcement learning. Our key contribution is to model the calibration process as a Markov decision process and then use model-based deep reinforcement learning with particle swarm optimization to establish a sequence of calibration trajectories to be performed by a robot arm. Our experiments show that while maintaining similar or shorter path lengths, the trajectories generated by our learned policy result in lower calibration errors compared to random or handcrafted trajectories.
1805.04625
Shun Watanabe
Himanshu Tyagi, Shun Watanabe
Strong Converse using Change of Measure Arguments
35 pages, no figure; v2 updated references
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The strong converse for a coding theorem shows that the optimal asymptotic rate possible with vanishing error cannot be improved by allowing a fixed error. Building on a method introduced by Gu and Effros for centralized coding problems, we develop a general and simple recipe for proving strong converse that is applicable for distributed problems as well. Heuristically, our proof of strong converse mimics the standard steps for proving a weak converse, except that we apply those steps to a modified distribution obtained by conditioning the original distribution on the event that no error occurs. A key component of our recipe is the replacement of the hard Markov constraints implied by the distributed nature of the problem with a soft information cost using a variational formula introduced by Oohama. We illustrate our method by providing a short proof of the strong converse for the Wyner-Ziv problem and strong converse theorems for interactive function computation, common randomness and secret key agreement, and the wiretap channel; the latter three strong converse problems were open prior to this work.
[ { "created": "Sat, 12 May 2018 00:34:37 GMT", "version": "v1" }, { "created": "Wed, 21 Aug 2019 14:13:36 GMT", "version": "v2" } ]
2019-08-22
[ [ "Tyagi", "Himanshu", "" ], [ "Watanabe", "Shun", "" ] ]
The strong converse for a coding theorem shows that the optimal asymptotic rate possible with vanishing error cannot be improved by allowing a fixed error. Building on a method introduced by Gu and Effros for centralized coding problems, we develop a general and simple recipe for proving strong converse that is applicable for distributed problems as well. Heuristically, our proof of strong converse mimics the standard steps for proving a weak converse, except that we apply those steps to a modified distribution obtained by conditioning the original distribution on the event that no error occurs. A key component of our recipe is the replacement of the hard Markov constraints implied by the distributed nature of the problem with a soft information cost using a variational formula introduced by Oohama. We illustrate our method by providing a short proof of the strong converse for the Wyner-Ziv problem and strong converse theorems for interactive function computation, common randomness and secret key agreement, and the wiretap channel; the latter three strong converse problems were open prior to this work.
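The change-of-measure step described above can be summarized in one display (a schematic restatement in generic notation, not the paper's exact formulation). Conditioning on the no-error event $\mathcal{E}$ with $P(\mathcal{E}) \ge 1-\varepsilon$ gives

```latex
\[
  P'(x) \;=\; \frac{P(x)\,\mathbf{1}[x \in \mathcal{E}]}{P(\mathcal{E})},
  \qquad
  D(P' \,\|\, P) \;=\; \log\frac{1}{P(\mathcal{E})} \;\le\; \log\frac{1}{1-\varepsilon},
\]
```

so converse bounds derived under the modified distribution $P'$ transfer back to $P$ at a cost of only $\log\frac{1}{1-\varepsilon}$, which vanishes per-symbol as the blocklength grows, for any fixed error $\varepsilon < 1$.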
1510.02840
Mauricio Toro
Mauricio Toro
Concurrent Constraint Machine Improvisation: Models and Implementation
8 pages
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine improvisation creates music either by explicit coding of rules or by applying machine learning methods. We deal with the latter case. An improvisation system capable of real-time operation must execute two processes concurrently: one to apply machine learning methods to musical sequences in order to capture prominent musical features, and one to produce musical sequences stylistically consistent with the learned material. As an example, the Concurrent Constraint Factor Oracle Model for Music Improvisation (ccfomi), based upon Non-deterministic Timed Concurrent Constraint (ntcc) calculus, uses the Factor Oracle to store the learned sequences.
[ { "created": "Fri, 9 Oct 2015 22:22:01 GMT", "version": "v1" } ]
2015-10-13
[ [ "Toro", "Mauricio", "" ] ]
Machine improvisation creates music either by explicit coding of rules or by applying machine learning methods. We deal with the latter case. An improvisation system capable of real-time operation must execute two processes concurrently: one to apply machine learning methods to musical sequences in order to capture prominent musical features, and one to produce musical sequences stylistically consistent with the learned material. As an example, the Concurrent Constraint Factor Oracle Model for Music Improvisation (ccfomi), based upon Non-deterministic Timed Concurrent Constraint (ntcc) calculus, uses the Factor Oracle to store the learned sequences.
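The Factor Oracle used to store learned sequences has a simple incremental construction (Allauzen, Crochemore, and Raffinot's on-line algorithm). A minimal Python sketch, independent of the ccfomi calculus itself:

```python
def factor_oracle(seq):
    """Incremental factor-oracle construction.
    States are 0..len(seq); state 0 is initial and its suffix link is -1.
    Returns (trans, sfx): forward transitions per state, and suffix links."""
    trans = [dict() for _ in range(len(seq) + 1)]
    sfx = [-1] * (len(seq) + 1)
    for i, a in enumerate(seq, start=1):
        trans[i - 1][a] = i                 # spine transition for symbol a
        k = sfx[i - 1]
        while k > -1 and a not in trans[k]:  # follow suffix links, adding
            trans[k][a] = i                  # external transitions to state i
            k = sfx[k]
        sfx[i] = 0 if k == -1 else trans[k][a]
    return trans, sfx
```

Every factor of `seq` is readable from state 0, which is what makes the oracle suitable for recombining learned material during improvisation.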
2108.08679
Alexander Barg
Alexander Barg, Zitan Chen, and Itzhak Tamo
A construction of maximally recoverable codes
null
Designs, Codes and Cryptography, 2022, vol. 90, pp. 939-945
10.1007/s10623-022-01020-8
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We construct a family of linear maximally recoverable codes with locality $r$ and dimension $r+1.$ For codes of length $n$ with $r\approx n^\alpha, 0\le\alpha\le 1$ the code alphabet is of the order $n^{1+3\alpha},$ which improves upon the previously known constructions of maximally recoverable codes.
[ { "created": "Thu, 19 Aug 2021 13:40:55 GMT", "version": "v1" } ]
2023-03-07
[ [ "Barg", "Alexander", "" ], [ "Chen", "Zitan", "" ], [ "Tamo", "Itzhak", "" ] ]
We construct a family of linear maximally recoverable codes with locality $r$ and dimension $r+1.$ For codes of length $n$ with $r\approx n^\alpha, 0\le\alpha\le 1$ the code alphabet is of the order $n^{1+3\alpha},$ which improves upon the previously known constructions of maximally recoverable codes.
2309.14950
Atif Belal
Atif Belal, Akhil Meethal, Francisco Perdigon Romero, Marco Pedersoli, Eric Granger
Multi-Source Domain Adaptation for Object Detection with Prototype-based Mean-teacher
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Adapting visual object detectors to operational target domains is a challenging task, commonly achieved using unsupervised domain adaptation (UDA) methods. Recent studies have shown that when the labeled dataset comes from multiple source domains, treating them as separate domains and performing a multi-source domain adaptation (MSDA) improves the accuracy and robustness over blending these source domains and performing a UDA. For adaptation, existing MSDA methods learn domain-invariant and domain-specific parameters (for each source domain). However, unlike single-source UDA methods, learning domain-specific parameters makes them grow significantly in proportion to the number of source domains. This paper proposes a novel MSDA method called Prototype-based Mean Teacher (PMT), which uses class prototypes instead of domain-specific subnets to encode domain-specific information. These prototypes are learned using a contrastive loss, aligning the same categories across domains and separating different categories far apart. Given the use of prototypes, the number of parameters required for our PMT method does not increase significantly with the number of source domains, thus reducing memory issues and possible overfitting. Empirical studies indicate that PMT outperforms state-of-the-art MSDA methods on several challenging object detection datasets. Our code is available at https://github.com/imatif17/Prototype-Mean-Teacher.
[ { "created": "Tue, 26 Sep 2023 14:08:03 GMT", "version": "v1" }, { "created": "Fri, 15 Dec 2023 21:00:50 GMT", "version": "v2" }, { "created": "Wed, 31 Jul 2024 20:04:53 GMT", "version": "v3" } ]
2024-08-02
[ [ "Belal", "Atif", "" ], [ "Meethal", "Akhil", "" ], [ "Romero", "Francisco Perdigon", "" ], [ "Pedersoli", "Marco", "" ], [ "Granger", "Eric", "" ] ]
Adapting visual object detectors to operational target domains is a challenging task, commonly achieved using unsupervised domain adaptation (UDA) methods. Recent studies have shown that when the labeled dataset comes from multiple source domains, treating them as separate domains and performing a multi-source domain adaptation (MSDA) improves the accuracy and robustness over blending these source domains and performing a UDA. For adaptation, existing MSDA methods learn domain-invariant and domain-specific parameters (for each source domain). However, unlike single-source UDA methods, learning domain-specific parameters makes them grow significantly in proportion to the number of source domains. This paper proposes a novel MSDA method called Prototype-based Mean Teacher (PMT), which uses class prototypes instead of domain-specific subnets to encode domain-specific information. These prototypes are learned using a contrastive loss, aligning the same categories across domains and separating different categories far apart. Given the use of prototypes, the number of parameters required for our PMT method does not increase significantly with the number of source domains, thus reducing memory issues and possible overfitting. Empirical studies indicate that PMT outperforms state-of-the-art MSDA methods on several challenging object detection datasets. Our code is available at https://github.com/imatif17/Prototype-Mean-Teacher.
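The prototype bookkeeping described above can be sketched as follows. This is a schematic NumPy version with hypothetical shapes and names: an EMA update of per-class prototypes, and an InfoNCE-style loss that aligns same-class prototypes across two domains while pushing different classes apart; it is not the paper's full multi-source training objective:

```python
import numpy as np

def update_prototypes(protos, feats, labels, momentum=0.9):
    """EMA update of per-class prototypes from a batch of features.
    protos: (C, D) array; feats: (B, D); labels: (B,) ints in [0, C)."""
    for c in np.unique(labels):
        mean_c = feats[labels == c].mean(0)
        protos[c] = momentum * protos[c] + (1 - momentum) * mean_c
    return protos

def proto_contrastive_loss(protos_a, protos_b, temp=0.1):
    """Cross-entropy over cosine similarities: matching classes sit on
    the diagonal and are pulled together; off-diagonal pairs are pushed apart."""
    a = protos_a / np.linalg.norm(protos_a, axis=1, keepdims=True)
    b = protos_b / np.linalg.norm(protos_b, axis=1, keepdims=True)
    logits = a @ b.T / temp                    # (C, C) similarity matrix
    logits = logits - logits.max(1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return -np.mean(np.diag(log_p))
```

Because the domain-specific state is just one (C, D) prototype matrix per source, memory grows with the number of classes rather than with additional per-domain subnetworks, matching the parameter argument in the abstract.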
2303.17251
Stefano Cresci
Stefano Cresci, Kai-Cheng Yang, Angelo Spognardi, Roberto Di Pietro, Filippo Menczer, Marinella Petrocchi
Demystifying Misconceptions in Social Bots Research
null
null
null
null
cs.SI cs.AI cs.CY cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Research on social bots aims at advancing knowledge and providing solutions to one of the most debated forms of online manipulation. Yet, social bot research is plagued by widespread biases, hyped results, and misconceptions that set the stage for ambiguities, unrealistic expectations, and seemingly irreconcilable findings. Overcoming such issues is instrumental towards ensuring reliable solutions and reaffirming the validity of the scientific method. In this contribution, we review some recent results in social bots research, highlighting and revising factual errors as well as methodological and conceptual biases. More importantly, we demystify common misconceptions, addressing fundamental points on how social bots research is discussed. Our analysis surfaces the need to discuss research about online disinformation and manipulation in a rigorous, unbiased, and responsible way. This article bolsters such effort by identifying and refuting common fallacious arguments used by both proponents and opponents of social bots research, as well as providing directions toward sound methodologies for future research in the field.
[ { "created": "Thu, 30 Mar 2023 09:29:53 GMT", "version": "v1" }, { "created": "Wed, 27 Mar 2024 14:48:48 GMT", "version": "v2" } ]
2024-03-28
[ [ "Cresci", "Stefano", "" ], [ "Yang", "Kai-Cheng", "" ], [ "Spognardi", "Angelo", "" ], [ "Di Pietro", "Roberto", "" ], [ "Menczer", "Filippo", "" ], [ "Petrocchi", "Marinella", "" ] ]
Research on social bots aims at advancing knowledge and providing solutions to one of the most debated forms of online manipulation. Yet, social bot research is plagued by widespread biases, hyped results, and misconceptions that set the stage for ambiguities, unrealistic expectations, and seemingly irreconcilable findings. Overcoming such issues is instrumental towards ensuring reliable solutions and reaffirming the validity of the scientific method. In this contribution, we review some recent results in social bots research, highlighting and revising factual errors as well as methodological and conceptual biases. More importantly, we demystify common misconceptions, addressing fundamental points on how social bots research is discussed. Our analysis surfaces the need to discuss research about online disinformation and manipulation in a rigorous, unbiased, and responsible way. This article bolsters such effort by identifying and refuting common fallacious arguments used by both proponents and opponents of social bots research, as well as providing directions toward sound methodologies for future research in the field.
1607.04063
Carsten Witt
Dirk Sudholt and Carsten Witt
Update Strength in EDAs and ACO: How to Avoid Genetic Drift
32 pages. An extended abstract of this work will appear in the proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2016). This revision fixes the abstract in the metadata
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide a rigorous runtime analysis concerning the update strength, a vital parameter in probabilistic model-building GAs such as the step size $1/K$ in the compact Genetic Algorithm (cGA) and the evaporation factor $\rho$ in ACO. While a large update strength is desirable for exploitation, there is a general trade-off: too strong updates can lead to genetic drift and poor performance. We demonstrate this trade-off for the cGA and a simple MMAS ACO algorithm on the OneMax function. More precisely, we obtain lower bounds on the expected runtime of $\Omega(K\sqrt{n} + n \log n)$ and $\Omega(\sqrt{n}/\rho + n \log n)$, respectively, showing that the update strength should be limited to $1/K, \rho = O(1/(\sqrt{n} \log n))$. In fact, choosing $1/K, \rho \sim 1/(\sqrt{n}\log n)$ both algorithms efficiently optimize OneMax in expected time $O(n \log n)$. Our analyses provide new insights into the stochastic behavior of probabilistic model-building GAs and propose new guidelines for setting the update strength in global optimization.
[ { "created": "Thu, 14 Jul 2016 10:11:59 GMT", "version": "v1" }, { "created": "Fri, 15 Jul 2016 07:51:28 GMT", "version": "v2" } ]
2016-07-18
[ [ "Sudholt", "Dirk", "" ], [ "Witt", "Carsten", "" ] ]
We provide a rigorous runtime analysis concerning the update strength, a vital parameter in probabilistic model-building GAs such as the step size $1/K$ in the compact Genetic Algorithm (cGA) and the evaporation factor $\rho$ in ACO. While a large update strength is desirable for exploitation, there is a general trade-off: too strong updates can lead to genetic drift and poor performance. We demonstrate this trade-off for the cGA and a simple MMAS ACO algorithm on the OneMax function. More precisely, we obtain lower bounds on the expected runtime of $\Omega(K\sqrt{n} + n \log n)$ and $\Omega(\sqrt{n}/\rho + n \log n)$, respectively, showing that the update strength should be limited to $1/K, \rho = O(1/(\sqrt{n} \log n))$. In fact, choosing $1/K, \rho \sim 1/(\sqrt{n}\log n)$ both algorithms efficiently optimize OneMax in expected time $O(n \log n)$. Our analyses provide new insights into the stochastic behavior of probabilistic model-building GAs and propose new guidelines for setting the update strength in global optimization.
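The cGA analyzed above is short enough to state in full. A minimal sketch with step size $1/K$ on OneMax, keeping the usual border values $1/n$ and $1-1/n$ for the marginals (function name and stopping rule are illustrative):

```python
import random

def cga_onemax(n, K, max_iters=200000, seed=0):
    """Compact GA on OneMax with update strength 1/K.
    Returns the iteration at which all marginals reached the upper
    border 1 - 1/n, or max_iters if that did not happen."""
    rng = random.Random(seed)
    p = [0.5] * n                            # frequency vector (the model)
    for t in range(max_iters):
        x = [rng.random() < pi for pi in p]  # sample two offspring
        y = [rng.random() < pi for pi in p]
        if sum(y) > sum(x):
            x, y = y, x                      # x is now the fitter sample
        for i in range(n):
            if x[i] != y[i]:                 # shift marginal toward the winner
                p[i] += (1 / K) if x[i] else -(1 / K)
                p[i] = min(max(p[i], 1 / n), 1 - 1 / n)
        if all(pi >= 1 - 1 / n for pi in p):
            return t + 1
    return max_iters
```

Following the guideline in the abstract, one would pick $K \sim \sqrt{n}\log n$; a much smaller $K$ makes individual updates large and exposes the marginals to the genetic drift the lower bounds quantify.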
2205.08664
Taro L. Saito
Taro L. Saito, Naoki Takezoe, Yukihiro Okada, Takako Shimamoto, Dongmin Yu, Suprith Chandrashekharachar, Kai Sasaki, Shohei Okumiya, Yan Wang, Takashi Kurihara, Ryu Kobayashi, Keisuke Suzuki, Zhenghong Yang, Makoto Onizuka
Journey of Migrating Millions of Queries on The Cloud
This version is published in DBTest '22: Proceedings of the 2022 workshop on 9th International Workshop of Testing Database Systems
null
10.1145/3531348.3532177
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
Treasure Data is processing millions of distributed SQL queries every day on the cloud. Upgrading the query engine service at this scale is challenging because we need to migrate all of the production queries of the customers to a new version while preserving the correctness and performance of the data processing pipelines. To ensure the quality of the query engines, we utilize our query logs to build customer-specific benchmarks and replay these queries with real customer data in a secure pre-production environment. To simulate millions of queries, we need effective minimization of test query sets and better reporting of the simulation results to proactively find incompatible changes and performance regression of the new version. This paper describes the overall design of our system and shares various challenges in maintaining the quality of the query engine service on the cloud.
[ { "created": "Tue, 17 May 2022 23:48:26 GMT", "version": "v1" } ]
2022-05-19
[ [ "Saito", "Taro L.", "" ], [ "Takezoe", "Naoki", "" ], [ "Okada", "Yukihiro", "" ], [ "Shimamoto", "Takako", "" ], [ "Yu", "Dongmin", "" ], [ "Chandrashekharachar", "Suprith", "" ], [ "Sasaki", "Kai", "" ]...
Treasure Data is processing millions of distributed SQL queries every day on the cloud. Upgrading the query engine service at this scale is challenging because we need to migrate all of the production queries of the customers to a new version while preserving the correctness and performance of the data processing pipelines. To ensure the quality of the query engines, we utilize our query logs to build customer-specific benchmarks and replay these queries with real customer data in a secure pre-production environment. To simulate millions of queries, we need effective minimization of test query sets and better reporting of the simulation results to proactively find incompatible changes and performance regression of the new version. This paper describes the overall design of our system and shares various challenges in maintaining the quality of the query engine service on the cloud.
1403.0093
Masoud Abbaszadeh
Masoud Abbaszadeh, Horacio J. Marquez
Robust Nonlinear L2 Filtering of Uncertain Lipschitz Systems via Pareto Optimization
21 pages, 5 figures. arXiv admin note: text overlap with arXiv:1010.0696
null
null
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new approach for robust $H_\infty$ filtering for a class of Lipschitz nonlinear systems with time-varying uncertainties both in the linear and nonlinear parts of the system is proposed in an LMI framework. The admissible Lipschitz constant of the system and the disturbance attenuation level are maximized simultaneously through convex multiobjective optimization. The resulting $H_\infty$ filter guarantees asymptotic stability of the estimation error dynamics with exponential convergence and is robust against nonlinear additive uncertainty and time-varying parametric uncertainties. Explicit bounds on the nonlinear uncertainty are derived based on norm-wise and element-wise robustness analysis.
[ { "created": "Sat, 1 Mar 2014 15:08:19 GMT", "version": "v1" } ]
2014-03-04
[ [ "Abbaszadeh", "Masoud", "" ], [ "Marquez", "Horacio J.", "" ] ]
A new approach for robust $H_\infty$ filtering for a class of Lipschitz nonlinear systems with time-varying uncertainties both in the linear and nonlinear parts of the system is proposed in an LMI framework. The admissible Lipschitz constant of the system and the disturbance attenuation level are maximized simultaneously through convex multiobjective optimization. The resulting $H_\infty$ filter guarantees asymptotic stability of the estimation error dynamics with exponential convergence and is robust against nonlinear additive uncertainty and time-varying parametric uncertainties. Explicit bounds on the nonlinear uncertainty are derived based on norm-wise and element-wise robustness analysis.
1011.3278
Xiao-Lun Wu
Tuba Altindal, Li Xie, Xiao-Lun Wu
Implications of 3-step swimming patterns in bacterial chemotaxis
18 pages, 4 figures, submitted to biophysical journal
null
10.1016/j.bpj.2010.11.029
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We recently found that marine bacteria Vibrio alginolyticus execute a cyclic 3-step (run-reverse-flick) motility pattern that is distinctively different from the 2-step (run-tumble) pattern of Escherichia coli. How this novel swimming pattern is regulated by cells of V. alginolyticus is not currently known, but its significance for bacterial chemotaxis is self-evident and will be delineated herein. Using an approach introduced by de Gennes, we calculated the migration speed of a cell executing the 3-step pattern in a linear chemical gradient, and found that a biphasic chemotactic response arises naturally. The implication of such a response for the cells to adapt to ocean environments and its possible connection to E. coli 's response are also discussed.
[ { "created": "Mon, 15 Nov 2010 01:42:38 GMT", "version": "v1" } ]
2017-07-26
[ [ "Altindal", "Tuba", "" ], [ "Xie", "Li", "" ], [ "Wu", "Xiao-Lun", "" ] ]
We recently found that marine bacteria Vibrio alginolyticus execute a cyclic 3-step (run-reverse-flick) motility pattern that is distinctively different from the 2-step (run-tumble) pattern of Escherichia coli. How this novel swimming pattern is regulated by cells of V. alginolyticus is not currently known, but its significance for bacterial chemotaxis is self-evident and will be delineated herein. Using an approach introduced by de Gennes, we calculated the migration speed of a cell executing the 3-step pattern in a linear chemical gradient, and found that a biphasic chemotactic response arises naturally. The implication of such a response for the cells to adapt to ocean environments and its possible connection to E. coli 's response are also discussed.
1906.11878
Hossein Ghayoumi Zadeh
Mehdi Abbaszadeh, Aliakbar Rahimifard, Mohammadali Eftekhari, Hossein Ghayoumi Zadeh, Ali Fayazi, Ali Dini, Mostafa Danaeian
Deep Learning-Based Classification Of the Defective Pistachios Via Deep Autoencoder Neural Networks
null
null
null
null
cs.CV cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pistachio nut is mainly consumed as raw, salted or roasted because of its high nutritional properties and favorable taste. Pistachio nuts with shell and kernel defects, besides not being acceptable for a consumer, are also prone to insect damage, mold decay, and aflatoxin contamination. In this research, a deep learning-based imaging algorithm was developed to improve the sorting of nuts with shell and kernel defects that indicate the risk of aflatoxin contamination, such as dark stains, oily stains, adhering hull, fungal decay and Aspergillus molds. This paper presents an unsupervised learning method to classify defective and unpleasant pistachios based on deep Auto-encoder neural networks. The testing of the designed neural network on a validation dataset showed that nuts having dark stain, oily stain or adhering hull can be distinguished from normal nuts with an accuracy of 80.3%. Given the limited memory available on the university's HPC cluster, the results are reasonable and justifiable.
[ { "created": "Mon, 10 Jun 2019 13:02:50 GMT", "version": "v1" } ]
2019-07-01
[ [ "Abbaszadeh", "Mehdi", "" ], [ "Rahimifard", "Aliakbar", "" ], [ "Eftekhari", "Mohammadali", "" ], [ "Zadeh", "Hossein Ghayoumi", "" ], [ "Fayazi", "Ali", "" ], [ "Dini", "Ali", "" ], [ "Danaeian", "Mostafa", ...
Pistachio nut is mainly consumed as raw, salted or roasted because of its high nutritional properties and favorable taste. Pistachio nuts with shell and kernel defects, besides not being acceptable for a consumer, are also prone to insect damage, mold decay, and aflatoxin contamination. In this research, a deep learning-based imaging algorithm was developed to improve the sorting of nuts with shell and kernel defects that indicate the risk of aflatoxin contamination, such as dark stains, oily stains, adhering hull, fungal decay and Aspergillus molds. This paper presents an unsupervised learning method to classify defective and unpleasant pistachios based on deep Auto-encoder neural networks. The testing of the designed neural network on a validation dataset showed that nuts having dark stain, oily stain or adhering hull can be distinguished from normal nuts with an accuracy of 80.3%. Given the limited memory available on the university's HPC cluster, the results are reasonable and justifiable.
1912.13382
Jose Del Aguila Ferrandis Mr
Jos\'e del \'Aguila Ferrandis, Michael Triantafyllou, Chryssostomos Chryssostomidis, George Karniadakis
Learning functionals via LSTM neural networks for predicting vessel dynamics in extreme sea states
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting motions of vessels in extreme sea states represents one of the most challenging problems in naval hydrodynamics. It involves computing complex nonlinear wave-body interactions, hence heavily taxing computational resources. Here, we put forward a new simulation paradigm by training recurrent-type neural networks (RNNs) that take as input the stochastic wave elevation at a certain sea state and output the main vessel motions, e.g., pitch, heave and roll. We first compare the performance of standard RNNs versus GRU and LSTM neural networks (NNs) and show that LSTM NNs lead to the best performance. We then examine the testing error of two representative vessels, a catamaran in sea state 1 and a battleship in sea state 8. We demonstrate that good accuracy is achieved for both cases in predicting the vessel motions for unseen wave elevations. We train the NNs with expensive CFD simulations offline, but upon training, the prediction of the vessel dynamics online can be obtained at a fraction of a second. This work is motivated by the universal approximation theorem for functionals [1], and it is the first implementation of such theory to realistic engineering problems.
[ { "created": "Mon, 23 Dec 2019 18:39:12 GMT", "version": "v1" } ]
2020-01-01
[ [ "Ferrandis", "José del Águila", "" ], [ "Triantafyllou", "Michael", "" ], [ "Chryssostomidis", "Chryssostomos", "" ], [ "Karniadakis", "George", "" ] ]
Predicting motions of vessels in extreme sea states represents one of the most challenging problems in naval hydrodynamics. It involves computing complex nonlinear wave-body interactions, hence heavily taxing computational resources. Here, we put forward a new simulation paradigm by training recurrent-type neural networks (RNNs) that take as input the stochastic wave elevation at a certain sea state and output the main vessel motions, e.g., pitch, heave and roll. We first compare the performance of standard RNNs versus GRU and LSTM neural networks (NNs) and show that LSTM NNs lead to the best performance. We then examine the testing error of two representative vessels, a catamaran in sea state 1 and a battleship in sea state 8. We demonstrate that good accuracy is achieved for both cases in predicting the vessel motions for unseen wave elevations. We train the NNs with expensive CFD simulations offline, but upon training, the prediction of the vessel dynamics online can be obtained at a fraction of a second. This work is motivated by the universal approximation theorem for functionals [1], and it is the first implementation of such theory to realistic engineering problems.
0805.2621
Jae-Suk Park
Hyun-Keun Jun and Jae-Suk Park
Topological Sigma B Model in 4-Dimensions
16 pages, JHEP style (minor corrections)
JHEP 0811:005,2008
10.1088/1126-6708/2008/11/005
null
hep-th
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a 4-dimensional version of the topological sigma B-model, governing maps from a smooth compact 4-manifold M to a Calabi-Yau target manifold X. The theory depends on the complex structure of X, while it is independent of the Kaehler metric of X. The theory is also a 4-dimensional topological field theory in the sense that it is independent of variations of the Riemannian metric of the source 4-manifold M, potentially leading to new smooth invariants of 4-manifolds. We argue that the theory also comes with a topological family parametrized by the extended moduli space of complex structures.
[ { "created": "Fri, 16 May 2008 21:09:06 GMT", "version": "v1" }, { "created": "Wed, 10 Sep 2008 05:36:02 GMT", "version": "v2" } ]
2009-12-07
[ [ "Jun", "Hyun-Keun", "" ], [ "Park", "Jae-Suk", "" ] ]
We propose a 4-dimensional version of the topological sigma B-model, governing maps from a smooth compact 4-manifold M to a Calabi-Yau target manifold X. The theory depends on the complex structure of X, while it is independent of the Kaehler metric of X. The theory is also a 4-dimensional topological field theory in the sense that it is independent of variations of the Riemannian metric of the source 4-manifold M, potentially leading to new smooth invariants of 4-manifolds. We argue that the theory also comes with a topological family parametrized by the extended moduli space of complex structures.
1602.03638
Mikael Mortensen
Mikael Mortensen and Hans Petter Langtangen
High performance Python for direct numerical simulations of turbulent flows
null
null
10.1016/j.cpc.2016.02.005
null
cs.MS cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Direct Numerical Simulation (DNS) of the Navier-Stokes equations is an invaluable research tool in fluid dynamics. Still, there are few publicly available research codes and, due to the heavy number crunching implied, available codes are usually written in low-level languages such as C/C++ or Fortran. In this paper we describe a pure scientific Python pseudo-spectral DNS code that nearly matches the performance of C++ for thousands of processors and billions of unknowns. We also describe a version optimized through Cython, which is found to match the speed of C++. The solvers are written from scratch in Python, including the mesh, the MPI domain decomposition, and the temporal integrators. The solvers have been verified and benchmarked on the Shaheen supercomputer at the KAUST supercomputing laboratory, and we are able to show very good scaling up to several thousand cores. A very important part of the implementation is the mesh decomposition (we implement both slab and pencil decompositions) and 3D parallel Fast Fourier Transforms (FFT). The mesh decomposition and FFT routines have been implemented in Python using serial FFT routines (either NumPy, pyFFTW or any other serial FFT module), NumPy array manipulations and with MPI communications handled by MPI for Python (mpi4py). We show how we are able to execute a 3D parallel FFT in Python for a slab mesh decomposition using 4 lines of compact Python code, for which the parallel performance on Shaheen is found to be slightly better than that of similar routines provided through the FFTW library. For a pencil mesh decomposition, 7 lines of code are required to execute a transform.
[ { "created": "Thu, 11 Feb 2016 08:12:37 GMT", "version": "v1" } ]
2016-05-04
[ [ "Mortensen", "Mikael", "" ], [ "Langtangen", "Hans Petter", "" ] ]
Direct Numerical Simulation (DNS) of the Navier-Stokes equations is an invaluable research tool in fluid dynamics. Still, there are few publicly available research codes and, due to the heavy number crunching implied, available codes are usually written in low-level languages such as C/C++ or Fortran. In this paper we describe a pure scientific Python pseudo-spectral DNS code that nearly matches the performance of C++ for thousands of processors and billions of unknowns. We also describe a version optimized through Cython, which is found to match the speed of C++. The solvers are written from scratch in Python, including the mesh, the MPI domain decomposition, and the temporal integrators. The solvers have been verified and benchmarked on the Shaheen supercomputer at the KAUST supercomputing laboratory, and we are able to show very good scaling up to several thousand cores. A very important part of the implementation is the mesh decomposition (we implement both slab and pencil decompositions) and 3D parallel Fast Fourier Transforms (FFT). The mesh decomposition and FFT routines have been implemented in Python using serial FFT routines (either NumPy, pyFFTW or any other serial FFT module), NumPy array manipulations and with MPI communications handled by MPI for Python (mpi4py). We show how we are able to execute a 3D parallel FFT in Python for a slab mesh decomposition using 4 lines of compact Python code, for which the parallel performance on Shaheen is found to be slightly better than that of similar routines provided through the FFTW library. For a pencil mesh decomposition, 7 lines of code are required to execute a transform.
1610.06070
Norihiro Tanahashi
Koji Hashimoto and Norihiro Tanahashi
Universality in Chaos of Particle Motion near Black Hole Horizon
12 pages, 4 figures; v2: references added, numerical plots in Fig. 4 corrected
Phys. Rev. D 95, 024007 (2017)
10.1103/PhysRevD.95.024007
OU-HET-911
hep-th gr-qc
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motion of a particle near the horizon of a spherically symmetric black hole is shown to possess a universal Lyapunov exponent of chaos provided by its surface gravity. To probe the horizon, we introduce an electromagnetic or scalar force on the particle so that it does not fall into the horizon. There appears an unstable maximum of the total potential, where the evaluated maximal Lyapunov exponent is found to be independent of the external forces and the particle mass. The Lyapunov exponent is universally given by the surface gravity of the black hole. Unless there are other sources of chaos, the Lyapunov exponent is subject to the inequality $\lambda \leq 2\pi T_{\rm BH}/\hbar$, which is identical to the bound recently discovered by Maldacena, Shenker and Stanford.
[ { "created": "Wed, 19 Oct 2016 15:44:35 GMT", "version": "v1" }, { "created": "Tue, 1 Nov 2016 01:18:58 GMT", "version": "v2" } ]
2017-01-11
[ [ "Hashimoto", "Koji", "" ], [ "Tanahashi", "Norihiro", "" ] ]
Motion of a particle near the horizon of a spherically symmetric black hole is shown to possess a universal Lyapunov exponent of chaos provided by its surface gravity. To probe the horizon, we introduce an electromagnetic or scalar force on the particle so that it does not fall into the horizon. There appears an unstable maximum of the total potential, where the evaluated maximal Lyapunov exponent is found to be independent of the external forces and the particle mass. The Lyapunov exponent is universally given by the surface gravity of the black hole. Unless there are other sources of chaos, the Lyapunov exponent is subject to the inequality $\lambda \leq 2\pi T_{\rm BH}/\hbar$, which is identical to the bound recently discovered by Maldacena, Shenker and Stanford.
2310.03091
Daile Osorio-Roig
Daile Osorio-Roig, Lazaro J. Gonzalez-Soler, Christian Rathgeb, Christoph Busch
Privacy-preserving Multi-biometric Indexing based on Frequent Binary Patterns
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The development of large-scale identification systems that ensure the privacy protection of enrolled subjects represents a major challenge. Biometric deployments that provide interoperability and usability by including efficient multi-biometric solutions are a recent requirement. In the context of privacy protection, several template protection schemes have been proposed in the past. However, these schemes seem inadequate for indexing (workload reduction) in biometric identification systems. More specifically, they have been used in identification systems that perform exhaustive searches, leading to a degradation of computational efficiency. To overcome these limitations, we propose an efficient privacy-preserving multi-biometric identification system that retrieves protected deep cancelable templates and is agnostic with respect to biometric characteristics and biometric template protection schemes. To this end, a multi-biometric binning scheme is designed to exploit the low intra-class variation properties contained in the frequent binary patterns extracted from different types of biometric characteristics. Experimental results reported on publicly available databases using state-of-the-art Deep Neural Network (DNN)-based embedding extractors show that the protected multi-biometric identification system can reduce the computational workload to approximately 57% (indexing up to three types of biometric characteristics) and 53% (indexing up to two types of biometric characteristics), while simultaneously improving the biometric performance of the baseline biometric system at the high-security thresholds. The source code of the proposed multi-biometric indexing approach, together with the composed multi-biometric dataset, will be made available to the research community once the article is accepted.
[ { "created": "Wed, 4 Oct 2023 18:18:24 GMT", "version": "v1" } ]
2023-10-06
[ [ "Osorio-Roig", "Daile", "" ], [ "Gonzalez-Soler", "Lazaro J.", "" ], [ "Rathgeb", "Christian", "" ], [ "Busch", "Christoph", "" ] ]
The development of large-scale identification systems that ensure the privacy protection of enrolled subjects represents a major challenge. Biometric deployments that provide interoperability and usability by including efficient multi-biometric solutions are a recent requirement. In the context of privacy protection, several template protection schemes have been proposed in the past. However, these schemes seem inadequate for indexing (workload reduction) in biometric identification systems. More specifically, they have been used in identification systems that perform exhaustive searches, leading to a degradation of computational efficiency. To overcome these limitations, we propose an efficient privacy-preserving multi-biometric identification system that retrieves protected deep cancelable templates and is agnostic with respect to biometric characteristics and biometric template protection schemes. To this end, a multi-biometric binning scheme is designed to exploit the low intra-class variation properties contained in the frequent binary patterns extracted from different types of biometric characteristics. Experimental results reported on publicly available databases using state-of-the-art Deep Neural Network (DNN)-based embedding extractors show that the protected multi-biometric identification system can reduce the computational workload to approximately 57% (indexing up to three types of biometric characteristics) and 53% (indexing up to two types of biometric characteristics), while simultaneously improving the biometric performance of the baseline biometric system at the high-security thresholds. The source code of the proposed multi-biometric indexing approach, together with the composed multi-biometric dataset, will be made available to the research community once the article is accepted.
2010.10468
Sherif Abdulatif
Sherif Abdulatif, Karim Armanious, Jayasankar T. Sajeev, Karim Guirguis, Bin Yang
Investigating Cross-Domain Losses for Speech Enhancement
5 pages, 3 figures and 1 table
null
null
null
cs.SD cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have seen a surge in the number of available frameworks for speech enhancement (SE) and recognition. Whether model-based or constructed via deep learning, these frameworks often rely in isolation on either time-domain signals or time-frequency (TF) representations of speech data. In this study, we investigate the advantages of each set of approaches by separately examining their impact on speech intelligibility and quality. Furthermore, we combine the fragmented benefits of time-domain and TF speech representations by introducing two new cross-domain SE frameworks. A quantitative comparative analysis against recent model-based and deep learning SE approaches is performed to illustrate the merit of the proposed frameworks.
[ { "created": "Tue, 20 Oct 2020 17:28:07 GMT", "version": "v1" }, { "created": "Sun, 30 May 2021 01:56:54 GMT", "version": "v2" } ]
2021-06-01
[ [ "Abdulatif", "Sherif", "" ], [ "Armanious", "Karim", "" ], [ "Sajeev", "Jayasankar T.", "" ], [ "Guirguis", "Karim", "" ], [ "Yang", "Bin", "" ] ]
Recent years have seen a surge in the number of available frameworks for speech enhancement (SE) and recognition. Whether model-based or constructed via deep learning, these frameworks often rely in isolation on either time-domain signals or time-frequency (TF) representations of speech data. In this study, we investigate the advantages of each set of approaches by separately examining their impact on speech intelligibility and quality. Furthermore, we combine the fragmented benefits of time-domain and TF speech representations by introducing two new cross-domain SE frameworks. A quantitative comparative analysis against recent model-based and deep learning SE approaches is performed to illustrate the merit of the proposed frameworks.
1207.4258
Lin Chen
Lin Chen, Athanasios V. Vasilakos
Joint Rate Adaptation and Medium Access in Wireless LANs: a Non-cooperative Game Theoretic Perspective
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless local area networks (WLANs) based on IEEE 802.11 standards are becoming ubiquitous today and typically support multiple data rates. In such multi-rate WLANs, distributed medium access and rate adaptation are two key elements to achieve efficient radio resource utilization, especially in non-cooperative environments. In this paper, we present an analytical study of non-cooperative multi-rate WLANs composed of selfish users jointly adjusting their data rate and contention window size at the medium access level to maximize their own throughput, irrespective of the impact of their selfish behaviors on overall system performance. Specifically, we develop an adapted Tit-For-Tat (TFT) strategy to guide the system to an efficient equilibrium in non-cooperative environments. We model the interactions among selfish users under the adapted TFT framework as a non-cooperative joint medium access and rate adaptation game. A systematic analysis is conducted on the structural properties of the game to provide insights on the interaction between rate adaptation and 802.11 medium access control in a competitive setting. We show that the game has multiple equilibria, which, after the equilibrium refinement process that we develop, reduce to a unique efficient equilibrium. We further develop a distributed algorithm to achieve this equilibrium and demonstrate that it achieves performance very close to the system optimum from a social perspective.
[ { "created": "Wed, 18 Jul 2012 04:04:35 GMT", "version": "v1" } ]
2012-07-19
[ [ "Chen", "Lin", "" ], [ "Vasilakos", "Athanasios V.", "" ] ]
Wireless local area networks (WLANs) based on IEEE 802.11 standards are becoming ubiquitous today and typically support multiple data rates. In such multi-rate WLANs, distributed medium access and rate adaptation are two key elements to achieve efficient radio resource utilization, especially in non-cooperative environments. In this paper, we present an analytical study of non-cooperative multi-rate WLANs composed of selfish users jointly adjusting their data rate and contention window size at the medium access level to maximize their own throughput, irrespective of the impact of their selfish behaviors on overall system performance. Specifically, we develop an adapted Tit-For-Tat (TFT) strategy to guide the system to an efficient equilibrium in non-cooperative environments. We model the interactions among selfish users under the adapted TFT framework as a non-cooperative joint medium access and rate adaptation game. A systematic analysis is conducted on the structural properties of the game to provide insights on the interaction between rate adaptation and 802.11 medium access control in a competitive setting. We show that the game has multiple equilibria, which, after the equilibrium refinement process that we develop, reduce to a unique efficient equilibrium. We further develop a distributed algorithm to achieve this equilibrium and demonstrate that it achieves performance very close to the system optimum from a social perspective.
2204.03508
Zhihan Zhang
Zhihan Zhang, Wenhao Yu, Mengxia Yu, Zhichun Guo, Meng Jiang
A Survey of Multi-task Learning in Natural Language Processing: Regarding Task Relatedness and Training Methods
Accepted to EACL 2023 as regular long paper
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Multi-task learning (MTL) has become increasingly popular in natural language processing (NLP) because it improves the performance of related tasks by exploiting their commonalities and differences. Nevertheless, it is still not understood very well how multi-task learning can be implemented based on the relatedness of training tasks. In this survey, we review recent advances of multi-task learning methods in NLP, with the aim of summarizing them into two general multi-task training methods based on their task relatedness: (i) joint training and (ii) multi-step training. We present examples in various NLP downstream applications, summarize the task relationships and discuss future directions of this promising topic.
[ { "created": "Thu, 7 Apr 2022 15:22:19 GMT", "version": "v1" }, { "created": "Tue, 14 Feb 2023 19:58:57 GMT", "version": "v2" } ]
2023-02-16
[ [ "Zhang", "Zhihan", "" ], [ "Yu", "Wenhao", "" ], [ "Yu", "Mengxia", "" ], [ "Guo", "Zhichun", "" ], [ "Jiang", "Meng", "" ] ]
Multi-task learning (MTL) has become increasingly popular in natural language processing (NLP) because it improves the performance of related tasks by exploiting their commonalities and differences. Nevertheless, it is still not understood very well how multi-task learning can be implemented based on the relatedness of training tasks. In this survey, we review recent advances of multi-task learning methods in NLP, with the aim of summarizing them into two general multi-task training methods based on their task relatedness: (i) joint training and (ii) multi-step training. We present examples in various NLP downstream applications, summarize the task relationships and discuss future directions of this promising topic.
2007.06402
Raphael Achddou
Rapha\"el Achddou, J.Matias di Martino, Guillermo Sapiro
Nested Learning For Multi-Granular Tasks
null
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard deep neural networks (DNNs) are commonly trained in an end-to-end fashion for specific tasks such as object recognition, face identification, or character recognition, among many examples. This specificity often leads to overconfident models that generalize poorly to samples that are not from the original training distribution. Moreover, such standard DNNs do not allow to leverage information from heterogeneously annotated training data, where, for example, labels may be provided with different levels of granularity. Furthermore, DNNs do not produce results with simultaneous different levels of confidence for different levels of detail; they most commonly follow an all-or-nothing approach. To address these challenges, we introduce the concept of nested learning: how to obtain a hierarchical representation of the input such that a coarse label can be extracted first, and this representation sequentially refined, if the sample permits, to obtain successively refined predictions, all of them with the corresponding confidence. We explicitly enforce this behavior by creating a sequence of nested information bottlenecks. Looking at the problem of nested learning from an information theory perspective, we design a network topology with two important properties. First, a sequence of low-dimensional (nested) feature embeddings is enforced. Then we show how the explicit combination of nested outputs can improve both the robustness and the accuracy of finer predictions. Experimental results on Cifar-10, Cifar-100, MNIST, Fashion-MNIST, Dbpedia, and Plantvillage demonstrate that nested learning outperforms the same network trained in the standard end-to-end fashion.
[ { "created": "Mon, 13 Jul 2020 14:27:14 GMT", "version": "v1" } ]
2020-07-14
[ [ "Achddou", "Raphaël", "" ], [ "di Martino", "J. Matias", "" ], [ "Sapiro", "Guillermo", "" ] ]
Standard deep neural networks (DNNs) are commonly trained in an end-to-end fashion for specific tasks such as object recognition, face identification, or character recognition, among many examples. This specificity often leads to overconfident models that generalize poorly to samples that are not from the original training distribution. Moreover, such standard DNNs do not allow to leverage information from heterogeneously annotated training data, where, for example, labels may be provided with different levels of granularity. Furthermore, DNNs do not produce results with simultaneous different levels of confidence for different levels of detail; they most commonly follow an all-or-nothing approach. To address these challenges, we introduce the concept of nested learning: how to obtain a hierarchical representation of the input such that a coarse label can be extracted first, and this representation sequentially refined, if the sample permits, to obtain successively refined predictions, all of them with the corresponding confidence. We explicitly enforce this behavior by creating a sequence of nested information bottlenecks. Looking at the problem of nested learning from an information theory perspective, we design a network topology with two important properties. First, a sequence of low-dimensional (nested) feature embeddings is enforced. Then we show how the explicit combination of nested outputs can improve both the robustness and the accuracy of finer predictions. Experimental results on Cifar-10, Cifar-100, MNIST, Fashion-MNIST, Dbpedia, and Plantvillage demonstrate that nested learning outperforms the same network trained in the standard end-to-end fashion.
1904.02755
Soham Ghosh
Soham Ghosh, Anuva Agarwal, Zarana Parekh, Alexander Hauptmann
ExCL: Extractive Clip Localization Using Natural Language Descriptions
Accepted at NAACL 2019, Short Paper
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The task of retrieving clips within videos based on a given natural language query requires cross-modal reasoning over multiple frames. Prior approaches such as sliding window classifiers are inefficient, while text-clip similarity driven ranking-based approaches such as segment proposal networks are far more complicated. In order to select the most relevant video clip corresponding to the given text description, we propose a novel extractive approach that predicts the start and end frames by leveraging cross-modal interactions between the text and video - this removes the need to retrieve and re-rank multiple proposal segments. Using recurrent networks we encode the two modalities into a joint representation which is then used in different variants of start-end frame predictor networks. Through extensive experimentation and ablative analysis, we demonstrate that our simple and elegant approach significantly outperforms state of the art on two datasets and has comparable performance on a third.
[ { "created": "Thu, 4 Apr 2019 19:17:04 GMT", "version": "v1" } ]
2019-04-08
[ [ "Ghosh", "Soham", "" ], [ "Agarwal", "Anuva", "" ], [ "Parekh", "Zarana", "" ], [ "Hauptmann", "Alexander", "" ] ]
The task of retrieving clips within videos based on a given natural language query requires cross-modal reasoning over multiple frames. Prior approaches such as sliding window classifiers are inefficient, while text-clip similarity driven ranking-based approaches such as segment proposal networks are far more complicated. In order to select the most relevant video clip corresponding to the given text description, we propose a novel extractive approach that predicts the start and end frames by leveraging cross-modal interactions between the text and video - this removes the need to retrieve and re-rank multiple proposal segments. Using recurrent networks we encode the two modalities into a joint representation which is then used in different variants of start-end frame predictor networks. Through extensive experimentation and ablative analysis, we demonstrate that our simple and elegant approach significantly outperforms state of the art on two datasets and has comparable performance on a third.
2009.08716
Zhengjie Yang
Zhengjie Yang, Wei Bao, Dong Yuan, Nguyen H. Tran, and Albert Y. Zomaya
Federated Learning with Nesterov Accelerated Gradient
publised in TPDS. 18 pages, 6 figures
null
10.1109/TPDS.2022.3206480
null
cs.LG cs.DC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning (FL) is a fast-developing technique that allows multiple workers to train a global model based on a distributed dataset. Conventional FL (FedAvg) employs the gradient descent algorithm, which may not be efficient enough. Momentum is able to improve the situation by adding an additional momentum step to accelerate convergence, and has demonstrated its benefits in both centralized and FL environments. It is well known that Nesterov Accelerated Gradient (NAG) is a more advantageous form of momentum, but so far it is not clear how to quantify the benefits of NAG in FL. This motivates us to propose FedNAG, which employs NAG in each worker as well as NAG momentum and model aggregation in the aggregator. We provide a detailed convergence analysis of FedNAG and compare it with FedAvg. Extensive experiments based on real-world datasets and trace-driven simulation are conducted, demonstrating that FedNAG increases the learning accuracy by 3-24% and decreases the total training time by 11-70% compared with the benchmarks under a wide range of settings.
[ { "created": "Fri, 18 Sep 2020 09:38:11 GMT", "version": "v1" }, { "created": "Wed, 26 Oct 2022 02:46:51 GMT", "version": "v2" } ]
2022-10-27
[ [ "Yang", "Zhengjie", "" ], [ "Bao", "Wei", "" ], [ "Yuan", "Dong", "" ], [ "Tran", "Nguyen H.", "" ], [ "Zomaya", "Albert Y.", "" ] ]
Federated learning (FL) is a fast-developing technique that allows multiple workers to train a global model based on a distributed dataset. Conventional FL (FedAvg) employs the gradient descent algorithm, which may not be efficient enough. Momentum is able to improve the situation by adding an additional momentum step to accelerate convergence, and has demonstrated its benefits in both centralized and FL environments. It is well known that Nesterov Accelerated Gradient (NAG) is a more advantageous form of momentum, but so far it is not clear how to quantify the benefits of NAG in FL. This motivates us to propose FedNAG, which employs NAG in each worker as well as NAG momentum and model aggregation in the aggregator. We provide a detailed convergence analysis of FedNAG and compare it with FedAvg. Extensive experiments based on real-world datasets and trace-driven simulation are conducted, demonstrating that FedNAG increases the learning accuracy by 3-24% and decreases the total training time by 11-70% compared with the benchmarks under a wide range of settings.
0909.2377
Nevin Vunka Jungum
Soumaya Zirari, Philippe Canalda and Francois Spies
Geometric and Signal Strength Dilution of Precision (DoP) Wi-Fi
International Journal of Computer Science Issues (IJCSI), Volume 3, pp35-44, August 2009
S.Zirari,P. Canalda and F.Spies, " Geometric and Signal Strength Dilution of Precision (DoP)Wi-Fi", International Journal of Computer Science Issues (IJCSI), Volume 3, pp35-44, August 2009
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The democratization of wireless networks, combined with the emergence of increasingly autonomous and efficient mobile devices, leads to new services. Positioning services are proliferating. Accuracy is the main quality criterion in positioning, but to properly assess it a coefficient is needed. In this paper we present Geometric and Signal Strength Dilution of Precision (DOP) for positioning systems based on Wi-Fi and signal strength measurements.
[ { "created": "Sat, 12 Sep 2009 22:24:52 GMT", "version": "v1" } ]
2009-09-15
[ [ "Zirari", "Soumaya", "" ], [ "Canalda", "Philippe", "" ], [ "Spies", "Francois", "" ] ]
The democratization of wireless networks, combined with the emergence of increasingly autonomous and efficient mobile devices, leads to new services. Positioning services are proliferating. Accuracy is the main quality criterion in positioning, but to properly assess it a coefficient is needed. In this paper we present Geometric and Signal Strength Dilution of Precision (DOP) for positioning systems based on Wi-Fi and signal strength measurements.
1710.01416
Saed Khawaldeh
Vu Hoang Minh, Tajwar Abrar Aleef, Usama Pervaiz, Yeman Brhane Hagos, Saed Khawaldeh
Smoothness-based Edge Detection using Low-SNR Camera for Robot Navigation
null
null
null
null
cs.CV cs.RO stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the emerging advancements in autonomous robotics, the ability of a robot to efficiently localize and construct maps of its surroundings is crucial. This paper deals with utilizing thermal-infrared cameras, as opposed to conventional cameras, as the primary sensor to capture images of the robot's surroundings. For localization, the images need to be further processed before feeding them to a navigational system. The main motivation of this paper was to develop an edge detection methodology capable of utilizing the low-SNR output from such a thermal camera to effectively detect smooth edges of the surrounding environment. The enhanced edge detector proposed in this paper takes the raw image from the thermal sensor, denoises it, and applies Canny edge detection followed by the CSS method. The edges are ranked to remove noise and only edges of the highest rank are kept. Then, the broken edges are linked by computing edge metrics, and a smooth edge map of the surroundings is displayed as a binary image. Several comparisons are also made in the paper between the proposed technique and existing techniques.
[ { "created": "Tue, 3 Oct 2017 22:48:41 GMT", "version": "v1" } ]
2017-10-05
[ [ "Minh", "Vu Hoang", "" ], [ "Aleef", "Tajwar Abrar", "" ], [ "Pervaiz", "Usama", "" ], [ "Hagos", "Yeman Brhane", "" ], [ "Khawaldeh", "Saed", "" ] ]
With the emerging advancements in autonomous robotics, the ability of a robot to efficiently localize and construct maps of its surroundings is crucial. This paper deals with utilizing thermal-infrared cameras, as opposed to conventional cameras, as the primary sensor to capture images of the robot's surroundings. For localization, the images need to be further processed before feeding them to a navigational system. The main motivation of this paper was to develop an edge detection methodology capable of utilizing the low-SNR output from such a thermal camera to effectively detect smooth edges of the surrounding environment. The enhanced edge detector proposed in this paper takes the raw image from the thermal sensor, denoises it, and applies Canny edge detection followed by the CSS method. The edges are ranked to remove noise and only edges of the highest rank are kept. Then, the broken edges are linked by computing edge metrics, and a smooth edge map of the surroundings is displayed as a binary image. Several comparisons are also made in the paper between the proposed technique and existing techniques.
2303.18119
Juan M. Gandarias
Luca Fortini (1,2), Mattia Leonori (1), Juan M. Gandarias (1), Elena de Momi (2), Arash Ajoudani (1) ((1) Human-Robot Interfaces and Interaction, Istituto Italiano di Tecnologia, Genoa, Italy (2) Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy)
Markerless 3D human pose tracking through multiple cameras and AI: Enabling high accuracy, robustness, and real-time performance
19 pages, 7 figures
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tracking 3D human motion in real-time is crucial for numerous applications across many fields. Traditional approaches involve attaching artificial fiducial objects or sensors to the body, limiting their usability and comfort-of-use and consequently narrowing their application fields. Recent advances in Artificial Intelligence (AI) have allowed for markerless solutions. However, most of these methods operate in 2D, while those providing 3D solutions compromise accuracy and real-time performance. To address this challenge and unlock the potential of visual pose estimation methods in real-world scenarios, we propose a markerless framework that combines multi-camera views and 2D AI-based pose estimation methods to track 3D human motion. Our approach integrates a Weighted Least Square (WLS) algorithm that computes 3D human motion from multiple 2D pose estimations provided by an AI-driven method. The method is integrated within the Open-VICO framework allowing simulation and real-world execution. Several experiments have been conducted, which have shown high accuracy and real-time performance, demonstrating the high level of readiness for real-world applications and the potential to revolutionize human motion capture.
[ { "created": "Fri, 31 Mar 2023 15:06:50 GMT", "version": "v1" } ]
2023-04-03
[ [ "Fortini", "Luca", "" ], [ "Leonori", "Mattia", "" ], [ "Gandarias", "Juan M.", "" ], [ "de Momi", "Elena", "" ], [ "Ajoudani", "Arash", "" ] ]
Tracking 3D human motion in real-time is crucial for numerous applications across many fields. Traditional approaches involve attaching artificial fiducial objects or sensors to the body, limiting their usability and comfort-of-use and consequently narrowing their application fields. Recent advances in Artificial Intelligence (AI) have allowed for markerless solutions. However, most of these methods operate in 2D, while those providing 3D solutions compromise accuracy and real-time performance. To address this challenge and unlock the potential of visual pose estimation methods in real-world scenarios, we propose a markerless framework that combines multi-camera views and 2D AI-based pose estimation methods to track 3D human motion. Our approach integrates a Weighted Least Square (WLS) algorithm that computes 3D human motion from multiple 2D pose estimations provided by an AI-driven method. The method is integrated within the Open-VICO framework allowing simulation and real-world execution. Several experiments have been conducted, which have shown high accuracy and real-time performance, demonstrating the high level of readiness for real-world applications and the potential to revolutionize human motion capture.
1002.1846
Goran Duplancic
G. Duplancic, D. Glavan, H. Stefancic
Probability distribution of the vacuum energy density
5 pages, 2 figures, revised version to appear in Phys.Rev.D
Phys.Rev.D82:125008,2010
10.1103/PhysRevD.82.125008
null
hep-th astro-ph.CO gr-qc hep-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the vacuum state of a quantum field is not an eigenstate of the Hamiltonian density, the vacuum energy density can be represented as a random variable. We present an analytical calculation of the probability distribution of the vacuum energy density for real and complex massless scalar fields in Minkowski space. The obtained probability distributions are broad and the vacuum expectation value of the Hamiltonian density is not fully representative of the vacuum energy density.
[ { "created": "Tue, 9 Feb 2010 13:13:29 GMT", "version": "v1" }, { "created": "Fri, 26 Nov 2010 13:52:49 GMT", "version": "v2" } ]
2010-12-24
[ [ "Duplancic", "G.", "" ], [ "Glavan", "D.", "" ], [ "Stefancic", "H.", "" ] ]
As the vacuum state of a quantum field is not an eigenstate of the Hamiltonian density, the vacuum energy density can be represented as a random variable. We present an analytical calculation of the probability distribution of the vacuum energy density for real and complex massless scalar fields in Minkowski space. The obtained probability distributions are broad and the vacuum expectation value of the Hamiltonian density is not fully representative of the vacuum energy density.
2210.15042
Mohammadreza Ebrahimi
Rouzbeh Behnia, Mohammadreza Ebrahimi, Jason Pacheco, Balaji Padmanabhan
Privately Fine-Tuning Large Language Models with Differential Privacy
Published at IEEE ICDM Workshop on Machine Learning for Cybersecurity (MLC) 2022
2022 IEEE International Conference on Data Mining Workshops (ICDMW), pp. 560-566
10.1109/ICDMW58026.2022.00078
null
cs.CR cs.CL
http://creativecommons.org/licenses/by/4.0/
Pre-trained Large Language Models (LLMs) are an integral part of modern AI that have led to breakthrough performances in complex AI tasks. Major AI companies with expensive infrastructures are able to develop and train these large models with billions and millions of parameters from scratch. Third parties, researchers, and practitioners are increasingly adopting these pre-trained models and fine-tuning them on their private data to accomplish their downstream AI tasks. However, it has been shown that an adversary can extract/reconstruct the exact training samples from these LLMs, which can lead to revealing personally identifiable information. The issue has raised deep concerns about the privacy of LLMs. Differential privacy (DP) provides a rigorous framework that allows adding noise in the process of training or fine-tuning LLMs such that extracting the training data becomes infeasible (i.e., with a cryptographically small success probability). While the theoretical privacy guarantees offered in most extant studies assume learning models from scratch through many training iterations in an asymptotic setting, this assumption does not hold in fine-tuning scenarios in which the number of training iterations is significantly smaller. To address the gap, we present \ewtune, a DP framework for fine-tuning LLMs based on Edgeworth accountant with finite-sample privacy guarantees. Our results across four well-established natural language understanding (NLU) tasks show that while \ewtune~adds privacy guarantees to LLM fine-tuning process, it directly contributes to decreasing the induced noise to up to 5.6\% and improves the state-of-the-art LLMs performance by up to 1.1\% across all NLU tasks. We have open-sourced our implementations for wide adoption and public testing purposes.
[ { "created": "Wed, 26 Oct 2022 21:18:31 GMT", "version": "v1" }, { "created": "Fri, 17 Mar 2023 00:55:42 GMT", "version": "v2" }, { "created": "Mon, 20 Mar 2023 01:33:23 GMT", "version": "v3" } ]
2023-05-02
[ [ "Behnia", "Rouzbeh", "" ], [ "Ebrahimi", "Mohammadreza", "" ], [ "Pacheco", "Jason", "" ], [ "Padmanabhan", "Balaji", "" ] ]
Pre-trained Large Language Models (LLMs) are an integral part of modern AI that have led to breakthrough performances in complex AI tasks. Major AI companies with expensive infrastructures are able to develop and train these large models with billions and millions of parameters from scratch. Third parties, researchers, and practitioners are increasingly adopting these pre-trained models and fine-tuning them on their private data to accomplish their downstream AI tasks. However, it has been shown that an adversary can extract/reconstruct the exact training samples from these LLMs, which can lead to revealing personally identifiable information. The issue has raised deep concerns about the privacy of LLMs. Differential privacy (DP) provides a rigorous framework that allows adding noise in the process of training or fine-tuning LLMs such that extracting the training data becomes infeasible (i.e., with a cryptographically small success probability). While the theoretical privacy guarantees offered in most extant studies assume learning models from scratch through many training iterations in an asymptotic setting, this assumption does not hold in fine-tuning scenarios in which the number of training iterations is significantly smaller. To address the gap, we present \ewtune, a DP framework for fine-tuning LLMs based on Edgeworth accountant with finite-sample privacy guarantees. Our results across four well-established natural language understanding (NLU) tasks show that while \ewtune~adds privacy guarantees to LLM fine-tuning process, it directly contributes to decreasing the induced noise to up to 5.6\% and improves the state-of-the-art LLMs performance by up to 1.1\% across all NLU tasks. We have open-sourced our implementations for wide adoption and public testing purposes.
2301.10903
Tom Steudtner
Ian Jack, Hugh Osborn, Tom Steudtner
Explorations in Scalar Fermion Theories: $\beta$-functions, Supersymmetry and Fixed Points
76 pages, 3 external figures
null
null
DO-TH 22/06
hep-th hep-ph
http://creativecommons.org/licenses/by/4.0/
Results for $\beta$-functions and anomalous dimensions in general scalar fermion theories are presented to three loops. Various constraints on the individual coefficients for each diagram following from supersymmetry are analysed. The results are used to discuss potential fixed points in the $\varepsilon$-expansion for scalar fermion theories, with arbitrary numbers of scalar fields, and where there are just two scalar couplings and one Yukawa coupling. For different examples the fixed points follow a similar pattern as the number of fermions is varied. For diagrams with subdivergences there are extensive consistency constraints arising from the existence of a perturbative $a$-function, and these are analysed in detail. Further arbitrary scheme variations which preserve the form of $\beta$-functions and anomalous dimensions in terms of 1PI diagrams are also discussed. The existence of linear and quadratic scheme invariants is demonstrated and the consistency conditions are shown to be expressible in terms of these invariants.
[ { "created": "Thu, 26 Jan 2023 02:23:13 GMT", "version": "v1" }, { "created": "Thu, 4 Jan 2024 11:48:31 GMT", "version": "v2" } ]
2024-01-05
[ [ "Jack", "Ian", "" ], [ "Osborn", "Hugh", "" ], [ "Steudtner", "Tom", "" ] ]
Results for $\beta$-functions and anomalous dimensions in general scalar fermion theories are presented to three loops. Various constraints on the individual coefficients for each diagram following from supersymmetry are analysed. The results are used to discuss potential fixed points in the $\varepsilon$-expansion for scalar fermion theories, with arbitrary numbers of scalar fields, and where there are just two scalar couplings and one Yukawa coupling. For different examples the fixed points follow a similar pattern as the number of fermions is varied. For diagrams with subdivergences there are extensive consistency constraints arising from the existence of a perturbative $a$-function, and these are analysed in detail. Further arbitrary scheme variations which preserve the form of $\beta$-functions and anomalous dimensions in terms of 1PI diagrams are also discussed. The existence of linear and quadratic scheme invariants is demonstrated and the consistency conditions are shown to be expressible in terms of these invariants.
0806.3704
Matteo Beccaria
Matteo Beccaria
The generalized scaling function of AdS/CFT and semiclassical string theory
31 pages, 8 eps figures
JHEP 0807:082,2008
10.1088/1126-6708/2008/07/082
null
hep-th
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, Freyhult, Rej and Staudacher (FRS) proposed an integral equation determining the leading logarithmic term of the anomalous dimension of sl(2) twist-operators in N=4 SYM for large Lorentz spin M and twist L at fixed j = L/log(M). We discuss the large j limit of the FRS equation. This limit can be matched with the {\em fast long string} limit of AdS_5 X S^5 superstring perturbation theory at all couplings. In particular, a certain part of the classical and one-loop string result is known to be protected and can be computed in the weakly coupled large-j limit of the FRS equation. We present various analytical and numerical results supporting agreement at one and two loops in the gauge theory.
[ { "created": "Mon, 23 Jun 2008 15:16:55 GMT", "version": "v1" } ]
2011-06-02
[ [ "Beccaria", "Matteo", "" ] ]
Recently, Freyhult, Rej and Staudacher (FRS) proposed an integral equation determining the leading logarithmic term of the anomalous dimension of sl(2) twist-operators in N=4 SYM for large Lorentz spin M and twist L at fixed j = L/log(M). We discuss the large j limit of the FRS equation. This limit can be matched with the {\em fast long string} limit of AdS_5 X S^5 superstring perturbation theory at all couplings. In particular, a certain part of the classical and one-loop string result is known to be protected and can be computed in the weakly coupled large-j limit of the FRS equation. We present various analytical and numerical results supporting agreement at one and two loops in the gauge theory.
2308.05038
Himarsha R Jayanetti
Himarsha R. Jayanetti, Erika Frydenlund, Michele C. Weigle
Xenophobic Events vs. Refugee Population -- Using GDELT to Identify Countries with Disproportionate Coverage
10 pages, 2 figures, accepted as a Working Paper at SBP-BRiMS 2023. arXiv admin note: text overlap with arXiv:2305.01708
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this preliminary study, we used the Global Database of Events, Language, and Tone (GDELT) database to examine xenophobic events reported in the media during 2022. We collected a dataset of 2,778 unique events and created a choropleth map illustrating the frequency of events scaled by the refugee population's proportion in each host country. We identified the top 10 countries with the highest scaled event frequencies among those with more than 50,000 refugees. Contrary to the belief that hosting a significant number of forced migrants results in higher xenophobic incidents, our findings indicate a potential connection to political factors. We also categorized the 20 root event codes in the CAMEO event data as either "Direct" or "Indirect". Almost 90% of the events related to refugees in 2022 were classified as "Indirect".
[ { "created": "Wed, 9 Aug 2023 16:10:05 GMT", "version": "v1" } ]
2023-08-10
[ [ "Jayanetti", "Himarsha R.", "" ], [ "Frydenlund", "Erika", "" ], [ "Weigle", "Michele C.", "" ] ]
In this preliminary study, we used the Global Database of Events, Language, and Tone (GDELT) database to examine xenophobic events reported in the media during 2022. We collected a dataset of 2,778 unique events and created a choropleth map illustrating the frequency of events scaled by the refugee population's proportion in each host country. We identified the top 10 countries with the highest scaled event frequencies among those with more than 50,000 refugees. Contrary to the belief that hosting a significant number of forced migrants results in higher xenophobic incidents, our findings indicate a potential connection to political factors. We also categorized the 20 root event codes in the CAMEO event data as either "Direct" or "Indirect". Almost 90% of the events related to refugees in 2022 were classified as "Indirect".
1711.00296
Linus Wulff
Linus Wulff
Classifying integrable symmetric space strings via factorized scattering
17 pages; v2: Improvements to sec 1, results now summarized in Tab 1. Matches published version
null
10.1007/JHEP02(2018)106
null
hep-th
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
All symmetric space $AdS_n$ solutions of type II supergravity have recently been found for $n>2$. For the supersymmetric solutions (and their T-duals) it is known that the Green-Schwarz string is classically integrable. We complete the classification by ruling out integrability for the remaining non-supersymmetric solutions. This is achieved by showing that tree-level scattering on the worldsheet of a GKP or BMN string fails to factorize for these cases.
[ { "created": "Wed, 1 Nov 2017 11:35:41 GMT", "version": "v1" }, { "created": "Mon, 26 Feb 2018 14:04:07 GMT", "version": "v2" } ]
2018-04-04
[ [ "Wulff", "Linus", "" ] ]
All symmetric space $AdS_n$ solutions of type II supergravity have recently been found for $n>2$. For the supersymmetric solutions (and their T-duals) it is known that the Green-Schwarz string is classically integrable. We complete the classification by ruling out integrability for the remaining non-supersymmetric solutions. This is achieved by showing that tree-level scattering on the worldsheet of a GKP or BMN string fails to factorize for these cases.
2008.13478
Emilio Torrente-Lujan
A. Belhaj, H. Belmahi, M. Benali, W. El Hadri, H. El Moumni, E. Torrente-Lujan
Shadows of 5D Black Holes from String Theory
null
null
10.1016/j.physletb.2020.136025
FISPAC-TH/271-20, UQBAR-TH/314-203
hep-th gr-qc
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the shadow behaviors of five dimensional (5D) black holes embedded in type IIB superstring/supergravity inspired spacetimes by considering solutions with and without rotations. Geometrical properties as shapes and sizes are analyzed in terms of the D3-brane number and the rotation parameter. Concretely, we find that the shapes are indeed significantly distorted by such physical parameters and the size of the shadows decreases with the brane or "color" number and the rotation. Then, we investigate geometrical observables and energy emission rate aspects.
[ { "created": "Mon, 31 Aug 2020 10:43:11 GMT", "version": "v1" } ]
2020-12-23
[ [ "Belhaj", "A.", "" ], [ "Belmahi", "H.", "" ], [ "Benali", "M.", "" ], [ "Hadri", "W. El", "" ], [ "Moumni", "H. El", "" ], [ "Torrente-Lujan", "E.", "" ] ]
We study the shadow behaviors of five dimensional (5D) black holes embedded in type IIB superstring/supergravity inspired spacetimes by considering solutions with and without rotations. Geometrical properties as shapes and sizes are analyzed in terms of the D3-brane number and the rotation parameter. Concretely, we find that the shapes are indeed significantly distorted by such physical parameters and the size of the shadows decreases with the brane or "color" number and the rotation. Then, we investigate geometrical observables and energy emission rate aspects.
2308.04333
George Boateng
George Boateng, Jonathan Abrefah Mensah, Kevin Takyi Yeboah, William Edor, Andrew Kojo Mensah-Onumah, Naafi Dasana Ibrahim, Nana Sam Yeboah
Towards an AI to Win Ghana's National Science and Maths Quiz
7 pages. Under review at Deep Learning Indaba and Black in AI Workshop @NeurIPS 2023
null
null
null
cs.HC cs.CL cs.CY cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
Can an AI win Ghana's National Science and Maths Quiz (NSMQ)? That is the question we seek to answer in the NSMQ AI project, an open-source project that is building AI to compete live in the NSMQ and win. The NSMQ is an annual live science and mathematics competition for senior secondary school students in Ghana in which 3 teams of 2 students compete by answering questions across biology, chemistry, physics, and math in 5 rounds over 5 progressive stages until a winning team is crowned for that year. The NSMQ is an exciting live quiz competition with interesting technical challenges across speech-to-text, text-to-speech, question-answering, and human-computer interaction. In this ongoing work that began in January 2023, we give an overview of the project, describe each of the teams, progress made thus far, and the next steps toward our planned launch and debut of the AI in October for NSMQ 2023. An AI that conquers this grand challenge can have real-world impact on education such as enabling millions of students across Africa to have one-on-one learning support from this AI.
[ { "created": "Tue, 8 Aug 2023 15:26:58 GMT", "version": "v1" } ]
2023-08-09
[ [ "Boateng", "George", "" ], [ "Mensah", "Jonathan Abrefah", "" ], [ "Yeboah", "Kevin Takyi", "" ], [ "Edor", "William", "" ], [ "Mensah-Onumah", "Andrew Kojo", "" ], [ "Ibrahim", "Naafi Dasana", "" ], [ "Yeboah", ...
Can an AI win Ghana's National Science and Maths Quiz (NSMQ)? That is the question we seek to answer in the NSMQ AI project, an open-source project that is building AI to compete live in the NSMQ and win. The NSMQ is an annual live science and mathematics competition for senior secondary school students in Ghana in which 3 teams of 2 students compete by answering questions across biology, chemistry, physics, and math in 5 rounds over 5 progressive stages until a winning team is crowned for that year. The NSMQ is an exciting live quiz competition with interesting technical challenges across speech-to-text, text-to-speech, question-answering, and human-computer interaction. In this ongoing work that began in January 2023, we give an overview of the project, describe each of the teams, progress made thus far, and the next steps toward our planned launch and debut of the AI in October for NSMQ 2023. An AI that conquers this grand challenge can have real-world impact on education such as enabling millions of students across Africa to have one-on-one learning support from this AI.
hep-th/9610176
null
Anna Tollsten
String Solutions to Supergravity
9 pages, latex, uses a4.sty, no figures, contribution to the proceedings of the workshop Gauge Theory, Applied Supersymmetry and Quantum Gravity, Imperial College, London 1996
null
10.1142/9781848160927_0030
NBI-HE 96-61
hep-th
null
We find the complete solution to ten-dimensional supergravity coupled to a three-form field strength, given the ``standard ansatz'' for the fields, and show that in addition to the well-known elementary and solitonic (heterotic) string solutions, one of the possibilities is an (unstable) elementary type I string solution.
[ { "created": "Wed, 23 Oct 1996 11:16:59 GMT", "version": "v1" } ]
2016-12-21
[ [ "Tollsten", "Anna", "" ] ]
We find the complete solution to ten-dimensional supergravity coupled to a three-form field strength, given the ``standard ansatz'' for the fields, and show that in addition to the well-known elementary and solitonic (heterotic) string solutions, one of the possibilities is an (unstable) elementary type I string solution.
2405.15223
Jialong Wu
Jialong Wu, Shaofeng Yin, Ningya Feng, Xu He, Dong Li, Jianye Hao, Mingsheng Long
iVideoGPT: Interactive VideoGPTs are Scalable World Models
Project website: https://thuml.github.io/iVideoGPT
null
null
null
cs.CV cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
World models empower model-based agents to interactively explore, reason, and plan within imagined environments for real-world decision-making. However, the high demand for interactivity poses challenges in harnessing recent advancements in video generative models for developing world models at scale. This work introduces Interactive VideoGPT (iVideoGPT), a scalable autoregressive transformer framework that integrates multimodal signals--visual observations, actions, and rewards--into a sequence of tokens, facilitating an interactive experience of agents via next-token prediction. iVideoGPT features a novel compressive tokenization technique that efficiently discretizes high-dimensional visual observations. Leveraging its scalable architecture, we are able to pre-train iVideoGPT on millions of human and robotic manipulation trajectories, establishing a versatile foundation that is adaptable to serve as interactive world models for a wide range of downstream tasks. These include action-conditioned video prediction, visual planning, and model-based reinforcement learning, where iVideoGPT achieves competitive performance compared with state-of-the-art methods. Our work advances the development of interactive general world models, bridging the gap between generative video models and practical model-based reinforcement learning applications.
[ { "created": "Fri, 24 May 2024 05:29:12 GMT", "version": "v1" }, { "created": "Sun, 2 Jun 2024 09:44:20 GMT", "version": "v2" } ]
2024-06-04
[ [ "Wu", "Jialong", "" ], [ "Yin", "Shaofeng", "" ], [ "Feng", "Ningya", "" ], [ "He", "Xu", "" ], [ "Li", "Dong", "" ], [ "Hao", "Jianye", "" ], [ "Long", "Mingsheng", "" ] ]
World models empower model-based agents to interactively explore, reason, and plan within imagined environments for real-world decision-making. However, the high demand for interactivity poses challenges in harnessing recent advancements in video generative models for developing world models at scale. This work introduces Interactive VideoGPT (iVideoGPT), a scalable autoregressive transformer framework that integrates multimodal signals--visual observations, actions, and rewards--into a sequence of tokens, facilitating an interactive experience of agents via next-token prediction. iVideoGPT features a novel compressive tokenization technique that efficiently discretizes high-dimensional visual observations. Leveraging its scalable architecture, we are able to pre-train iVideoGPT on millions of human and robotic manipulation trajectories, establishing a versatile foundation that is adaptable to serve as interactive world models for a wide range of downstream tasks. These include action-conditioned video prediction, visual planning, and model-based reinforcement learning, where iVideoGPT achieves competitive performance compared with state-of-the-art methods. Our work advances the development of interactive general world models, bridging the gap between generative video models and practical model-based reinforcement learning applications.
2007.13440
Niall Twomey
Niall Twomey, Mikhail Fain, Andrey Ponikar, Nadine Sarraf
Towards Multi-Language Recipe Personalisation and Recommendation
5 tables
Fourteenth ACM Conference on Recommender Systems (RecSys 2020)
10.1145/3383313.3418478
null
cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-language recipe personalisation and recommendation is an under-explored field of information retrieval in academic and production systems. The existing gaps in our current understanding are numerous, even on fundamental questions such as whether consistent and high-quality recipe recommendation can be delivered across languages. In this paper, we introduce the multi-language recipe recommendation setting and present grounding results that will help to establish the potential and absolute value of future work in this area. Our work draws on several billion events from millions of recipes and users from Arabic, English, Indonesian, Russian, and Spanish. We represent recipes using a combination of normalised ingredients, standardised skills and image embeddings obtained without human intervention. In modelling, we take a classical approach based on optimising an embedded bi-linear user-item metric space towards the interactions that most strongly elicit cooking intent. For users without interaction histories, a bespoke content-based cold-start model that predicts context and recipe affinity is introduced. We show that our approach to personalisation is stable and easily scales to new languages. A robust cross-validation campaign is employed and consistently rejects baseline models and representations, strongly favouring those we propose. Our results are presented in a language-oriented (as opposed to model-oriented) fashion to emphasise the language-based goals of this work. We believe that this is the first large-scale work that comprehensively considers the value and potential of multi-language recipe recommendation and personalisation as well as delivering scalable and reliable models.
[ { "created": "Mon, 27 Jul 2020 11:26:49 GMT", "version": "v1" }, { "created": "Tue, 18 Aug 2020 10:57:33 GMT", "version": "v2" } ]
2020-08-19
[ [ "Twomey", "Niall", "" ], [ "Fain", "Mikhail", "" ], [ "Ponikar", "Andrey", "" ], [ "Sarraf", "Nadine", "" ] ]
Multi-language recipe personalisation and recommendation is an under-explored field of information retrieval in academic and production systems. The existing gaps in our current understanding are numerous, even on fundamental questions such as whether consistent and high-quality recipe recommendation can be delivered across languages. In this paper, we introduce the multi-language recipe recommendation setting and present grounding results that will help to establish the potential and absolute value of future work in this area. Our work draws on several billion events from millions of recipes and users from Arabic, English, Indonesian, Russian, and Spanish. We represent recipes using a combination of normalised ingredients, standardised skills and image embeddings obtained without human intervention. In modelling, we take a classical approach based on optimising an embedded bi-linear user-item metric space towards the interactions that most strongly elicit cooking intent. For users without interaction histories, a bespoke content-based cold-start model that predicts context and recipe affinity is introduced. We show that our approach to personalisation is stable and easily scales to new languages. A robust cross-validation campaign is employed and consistently rejects baseline models and representations, strongly favouring those we propose. Our results are presented in a language-oriented (as opposed to model-oriented) fashion to emphasise the language-based goals of this work. We believe that this is the first large-scale work that comprehensively considers the value and potential of multi-language recipe recommendation and personalisation as well as delivering scalable and reliable models.
2212.05752
Chengyu Zheng
Chengyu Zheng, Ning Song, Ruoyu Zhang, Lei Huang, Zhiqiang Wei, Jie Nie (corresponding author)
Scale-Semantic Joint Decoupling Network for Image-text Retrieval in Remote Sensing
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image-text retrieval in remote sensing aims to provide flexible information for data analysis and application. In recent years, state-of-the-art methods are dedicated to ``scale decoupling'' and ``semantic decoupling'' strategies to further enhance the capability of representation. However, these previous approaches focus on either disentangling scale or disentangling semantics but ignore merging these two ideas in a unified model, which extremely limits the performance of cross-modal retrieval models. To address these issues, we propose a novel Scale-Semantic Joint Decoupling Network (SSJDN) for remote sensing image-text retrieval. Specifically, we design the Bidirectional Scale Decoupling (BSD) module, which exploits Salience Feature Extraction (SFE) and Salience-Guided Suppression (SGS) units to adaptively extract potential features and suppress cumbersome features at other scales in a bidirectional pattern to yield different scale clues. Besides, we design the Label-supervised Semantic Decoupling (LSD) module by leveraging the category semantic labels as prior knowledge to supervise images and texts probing significant semantic-related information. Finally, we design a Semantic-guided Triple Loss (STL), which adaptively generates a constant to adjust the loss function to improve the probability of matching the same semantic image and text and shorten the convergence time of the retrieval model. Our proposed SSJDN outperforms state-of-the-art approaches in numerical experiments conducted on four benchmark remote sensing datasets.
[ { "created": "Mon, 12 Dec 2022 08:02:35 GMT", "version": "v1" } ]
2022-12-13
[ [ "Zheng", "Chengyu", "", "corresponding author" ], [ "song", "Ning", "", "corresponding author" ], [ "Zhang", "Ruoyu", "", "corresponding author" ], [ "Huang", "Lei", "", "corresponding author" ], [ "Wei", "Zhiqiang", "", ...
Image-text retrieval in remote sensing aims to provide flexible information for data analysis and application. In recent years, state-of-the-art methods have adopted ``scale decoupling'' and ``semantic decoupling'' strategies to further enhance representation capability. However, these previous approaches focus on disentangling either scale or semantics, but neglect to merge the two ideas in a unified model, which severely limits the performance of cross-modal retrieval models. To address these issues, we propose a novel Scale-Semantic Joint Decoupling Network (SSJDN) for remote sensing image-text retrieval. Specifically, we design the Bidirectional Scale Decoupling (BSD) module, which exploits Salience Feature Extraction (SFE) and Salience-Guided Suppression (SGS) units to adaptively extract potential features and suppress cumbersome features at other scales in a bidirectional pattern, yielding clues at different scales. Besides, we design the Label-supervised Semantic Decoupling (LSD) module, which leverages category semantic labels as prior knowledge to supervise images and texts in probing significant semantic-related information. Finally, we design a Semantic-guided Triple Loss (STL), which adaptively generates a constant to adjust the loss function, improving the probability of matching images and texts of the same semantics and shortening the convergence time of the retrieval model. Our proposed SSJDN outperforms state-of-the-art approaches in numerical experiments conducted on four benchmark remote sensing datasets.
hep-th/9710193
Kim
Dae Kwan Kim, K.G. Klimenko
Finite Density Effect in the Gross-Neveu Model in a Weakly Curved $R^1\times S^2$ Spacetime
RevTeX, minor changes, new references are added
J.Phys.A31:5565,1998
10.1088/0305-4470/31/25/007
null
hep-th hep-ph
null
The three-dimensional Gross-Neveu model in $R^{1} \times S^{2}$ spacetime is considered at finite particle number density. We evaluate the effective potential of the composite scalar field $\sigma(x)$, which is expressed in terms of the scalar curvature $R$ and a nonzero chemical potential $\mu$. We then derive the critical values of $(R,\mu)$ at which the system undergoes a first-order phase transition from the phase with broken chiral invariance to the symmetric phase.
[ { "created": "Sat, 25 Oct 1997 17:14:17 GMT", "version": "v1" }, { "created": "Sat, 8 Nov 1997 04:12:14 GMT", "version": "v2" } ]
2008-11-26
[ [ "Kim", "Dae Kwan", "" ], [ "Klimenko", "K. G.", "" ] ]
The three-dimensional Gross-Neveu model in $R^{1} \times S^{2}$ spacetime is considered at finite particle number density. We evaluate the effective potential of the composite scalar field $\sigma(x)$, which is expressed in terms of the scalar curvature $R$ and a nonzero chemical potential $\mu$. We then derive the critical values of $(R,\mu)$ at which the system undergoes a first-order phase transition from the phase with broken chiral invariance to the symmetric phase.
2312.16097
Si Wen
Si Wen and Brandon D. Gallas
Expanding to Arbitrary Study Designs: ANOVA to Estimate Limits of Agreement for MRMC Studies
null
null
null
null
q-bio.QM
http://creativecommons.org/publicdomain/zero/1.0/
A multi-reader multi-case (MRMC) analysis is applied to account for both reader and case variability when evaluating the clinical performance of a medical imaging device or reader performance under different reading modalities. For a clinical task that measures a quantitative biomarker, an agreement analysis, such as limits of agreement (LOA), can be used. In this work, we decompose the total variation in the data using a three-way mixed-effect ANOVA model to estimate the MRMC variance of individual differences and the LOA between different reading modalities. There are rules for writing down the expectation of the mean squares in terms of the variance components for fully-crossed data, i.e., data where all the readers read all the cases in all modalities being studied. Sometimes the annotation task is labor-intensive and time-consuming or distributed across sites, so that a fully-crossed study is not practical. In this work, we focus on estimating the MRMC variance in the within- and between-reader and within- and between-modality LOA for an arbitrary study design. Simulation studies were conducted to validate the LOA variance estimates. The method was also applied to a dataset to compare pathologist performance for assessing the density of stromal tumor-infiltrating lymphocytes on different platforms.
[ { "created": "Tue, 26 Dec 2023 15:49:42 GMT", "version": "v1" } ]
2023-12-27
[ [ "Wen", "Si", "" ], [ "Gallas", "Brandon D.", "" ] ]
A multi-reader multi-case (MRMC) analysis is applied to account for both reader and case variability when evaluating the clinical performance of a medical imaging device or reader performance under different reading modalities. For a clinical task that measures a quantitative biomarker, an agreement analysis, such as limits of agreement (LOA), can be used. In this work, we decompose the total variation in the data using a three-way mixed-effect ANOVA model to estimate the MRMC variance of individual differences and the LOA between different reading modalities. There are rules for writing down the expectation of the mean squares in terms of the variance components for fully-crossed data, i.e., data where all the readers read all the cases in all modalities being studied. Sometimes the annotation task is labor-intensive and time-consuming or distributed across sites, so that a fully-crossed study is not practical. In this work, we focus on estimating the MRMC variance in the within- and between-reader and within- and between-modality LOA for an arbitrary study design. Simulation studies were conducted to validate the LOA variance estimates. The method was also applied to a dataset to compare pathologist performance for assessing the density of stromal tumor-infiltrating lymphocytes on different platforms.
hep-th/9410194
J{\o}rgen Rasmussen, Nbi
J. Rasmussen and M. Weis
Induced Topology on the Hoop Group
4 pages in LaTeX, NBI-HE-94-46
null
null
null
hep-th
null
A new topology is proposed on the space of holonomy equivalence classes of loops, induced by the topology of the space $\Sigma$ in which the loops are embedded. The possible role for the new topology in the context of the work by Ashtekar et al. is discussed.
[ { "created": "Wed, 26 Oct 1994 14:14:21 GMT", "version": "v1" } ]
2007-05-23
[ [ "Rasmussen", "J.", "" ], [ "Weis", "M.", "" ] ]
A new topology is proposed on the space of holonomy equivalence classes of loops, induced by the topology of the space $\Sigma$ in which the loops are embedded. The possible role for the new topology in the context of the work by Ashtekar et al. is discussed.
0912.2090
Balt van Rees
Kostas Skenderis, Balt C. van Rees
Holography and wormholes in 2+1 dimensions
37+20 pages, 23 figures; CMP version
Commun.Math.Phys.301:583-626,2011
10.1007/s00220-010-1163-z
ITF-2009-27
hep-th
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide a holographic interpretation of a class of three-dimensional wormhole spacetimes. These spacetimes have multiple asymptotic regions which are separated from each other by horizons. Each such region is isometric to the BTZ black hole and there is non-trivial spacetime topology hidden behind the horizons. We show that application of the real-time gauge/gravity duality results in a complete holographic description of these spacetimes with the dual state capturing the non-trivial topology behind the horizons. We also show that these spacetimes are in correspondence with trivalent graphs and provide an explicit metric description with all physical parameters appearing in the metric.
[ { "created": "Fri, 11 Dec 2009 20:21:52 GMT", "version": "v1" }, { "created": "Wed, 1 Sep 2010 09:59:26 GMT", "version": "v2" } ]
2011-02-15
[ [ "Skenderis", "Kostas", "" ], [ "van Rees", "Balt C.", "" ] ]
We provide a holographic interpretation of a class of three-dimensional wormhole spacetimes. These spacetimes have multiple asymptotic regions which are separated from each other by horizons. Each such region is isometric to the BTZ black hole and there is non-trivial spacetime topology hidden behind the horizons. We show that application of the real-time gauge/gravity duality results in a complete holographic description of these spacetimes with the dual state capturing the non-trivial topology behind the horizons. We also show that these spacetimes are in correspondence with trivalent graphs and provide an explicit metric description with all physical parameters appearing in the metric.
hep-th/9403142
Robert Coquereaux
R. Coquereaux
Triangular dissections, aperiodic tilings and Jones algebras
14 pages. Revised version. 18 Postcript figures, a 500 kb uuencoded file called images.uu available by mosaic or gopher from gopher://cpt.univ-mrs.fr/11/preprints/94/fundamental-interactions/94-P.3020
Adv.Appl.Math. 16 (1995) 402-424
null
CPT - 94 /P.3020
hep-th funct-an math.MG math.QA
null
The Bratteli diagram associated with a given bicolored Dynkin-Coxeter graph of type $A_n$ determines planar fractal sets obtained by infinite dissections of a given triangle. All triangles appearing in the dissection process have angles that are multiples of $\pi/(n+1)$. There are usually several possible infinite dissections compatible with a given $n$, but a given one makes use of $n/2$ triangle types if $n$ is even. Jones algebras with index $[ 4 \ \cos^2{\pi \over n+1}]^{-1}$ (values in the discrete range) act naturally on vector spaces associated with those fractal sets. Triangles of a given type are always congruent at each step of the dissection process. In the particular case $n=4$, they are isometric, and the whole structure leads, after proper inflation, to aperiodic Penrose tilings. The ``tilings'' associated with other values of the index are discussed and shown to be encoded by equivalence classes of infinite sequences (with appropriate constraints) using $n/2$ digits (if $n$ is even), generalizing the Fibonacci numbers.
[ { "created": "Wed, 23 Mar 1994 15:07:30 GMT", "version": "v1" }, { "created": "Mon, 27 Mar 1995 11:32:27 GMT", "version": "v2" } ]
2008-02-03
[ [ "Coquereaux", "R.", "" ] ]
The Bratteli diagram associated with a given bicolored Dynkin-Coxeter graph of type $A_n$ determines planar fractal sets obtained by infinite dissections of a given triangle. All triangles appearing in the dissection process have angles that are multiples of $\pi/(n+1)$. There are usually several possible infinite dissections compatible with a given $n$, but a given one makes use of $n/2$ triangle types if $n$ is even. Jones algebras with index $[ 4 \ \cos^2{\pi \over n+1}]^{-1}$ (values in the discrete range) act naturally on vector spaces associated with those fractal sets. Triangles of a given type are always congruent at each step of the dissection process. In the particular case $n=4$, they are isometric, and the whole structure leads, after proper inflation, to aperiodic Penrose tilings. The ``tilings'' associated with other values of the index are discussed and shown to be encoded by equivalence classes of infinite sequences (with appropriate constraints) using $n/2$ digits (if $n$ is even), generalizing the Fibonacci numbers.
2307.14292
Pedro Montealegre
Pierre Fraigniaud, Fr\'ed\'eric Mazoit, Pedro Montealegre, Ivan Rapaport, Ioan Todinca
Distributed Certification for Classes of Dense Graphs
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
A proof-labeling scheme (PLS) for a boolean predicate $\Pi$ on labeled graphs is a mechanism used for certifying the legality with respect to $\Pi$ of global network states in a distributed manner. In a PLS, a certificate is assigned to each processing node of the network, and the nodes are in charge of checking that the collection of certificates forms a global proof that the system is in a correct state, by exchanging the certificates once, between neighbors only. The main measure of complexity is the size of the certificates. Many PLSs have been designed for certifying specific predicates, including cycle-freeness, minimum-weight spanning tree, planarity, etc. In 2021, a breakthrough was obtained in the form of a meta-theorem stating that a large set of properties have compact PLSs in a large class of networks. Namely, for every $\mathrm{MSO}_2$ property $\Pi$ on labeled graphs, there exists a PLS for $\Pi$ with $O(\log n)$-bit certificates for all graphs of bounded tree-depth. This result has been extended to the larger class of graphs with bounded tree-width, using certificates on $O(\log^2 n)$ bits. We extend this result even further, to the larger class of graphs with bounded clique-width, which, as opposed to the two aforementioned classes, includes dense graphs. We show that, for every $\mathrm{MSO}_1$ property $\Pi$ on labeled graphs, there exists a PLS for $\Pi$ with $O(\log^2 n)$-bit certificates for all graphs of bounded clique-width.
[ { "created": "Wed, 26 Jul 2023 16:49:39 GMT", "version": "v1" } ]
2023-07-27
[ [ "Fraigniaud", "Pierre", "" ], [ "Mazoit", "Frédéric", "" ], [ "Montealegre", "Pedro", "" ], [ "Rapaport", "Ivan", "" ], [ "Todinca", "Ioan", "" ] ]
A proof-labeling scheme (PLS) for a boolean predicate $\Pi$ on labeled graphs is a mechanism used for certifying the legality with respect to $\Pi$ of global network states in a distributed manner. In a PLS, a certificate is assigned to each processing node of the network, and the nodes are in charge of checking that the collection of certificates forms a global proof that the system is in a correct state, by exchanging the certificates once, between neighbors only. The main measure of complexity is the size of the certificates. Many PLSs have been designed for certifying specific predicates, including cycle-freeness, minimum-weight spanning tree, planarity, etc. In 2021, a breakthrough was obtained in the form of a meta-theorem stating that a large set of properties have compact PLSs in a large class of networks. Namely, for every $\mathrm{MSO}_2$ property $\Pi$ on labeled graphs, there exists a PLS for $\Pi$ with $O(\log n)$-bit certificates for all graphs of bounded tree-depth. This result has been extended to the larger class of graphs with bounded tree-width, using certificates on $O(\log^2 n)$ bits. We extend this result even further, to the larger class of graphs with bounded clique-width, which, as opposed to the two aforementioned classes, includes dense graphs. We show that, for every $\mathrm{MSO}_1$ property $\Pi$ on labeled graphs, there exists a PLS for $\Pi$ with $O(\log^2 n)$-bit certificates for all graphs of bounded clique-width.
1903.06007
Esteban Bautista
Esteban Bautista and Patrice Abry and Paulo Gon\c{c}alves
$L^\gamma$-PageRank for Semi-Supervised Learning
Submitted to Applied Network Science (special issue on machine learning with graphs)
null
null
null
cs.SI cs.LG eess.SP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
PageRank for Semi-Supervised Learning has been shown to leverage data structures and limited tagged examples to yield meaningful classification. Despite these successes, classification performance can still be improved, particularly in cases of fuzzy graphs or unbalanced labeled data. To address such limitations, a novel approach based on powers of the Laplacian matrix $L^\gamma$ ($\gamma > 0$), referred to as $L^\gamma$-PageRank, is proposed. Its theoretical study shows that it operates on signed graphs, where nodes belonging to one same class are more likely to share positive edges while nodes from different classes are more likely to be connected with negative edges. It is shown that by selecting an optimal $\gamma$, classification performance can be significantly enhanced. A procedure for the automated estimation of the optimal $\gamma$, from a unique observation of data, is devised and assessed. Experiments on several datasets demonstrate the effectiveness of both $L^\gamma$-PageRank classification and the optimal $\gamma$ estimation.
[ { "created": "Mon, 11 Mar 2019 16:31:37 GMT", "version": "v1" } ]
2019-03-15
[ [ "Bautista", "Esteban", "" ], [ "Abry", "Patrice", "" ], [ "Gonçalves", "Paulo", "" ] ]
PageRank for Semi-Supervised Learning has been shown to leverage data structures and limited tagged examples to yield meaningful classification. Despite these successes, classification performance can still be improved, particularly in cases of fuzzy graphs or unbalanced labeled data. To address such limitations, a novel approach based on powers of the Laplacian matrix $L^\gamma$ ($\gamma > 0$), referred to as $L^\gamma$-PageRank, is proposed. Its theoretical study shows that it operates on signed graphs, where nodes belonging to one same class are more likely to share positive edges while nodes from different classes are more likely to be connected with negative edges. It is shown that by selecting an optimal $\gamma$, classification performance can be significantly enhanced. A procedure for the automated estimation of the optimal $\gamma$, from a unique observation of data, is devised and assessed. Experiments on several datasets demonstrate the effectiveness of both $L^\gamma$-PageRank classification and the optimal $\gamma$ estimation.
2307.02575
Hannah Kerner
Hannah Kerner, Catherine Nakalembe, Adam Yang, Ivan Zvonkov, Ryan McWeeny, Gabriel Tseng, Inbal Becker-Reshef
How accurate are existing land cover maps for agriculture in Sub-Saharan Africa?
null
Scientific Data, 11(1), 486
10.1038/s41597-024-03306-z
null
cs.LG cs.CY
http://creativecommons.org/licenses/by-sa/4.0/
Satellite Earth observations (EO) can provide affordable and timely information for assessing crop conditions and food production. Such monitoring systems are essential in Africa, where there is high food insecurity and sparse agricultural statistics. EO-based monitoring systems require accurate cropland maps to provide information about croplands, but there is a lack of data to determine which of the many available land cover maps most accurately identify cropland in African countries. This study provides a quantitative evaluation and intercomparison of 11 publicly available land cover maps to assess their suitability for cropland classification and EO-based agriculture monitoring in Africa using statistically rigorous reference datasets from 8 countries. We hope the results of this study will help users determine the most suitable map for their needs and encourage future work to focus on resolving inconsistencies between maps and improving accuracy in low-accuracy regions.
[ { "created": "Wed, 5 Jul 2023 18:17:23 GMT", "version": "v1" }, { "created": "Sun, 2 Jun 2024 11:42:03 GMT", "version": "v2" } ]
2024-06-04
[ [ "Kerner", "Hannah", "" ], [ "Nakalembe", "Catherine", "" ], [ "Yang", "Adam", "" ], [ "Zvonkov", "Ivan", "" ], [ "McWeeny", "Ryan", "" ], [ "Tseng", "Gabriel", "" ], [ "Becker-Reshef", "Inbal", "" ] ]
Satellite Earth observations (EO) can provide affordable and timely information for assessing crop conditions and food production. Such monitoring systems are essential in Africa, where there is high food insecurity and sparse agricultural statistics. EO-based monitoring systems require accurate cropland maps to provide information about croplands, but there is a lack of data to determine which of the many available land cover maps most accurately identify cropland in African countries. This study provides a quantitative evaluation and intercomparison of 11 publicly available land cover maps to assess their suitability for cropland classification and EO-based agriculture monitoring in Africa using statistically rigorous reference datasets from 8 countries. We hope the results of this study will help users determine the most suitable map for their needs and encourage future work to focus on resolving inconsistencies between maps and improving accuracy in low-accuracy regions.
2310.05710
Erich Schikuta
Johannes Koppenwallner, Erich Schikuta
DiCE -- A Data Encryption Proxy for the Cloud
12 pages
null
null
null
cs.CR cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Outsourcing a relational database to the cloud offers several benefits, including scalability, availability, and cost-effectiveness. However, there are concerns about the confidentiality and security of the outsourced data. A general approach here would be to encrypt the data with a standardized encryption algorithm and then store it only in encrypted form in the cloud. The problem with this approach, however, is that encryption destroys important properties of the data, such as sort order, format, or comparability, which are essential for the functioning of database queries. One solution to this problem is the use of (e.g. order-preserving) encryption algorithms, which preserve these properties in the encrypted data, thus enabling queries on encrypted data. These algorithms range from simple ones like Caesar encryption to secure ones like mOPE. In order to make these algorithms as easy to use as possible, ``DiCE'', a JDBC driver, was developed that parses SQL queries as a proxy and transparently encrypts and decrypts them. This allows many queries to be executed on an encrypted database in the cloud with (nearly) the same performance as on unencrypted databases. The DiCE driver can be used with any other JDBC driver and therefore supports a variety of databases. The driver can be configured to support different encryption algorithms. To keep track of the operations, the ``DiCE Information Client'' was developed to monitor the driver's encryption and decryption. Although the performance analysis shows a certain overhead due to the parsing and encryption of the SQL queries in the DiCE driver, this overhead is significantly reduced by other influencing factors such as the network and parallel queries.
[ { "created": "Mon, 9 Oct 2023 13:33:59 GMT", "version": "v1" } ]
2023-10-10
[ [ "Koppenwallner", "Johannes", "" ], [ "Schikuta", "Erich", "" ] ]
Outsourcing a relational database to the cloud offers several benefits, including scalability, availability, and cost-effectiveness. However, there are concerns about the confidentiality and security of the outsourced data. A general approach here would be to encrypt the data with a standardized encryption algorithm and then store it only in encrypted form in the cloud. The problem with this approach, however, is that encryption destroys important properties of the data, such as sort order, format, or comparability, which are essential for the functioning of database queries. One solution to this problem is the use of (e.g. order-preserving) encryption algorithms, which preserve these properties in the encrypted data, thus enabling queries on encrypted data. These algorithms range from simple ones like Caesar encryption to secure ones like mOPE. In order to make these algorithms as easy to use as possible, ``DiCE'', a JDBC driver, was developed that parses SQL queries as a proxy and transparently encrypts and decrypts them. This allows many queries to be executed on an encrypted database in the cloud with (nearly) the same performance as on unencrypted databases. The DiCE driver can be used with any other JDBC driver and therefore supports a variety of databases. The driver can be configured to support different encryption algorithms. To keep track of the operations, the ``DiCE Information Client'' was developed to monitor the driver's encryption and decryption. Although the performance analysis shows a certain overhead due to the parsing and encryption of the SQL queries in the DiCE driver, this overhead is significantly reduced by other influencing factors such as the network and parallel queries.
1209.6117
A.G. Tsuchiya
A.G. Tsuchiya
On the pole structures of the disconnected part of hyper elliptic g loop M point super string amplitudes
Genus one result is corrected and the proof modified
null
null
null
hep-th
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structures of the disconnected part of higher-genus superstring amplitudes, restricted to the hyperelliptic cases, are investigated in the NSR formalism, based on the D'Hoker-Phong and other recent results. A set of equations, which we can regard as a basic tool to systematically sum over the spin structures of any genus-g, M-point amplitude, is derived using a classical result on Abelian functions. We discuss structures of genus-g, M-point massless external boson superstring amplitudes by assuming that the spin structure dependence of any of the disconnected amplitudes is only on one kind of constant: the genus-g Weierstrass $\wp$ function evaluated at the sum of g half periods chosen out of the 2g+1 half periods. This is a natural generalization of the genus-1 case. This assumption will be validated by a conjectured theorem which states that the spin-structure-dependent part of any string amplitude naturally decomposes into two parts. One is composed of manifestly modular invariant functions of the positions of the inserted operators, and the other is a polynomial in the $\wp$-function constants, related only to the moduli of Riemann surfaces. It is shown that this is actually the case for any M for g=1, and for M=1,2,3 for any g. Due to a technical problem, our consideration is at present restricted to the case where g(g+1)/2 is odd. Example calculations are shown for genus 2 by the method described here. In particular, our method correctly reproduces the bi-holomorphic 1-form of the D'Hoker-Phong result for the four-point amplitudes of the disconnected parts.
[ { "created": "Thu, 27 Sep 2012 03:34:44 GMT", "version": "v1" }, { "created": "Sun, 7 Oct 2012 11:23:41 GMT", "version": "v2" }, { "created": "Fri, 3 Apr 2015 00:21:37 GMT", "version": "v3" } ]
2015-04-06
[ [ "Tsuchiya", "A. G.", "" ] ]
Structures of the disconnected part of higher-genus superstring amplitudes, restricted to the hyperelliptic cases, are investigated in the NSR formalism, based on the D'Hoker-Phong and other recent results. A set of equations, which we can regard as a basic tool to systematically sum over the spin structures of any genus-g, M-point amplitude, is derived using a classical result on Abelian functions. We discuss structures of genus-g, M-point massless external boson superstring amplitudes by assuming that the spin structure dependence of any of the disconnected amplitudes is only on one kind of constant: the genus-g Weierstrass $\wp$ function evaluated at the sum of g half periods chosen out of the 2g+1 half periods. This is a natural generalization of the genus-1 case. This assumption will be validated by a conjectured theorem which states that the spin-structure-dependent part of any string amplitude naturally decomposes into two parts. One is composed of manifestly modular invariant functions of the positions of the inserted operators, and the other is a polynomial in the $\wp$-function constants, related only to the moduli of Riemann surfaces. It is shown that this is actually the case for any M for g=1, and for M=1,2,3 for any g. Due to a technical problem, our consideration is at present restricted to the case where g(g+1)/2 is odd. Example calculations are shown for genus 2 by the method described here. In particular, our method correctly reproduces the bi-holomorphic 1-form of the D'Hoker-Phong result for the four-point amplitudes of the disconnected parts.