Dataset schema (column: type, observed min-max):

id: string, length 9-13
submitter: string, length 4-48
authors: string, length 4-9.62k
title: string, length 4-343
comments: string, length 2-480
journal-ref: string, length 9-309
doi: string, length 12-138
report-no: categorical string, 277 distinct values
categories: string, length 8-87
license: categorical string, 9 distinct values
orig_abstract: string, length 27-3.76k
versions: list, length 1-15
update_date: string, length 10
authors_parsed: list, length 1-147
abstract: string, length 24-3.75k
0708.1256
Jose-Luis Aragon
J. López-Sauceda and J.L. Aragón
Eutacticity in sea urchin evolution
17 pages, 6 figures
Bulletin of Mathematical Biology, 70 (2008) 625-634
10.1007/s11538-007-9273-2
null
q-bio.QM
null
An eutactic star, in a n-dimensional space, is a set of N vectors which can be viewed as the projection of N orthogonal vectors in a N-dimensional space. By adequately associating a star of vectors to a particular sea urchin we propose that a measure of the eutacticity of the star constitutes a measure of the regularity of the sea urchin. Then we study changes of regularity (eutacticity) in a macroevolutive and taxonomic level of sea urchins belonging to the Echinoidea Class. An analysis considering changes through geological time suggests a high degree of regularity in the shape of these organisms through their evolution. Rare deviations from regularity measured in Holasteroida order are discussed.
[ { "created": "Thu, 9 Aug 2007 18:31:07 GMT", "version": "v1" } ]
2021-11-05
[ [ "López-Sauceda", "J.", "" ], [ "Aragón", "J. L.", "" ] ]
A eutactic star in an n-dimensional space is a set of N vectors that can be viewed as the projection of N orthogonal vectors in an N-dimensional space. By suitably associating a star of vectors with a particular sea urchin, we propose that a measure of the eutacticity of the star constitutes a measure of the regularity of the sea urchin. We then study changes of regularity (eutacticity) at a macroevolutionary and taxonomic level in sea urchins belonging to the class Echinoidea. An analysis of changes through geological time suggests a high degree of regularity in the shape of these organisms throughout their evolution. Rare deviations from regularity, measured in the order Holasteroida, are discussed.
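The eutacticity measure used in this abstract has a compact linear-algebra characterization: a star of vectors {a_i} is eutactic precisely when the sum of outer products a_i a_i^T is a positive multiple of the identity. A minimal 2D sketch of that test (the five-fold star and the perturbation below are illustrative choices, not data from the paper):

```python
import math

def outer_sum(vectors):
    """Sum of outer products a_i a_i^T for 2D vectors."""
    m = [[0.0, 0.0], [0.0, 0.0]]
    for x, y in vectors:
        m[0][0] += x * x
        m[0][1] += x * y
        m[1][0] += y * x
        m[1][1] += y * y
    return m

def is_eutactic(vectors, tol=1e-9):
    """A star is eutactic iff sum a_i a_i^T = c * I for some c > 0."""
    m = outer_sum(vectors)
    return (abs(m[0][1]) < tol and abs(m[1][0]) < tol
            and abs(m[0][0] - m[1][1]) < tol)

# Five-fold star, mimicking the pentaradial symmetry of regular sea urchins
star = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
        for k in range(5)]
# Stretching one ray gives an irregular (non-eutactic) star
bent = [(x * (1.3 if k == 0 else 1.0), y) for k, (x, y) in enumerate(star)]

print(is_eutactic(star))   # regular star passes the eutacticity test
print(is_eutactic(bent))   # deformed star fails it
```

Deviations from the proportionality sum a_i a_i^T = c I are the kind of departure from regularity the abstract quantifies.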
1512.03988
Andrew Bruce Duncan
Andrew Duncan, Radek Erban, Konstantinos Zygalakis
Hybrid framework for the simulation of stochastic chemical kinetics
37 pages, 6 figures
null
10.1016/j.jcp.2016.08.034
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA). While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the "fast" reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions becomes significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well- mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For low reactant reactions the underlying behaviour is purely discrete, while purely diffusive when the concentrations of all species is large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretizations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.
[ { "created": "Sun, 13 Dec 2015 02:40:27 GMT", "version": "v1" } ]
2016-10-12
[ [ "Duncan", "Andrew", "" ], [ "Erban", "Radek", "" ], [ "Zygalakis", "Konstantinos", "" ] ]
Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA). While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the "fast" reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as when simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For low-copy-number reactants the underlying behaviour is purely discrete, while it is purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large-volume scaling limit is obtained, and three different numerical discretizations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.
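The Gillespie SSA that this abstract builds on can be stated in a few lines. A minimal sketch for a hypothetical birth-death system (the rates, horizon, and sample count are arbitrary illustration values, not from the paper):

```python
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, rng):
    """Exact Gillespie SSA for a birth-death system:
       0 -> X at rate k_birth, X -> 0 at rate k_death * x."""
    t, x = 0.0, x0
    while t < t_end:
        a1 = k_birth
        a2 = k_death * x
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)      # exponential time to the next event
        if t >= t_end:
            break
        if rng.random() * a0 < a1:    # pick a reaction proportional to propensity
            x += 1
        else:
            x -= 1
    return x

rng = random.Random(0)
# Stationary mean of this birth-death process is k_birth / k_death = 50.
samples = [gillespie_birth_death(5.0, 0.1, 0, 200.0, rng) for _ in range(200)]
mean = sum(samples) / len(samples)
print(round(mean, 1))
```

Each iteration draws one exponential waiting time from the total propensity and fires one reaction; this exactness is precisely what makes the SSA expensive when propensities are large, motivating hybrid schemes like the paper's jump-diffusion model.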
1303.3057
Jie Liang
Youfang Cao and Jie Liang
Optimal enumeration of state space of finitely buffered stochastic molecular networks and exact computation of steady state landscape probability
23 pages, 7 figures
BMC Systems Biology, 2008, 2:30:1-13
10.1186/1752-0509-2-30
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochasticity plays important roles in molecular networks when molecular concentrations are in the range of $0.1 \mu$M to $10 n$M (about 100 to 10 copies in a cell). The chemical master equation provides a fundamental framework for studying these networks, and the time-varying landscape probability distribution over the full microstates provide a full characterization of the network dynamics. A complete characterization of the space of the microstates is a prerequisite for obtaining the full landscape probability distribution of a network. However, there are neither closed-form solutions nor algorithms fully describing all microstates for a given molecular network. We have developed an algorithm that can exhaustively enumerate the microstates of a molecular network of small copy numbers under the finite buffer condition that the net gain in newly synthesized molecules is smaller than a predefined limit. We also describe a simple method for computing the exact mean or steady state landscape probability distribution over microstates. We show how the full landscape probability for the gene networks of the self-regulating gene and the toggle-switch in the steady state can be fully characterized. We also give an example using the MAPK cascade network. Our algorithm works for networks of small copy numbers buffered with a finite copy number of net molecules that can be synthesized, regardless of the reaction stoichiometry, and is optimal in both storage and time complexity. The buffer size is limited by the available memory or disk storage. Our algorithm is applicable to a class of biological networks when the copy numbers of molecules are small and the network is closed, or the network is open but the net gain in newly synthesized molecules does not exceed a predefined buffer capacity.
[ { "created": "Tue, 12 Mar 2013 23:33:26 GMT", "version": "v1" } ]
2013-03-14
[ [ "Cao", "Youfang", "" ], [ "Liang", "Jie", "" ] ]
Stochasticity plays important roles in molecular networks when molecular concentrations are in the range of 0.1 $\mu$M to 10 nM (about 100 to 10 copies in a cell). The chemical master equation provides a fundamental framework for studying these networks, and the time-varying landscape probability distribution over the full microstates provides a full characterization of the network dynamics. A complete characterization of the space of the microstates is a prerequisite for obtaining the full landscape probability distribution of a network. However, there are neither closed-form solutions nor algorithms fully describing all microstates for a given molecular network. We have developed an algorithm that can exhaustively enumerate the microstates of a molecular network of small copy numbers under the finite buffer condition that the net gain in newly synthesized molecules is smaller than a predefined limit. We also describe a simple method for computing the exact mean or steady state landscape probability distribution over microstates. We show how the full landscape probability for the gene networks of the self-regulating gene and the toggle-switch in the steady state can be fully characterized. We also give an example using the MAPK cascade network. Our algorithm works for networks of small copy numbers buffered with a finite copy number of net molecules that can be synthesized, regardless of the reaction stoichiometry, and is optimal in both storage and time complexity. The buffer size is limited by the available memory or disk storage. Our algorithm is applicable to a class of biological networks when the copy numbers of molecules are small and the network is closed, or the network is open but the net gain in newly synthesized molecules does not exceed a predefined buffer capacity.
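The finite-buffer enumeration idea can be illustrated with a toy open network (the network and buffer size below are hypothetical, not the paper's examples): synthesis 0 -> A is allowed only while the total copy number is below the buffer capacity, plus conversion A -> B and degradation of both species. A breadth-first search over reaction transitions then visits every reachable microstate exactly once:

```python
from collections import deque

def enumerate_microstates(buffer_cap):
    """BFS over microstates (a, b) of a toy network:
       0 -> A (only while a + b < buffer_cap), A -> B, A -> 0, B -> 0."""
    def successors(state):
        a, b = state
        out = []
        if a + b < buffer_cap:
            out.append((a + 1, b))      # synthesis of A, gated by the buffer
        if a > 0:
            out.append((a - 1, b + 1))  # conversion A -> B
            out.append((a - 1, b))      # degradation of A
        if b > 0:
            out.append((a, b - 1))      # degradation of B
        return out

    start = (0, 0)
    seen = {start}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

states = enumerate_microstates(5)
# Every (a, b) with a + b <= 5 is reachable: (5+1)(5+2)/2 = 21 microstates.
print(len(states))
```

With the microstates enumerated, the truncated master equation becomes a finite linear system whose steady-state distribution can be computed exactly, which is the spirit of the paper's approach.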
2312.13521
Matthew Lebo
Samuel J. Aronson (1,2), Kalotina Machini (1,3), Jiyeon Shin (2), Pranav Sriraman (1), Sean Hamill (4), Emma R. Henricks (1), Charlotte Mailly (1,2), Angie J. Nottage (1), Sami S. Amr (1,3), Michael Oates (1,2), Matthew S. Lebo (1,3) ((1) Mass General Brigham Personalized Medicine, (2) Accelerator for Clinical Transformation, Mass General Brigham, (3) Department of Pathology, Brigham and Women's Hospital, (4) Microsoft Corporation)
Preparing to Integrate Generative Pretrained Transformer Series 4 models into Genetic Variant Assessment Workflows: Assessing Performance, Drift, and Nondeterminism Characteristics Relative to Classifying Functional Evidence in Literature
5 pages, 1 table, 4 figures, 2 supplementary tables, 1 supplementary figure. These authors contributed equally: Samuel J. Aronson, Kalotina Machini, and Jiyeon Shin. Corresponding author: Samuel J. Aronson
null
null
null
q-bio.GN cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background. Large Language Models (LLMs) hold promise for improving genetic variant literature review in clinical testing. We assessed Generative Pretrained Transformer 4's (GPT-4) performance, nondeterminism, and drift to inform its suitability for use in complex clinical processes. Methods. A 2-prompt process for classification of functional evidence was optimized using a development set of 45 articles. The prompts asked GPT-4 to supply all functional data present in an article related to a variant or indicate that no functional evidence is present. For articles indicated as containing functional evidence, a second prompt asked GPT-4 to classify the evidence into pathogenic, benign, or intermediate/inconclusive categories. A final test set of 72 manually classified articles was used to test performance. Results. Over a 2.5-month period (Dec 2023-Feb 2024), we observed substantial differences in intraday (nondeterminism) and across day (drift) results, which lessened after 1/18/24. This variability is seen within and across models in the GPT-4 series, affecting different performance statistics to different degrees. Twenty runs after 1/18/24 identified articles containing functional evidence with 92.2% sensitivity, 95.6% positive predictive value (PPV) and 86.3% negative predictive value (NPV). The second prompt's identified pathogenic functional evidence with 90.0% sensitivity, 74.0% PPV and 95.3% NVP and for benign evidence with 88.0% sensitivity, 76.6% PPV and 96.9% NVP. Conclusion. Nondeterminism and drift within LLMs must be assessed and monitored when introducing LLM based functionality into clinical workflows. Failing to do this assessment or accounting for these challenges could lead to incorrect or missing information that is critical for patient care. The performance of our prompts appears adequate to assist in article prioritization but not in automated decision making.
[ { "created": "Thu, 21 Dec 2023 01:56:00 GMT", "version": "v1" }, { "created": "Fri, 16 Feb 2024 21:25:20 GMT", "version": "v2" } ]
2024-02-20
[ [ "Aronson", "Samuel J.", "" ], [ "Machini", "Kalotina", "" ], [ "Shin", "Jiyeon", "" ], [ "Sriraman", "Pranav", "" ], [ "Hamill", "Sean", "" ], [ "Henricks", "Emma R.", "" ], [ "Mailly", "Charlotte", "" ], [ "Nottage", "Angie J.", "" ], [ "Amr", "Sami S.", "" ], [ "Oates", "Michael", "" ], [ "Lebo", "Matthew S.", "" ] ]
Background. Large Language Models (LLMs) hold promise for improving genetic variant literature review in clinical testing. We assessed Generative Pretrained Transformer 4's (GPT-4) performance, nondeterminism, and drift to inform its suitability for use in complex clinical processes. Methods. A 2-prompt process for classification of functional evidence was optimized using a development set of 45 articles. The prompts asked GPT-4 to supply all functional data present in an article related to a variant or to indicate that no functional evidence is present. For articles indicated as containing functional evidence, a second prompt asked GPT-4 to classify the evidence into pathogenic, benign, or intermediate/inconclusive categories. A final test set of 72 manually classified articles was used to test performance. Results. Over a 2.5-month period (Dec 2023-Feb 2024), we observed substantial differences in intraday (nondeterminism) and across-day (drift) results, which lessened after 1/18/24. This variability is seen within and across models in the GPT-4 series, affecting different performance statistics to different degrees. Twenty runs after 1/18/24 identified articles containing functional evidence with 92.2% sensitivity, 95.6% positive predictive value (PPV) and 86.3% negative predictive value (NPV). The second prompt identified pathogenic functional evidence with 90.0% sensitivity, 74.0% PPV and 95.3% NPV, and benign evidence with 88.0% sensitivity, 76.6% PPV and 96.9% NPV. Conclusion. Nondeterminism and drift within LLMs must be assessed and monitored when introducing LLM-based functionality into clinical workflows. Failing to perform this assessment or to account for these challenges could lead to incorrect or missing information that is critical for patient care. The performance of our prompts appears adequate to assist in article prioritization but not in automated decision making.
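The performance figures reported in the abstract (sensitivity, PPV, NPV) follow directly from a 2x2 confusion matrix. A small sketch with hypothetical counts, not the study's actual tallies:

```python
def classification_stats(tp, fp, fn, tn):
    """Sensitivity, positive and negative predictive value
       from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of truly positive articles found
    ppv = tp / (tp + fp)          # reliability of positive calls
    npv = tn / (tn + fn)          # reliability of negative calls
    return sensitivity, ppv, npv

# Hypothetical counts for a single run over a 72-article test set
sens, ppv, npv = classification_stats(tp=46, fp=2, fn=4, tn=20)
print(round(sens, 3), round(ppv, 3), round(npv, 3))
```

Recomputing these statistics per run, as the study does across twenty runs, is what exposes nondeterminism (intraday spread) and drift (across-day trends).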
1112.2662
Derdei Bichara
Phillipe Adda and Derdei Bichara
Global stability for SIR and SIRS models with differential mortality
2 pages, one figure
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider SIR and SIRS models with differential mortality. Global stability of equilibria is established by using Lyapunov's method
[ { "created": "Mon, 12 Dec 2011 19:26:07 GMT", "version": "v1" } ]
2011-12-13
[ [ "Adda", "Phillipe", "" ], [ "Bichara", "Derdei", "" ] ]
We consider SIR and SIRS models with differential mortality. Global stability of equilibria is established by using Lyapunov's method.
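A minimal numerical sketch of an SIR model with differential (per-compartment) mortality; the parameter values are illustrative, not from the paper. With the basic reproduction number R0 = beta / (gamma + mu_i) below one, trajectories approach the disease-free equilibrium, which is the kind of global stability the Lyapunov argument establishes analytically:

```python
def simulate_sir(beta, gamma, mu_s, mu_i, mu_r, births, s0, i0, r0, dt, steps):
    """Forward-Euler integration of an SIR model in which each
       compartment carries its own mortality rate (differential mortality)."""
    s, i, r = s0, i0, r0
    history = []
    for _ in range(steps):
        n = s + i + r
        ds = births - beta * s * i / n - mu_s * s
        di = beta * s * i / n - gamma * i - mu_i * i
        dr = gamma * i - mu_r * r
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        history.append((s, i, r))
    return history

# R0 = 0.08 / (0.1 + 0.02) < 1, so the infection dies out.
traj = simulate_sir(beta=0.08, gamma=0.1, mu_s=0.01, mu_i=0.02, mu_r=0.01,
                    births=10.0, s0=990.0, i0=10.0, r0=0.0, dt=0.1, steps=5000)
final_i = traj[-1][1]
print(final_i < 1e-3)
```

The simulation only illustrates convergence for one initial condition; Lyapunov's method is what upgrades this to a global statement for all initial conditions.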
2407.06703
Gian Marco Visani
Gian Marco Visani, Michael N. Pun, William Galvin, Eric Daniel, Kevin Borisiak, Utheri Wagura, Armita Nourmohammad
HERMES: Holographic Equivariant neuRal network model for Mutational Effect and Stability prediction
16 pages, 8 figures
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
Predicting the stability and fitness effects of amino acid mutations in proteins is a cornerstone of biological discovery and engineering. Various experimental techniques have been developed to measure mutational effects, providing us with extensive datasets across a diverse range of proteins. By training on these data, traditional computational modeling and more recent machine learning approaches have advanced significantly in predicting mutational effects. Here, we introduce HERMES, a 3D rotationally equivariant structure-based neural network model for mutational effect and stability prediction. Pre-trained to predict amino acid propensity from its surrounding 3D structure, HERMES can be fine-tuned for mutational effects using our open-source code. We present a suite of HERMES models, pre-trained with different strategies, and fine-tuned to predict the stability effect of mutations. Benchmarking against other models shows that HERMES often outperforms or matches their performance in predicting mutational effect on stability, binding, and fitness. HERMES offers versatile tools for evaluating mutational effects and can be fine-tuned for specific predictive objectives.
[ { "created": "Tue, 9 Jul 2024 09:31:05 GMT", "version": "v1" } ]
2024-07-10
[ [ "Visani", "Gian Marco", "" ], [ "Pun", "Michael N.", "" ], [ "Galvin", "William", "" ], [ "Daniel", "Eric", "" ], [ "Borisiak", "Kevin", "" ], [ "Wagura", "Utheri", "" ], [ "Nourmohammad", "Armita", "" ] ]
Predicting the stability and fitness effects of amino acid mutations in proteins is a cornerstone of biological discovery and engineering. Various experimental techniques have been developed to measure mutational effects, providing us with extensive datasets across a diverse range of proteins. By training on these data, traditional computational modeling and more recent machine learning approaches have advanced significantly in predicting mutational effects. Here, we introduce HERMES, a 3D rotationally equivariant structure-based neural network model for mutational effect and stability prediction. Pre-trained to predict amino acid propensity from its surrounding 3D structure, HERMES can be fine-tuned for mutational effects using our open-source code. We present a suite of HERMES models, pre-trained with different strategies, and fine-tuned to predict the stability effect of mutations. Benchmarking against other models shows that HERMES often outperforms or matches their performance in predicting mutational effect on stability, binding, and fitness. HERMES offers versatile tools for evaluating mutational effects and can be fine-tuned for specific predictive objectives.
1305.6700
Carl Boettiger
Carl Boettiger, Noam Ross, Alan Hastings
Early warning signals: The charted and uncharted territories
null
Theoretical Ecology, 2013 (in production)
10.1007/s12080-013-0192-6
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by/3.0/
The realization that complex systems such as ecological communities can collapse or shift regimes suddenly and without rapid external forcing poses a serious challenge to our understanding and management of the natural world. The potential to identify early warning signals that would allow researchers and managers to predict such events before they happen has therefore been an invaluable discovery that offers a way forward in spite of such seemingly unpredictable behavior. Research into early warning signals has demonstrated that it is possible to define and detect such early warning signals in advance of a transition in certain contexts. Here we describe the pattern emerging as research continues to explore just how far we can generalize these results. A core of examples emerges that shares three properties: the phenomenon of rapid regime shifts, a pattern of 'critical slowing down' that can be used to detect the approaching shift, and a mechanism of bifurcation driving the sudden change. As research has expanded beyond these core examples, it is becoming clear that not all systems that show regime shifts exhibit critical slowing down, or vice versa. Even when systems exhibit critical slowing down, statistical detection is a challenge. We review the literature that explores these edge cases and highlight the need for (a) new early warning behaviors that can be used in cases where rapid shifts do not exhibit critical slowing down, (b) the development of methods to identify which behavior might be an appropriate signal when encountering a novel system; bearing in mind that a positive indication for some systems is a negative indication in others, and (c) statistical methods that can distinguish between signatures of early warning behaviors and noise.
[ { "created": "Wed, 29 May 2013 06:00:23 GMT", "version": "v1" } ]
2013-05-30
[ [ "Boettiger", "Carl", "" ], [ "Ross", "Noam", "" ], [ "Hastings", "Alan", "" ] ]
The realization that complex systems such as ecological communities can collapse or shift regimes suddenly and without rapid external forcing poses a serious challenge to our understanding and management of the natural world. The potential to identify early warning signals that would allow researchers and managers to predict such events before they happen has therefore been an invaluable discovery that offers a way forward in spite of such seemingly unpredictable behavior. Research into early warning signals has demonstrated that it is possible to define and detect such early warning signals in advance of a transition in certain contexts. Here we describe the pattern emerging as research continues to explore just how far we can generalize these results. A core of examples emerges that shares three properties: the phenomenon of rapid regime shifts, a pattern of 'critical slowing down' that can be used to detect the approaching shift, and a mechanism of bifurcation driving the sudden change. As research has expanded beyond these core examples, it is becoming clear that not all systems that show regime shifts exhibit critical slowing down, or vice versa. Even when systems exhibit critical slowing down, statistical detection is a challenge. We review the literature that explores these edge cases and highlight the need for (a) new early warning behaviors that can be used in cases where rapid shifts do not exhibit critical slowing down, (b) the development of methods to identify which behavior might be an appropriate signal when encountering a novel system, bearing in mind that a positive indication for some systems is a negative indication in others, and (c) statistical methods that can distinguish between signatures of early warning behaviors and noise.
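Critical slowing down is commonly detected as a rise in lag-1 autocorrelation within a sliding window. A sketch on a synthetic AR(1) series whose autocorrelation ramps upward, standing in for a system drifting toward a bifurcation (the surrogate process and window sizes are illustrative choices, not from the paper):

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[t] - mean) * (xs[t + 1] - mean) for t in range(n - 1)) / n
    return cov / var

rng = random.Random(42)
# AR(1) surrogate whose coefficient phi ramps from 0.2 to 0.95,
# mimicking slower and slower recovery from perturbations.
n = 4000
series = [0.0]
for t in range(1, n):
    phi = 0.2 + 0.75 * t / n
    series.append(phi * series[-1] + rng.gauss(0.0, 1.0))

early = lag1_autocorr(series[:1000])   # window far from the transition
late = lag1_autocorr(series[-1000:])   # window close to the transition
print(early < late)
```

The review's caveats apply directly to such indicators: a rising trend is only meaningful in systems where the approaching shift actually produces critical slowing down, and distinguishing a genuine trend from noise is itself a statistical problem.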
2301.09419
Diana Pham
Diana T. Pham and Zdzislaw E. Musielak
Lagrangian Formalism in Biology: II. Non-Standard and Null Lagrangians and their Role in Population Dynamics
16 pages, 2 tables
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Non-standard Lagrangians do not display any discernible energy-like terms, yet they give the same equations of motion as standard Lagrangians, which have easily identifiable energy-like terms. A new method to derive non-standard Lagrangians for second-order nonlinear differential equations with damping is developed and the limitations of this method are explored. It is shown that the limitations do not exist only for those nonlinear dynamical systems that can be converted into linear ones. The obtained results are applied to selected population dynamics models for which non-standard Lagrangians and their corresponding null Lagrangians and gauge functions are derived, and their roles in the population dynamics are discussed.
[ { "created": "Fri, 20 Jan 2023 14:00:04 GMT", "version": "v1" } ]
2023-01-24
[ [ "Pham", "Diana T.", "" ], [ "Musielak", "Zdzislaw E.", "" ] ]
Non-standard Lagrangians do not display any discernible energy-like terms, yet they give the same equations of motion as standard Lagrangians, which have easily identifiable energy-like terms. A new method to derive non-standard Lagrangians for second-order nonlinear differential equations with damping is developed and the limitations of this method are explored. It is shown that these limitations are absent only for those nonlinear dynamical systems that can be converted into linear ones. The obtained results are applied to selected population dynamics models, for which non-standard Lagrangians and their corresponding null Lagrangians and gauge functions are derived, and their roles in the population dynamics are discussed.
1402.4058
Ihor Lubashevsky
Ihor Lubashevsky and Bohdan Datsko
Fractional Dynamics and Multi-Slide Model of Human Memory
Submitted to 36th Annual Conference of the Cognitive Science Society, Quebec City, Canada, July 23-26 2014
null
null
null
q-bio.NC nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a single chunk model of long-term memory that combines the basic features of the ACT-R theory and the multiple trace memory architecture. The pivot point of the developed theory is a mathematical description of the creation of new memory traces caused by learning a certain fragment of information pattern and affected by the fragments of this pattern already retained by the current moment of time. Using the available psychological and physiological data these constructions are justified. The final equation governing the learning and forgetting processes is constructed in the form of the differential equation with the Caputo type fractional time derivative. Several characteristic situations of the learning (continuous and discontinuous) and forgetting processes are studied numerically. In particular, it is demonstrated that, first, the "learning" and "forgetting" exponents of the corresponding power laws of the memory fractional dynamics should be regarded as independent system parameters. Second, as far as the spacing effects are concerned, the longer the discontinuous learning process, the longer the time interval within which a subject remembers the information without its considerable lost. Besides, the latter relationship is a linear proportionality.
[ { "created": "Thu, 13 Feb 2014 05:06:43 GMT", "version": "v1" } ]
2014-02-18
[ [ "Lubashevsky", "Ihor", "" ], [ "Datsko", "Bohdan", "" ] ]
We propose a single-chunk model of long-term memory that combines the basic features of the ACT-R theory and the multiple trace memory architecture. The pivot point of the developed theory is a mathematical description of the creation of new memory traces caused by learning a certain fragment of an information pattern and affected by the fragments of this pattern already retained by the current moment of time. These constructions are justified using the available psychological and physiological data. The final equation governing the learning and forgetting processes is constructed in the form of a differential equation with a Caputo-type fractional time derivative. Several characteristic situations of the learning (continuous and discontinuous) and forgetting processes are studied numerically. In particular, it is demonstrated that, first, the "learning" and "forgetting" exponents of the corresponding power laws of the memory fractional dynamics should be regarded as independent system parameters. Second, as far as the spacing effects are concerned, the longer the discontinuous learning process, the longer the time interval within which a subject remembers the information without considerable loss. Moreover, the latter relationship is a linear proportionality.
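The qualitative claim here, that Caputo-type fractional dynamics yields power-law rather than exponential forgetting, can be checked numerically. A sketch using the standard Grunwald-Letnikov discretization for the simple relaxation law D^alpha f = -f with f(0) = 1 (this toy equation and its parameters are illustrative stand-ins, not the paper's full memory model):

```python
import math

def caputo_relaxation(alpha, h, steps):
    """Grunwald-Letnikov scheme for the Caputo relaxation equation
       D^alpha f(t) = -f(t), f(0) = 1, on a uniform grid of spacing h."""
    # Binomial weights of (1 - z)^alpha: w_0 = 1, w_k = w_{k-1} (k-1-alpha)/k
    w = [1.0]
    for k in range(1, steps + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    f = [1.0]
    partial = 1.0                      # running sum S_n = w_0 + ... + w_n
    for n in range(1, steps + 1):
        partial += w[n]
        # The Caputo derivative acts on f - f(0); solve implicitly for f_n:
        # sum_k w_k (f_{n-k} - 1) = -h^alpha f_n
        hist = sum(w[k] * f[n - k] for k in range(1, n + 1))
        f.append((partial - hist) / (1.0 + h ** alpha))
    return f

alpha, h, steps = 0.5, 0.05, 200       # integrate up to t = 10
f = caputo_relaxation(alpha, h, steps)
t_end = h * steps
# Memory decays, but far more slowly than exponential forgetting would.
print(f[-1] > math.exp(-t_end))
```

The solution of this toy equation is the Mittag-Leffler function, whose long-time tail is a power law; that heavy tail is what lets a fractional model reproduce the slow forgetting curves the abstract describes.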
2404.07325
J Gregory Caporaso
Chloe Herman, Bridget M. Barker, Thais F. Bartelli, Vidhi Chandra, Rosa Krajmalnik-Brown, Mary Jewell, Le Li, Chen Liao, Florencia McAllister, Khemlal Nirmalkar, Joao B. Xavier, J. Gregory Caporaso
Assessing Engraftment Following Fecal Microbiota Transplant
18 pages, 6 figures, 2 supplemental tables
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Fecal Microbiota Transplant (FMT) is an FDA approved treatment for recurrent Clostridium difficile infections, and is being explored for other clinical applications, from alleviating digestive and neurological disorders, to priming the microbiome for cancer treatment, and restoring microbiomes impacted by cancer treatment. Quantifying the extent of engraftment following an FMT is important in determining if a recipient didn't respond because the engrafted microbiome didn't produce the desired outcomes (a successful FMT, but negative treatment outcome), or the microbiome didn't engraft (an unsuccessful FMT and negative treatment outcome). The lack of a consistent methodology for quantifying FMT engraftment extent hinders the assessment of FMT success and its relation to clinical outcomes, and presents challenges for comparing FMT results and protocols across studies. Here we review 46 studies of FMT in humans and model organisms and group their approaches for assessing the extent to which an FMT engrafts into three criteria: 1) Chimeric Asymmetric Community Coalescence investigates microbiome shifts following FMT engraftment. 2) Donated Microbiome Indicator Features tracks donated microbiome features as a signal of engraftment with methods such as differential abundance testing based on the current sample collection, or tracking changes in feature abundances that have been previously identified. 3) Temporal Stability examines how resistant post-FMT recipient's microbiomes are to reverting back to their baseline microbiome. Investigated together, these criteria provide a clear assessment of microbiome engraftment. We discuss the pros and cons of each of these criteria, providing illustrative examples of their application. We also introduce key terminology and recommendations on how FMT studies can be analyzed for rigorous engraftment extent assessment.
[ { "created": "Wed, 10 Apr 2024 20:10:58 GMT", "version": "v1" } ]
2024-04-12
[ [ "Herman", "Chloe", "" ], [ "Barker", "Bridget M.", "" ], [ "Bartelli", "Thais F.", "" ], [ "Chandra", "Vidhi", "" ], [ "Krajmalnik-Brown", "Rosa", "" ], [ "Jewell", "Mary", "" ], [ "Li", "Le", "" ], [ "Liao", "Chen", "" ], [ "McAllister", "Florencia", "" ], [ "Nirmalkar", "Khemlal", "" ], [ "Xavier", "Joao B.", "" ], [ "Caporaso", "J. Gregory", "" ] ]
Fecal Microbiota Transplant (FMT) is an FDA-approved treatment for recurrent Clostridium difficile infections, and is being explored for other clinical applications, from alleviating digestive and neurological disorders, to priming the microbiome for cancer treatment, and restoring microbiomes impacted by cancer treatment. Quantifying the extent of engraftment following an FMT is important in determining if a recipient didn't respond because the engrafted microbiome didn't produce the desired outcomes (a successful FMT, but negative treatment outcome), or the microbiome didn't engraft (an unsuccessful FMT and negative treatment outcome). The lack of a consistent methodology for quantifying FMT engraftment extent hinders the assessment of FMT success and its relation to clinical outcomes, and presents challenges for comparing FMT results and protocols across studies. Here we review 46 studies of FMT in humans and model organisms and group their approaches for assessing the extent to which an FMT engrafts into three criteria: 1) Chimeric Asymmetric Community Coalescence investigates microbiome shifts following FMT engraftment. 2) Donated Microbiome Indicator Features tracks donated microbiome features as a signal of engraftment with methods such as differential abundance testing based on the current sample collection, or tracking changes in feature abundances that have been previously identified. 3) Temporal Stability examines how resistant post-FMT recipients' microbiomes are to reverting back to their baseline microbiome. Investigated together, these criteria provide a clear assessment of microbiome engraftment. We discuss the pros and cons of each of these criteria, providing illustrative examples of their application. We also introduce key terminology and recommendations on how FMT studies can be analyzed for rigorous engraftment extent assessment.
q-bio/0408015
Tonau Nakai
Ayako Yamada, Koji Kubo, Tonau Nakai, Kanta Tsumoto and Kenichi Yoshikawa
All-or-none switching of transcriptional activity on single DNA molecules caused by a discrete conformational transition
14 pages, 2 figures
null
10.1063/1.1937990
null
q-bio.BM q-bio.SC
null
Recently, it has been confirmed that long duplex DNA molecules with sizes larger than several tens of kilo-base pairs (kbp), exhibit a discrete conformational transition from an elongated coil state to a compact globule state upon the addition of various kinds of chemical species that usually induce DNA condensation. In this study, we performed a single-molecule observation on a large DNA, Lambda ZAP II DNA (ca. 41 kbp), in a solution containing RNA polymerase and substrates along with spermine, a tetravalent cation, at different concentrations, by use of fluorescence staining of both DNA and RNA. We found that transcription, or RNA production, is completely inhibited in the compact state, but is actively performed in the unfolded coil state. Such an all-or-none effect on transcriptional activity induced by the discrete conformational transition of single DNA molecules is discussed in relation to the mechanism of the regulation of large-scale genetic activity.
[ { "created": "Wed, 18 Aug 2004 17:02:29 GMT", "version": "v1" } ]
2009-11-10
[ [ "Yamada", "Ayako", "" ], [ "Kubo", "Koji", "" ], [ "Nakai", "Tonau", "" ], [ "Tsumoto", "Kanta", "" ], [ "Yoshikawa", "Kenichi", "" ] ]
Recently, it has been confirmed that long duplex DNA molecules with sizes larger than several tens of kilo-base pairs (kbp) exhibit a discrete conformational transition from an elongated coil state to a compact globule state upon the addition of various kinds of chemical species that usually induce DNA condensation. In this study, we performed a single-molecule observation on a large DNA, Lambda ZAP II DNA (ca. 41 kbp), in a solution containing RNA polymerase and substrates along with spermine, a tetravalent cation, at different concentrations, by use of fluorescence staining of both DNA and RNA. We found that transcription, or RNA production, is completely inhibited in the compact state, but is actively performed in the unfolded coil state. Such an all-or-none effect on transcriptional activity induced by the discrete conformational transition of single DNA molecules is discussed in relation to the mechanism of the regulation of large-scale genetic activity.
1403.4954
Micha{\l} Jamr\'oz
Michal Jamroz, Andrzej Kolinski, Sebastian Kmiecik
CABS-flex predictions of protein flexibility compared with NMR ensembles
null
Bioinformatics (2014) 30 (15): 2150-2154
10.1093/bioinformatics/btu184
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Identification of flexible regions of protein structures is important for understanding of their biological functions. Recently, we have developed a fast approach for predicting protein structure fluctuations from a single protein model: the CABS-flex. CABS-flex was shown to be an efficient alternative to conventional all-atom molecular dynamics (MD). In this work, we evaluate CABS-flex and MD predictions by comparison with protein structural variations within NMR ensembles. Results: Based on a benchmark set of 140 proteins, we show that the relative fluctuations of protein residues obtained from CABS-flex are well correlated to those of NMR ensembles. On average, this correlation is stronger than that between MD and NMR ensembles. In conclusion, CABS-flex is useful and complementary to MD in predicting of protein regions that undergo conformational changes and the extent of such changes.
[ { "created": "Wed, 19 Mar 2014 20:05:35 GMT", "version": "v1" } ]
2014-08-19
[ [ "Jamroz", "Michal", "" ], [ "Kolinski", "Andrzej", "" ], [ "Kmiecik", "Sebastian", "" ] ]
Motivation: Identification of flexible regions of protein structures is important for understanding of their biological functions. Recently, we have developed a fast approach for predicting protein structure fluctuations from a single protein model: the CABS-flex. CABS-flex was shown to be an efficient alternative to conventional all-atom molecular dynamics (MD). In this work, we evaluate CABS-flex and MD predictions by comparison with protein structural variations within NMR ensembles. Results: Based on a benchmark set of 140 proteins, we show that the relative fluctuations of protein residues obtained from CABS-flex are well correlated to those of NMR ensembles. On average, this correlation is stronger than that between MD and NMR ensembles. In conclusion, CABS-flex is useful and complementary to MD in predicting protein regions that undergo conformational changes and the extent of such changes.
1710.07989
Andrew Marquis
Andrew D. Marquis, Andrea Arnold, Caron Dean, Brian E. Carlson, Mette S. Olufsen
Practical Identifiability and Uncertainty Quantification of a Pulsatile Cardiovascular Model
47 pages, 9 figures, 3 tables
null
10.1016/j.mbs.2018.07.001
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical models are essential tools to study how the cardiovascular system maintains homeostasis. The utility of such models is limited by the accuracy of their predictions, which can be determined by uncertainty quantification (UQ). A challenge associated with the use of UQ is that many published methods assume that the underlying model is identifiable (e.g. that a one-to-one mapping exists from the parameter space to the model output). In this study we present a novel methodology that is used here to calibrate a lumped-parameter model to left ventricular pressure and volume time series data sets. Key steps include using (1) literature and available data to determine nominal parameter values; (2) sensitivity analysis and subset selection to determine a set of identifiable parameters; (3) optimization to find a point estimate for identifiable parameters; and (4) frequentist and Bayesian UQ calculations to assess the predictive capability of the model. Our results show that it is possible to determine 5 identifiable model parameters that can be estimated to our experimental data from three rats, and that computed UQ intervals capture the measurement and model error.
[ { "created": "Sun, 22 Oct 2017 17:55:48 GMT", "version": "v1" }, { "created": "Tue, 12 Dec 2017 00:52:48 GMT", "version": "v2" } ]
2018-07-20
[ [ "Marquis", "Andrew D.", "" ], [ "Arnold", "Andrea", "" ], [ "Dean", "Caron", "" ], [ "Carlson", "Brian E.", "" ], [ "Olufsen", "Mette S.", "" ] ]
Mathematical models are essential tools to study how the cardiovascular system maintains homeostasis. The utility of such models is limited by the accuracy of their predictions, which can be determined by uncertainty quantification (UQ). A challenge associated with the use of UQ is that many published methods assume that the underlying model is identifiable (e.g. that a one-to-one mapping exists from the parameter space to the model output). In this study we present a novel methodology that is used here to calibrate a lumped-parameter model to left ventricular pressure and volume time series data sets. Key steps include using (1) literature and available data to determine nominal parameter values; (2) sensitivity analysis and subset selection to determine a set of identifiable parameters; (3) optimization to find a point estimate for identifiable parameters; and (4) frequentist and Bayesian UQ calculations to assess the predictive capability of the model. Our results show that it is possible to determine 5 identifiable model parameters that can be estimated using our experimental data from three rats, and that computed UQ intervals capture the measurement and model error.
2406.14579
Anas El Fathi
Anas El Fathi, Elliott Pryor, Marc D. Breton
Attention Networks for Personalized Mealtime Insulin Dosing in People with Type 1 Diabetes
6 pages, 4 figures, Biological and Medical Systems - 12th BMS 2024 - IFAC
null
null
null
q-bio.QM cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Calculating mealtime insulin doses poses a significant challenge for individuals with Type 1 Diabetes (T1D). Doses should perfectly compensate for expected post-meal glucose excursions, requiring a profound understanding of the individual's insulin sensitivity and the meal macronutrients'. Usually, people rely on intuition and experience to develop this understanding. In this work, we demonstrate how a reinforcement learning agent, employing a self-attention encoder network, can effectively mimic and enhance this intuitive process. Trained on 80 virtual subjects from the FDA-approved UVA/Padova T1D adult cohort and tested on twenty, self-attention demonstrates superior performance compared to other network architectures. Results reveal a significant reduction in glycemic risk, from 16.5 to 9.6 in scenarios using sensor-augmented pump and from 9.1 to 6.7 in scenarios using automated insulin delivery. This new paradigm bypasses conventional therapy parameters, offering the potential to simplify treatment and promising improved quality of life and glycemic outcomes for people with T1D.
[ { "created": "Tue, 18 Jun 2024 17:59:32 GMT", "version": "v1" } ]
2024-06-24
[ [ "Fathi", "Anas El", "" ], [ "Pryor", "Elliott", "" ], [ "Breton", "Marc D.", "" ] ]
Calculating mealtime insulin doses poses a significant challenge for individuals with Type 1 Diabetes (T1D). Doses should perfectly compensate for expected post-meal glucose excursions, requiring a profound understanding of the individual's insulin sensitivity and the meal's macronutrients. Usually, people rely on intuition and experience to develop this understanding. In this work, we demonstrate how a reinforcement learning agent, employing a self-attention encoder network, can effectively mimic and enhance this intuitive process. Trained on 80 virtual subjects from the FDA-approved UVA/Padova T1D adult cohort and tested on twenty, self-attention demonstrates superior performance compared to other network architectures. Results reveal a significant reduction in glycemic risk, from 16.5 to 9.6 in scenarios using a sensor-augmented pump and from 9.1 to 6.7 in scenarios using automated insulin delivery. This new paradigm bypasses conventional therapy parameters, offering the potential to simplify treatment and promising improved quality of life and glycemic outcomes for people with T1D.
0803.4197
Vladimir Privman
V. Privman, G. Strack, D. Solenov, M. Pita, E. Katz
Optimization of Enzymatic Biochemical Logic for Noise Reduction and Scalability: How Many Biocomputing Gates Can Be Interconnected in a Circuit?
null
J. Phys. Chem. B 112, 11777-11784 (2008)
10.1021/jp802673q
null
q-bio.MN cond-mat.other cond-mat.soft cs.CC q-bio.BM q-bio.OT q-bio.QM quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We report an experimental evaluation of the "input-output surface" for a biochemical AND gate. The obtained data are modeled within the rate-equation approach, with the aim to map out the gate function and cast it in the language of logic variables appropriate for analysis of Boolean logic for scalability. In order to minimize "analog" noise, we consider a theoretical approach for determining an optimal set for the process parameters to minimize "analog" noise amplification for gate concatenation. We establish that under optimized conditions, presently studied biochemical gates can be concatenated for up to order 10 processing steps. Beyond that, new paradigms for avoiding noise build-up will have to be developed. We offer a general discussion of the ideas and possible future challenges for both experimental and theoretical research for advancing scalable biochemical computing.
[ { "created": "Mon, 31 Mar 2008 19:52:30 GMT", "version": "v1" }, { "created": "Sat, 13 Sep 2008 16:34:17 GMT", "version": "v2" } ]
2010-10-12
[ [ "Privman", "V.", "" ], [ "Strack", "G.", "" ], [ "Solenov", "D.", "" ], [ "Pita", "M.", "" ], [ "Katz", "E.", "" ] ]
We report an experimental evaluation of the "input-output surface" for a biochemical AND gate. The obtained data are modeled within the rate-equation approach, with the aim to map out the gate function and cast it in the language of logic variables appropriate for analysis of Boolean logic for scalability. In order to minimize "analog" noise, we consider a theoretical approach for determining an optimal set for the process parameters to minimize "analog" noise amplification for gate concatenation. We establish that under optimized conditions, presently studied biochemical gates can be concatenated for up to order 10 processing steps. Beyond that, new paradigms for avoiding noise build-up will have to be developed. We offer a general discussion of the ideas and possible future challenges for both experimental and theoretical research for advancing scalable biochemical computing.
0812.1271
Suan Li Mai
Mai Suan Li, A. M. Gabovich, and A. I. Voitenko
New method for deciphering free energy landscape of three-state proteins
18 pages, 1 table, 5 figures
J. Chem. Phys. 129, 105102 (2008)
10.1063/1.2976760
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have developed a new simulation method to estimate the distance between the native state and the first transition state, and the distance between the intermediate state and the second transition state of a protein which mechanically unfolds via intermediates. Assuming that the end-to-end extension $\Delta R$ is a good reaction coordinate to describe the free energy landscape of proteins subjected to an external force, we define the midpoint extension $\Delta R^*$ between two transition states from either constant-force or constant loading rate pulling simulations. In the former case, $\Delta R^*$ is defined as a middle point between two plateaus in the time-dependent curve of $\Delta R$, while, in the latter one, it is a middle point between two peaks in the force-extension curve. Having determined $\Delta R^*$, one can compute times needed to cross two transition state barriers starting from the native state. With the help of the Bell and microscopic kinetic theory, force dependencies of these unfolding times can be used to locate the intermediate state and to extract unfolding barriers. We have applied our method to the titin domain I27 and the fourth domain of {\em Dictyostelium discoideum} filamin (DDFLN4), and obtained reasonable agreement with experiments, using the C$_{\alpha}$-Go model.
[ { "created": "Sat, 6 Dec 2008 09:46:42 GMT", "version": "v1" } ]
2009-11-13
[ [ "Li", "Mai Suan", "" ], [ "Gabovich", "A. M.", "" ], [ "Voitenko", "A. I.", "" ] ]
We have developed a new simulation method to estimate the distance between the native state and the first transition state, and the distance between the intermediate state and the second transition state of a protein which mechanically unfolds via intermediates. Assuming that the end-to-end extension $\Delta R$ is a good reaction coordinate to describe the free energy landscape of proteins subjected to an external force, we define the midpoint extension $\Delta R^*$ between two transition states from either constant-force or constant loading rate pulling simulations. In the former case, $\Delta R^*$ is defined as a middle point between two plateaus in the time-dependent curve of $\Delta R$, while, in the latter one, it is a middle point between two peaks in the force-extension curve. Having determined $\Delta R^*$, one can compute times needed to cross two transition state barriers starting from the native state. With the help of the Bell and microscopic kinetic theory, force dependencies of these unfolding times can be used to locate the intermediate state and to extract unfolding barriers. We have applied our method to the titin domain I27 and the fourth domain of {\em Dictyostelium discoideum} filamin (DDFLN4), and obtained reasonable agreement with experiments, using the C$_{\alpha}$-Go model.
2007.09291
In\^es Hip\'olito
Maxwell Ramstead, Karl Friston, Ines Hipolito
Is the free-energy principle a formal theory of semantics? From variational density dynamics to neural and phenotypic representations
35 pages, 4 figures, 1 table
null
10.3390/e22080889
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The aim of this paper is twofold: (1) to assess whether the construct of neural representations plays an explanatory role under the variational free-energy principle and its corollary process theory, active inference; and (2) if so, to assess which philosophical stance - in relation to the ontological and epistemological status of representations - is most appropriate. We focus on non-realist (deflationary and fictionalist-instrumentalist) approaches. We consider a deflationary account of mental representation, according to which the explanatorily relevant contents of neural representations are mathematical, rather than cognitive, and a fictionalist or instrumentalist account, according to which representations are scientifically useful fictions that serve explanatory (and other) aims. After reviewing the free-energy principle and active inference, we argue that the model of adaptive phenotypes under the free-energy principle can be used to furnish a formal semantics, enabling us to assign semantic content to specific phenotypic states (the internal states of a Markovian system that exists far from equilibrium). We propose a modified fictionalist account: an organism-centered fictionalism or instrumentalism. We argue that, under the free-energy principle, pursuing even a deflationary account of the content of neural representations licenses the appeal to the kind of semantic content involved in the aboutness or intentionality of cognitive systems; our position is thus coherent with, but rests on distinct assumptions from, the realist position. We argue that the free-energy principle thereby explains the aboutness or intentionality in living systems and hence their capacity to parse their sensory stream using an ontology or set of semantic factors.
[ { "created": "Sat, 18 Jul 2020 00:51:57 GMT", "version": "v1" }, { "created": "Sat, 25 Jul 2020 03:40:06 GMT", "version": "v2" }, { "created": "Sat, 8 Aug 2020 02:03:23 GMT", "version": "v3" } ]
2023-07-19
[ [ "Ramstead", "Maxwell", "" ], [ "Friston", "Karl", "" ], [ "Hipolito", "Ines", "" ] ]
The aim of this paper is twofold: (1) to assess whether the construct of neural representations plays an explanatory role under the variational free-energy principle and its corollary process theory, active inference; and (2) if so, to assess which philosophical stance - in relation to the ontological and epistemological status of representations - is most appropriate. We focus on non-realist (deflationary and fictionalist-instrumentalist) approaches. We consider a deflationary account of mental representation, according to which the explanatorily relevant contents of neural representations are mathematical, rather than cognitive, and a fictionalist or instrumentalist account, according to which representations are scientifically useful fictions that serve explanatory (and other) aims. After reviewing the free-energy principle and active inference, we argue that the model of adaptive phenotypes under the free-energy principle can be used to furnish a formal semantics, enabling us to assign semantic content to specific phenotypic states (the internal states of a Markovian system that exists far from equilibrium). We propose a modified fictionalist account: an organism-centered fictionalism or instrumentalism. We argue that, under the free-energy principle, pursuing even a deflationary account of the content of neural representations licenses the appeal to the kind of semantic content involved in the aboutness or intentionality of cognitive systems; our position is thus coherent with, but rests on distinct assumptions from, the realist position. We argue that the free-energy principle thereby explains the aboutness or intentionality in living systems and hence their capacity to parse their sensory stream using an ontology or set of semantic factors.
1805.11553
C\'edric Sueur
Sebastian Sosa, Marie Pele, Elise Debergue, Cedric Kuntz, Blandine Keller, Florian Robic, Flora Siegwalt-Baudin, Camille Richer, Amandine Ramos, Cedric Sueur
Impact of group management and transfer on individual sociality in Highland cattle (Bos taurus)
This preprint has been peer-reviewed and recommended by Peer Community In Ecology (https://dx.doi.org/10.24072/pci.ecology.100003)
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The sociality of cattle facilitates the maintenance of herd cohesion and synchronisation, making these species the ideal choice for domestication as livestock for humans. However, livestock populations are not self-regulated, and farmers transfer individuals across different groups throughout their lives for reasons such as genetic mixing, reproduction and pastureland management. Individuals consequently have to adapt to different group compositions during their lives rather than choose their own herd mates, as they would do in the wild. These changes may lead to social instability and stress, entailing potentially negative effects on animal welfare. In this study, we assess how the transfer of Highland cattle (Bos taurus) impacts individual and group social network measures. Four groups with nine different compositions and 18 individual transfers were studied to evaluate 1) the effect of group composition on individual social centralities and 2) the effect of group composition changes on these centralities. As shown in previous works, dyadic associations are stronger between individuals with similar age and dominance rank. This study reveals that the relative stability of dyadic spatial relationships between changes in group composition or enclosure is due to the identities of transferred individuals more than the quantity of individuals that are transferred. Older cattle had higher network centralities than other individuals. The centrality of individuals was also affected by their sex and the number of familiar individuals in the group. This study reveals the necessity of understanding the social structure of a group to predict social instability following the transfer of individuals between groups. The developing of guidelines for the modification of group composition could improve livestock management and reduce stress for the animals concerned.
[ { "created": "Tue, 29 May 2018 15:59:39 GMT", "version": "v1" }, { "created": "Thu, 19 Jul 2018 14:07:53 GMT", "version": "v2" }, { "created": "Fri, 7 Sep 2018 07:29:25 GMT", "version": "v3" }, { "created": "Thu, 11 Oct 2018 07:35:49 GMT", "version": "v4" }, { "created": "Sat, 13 Oct 2018 05:41:18 GMT", "version": "v5" } ]
2018-10-16
[ [ "Sosa", "Sebastian", "" ], [ "Pele", "Marie", "" ], [ "Debergue", "Elise", "" ], [ "Kuntz", "Cedric", "" ], [ "Keller", "Blandine", "" ], [ "Robic", "Florian", "" ], [ "Siegwalt-Baudin", "Flora", "" ], [ "Richer", "Camille", "" ], [ "Ramos", "Amandine", "" ], [ "Sueur", "Cedric", "" ] ]
The sociality of cattle facilitates the maintenance of herd cohesion and synchronisation, making these species the ideal choice for domestication as livestock for humans. However, livestock populations are not self-regulated, and farmers transfer individuals across different groups throughout their lives for reasons such as genetic mixing, reproduction and pastureland management. Individuals consequently have to adapt to different group compositions during their lives rather than choose their own herd mates, as they would do in the wild. These changes may lead to social instability and stress, entailing potentially negative effects on animal welfare. In this study, we assess how the transfer of Highland cattle (Bos taurus) impacts individual and group social network measures. Four groups with nine different compositions and 18 individual transfers were studied to evaluate 1) the effect of group composition on individual social centralities and 2) the effect of group composition changes on these centralities. As shown in previous works, dyadic associations are stronger between individuals with similar age and dominance rank. This study reveals that the relative stability of dyadic spatial relationships between changes in group composition or enclosure is due to the identities of transferred individuals more than the quantity of individuals that are transferred. Older cattle had higher network centralities than other individuals. The centrality of individuals was also affected by their sex and the number of familiar individuals in the group. This study reveals the necessity of understanding the social structure of a group to predict social instability following the transfer of individuals between groups. Developing guidelines for the modification of group composition could improve livestock management and reduce stress for the animals concerned.
q-bio/0312038
Alan Williams
Alan Williams, Todd K. Leen, Patrick D. Roberts
Random Walks for Spike-Timing Dependent Plasticity
18 pages, 8 figures, 15 subfigures; uses revtex4, subfigure, amsmath
null
10.1103/PhysRevE.70.021916
null
q-bio.NC
null
Random walk methods are used to calculate the moments of negative image equilibrium distributions in synaptic weight dynamics governed by spike-timing dependent plasticity (STDP). The neural architecture of the model is based on the electrosensory lateral line lobe (ELL) of mormyrid electric fish, which forms a negative image of the reafferent signal from the fish's own electric discharge to optimize detection of sensory electric fields. Of particular behavioral importance to the fish is the variance of the equilibrium postsynaptic potential in the presence of noise, which is determined by the variance of the equilibrium weight distribution. Recurrence relations are derived for the moments of the equilibrium weight distribution, for arbitrary postsynaptic potential functions and arbitrary learning rules. For the case of homogeneous network parameters, explicit closed form solutions are developed for the covariances of the synaptic weight and postsynaptic potential distributions.
[ { "created": "Tue, 23 Dec 2003 22:10:43 GMT", "version": "v1" } ]
2009-11-10
[ [ "Williams", "Alan", "" ], [ "Leen", "Todd K.", "" ], [ "Roberts", "Patrick D.", "" ] ]
Random walk methods are used to calculate the moments of negative image equilibrium distributions in synaptic weight dynamics governed by spike-timing dependent plasticity (STDP). The neural architecture of the model is based on the electrosensory lateral line lobe (ELL) of mormyrid electric fish, which forms a negative image of the reafferent signal from the fish's own electric discharge to optimize detection of sensory electric fields. Of particular behavioral importance to the fish is the variance of the equilibrium postsynaptic potential in the presence of noise, which is determined by the variance of the equilibrium weight distribution. Recurrence relations are derived for the moments of the equilibrium weight distribution, for arbitrary postsynaptic potential functions and arbitrary learning rules. For the case of homogeneous network parameters, explicit closed form solutions are developed for the covariances of the synaptic weight and postsynaptic potential distributions.
0908.0015
Michael DeSantis
Michael C. DeSantis, Shawn H. DeCenzo, Je-Luen Li, Y. M. Wang
Precision analysis for standard deviation measurements of single fluorescent molecule images
16 pages, 3 figures, revised
null
10.1364/OE.18.006563
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
[ { "created": "Fri, 31 Jul 2009 22:22:00 GMT", "version": "v1" }, { "created": "Wed, 27 Jan 2010 17:55:32 GMT", "version": "v2" } ]
2015-05-13
[ [ "DeSantis", "Michael C.", "" ], [ "DeCenzo", "Shawn H.", "" ], [ "Li", "Je-Luen", "" ], [ "Wang", "Y. M.", "" ] ]
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
2012.08897
David Schaller
David Schaller, Manuel Lafond, Peter F. Stadler, Nicolas Wieseke and Marc Hellmuth
Indirect Identification of Horizontal Gene Transfer
null
null
null
null
q-bio.PE cs.DM cs.DS
http://creativecommons.org/licenses/by/4.0/
Several implicit methods to infer Horizontal Gene Transfer (HGT) focus on pairs of genes that have diverged only after the divergence of the two species in which the genes reside. This situation defines the edge set of a graph, the later-divergence-time (LDT) graph, whose vertices correspond to genes colored by their species. We investigate these graphs in the setting of relaxed scenarios, i.e., evolutionary scenarios that encompass all commonly used variants of duplication-transfer-loss scenarios in the literature. We characterize LDT graphs as a subclass of properly vertex-colored cographs, and provide a polynomial-time recognition algorithm as well as an algorithm to construct a relaxed scenario that explains a given LDT. An edge in an LDT graph implies that the two corresponding genes are separated by at least one HGT event. The converse is not true, however. We show that the complete xenology relation is described by an rs-Fitch graph, i.e., a complete multipartite graph satisfying constraints on the vertex coloring. This class of vertex-colored graphs is also recognizable in polynomial time. We finally address the question "how much information about all HGT events is contained in LDT graphs" with the help of simulations of evolutionary scenarios with a wide range of duplication, loss, and HGT events. In particular, we show that a simple greedy graph editing scheme can be used to efficiently detect HGT events that are implicitly contained in LDT graphs.
[ { "created": "Wed, 16 Dec 2020 12:18:42 GMT", "version": "v1" }, { "created": "Tue, 6 Apr 2021 15:24:27 GMT", "version": "v2" } ]
2021-04-07
[ [ "Schaller", "David", "" ], [ "Lafond", "Manuel", "" ], [ "Stadler", "Peter F.", "" ], [ "Wieseke", "Nicolas", "" ], [ "Hellmuth", "Marc", "" ] ]
Several implicit methods to infer Horizontal Gene Transfer (HGT) focus on pairs of genes that have diverged only after the divergence of the two species in which the genes reside. This situation defines the edge set of a graph, the later-divergence-time (LDT) graph, whose vertices correspond to genes colored by their species. We investigate these graphs in the setting of relaxed scenarios, i.e., evolutionary scenarios that encompass all commonly used variants of duplication-transfer-loss scenarios in the literature. We characterize LDT graphs as a subclass of properly vertex-colored cographs, and provide a polynomial-time recognition algorithm as well as an algorithm to construct a relaxed scenario that explains a given LDT. An edge in an LDT graph implies that the two corresponding genes are separated by at least one HGT event. The converse is not true, however. We show that the complete xenology relation is described by an rs-Fitch graph, i.e., a complete multipartite graph satisfying constraints on the vertex coloring. This class of vertex-colored graphs is also recognizable in polynomial time. We finally address the question "how much information about all HGT events is contained in LDT graphs" with the help of simulations of evolutionary scenarios with a wide range of duplication, loss, and HGT events. In particular, we show that a simple greedy graph editing scheme can be used to efficiently detect HGT events that are implicitly contained in LDT graphs.
0805.1723
Nicolas Vuillerme
Nicolas Vuillerme (TIMC), Nicolas Pinsault (TIMC), Olivier Chenu (TIMC), Anthony Fleury (TIMC), Yohan Payan (TIMC), Jacques Demongeot (TIMC)
A Wireless Embedded Tongue Tactile Biofeedback System for Balance Control
Pervasive and Mobile Computing (2008) in press
null
10.1016/j.pmcj.2008.04.001
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe the architecture of an original biofeedback system for balance improvement for fall prevention and present results of a feasibility study. The underlying principle of this biofeedback consists of providing supplementary information related to foot sole pressure distribution through a wireless embedded tongue-placed tactile output device. Twelve young healthy adults voluntarily participated in this experiment. They were asked to stand as immobile as possible with their eyes closed in two conditions of no-biofeedback and biofeedback. Centre of foot pressure (CoP) displacements were recorded using a force platform. Results showed reduced CoP displacements in the biofeedback relative to the no-biofeedback condition. On the whole, the present findings evidence the effectiveness of this system in improving postural control in young healthy adults. Further investigations are needed to strengthen the potential clinical value of this device.
[ { "created": "Mon, 12 May 2008 19:41:16 GMT", "version": "v1" } ]
2008-12-18
[ [ "Vuillerme", "Nicolas", "", "TIMC" ], [ "Pinsault", "Nicolas", "", "TIMC" ], [ "Chenu", "Olivier", "", "TIMC" ], [ "Fleury", "Anthony", "", "TIMC" ], [ "Payan", "Yohan", "", "TIMC" ], [ "Demongeot", "Jacques", "", "TIMC" ] ]
We describe the architecture of an original biofeedback system for balance improvement for fall prevention and present results of a feasibility study. The underlying principle of this biofeedback consists of providing supplementary information related to foot sole pressure distribution through a wireless embedded tongue-placed tactile output device. Twelve young healthy adults voluntarily participated in this experiment. They were asked to stand as immobile as possible with their eyes closed in two conditions of no-biofeedback and biofeedback. Centre of foot pressure (CoP) displacements were recorded using a force platform. Results showed reduced CoP displacements in the biofeedback relative to the no-biofeedback condition. On the whole, the present findings evidence the effectiveness of this system in improving postural control in young healthy adults. Further investigations are needed to strengthen the potential clinical value of this device.
2212.00555
Xia Zhengwang
Zhengwang Xia, Tao Zhou, Saqib Mamoon, Amani Alfakih, Jianfeng Lu
A Structure-guided Effective and Temporal-lag Connectivity Network for Revealing Brain Disorder Mechanisms
null
null
null
null
q-bio.NC cs.AI eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brain networks provide important insights for the diagnosis of many brain disorders, and how to effectively model the brain structure has become one of the core issues in the domain of brain imaging analysis. Recently, various computational methods have been proposed to estimate the causal relationship (i.e., effective connectivity) between brain regions. Compared with traditional correlation-based methods, effective connectivity can provide the direction of information flow, which may provide additional information for the diagnosis of brain diseases. However, existing methods either ignore the fact that there is a temporal-lag in the information transmission across brain regions, or simply set the temporal-lag value between all brain regions to a fixed value. To overcome these issues, we design an effective temporal-lag neural network (termed ETLN) to simultaneously infer the causal relationships and the temporal-lag values between brain regions, which can be trained in an end-to-end manner. In addition, we also introduce three mechanisms to better guide the modeling of brain networks. The evaluation results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database demonstrate the effectiveness of the proposed method.
[ { "created": "Thu, 1 Dec 2022 15:02:22 GMT", "version": "v1" } ]
2022-12-02
[ [ "Xia", "Zhengwang", "" ], [ "Zhou", "Tao", "" ], [ "Mamoon", "Saqib", "" ], [ "Alfakih", "Amani", "" ], [ "Lu", "Jianfeng", "" ] ]
Brain networks provide important insights for the diagnosis of many brain disorders, and how to effectively model the brain structure has become one of the core issues in the domain of brain imaging analysis. Recently, various computational methods have been proposed to estimate the causal relationship (i.e., effective connectivity) between brain regions. Compared with traditional correlation-based methods, effective connectivity can provide the direction of information flow, which may provide additional information for the diagnosis of brain diseases. However, existing methods either ignore the fact that there is a temporal-lag in the information transmission across brain regions, or simply set the temporal-lag value between all brain regions to a fixed value. To overcome these issues, we design an effective temporal-lag neural network (termed ETLN) to simultaneously infer the causal relationships and the temporal-lag values between brain regions, which can be trained in an end-to-end manner. In addition, we also introduce three mechanisms to better guide the modeling of brain networks. The evaluation results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database demonstrate the effectiveness of the proposed method.
2003.06192
Larissa Terumi Arashiro
Larissa T. Arashiro, Maria Boto-Ordonez, Stijn W.H. Van Hulle, Ivet Ferrer, Marianna Garfi, Diederik P.L. Rousseau
Natural pigments from microalgae grown in industrial wastewater
null
Bioresource Technology 303, 122894 (2020)
10.1016/j.biortech.2020.122894
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
The aim of this study was to investigate the cultivation of Nostoc sp., Arthrospira platensis and Porphyridium purpureum in industrial wastewater to produce phycobiliproteins. Initially, light intensity and growth medium composition were optimized, indicating that light conditions influenced the phycobiliproteins production more than the medium composition. Conditions were then selected, according to biomass growth, nutrient removal and phycobiliproteins production, to cultivate these microalgae in food-industry wastewater. The three species could efficiently remove up to 98%, 94% and 100% of COD, inorganic nitrogen and PO43--P, respectively. Phycocyanin, allophycocyanin and phycoerythrin were successfully extracted from the biomass reaching concentrations up to 103, 57 and 30 mg/g dry weight, respectively. Results highlight the potential use of microalgae for industrial wastewater treatment and related high-value phycobiliproteins recovery.
[ { "created": "Fri, 13 Mar 2020 10:34:23 GMT", "version": "v1" } ]
2020-03-16
[ [ "Arashiro", "Larissa T.", "" ], [ "Boto-Ordonez", "Maria", "" ], [ "Van Hulle", "Stijn W. H.", "" ], [ "Ferrer", "Ivet", "" ], [ "Garfi", "Marianna", "" ], [ "Rousseau", "Diederik P. L.", "" ] ]
The aim of this study was to investigate the cultivation of Nostoc sp., Arthrospira platensis and Porphyridium purpureum in industrial wastewater to produce phycobiliproteins. Initially, light intensity and growth medium composition were optimized, indicating that light conditions influenced the phycobiliproteins production more than the medium composition. Conditions were then selected, according to biomass growth, nutrient removal and phycobiliproteins production, to cultivate these microalgae in food-industry wastewater. The three species could efficiently remove up to 98%, 94% and 100% of COD, inorganic nitrogen and PO43--P, respectively. Phycocyanin, allophycocyanin and phycoerythrin were successfully extracted from the biomass reaching concentrations up to 103, 57 and 30 mg/g dry weight, respectively. Results highlight the potential use of microalgae for industrial wastewater treatment and related high-value phycobiliproteins recovery.
2312.07977
Dipesh Niraula
Dipesh Niraula (1), Issam El Naqa (1), Jack Adam Tuszynski (2), and Robert A. Gatenby (3) ((1) Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, USA (2) Departments of Physics and Oncology, University of Alberta, Edmonton, AB, CAN (3) Departments of Radiology and Integrated Mathematical Oncology, Moffitt Cancer Center, Tampa, FL, USA)
Modeling non-genetic information dynamics in cells using reservoir computing
Main text: 18 pages, 1 table, and 8 figures; Supplementary materials: 14 pages, 18 figures; Link to Source code and Data included
null
null
null
q-bio.CB cs.LG physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Virtually all cells use energy and ion-specific membrane pumps to maintain large transmembrane gradients of Na$^+$, K$^+$, Cl$^-$, Mg$^{++}$, and Ca$^{++}$. Although they consume up to 1/3 of a cell's energy budget, the corresponding evolutionary benefit of transmembrane ion gradients remains unclear. Here, we propose that ion gradients enable a dynamic and versatile biological system that acquires, analyzes, and responds to environmental information. We hypothesize environmental signals are transmitted into the cell by ion fluxes along pre-existing gradients through gated ion-specific membrane channels. The consequent changes of cytoplasmic ion concentration can generate a local response and orchestrate global or regional responses through wire-like ion fluxes along pre-existing and self-assembling cytoskeleton to engage the endoplasmic reticulum, mitochondria, and nucleus. Here, we frame our hypothesis through a quasi-physical (Cell-Reservoir) model that treats intra-cellular ion-based information dynamics as a sub-cellular process permitting spatiotemporally resolved cellular response that is also capable of learning complex nonlinear dynamical cellular behavior. We demonstrate that the proposed ion dynamics permits rapid dissemination of response to information extrinsic perturbations that is consistent with experimental observations.
[ { "created": "Wed, 13 Dec 2023 08:47:50 GMT", "version": "v1" }, { "created": "Thu, 14 Mar 2024 19:26:00 GMT", "version": "v2" } ]
2024-03-18
[ [ "Niraula", "Dipesh", "" ], [ "Naqa", "Issam El", "" ], [ "Tuszynski", "Jack Adam", "" ], [ "Gatenby", "Robert A.", "" ] ]
Virtually all cells use energy and ion-specific membrane pumps to maintain large transmembrane gradients of Na$^+$, K$^+$, Cl$^-$, Mg$^{++}$, and Ca$^{++}$. Although they consume up to 1/3 of a cell's energy budget, the corresponding evolutionary benefit of transmembrane ion gradients remains unclear. Here, we propose that ion gradients enable a dynamic and versatile biological system that acquires, analyzes, and responds to environmental information. We hypothesize environmental signals are transmitted into the cell by ion fluxes along pre-existing gradients through gated ion-specific membrane channels. The consequent changes of cytoplasmic ion concentration can generate a local response and orchestrate global or regional responses through wire-like ion fluxes along pre-existing and self-assembling cytoskeleton to engage the endoplasmic reticulum, mitochondria, and nucleus. Here, we frame our hypothesis through a quasi-physical (Cell-Reservoir) model that treats intra-cellular ion-based information dynamics as a sub-cellular process permitting spatiotemporally resolved cellular response that is also capable of learning complex nonlinear dynamical cellular behavior. We demonstrate that the proposed ion dynamics permits rapid dissemination of response to information extrinsic perturbations that is consistent with experimental observations.
1905.10172
Alexander Paraskevov
A.V. Paraskevov, A.S. Minkin
Damped oscillations of the probability of random events followed by absolute refractory period: exact analytical results
Final version
Chaos Soliton. Fract. 155, 111695 (2022)
10.1016/j.chaos.2021.111695
null
q-bio.NC math.PR math.ST physics.data-an stat.AP stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are numerous examples of natural and artificial processes that represent stochastic sequences of events followed by an absolute refractory period during which the occurrence of a subsequent event is impossible. In the simplest case of a generalized Bernoulli scheme for uniform random events followed by the absolute refractory period, the event probability as a function of time can exhibit damped transient oscillations. Using a stochastically spiking point neuron as a model example, we present an exact and compact analytical description for the oscillations without invoking the standard renewal theory. The resulting formulas stand out for their relative simplicity, allowing one to analytically obtain the amplitude damping of the 2nd and 3rd peaks of the event probability.
[ { "created": "Fri, 24 May 2019 12:02:17 GMT", "version": "v1" }, { "created": "Thu, 7 Nov 2019 15:05:26 GMT", "version": "v2" }, { "created": "Mon, 22 Nov 2021 12:22:26 GMT", "version": "v3" } ]
2022-01-24
[ [ "Paraskevov", "A. V.", "" ], [ "Minkin", "A. S.", "" ] ]
There are numerous examples of natural and artificial processes that represent stochastic sequences of events followed by an absolute refractory period during which the occurrence of a subsequent event is impossible. In the simplest case of a generalized Bernoulli scheme for uniform random events followed by the absolute refractory period, the event probability as a function of time can exhibit damped transient oscillations. Using a stochastically spiking point neuron as a model example, we present an exact and compact analytical description for the oscillations without invoking the standard renewal theory. The resulting formulas stand out for their relative simplicity, allowing one to analytically obtain the amplitude damping of the 2nd and 3rd peaks of the event probability.
2405.10998
Fabio Sanchez PhD
Juan G. Calvo, Mario I. Simoy, Juan P. Aparicio, Jos\'e E. Chac\'on, Fabio Sanchez
Stochastic two-patch epidemic model with nonlinear recidivism
22 pages, 12 figures
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
We develop a stochastic two-patch epidemic model with nonlinear recidivism to investigate infectious disease dynamics in heterogeneous populations. Extending a deterministic framework, we introduce stochasticity to account for random transmission, recovery, and inter-patch movement fluctuations. We showcase the interplay between local dynamics and migration effects on disease persistence using Monte Carlo simulations and three stochastic approximations: discrete-time Markov chain (DTMC), Poisson, and stochastic differential equations (SDE). Our analysis shows that stochastic effects can cause extinction events and oscillations near critical thresholds like the basic reproduction number, R0, phenomena absent in deterministic models. Numerical simulations highlight source-sink dynamics, where one patch is a persistent infection source while the other experiences intermittent outbreaks.
[ { "created": "Thu, 16 May 2024 20:57:34 GMT", "version": "v1" } ]
2024-05-21
[ [ "Calvo", "Juan G.", "" ], [ "Simoy", "Mario I.", "" ], [ "Aparicio", "Juan P.", "" ], [ "Chacón", "José E.", "" ], [ "Sanchez", "Fabio", "" ] ]
We develop a stochastic two-patch epidemic model with nonlinear recidivism to investigate infectious disease dynamics in heterogeneous populations. Extending a deterministic framework, we introduce stochasticity to account for random transmission, recovery, and inter-patch movement fluctuations. We showcase the interplay between local dynamics and migration effects on disease persistence using Monte Carlo simulations and three stochastic approximations: discrete-time Markov chain (DTMC), Poisson, and stochastic differential equations (SDE). Our analysis shows that stochastic effects can cause extinction events and oscillations near critical thresholds like the basic reproduction number, R0, phenomena absent in deterministic models. Numerical simulations highlight source-sink dynamics, where one patch is a persistent infection source while the other experiences intermittent outbreaks.
q-bio/0503019
Paul Tiesinga
Paul H.E. Tiesinga, Jean-Marc Fellous, Emilio Salinas, Jorge V. Jose, Terrence J. Sejnowski
Inhibitory synchrony as a mechanism for attentional gain modulation
J.Physiology (Paris) in press, 11 figures
null
null
null
q-bio.NC
null
Recordings from area V4 of monkeys have revealed that when the focus of attention is on a visual stimulus within the receptive field of a cortical neuron, two distinct changes can occur: The firing rate of the neuron can change and there can be an increase in the coherence between spikes and the local field potential in the gamma-frequency range (30-50 Hz). The hypothesis explored here is that these observed effects of attention could be a consequence of changes in the synchrony of local interneuron networks. We performed computer simulations of a Hodgkin-Huxley type neuron driven by a constant depolarizing current, I, representing visual stimulation and a modulatory inhibitory input representing the effects of attention via local interneuron networks. We observed that the neuron's firing rate and the coherence of its output spike train with the synaptic inputs was modulated by the degree of synchrony of the inhibitory inputs. The model suggests that the observed changes in firing rate and coherence of neurons in the visual cortex could be controlled by top-down inputs that regulate the coherence in the activity of a local inhibitory network discharging at gamma frequencies.
[ { "created": "Mon, 14 Mar 2005 20:37:09 GMT", "version": "v1" } ]
2007-05-23
[ [ "Tiesinga", "Paul H. E.", "" ], [ "Fellous", "Jean-Marc", "" ], [ "Salinas", "Emilio", "" ], [ "Jose", "Jorge V.", "" ], [ "Sejnowski", "Terrence J.", "" ] ]
Recordings from area V4 of monkeys have revealed that when the focus of attention is on a visual stimulus within the receptive field of a cortical neuron, two distinct changes can occur: The firing rate of the neuron can change and there can be an increase in the coherence between spikes and the local field potential in the gamma-frequency range (30-50 Hz). The hypothesis explored here is that these observed effects of attention could be a consequence of changes in the synchrony of local interneuron networks. We performed computer simulations of a Hodgkin-Huxley type neuron driven by a constant depolarizing current, I, representing visual stimulation and a modulatory inhibitory input representing the effects of attention via local interneuron networks. We observed that the neuron's firing rate and the coherence of its output spike train with the synaptic inputs was modulated by the degree of synchrony of the inhibitory inputs. The model suggests that the observed changes in firing rate and coherence of neurons in the visual cortex could be controlled by top-down inputs that regulate the coherence in the activity of a local inhibitory network discharging at gamma frequencies.
2302.05004
Tom Reershemius
Tom Reershemius and Mike E. Kelland, Jacob S. Jordan, Isabelle R. Davis, Rocco D'Ascanio, Boriana Kalderon-Asael, Dan Asael, T. Jesper Suhrhoff, Dimitar Z. Epihov, David J. Beerling, Christopher T. Reinhard, Noah J. Planavsky
Initial validation of a soil-based mass-balance approach for empirical monitoring of enhanced rock weathering rates
Environmental Science & Technology (2023)
null
10.1021/acs.est.3c03609
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-nd/4.0/
Enhanced Rock Weathering (ERW) is a promising scalable and cost-effective Carbon Dioxide Removal (CDR) strategy with significant environmental and agronomic co-benefits. A major barrier to large-scale implementation of ERW is a robust Monitoring, Reporting, and Verification (MRV) framework. To successfully quantify the amount of carbon dioxide removed by ERW, MRV must be accurate, precise, and cost-effective. Here, we outline a mass-balance-based method where analysis of the chemical composition of soil samples is used to track in-situ silicate rock weathering. We show that signal-to-noise issues of in-situ soil analysis can be mitigated by using isotope-dilution mass spectrometry to reduce analytical error. We implement a proof-of-concept experiment demonstrating the method in controlled mesocosms. In our experiment, basalt rock feedstock is added to soil columns containing the cereal crop Sorghum bicolor at a rate equivalent to 50 t ha$^{-1}$. Using our approach, we calculate rock weathering corresponding to an average initial CDR value of 1.44 +/- 0.27 tCO$_2$eq ha$^{-1}$ from our experiments after 235 days, within error of an independent estimate calculated using conventional elemental budgeting of reaction products. Our method provides a robust time-integrated estimate of initial CDR, to feed into models that track and validate large-scale carbon removal through ERW.
[ { "created": "Fri, 10 Feb 2023 01:19:35 GMT", "version": "v1" }, { "created": "Thu, 16 Feb 2023 23:05:04 GMT", "version": "v2" }, { "created": "Wed, 29 Mar 2023 02:51:54 GMT", "version": "v3" }, { "created": "Thu, 18 May 2023 14:22:44 GMT", "version": "v4" }, { "created": "Sun, 22 Oct 2023 22:55:08 GMT", "version": "v5" } ]
2023-11-21
[ [ "Reershemius", "Tom", "" ], [ "Kelland", "Mike E.", "" ], [ "Jordan", "Jacob S.", "" ], [ "Davis", "Isabelle R.", "" ], [ "D'Ascanio", "Rocco", "" ], [ "Kalderon-Asael", "Boriana", "" ], [ "Asael", "Dan", "" ], [ "Suhrhoff", "T. Jesper", "" ], [ "Epihov", "Dimitar Z.", "" ], [ "Beerling", "David J.", "" ], [ "Reinhard", "Christopher T.", "" ], [ "Planavsky", "Noah J.", "" ] ]
Enhanced Rock Weathering (ERW) is a promising scalable and cost-effective Carbon Dioxide Removal (CDR) strategy with significant environmental and agronomic co-benefits. A major barrier to large-scale implementation of ERW is a robust Monitoring, Reporting, and Verification (MRV) framework. To successfully quantify the amount of carbon dioxide removed by ERW, MRV must be accurate, precise, and cost-effective. Here, we outline a mass-balance-based method where analysis of the chemical composition of soil samples is used to track in-situ silicate rock weathering. We show that signal-to-noise issues of in-situ soil analysis can be mitigated by using isotope-dilution mass spectrometry to reduce analytical error. We implement a proof-of-concept experiment demonstrating the method in controlled mesocosms. In our experiment, basalt rock feedstock is added to soil columns containing the cereal crop Sorghum bicolor at a rate equivalent to 50 t ha$^{-1}$. Using our approach, we calculate rock weathering corresponding to an average initial CDR value of 1.44 +/- 0.27 tCO$_2$eq ha$^{-1}$ from our experiments after 235 days, within error of an independent estimate calculated using conventional elemental budgeting of reaction products. Our method provides a robust time-integrated estimate of initial CDR, to feed into models that track and validate large-scale carbon removal through ERW.
0801.1480
Sumedha
Sumedha, Martin Weigt
A thermodynamic model for agglomeration of DNA-looping proteins
12 pages, 5 figures, to app. in JSTAT
J. Stat. Mech. (2008) P11005
10.1088/1742-5468/2008/11/P11005
null
q-bio.SC cond-mat.soft cond-mat.stat-mech physics.bio-ph q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a thermodynamic mechanism for the formation of transcriptional foci via the joint agglomeration of DNA-looping proteins and protein-binding domains on DNA: The competition between the gain in protein-DNA binding free energy and the entropy loss due to DNA looping is argued to result in an effective attraction between loops. A mean-field approximation can be described analytically via a mapping to a restricted random-graph ensemble having local degree constraints and global constraints on the number of connected components. It shows the emergence of protein clusters containing a finite fraction of all looping proteins. If the entropy loss due to a single DNA loop is high enough, this transition is found to be of first order.
[ { "created": "Wed, 9 Jan 2008 18:21:04 GMT", "version": "v1" }, { "created": "Mon, 20 Oct 2008 16:37:38 GMT", "version": "v2" } ]
2008-11-26
[ [ "Sumedha", "", "" ], [ "Weigt", "Martin", "" ] ]
In this paper, we propose a thermodynamic mechanism for the formation of transcriptional foci via the joint agglomeration of DNA-looping proteins and protein-binding domains on DNA: The competition between the gain in protein-DNA binding free energy and the entropy loss due to DNA looping is argued to result in an effective attraction between loops. A mean-field approximation can be described analytically via a mapping to a restricted random-graph ensemble having local degree constraints and global constraints on the number of connected components. It shows the emergence of protein clusters containing a finite fraction of all looping proteins. If the entropy loss due to a single DNA loop is high enough, this transition is found to be of first order.
1606.04057
Sarah Marzen
Sarah Marzen and Simon DeDeo
Weak universality in sensory tradeoffs
null
Phys. Rev. E 94, 060101 (2016)
10.1103/PhysRevE.94.060101
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For many organisms, the number of sensory neurons is largely determined during development, before strong environmental cues are present. This is despite the fact that environments can fluctuate drastically both from generation to generation and within an organism's lifetime. How can organisms get by by hard-coding the number of sensory neurons? We approach this question using rate-distortion theory. A combination of simulation and theory suggests that when environments are large, the rate-distortion function---a proxy for material costs, timing delays, and energy requirements---depends only on coarse-grained environmental statistics that are expected to change on evolutionary, rather than ontogenetic, timescales.
[ { "created": "Mon, 13 Jun 2016 18:18:11 GMT", "version": "v1" } ]
2018-10-17
[ [ "Marzen", "Sarah", "" ], [ "DeDeo", "Simon", "" ] ]
For many organisms, the number of sensory neurons is largely determined during development, before strong environmental cues are present. This is despite the fact that environments can fluctuate drastically both from generation to generation and within an organism's lifetime. How can organisms get by by hard-coding the number of sensory neurons? We approach this question using rate-distortion theory. A combination of simulation and theory suggests that when environments are large, the rate-distortion function---a proxy for material costs, timing delays, and energy requirements---depends only on coarse-grained environmental statistics that are expected to change on evolutionary, rather than ontogenetic, timescales.
q-bio/0601015
German Andres Enciso
G.A. Enciso
A dichotomy for a class of cyclic delay systems
19 pages, two figures
null
null
null
q-bio.MN
null
Two complementary analyses of a cyclic negative feedback system with delay are considered in this paper. The first analysis applies the work by Sontag, Angeli, Enciso and others regarding monotone control systems under negative feedback, and it implies the global attractiveness towards an equilibrium for arbitrary delays. The second one concerns the existence of a Hopf bifurcation on the delay parameter, and it implies the existence of nonconstant periodic solutions for special delay values. A key idea is the use of the Schwarzian derivative, and its application for the study of Michaelis-Menten nonlinearities. The positive feedback case is also addressed.
[ { "created": "Wed, 11 Jan 2006 22:01:36 GMT", "version": "v1" } ]
2007-05-23
[ [ "Enciso", "G. A.", "" ] ]
Two complementary analyses of a cyclic negative feedback system with delay are considered in this paper. The first analysis applies the work by Sontag, Angeli, Enciso and others regarding monotone control systems under negative feedback, and it implies the global attractiveness towards an equilibrium for arbitrary delays. The second one concerns the existence of a Hopf bifurcation on the delay parameter, and it implies the existence of nonconstant periodic solutions for special delay values. A key idea is the use of the Schwarzian derivative, and its application for the study of Michaelis-Menten nonlinearities. The positive feedback case is also addressed.
1906.05166
Antonio de Candia
S. Scarpetta, I. Apicella, L. Minati, A. de Candia
Hysteresis, neural avalanches and critical behaviour near a first-order transition of a spiking neural network
null
Phys. Rev. E 97, 062305 (2018)
10.1103/PhysRevE.97.062305
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many experimental results, both in-vivo and in-vitro, support the idea that the brain cortex operates near a critical point, and at the same time works as a reservoir of precise spatio-temporal patterns. However, the mechanism underlying these observations is still not clear. In this paper we introduce a model which combines both these features, showing that scale-free avalanches are the signature of a system posed near the spinodal line of a first order transition, with many spatio-temporal patterns stored as dynamical metastable attractors. Specifically, we studied a network of leaky integrate-and-fire neurons, whose connections are the result of the learning of multiple spatio-temporal dynamical patterns, each with a randomly chosen ordering of the neurons. We found that the network shows a first order transition between a low spiking rate disordered state (down), and a high rate state characterized by the emergence of collective activity and the replay of one of the stored patterns (up). The transition is characterized by hysteresis, or alternation of up and down states, depending on the lifetime of the metastable states. In both cases, critical features and neural avalanches are observed. Notably, critical phenomena occur at the edge of a discontinuous phase transition, as recently observed in a network of glow lamps.
[ { "created": "Wed, 12 Jun 2019 14:26:43 GMT", "version": "v1" } ]
2019-06-14
[ [ "Scarpetta", "S.", "" ], [ "Apicella", "I.", "" ], [ "Minati", "L.", "" ], [ "de Candia", "A.", "" ] ]
Many experimental results, both in-vivo and in-vitro, support the idea that the brain cortex operates near a critical point, and at the same time works as a reservoir of precise spatio-temporal patterns. However, the mechanism underlying these observations is still not clear. In this paper we introduce a model which combines both these features, showing that scale-free avalanches are the signature of a system posed near the spinodal line of a first order transition, with many spatio-temporal patterns stored as dynamical metastable attractors. Specifically, we studied a network of leaky integrate-and-fire neurons, whose connections are the result of the learning of multiple spatio-temporal dynamical patterns, each with a randomly chosen ordering of the neurons. We found that the network shows a first order transition between a low spiking rate disordered state (down), and a high rate state characterized by the emergence of collective activity and the replay of one of the stored patterns (up). The transition is characterized by hysteresis, or alternation of up and down states, depending on the lifetime of the metastable states. In both cases, critical features and neural avalanches are observed. Notably, critical phenomena occur at the edge of a discontinuous phase transition, as recently observed in a network of glow lamps.
0802.1667
Irene Giardina Dr
Michele Ballerini, Nicola Cabibbo, Raphael Candelier, Andrea Cavagna, Evaristo Cisbani, Irene Giardina, Alberto Orlandi, Giorgio Parisi, Andrea Procaccini, Massimiliano Viale, Vladimir Zdravkovic
An empirical study of large, naturally occurring starling flocks: a benchmark in collective animal behaviour
To be published in Animal Behaviour
null
null
null
q-bio.QM cond-mat.stat-mech q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bird flocking is a striking example of collective animal behaviour. A vivid illustration of this phenomenon is provided by the aerial display of vast flocks of starlings gathering at dusk over the roost and swirling with extraordinary spatial coherence. Both the evolutionary justification and the mechanistic laws of flocking are poorly understood, arguably because of a lack of data on large flocks. Here, we report a quantitative study of aerial display. We measured the individual three-dimensional positions in compact flocks of up to 2700 birds. We investigated the main features of the flock as a whole - shape, movement, density and structure - and discuss these as emergent attributes of the grouping phenomenon. We find that flocks are relatively thin, with variable sizes, but constant proportions. They tend to slide parallel to the ground and, during turns, their orientation changes with respect to the direction of motion. Individual birds keep a minimum distance from each other that is comparable to their wingspan. The density within the aggregations is non-homogeneous, as birds are packed more tightly at the border compared to the centre of the flock. These results constitute the first set of large-scale data on three-dimensional animal aggregations. Current models and theories of collective animal behaviour can now be tested against these results.
[ { "created": "Tue, 12 Feb 2008 15:13:14 GMT", "version": "v1" } ]
2008-02-19
[ [ "Ballerini", "Michele", "" ], [ "Cabibbo", "Nicola", "" ], [ "Candelier", "Raphael", "" ], [ "Cavagna", "Andrea", "" ], [ "Cisbani", "Evaristo", "" ], [ "Giardina", "Irene", "" ], [ "Orlandi", "Alberto", "" ], [ "Parisi", "Giorgio", "" ], [ "Procaccini", "Andrea", "" ], [ "Viale", "Massimiliano", "" ], [ "Zdravkovic", "Vladimir", "" ] ]
Bird flocking is a striking example of collective animal behaviour. A vivid illustration of this phenomenon is provided by the aerial display of vast flocks of starlings gathering at dusk over the roost and swirling with extraordinary spatial coherence. Both the evolutionary justification and the mechanistic laws of flocking are poorly understood, arguably because of a lack of data on large flocks. Here, we report a quantitative study of aerial display. We measured the individual three-dimensional positions in compact flocks of up to 2700 birds. We investigated the main features of the flock as a whole - shape, movement, density and structure - and discuss these as emergent attributes of the grouping phenomenon. We find that flocks are relatively thin, with variable sizes, but constant proportions. They tend to slide parallel to the ground and, during turns, their orientation changes with respect to the direction of motion. Individual birds keep a minimum distance from each other that is comparable to their wingspan. The density within the aggregations is non-homogeneous, as birds are packed more tightly at the border compared to the centre of the flock. These results constitute the first set of large-scale data on three-dimensional animal aggregations. Current models and theories of collective animal behaviour can now be tested against these results.
2307.07444
Yuji Hirono
Yuji Hirono, Ankit Gupta, Mustafa Khammash
Complete characterization of robust perfect adaptation in biochemical reaction networks
65 pages, 16 figures
null
null
KUNS-2973, RIKEN-iTHEMS-Report-23
q-bio.MN cond-mat.stat-mech physics.bio-ph physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Perfect adaptation is a phenomenon whereby the output variables of a system can maintain certain values despite external disturbances. Robust perfect adaptation (RPA) refers to an adaptation property that does not require fine-tuning of system parameters. RPA plays a vital role for the survival of living systems in unpredictable environments. However, complex interaction patterns in biochemical systems pose a significant challenge in identifying RPA and associated regulatory mechanisms. The goal of this paper is to present a novel approach for finding all RPA properties that are realized for a generic choice of kinetics for general deterministic chemical reaction systems. This is accomplished by proving that an RPA property is represented by a subnetwork with certain topological features. This connection is exploited to show that these structures generate all kinetics-independent RPA properties, allowing us to systematically identify all RPA properties by enumerating these subnetworks. An efficient method is developed for this enumeration, and we provide a computational package for this purpose. We pinpoint the integral feedback controllers that work in concert to realize each RPA property, casting our results into the familiar control-theoretic paradigm of the Internal Model Principle. We further generalize the regulation problem to the multi-output scenario where the target values belong to a manifold of nonzero dimension, and provide a sufficient condition for this. The present work significantly advances our understanding of regulatory mechanisms that lead to RPA in endogenous biochemical systems, and it also provides rational design principles for synthetic controllers. The present results indicate that an RPA property is essentially equivalent to the existence of a "topological invariant", which is an instance of what we call the "Robust Adaptation is Topological"(RAT) principle.
[ { "created": "Fri, 14 Jul 2023 16:06:48 GMT", "version": "v1" } ]
2023-07-17
[ [ "Hirono", "Yuji", "" ], [ "Gupta", "Ankit", "" ], [ "Khammash", "Mustafa", "" ] ]
Perfect adaptation is a phenomenon whereby the output variables of a system can maintain certain values despite external disturbances. Robust perfect adaptation (RPA) refers to an adaptation property that does not require fine-tuning of system parameters. RPA plays a vital role for the survival of living systems in unpredictable environments. However, complex interaction patterns in biochemical systems pose a significant challenge in identifying RPA and associated regulatory mechanisms. The goal of this paper is to present a novel approach for finding all RPA properties that are realized for a generic choice of kinetics for general deterministic chemical reaction systems. This is accomplished by proving that an RPA property is represented by a subnetwork with certain topological features. This connection is exploited to show that these structures generate all kinetics-independent RPA properties, allowing us to systematically identify all RPA properties by enumerating these subnetworks. An efficient method is developed for this enumeration, and we provide a computational package for this purpose. We pinpoint the integral feedback controllers that work in concert to realize each RPA property, casting our results into the familiar control-theoretic paradigm of the Internal Model Principle. We further generalize the regulation problem to the multi-output scenario where the target values belong to a manifold of nonzero dimension, and provide a sufficient condition for this. The present work significantly advances our understanding of regulatory mechanisms that lead to RPA in endogenous biochemical systems, and it also provides rational design principles for synthetic controllers. The present results indicate that an RPA property is essentially equivalent to the existence of a "topological invariant", which is an instance of what we call the "Robust Adaptation is Topological"(RAT) principle.
1706.05932
Alessandro Fontana
Alessandro Fontana
A deep learning-inspired model of the hippocampus as storage device of the brain extended dataset
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The standard model of memory consolidation foresees that memories are initially recorded in the hippocampus, while features that capture higher-level generalisations of data are created in the cortex, where they are stored for a possibly indefinite period of time. Computer scientists have sought inspiration from nature to build machines that exhibit some of the remarkable properties present in biological systems. One of the results of this effort is represented by artificial neural networks, a class of algorithms that represent the state of the art in many artificial intelligence applications. In this work, we reverse the inspiration flow and use the experience obtained from neural networks to gain insight into the design of brain architecture and the functioning of memory. Our starting observation is that neural networks learn from data and need to be exposed to each data record many times during learning: this requires the storage of the entire dataset in computer memory. Our thesis is that the same holds true for the brain and the main role of the hippocampus is to store the "brain dataset", from which high-level features are learned and encoded in cortical neurons.
[ { "created": "Fri, 26 May 2017 15:21:43 GMT", "version": "v1" } ]
2017-06-20
[ [ "Fontana", "Alessandro", "" ] ]
The standard model of memory consolidation foresees that memories are initially recorded in the hippocampus, while features that capture higher-level generalisations of data are created in the cortex, where they are stored for a possibly indefinite period of time. Computer scientists have sought inspiration from nature to build machines that exhibit some of the remarkable properties present in biological systems. One of the results of this effort is represented by artificial neural networks, a class of algorithms that represent the state of the art in many artificial intelligence applications. In this work, we reverse the inspiration flow and use the experience obtained from neural networks to gain insight into the design of brain architecture and the functioning of memory. Our starting observation is that neural networks learn from data and need to be exposed to each data record many times during learning: this requires the storage of the entire dataset in computer memory. Our thesis is that the same holds true for the brain and the main role of the hippocampus is to store the "brain dataset", from which high-level features are learned and encoded in cortical neurons.
2010.12067
Yuto Omae
Yuto Omae, Jun Toyotani, Kazuyuki Hara, Yasuhiro Gon, Hirotaka Takahashi
A Calculation Model for Estimating Effect of COVID-19 Contact-Confirming Application (COCOA) on Decreasing Infectors
4 pages, 3 figures
Mathematical Biosciences and Engineering, 2021, Volume 18, Issue 5, pp.6506-6526
10.3934/mbe.2021323
null
q-bio.OT cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As of 2020, COVID-19 is spreading in the world. In Japan, the Ministry of Health, Labor and Welfare developed the COVID-19 Contact-Confirming Application (COCOA). Research examining the effect of COCOA is still insufficient. We develop a mathematical model to examine the effect of COCOA and present the results of our analysis.
[ { "created": "Sat, 17 Oct 2020 09:32:39 GMT", "version": "v1" } ]
2021-08-10
[ [ "Omae", "Yuto", "" ], [ "Toyotani", "Jun", "" ], [ "Hara", "Kazuyuki", "" ], [ "Gon", "Yasuhiro", "" ], [ "Takahashi", "Hirotaka", "" ] ]
As of 2020, COVID-19 is spreading in the world. In Japan, the Ministry of Health, Labor and Welfare developed the COVID-19 Contact-Confirming Application (COCOA). Research examining the effect of COCOA is still insufficient. We develop a mathematical model to examine the effect of COCOA and present the results of our analysis.
1911.06425
Anne-Florence Bitbol
Lo\"ic Marrec and Anne-Florence Bitbol
Resist or perish: fate of a microbial population subjected to a periodic presence of antimicrobial
36 pages, 16 figures
PLoS Comput. Biol.16(4): e1007798 (2020)
10.1371/journal.pcbi.1007798
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolution of antimicrobial resistance can be strongly affected by variations of antimicrobial concentration. Here, we study the impact of periodic alternations of absence and presence of antimicrobial on resistance evolution in a microbial population, using a stochastic model that includes variations of both population composition and size, and fully incorporates stochastic population extinctions. We show that fast alternations of presence and absence of antimicrobial are inefficient to eradicate the microbial population and strongly favor the establishment of resistance, unless the antimicrobial increases enough the death rate. We further demonstrate that if the period of alternations is longer than a threshold value, the microbial population goes extinct upon the first addition of antimicrobial, if it is not rescued by resistance. We express the probability that the population is eradicated upon the first addition of antimicrobial, assuming rare mutations. Rescue by resistance can happen either if resistant mutants preexist, or if they appear after antimicrobial is added to the environment. Importantly, the latter case is fully prevented by perfect biostatic antimicrobials that completely stop division of sensitive microorganisms. By contrast, we show that the parameter regime where treatment is efficient is larger for biocidal drugs than for biostatic drugs. This sheds light on the respective merits of different antimicrobial modes of action.
[ { "created": "Fri, 15 Nov 2019 00:20:35 GMT", "version": "v1" }, { "created": "Tue, 24 Mar 2020 15:46:02 GMT", "version": "v2" } ]
2020-04-14
[ [ "Marrec", "Loïc", "" ], [ "Bitbol", "Anne-Florence", "" ] ]
The evolution of antimicrobial resistance can be strongly affected by variations of antimicrobial concentration. Here, we study the impact of periodic alternations of absence and presence of antimicrobial on resistance evolution in a microbial population, using a stochastic model that includes variations of both population composition and size, and fully incorporates stochastic population extinctions. We show that fast alternations of presence and absence of antimicrobial are inefficient to eradicate the microbial population and strongly favor the establishment of resistance, unless the antimicrobial increases enough the death rate. We further demonstrate that if the period of alternations is longer than a threshold value, the microbial population goes extinct upon the first addition of antimicrobial, if it is not rescued by resistance. We express the probability that the population is eradicated upon the first addition of antimicrobial, assuming rare mutations. Rescue by resistance can happen either if resistant mutants preexist, or if they appear after antimicrobial is added to the environment. Importantly, the latter case is fully prevented by perfect biostatic antimicrobials that completely stop division of sensitive microorganisms. By contrast, we show that the parameter regime where treatment is efficient is larger for biocidal drugs than for biostatic drugs. This sheds light on the respective merits of different antimicrobial modes of action.
2211.09062
Mohamed El Khalifi
Mohamed El Khalifi and Tom Britton
Extending SIRS epidemics to allow for gradual waning of immunity
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
SIRS epidemic models assume that individual immunity (from infection and vaccination) wanes in one big leap, from complete immunity to complete susceptibility. For many diseases immunity on the contrary wanes gradually, something that has become even more evident during the COVID-19 pandemic, where also recently infected individuals have a reinfection risk, and where booster vaccines are given to increase immunity. This paper considers an epidemic model allowing for such gradual waning of immunity (either linear or exponential waning), thereby extending SIRS epidemics, and also incorporates vaccination. The two versions for gradual waning of immunity are compared with the classic SIRS epidemic, where the three models are calibrated by having the same \emph{average cumulative immunity}. All models are shown to have identical basic reproduction number $R_0$. However, if no prevention is put in place, the exponential waning model has the highest prevalence and the classic SIRS model the lowest. Similarly, the amount of vaccine supply needed to reach and maintain herd immunity is highest for the model with exponential decay of immunity and lowest for the classic SIRS model. Consequently, if the truth lies close to exponential (or linear) decay of immunity, expressions based on the SIRS epidemic will underestimate the endemic level and the critical vaccine supply will not be sufficient to reach and maintain herd immunity. For parameter choices fitting to COVID-19, the critical amount of vaccine supply is about 50% higher if immunity wanes linearly, and more than 150% higher when immunity wanes exponentially, as compared to the classic SIRS epidemic model.
[ { "created": "Wed, 16 Nov 2022 17:22:55 GMT", "version": "v1" } ]
2022-11-17
[ [ "Khalifi", "Mohamed El", "" ], [ "Britton", "Tom", "" ] ]
SIRS epidemic models assume that individual immunity (from infection and vaccination) wanes in one big leap, from complete immunity to complete susceptibility. For many diseases immunity on the contrary wanes gradually, something that has become even more evident during the COVID-19 pandemic, where also recently infected individuals have a reinfection risk, and where booster vaccines are given to increase immunity. This paper considers an epidemic model allowing for such gradual waning of immunity (either linear or exponential waning), thereby extending SIRS epidemics, and also incorporates vaccination. The two versions for gradual waning of immunity are compared with the classic SIRS epidemic, where the three models are calibrated by having the same \emph{average cumulative immunity}. All models are shown to have identical basic reproduction number $R_0$. However, if no prevention is put in place, the exponential waning model has the highest prevalence and the classic SIRS model the lowest. Similarly, the amount of vaccine supply needed to reach and maintain herd immunity is highest for the model with exponential decay of immunity and lowest for the classic SIRS model. Consequently, if the truth lies close to exponential (or linear) decay of immunity, expressions based on the SIRS epidemic will underestimate the endemic level and the critical vaccine supply will not be sufficient to reach and maintain herd immunity. For parameter choices fitting to COVID-19, the critical amount of vaccine supply is about 50% higher if immunity wanes linearly, and more than 150% higher when immunity wanes exponentially, as compared to the classic SIRS epidemic model.
q-bio/0407004
Emmanuel Tannenbaum
Emmanuel Tannenbaum, James L. Sherley, Eugene I. Shakhnovich
Semiconservative replication in the quasispecies model II: Generalization to arbitrary lesion repair probabilities
18 pages, 3 figures. To be submitted to Physical Review E
null
null
null
q-bio.PE q-bio.GN
null
This paper extends the semiconservative quasispecies equations to account for arbitrary post-replication lesion repair efficiency. Such an extension could be an important tool for understanding processes such as cancer development and stem cell growth. Starting from the quasispecies dynamics over the space of genomes, we derive an equivalent dynamics over the space of ordered sequence pairs. From this set of equations, we are able to derive the infinite sequence length form of the dynamics for a class of ``master-genome''-based fitness landscapes. We use these equations to solve for a ``generalized'' single-fitness-peak landscape, where the master genome can sustain a maximum number of lesions and remain viable. The central pattern that emerges from our studies is that imperfect lesion repair often leads to increased mutational robustness over semiconservative replication with completely efficient lesion repair. The reason for this is that imperfect lesion repair breaks some of the correlation between the parent and daughter strands, thereby preventing replication errors from destroying the information in the original genome. The result is a delayed error catastrophe over that expected from the original semiconservative quasispecies model. In particular, we show that when only one of the strands is necessary for conferring viability, then, when lesion repair is turned off, a semiconservatively replicating system becomes an effectively conservatively replicating one.
[ { "created": "Fri, 2 Jul 2004 21:40:28 GMT", "version": "v1" } ]
2007-05-23
[ [ "Tannenbaum", "Emmanuel", "" ], [ "Sherley", "James L.", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
This paper extends the semiconservative quasispecies equations to account for arbitrary post-replication lesion repair efficiency. Such an extension could be an important tool for understanding processes such as cancer development and stem cell growth. Starting from the quasispecies dynamics over the space of genomes, we derive an equivalent dynamics over the space of ordered sequence pairs. From this set of equations, we are able to derive the infinite sequence length form of the dynamics for a class of ``master-genome''-based fitness landscapes. We use these equations to solve for a ``generalized'' single-fitness-peak landscape, where the master genome can sustain a maximum number of lesions and remain viable. The central pattern that emerges from our studies is that imperfect lesion repair often leads to increased mutational robustness over semiconservative replication with completely efficient lesion repair. The reason for this is that imperfect lesion repair breaks some of the correlation between the parent and daughter strands, thereby preventing replication errors from destroying the information in the original genome. The result is a delayed error catastrophe over that expected from the original semiconservative quasispecies model. In particular, we show that when only one of the strands is necessary for conferring viability, then, when lesion repair is turned off, a semiconservatively replicating system becomes an effectively conservatively replicating one.
1408.3321
Donald Forsdyke Dr.
Donald R. Forsdyke
Lymphocyte repertoire selection and intracellular self/not-self discrimination: historical overview
A 24 page review with 95 references, submitted for formal publication to Immunology and Cell Biology (11th August 2014)
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Immunological self/not-self discrimination is conventionally seen as an extracellular event, involving interactions between receptors on T cells pre-educated to discriminate, and peptides bound to major histocompatibility complex proteins (pMHCs). Mechanisms by which not-self peptides might first be sorted intracellularly to distinguish them from the vast excess of self-peptides have long been called for. Recent demonstrations of endogenous peptide-specific clustering of pMHCs on membrane rafts are indicative of intracellular enrichment before surface display. The clustering could follow the specific aggregation of a foreign protein that exceeded its solubility limit in the crowded intracellular environment. Predominantly entropy-driven, this homoaggregation would co-localize identical peptides, so facilitating their collective presentation. Concentrations of self-proteins are fine-tuned over evolutionary time to avoid this. Disparate observations, such as pyrexia, and female susceptibility to autoimmune disease, can be explained in terms of the need to cosegregate cognate pMHC complexes internally prior to extracellular display.
[ { "created": "Thu, 14 Aug 2014 15:50:46 GMT", "version": "v1" } ]
2014-08-15
[ [ "Forsdyke", "Donald R.", "" ] ]
Immunological self/not-self discrimination is conventionally seen as an extracellular event, involving interactions between receptors on T cells pre-educated to discriminate, and peptides bound to major histocompatibility complex proteins (pMHCs). Mechanisms by which not-self peptides might first be sorted intracellularly to distinguish them from the vast excess of self-peptides have long been called for. Recent demonstrations of endogenous peptide-specific clustering of pMHCs on membrane rafts are indicative of intracellular enrichment before surface display. The clustering could follow the specific aggregation of a foreign protein that exceeded its solubility limit in the crowded intracellular environment. Predominantly entropy-driven, this homoaggregation would co-localize identical peptides, so facilitating their collective presentation. Concentrations of self-proteins are fine-tuned over evolutionary time to avoid this. Disparate observations, such as pyrexia, and female susceptibility to autoimmune disease, can be explained in terms of the need to cosegregate cognate pMHC complexes internally prior to extracellular display.
q-bio/0406005
Anders Eriksson
Anders Eriksson, Kristian Lindgren and Torbj\"orn Lundh
War of attrition with implicit time cost
Accepted for publication in Journal of Theoretical Biology
Journal of Theoretical Biology, 2004, vol 230(3), pp 319-332
10.1016/j.jtbi.2004.05.016
null
q-bio.PE
null
In the game-theoretic model war of attrition, players are subject to an explicit cost proportional to the duration of contests. We construct a model where the time cost is not explicitly given, but instead depends implicitly on the strategies of the whole population. We identify and analyse the underlying mechanisms responsible for the implicit time cost. Each player participates in a series of games, where those prepared to wait longer win with higher certainty but play less frequently. The model is characterised by the ratio of the winner's score to the loser's score, in a single game. The fitness of a player is determined by the accumulated score from the games played during a generation. We derive the stationary distribution of strategies under the replicator dynamics. When the score ratio is high, we find that the stationary distribution is unstable, with respect to both evolutionary and dynamical stability, and the dynamics converge to a limit cycle. When the ratio is low, the dynamics converge to the stationary distribution. For an intermediate interval of the ratio, the distribution is dynamically but not evolutionarily stable. Finally, the implications of our results for previous models based on the war of attrition are discussed.
[ { "created": "Wed, 2 Jun 2004 17:50:28 GMT", "version": "v1" } ]
2007-05-23
[ [ "Eriksson", "Anders", "" ], [ "Lindgren", "Kristian", "" ], [ "Lundh", "Torbjörn", "" ] ]
In the game-theoretic model war of attrition, players are subject to an explicit cost proportional to the duration of contests. We construct a model where the time cost is not explicitly given, but instead depends implicitly on the strategies of the whole population. We identify and analyse the underlying mechanisms responsible for the implicit time cost. Each player participates in a series of games, where those prepared to wait longer win with higher certainty but play less frequently. The model is characterised by the ratio of the winner's score to the loser's score, in a single game. The fitness of a player is determined by the accumulated score from the games played during a generation. We derive the stationary distribution of strategies under the replicator dynamics. When the score ratio is high, we find that the stationary distribution is unstable, with respect to both evolutionary and dynamical stability, and the dynamics converge to a limit cycle. When the ratio is low, the dynamics converge to the stationary distribution. For an intermediate interval of the ratio, the distribution is dynamically but not evolutionarily stable. Finally, the implications of our results for previous models based on the war of attrition are discussed.
2307.14372
Kabir Mustapha Umar Dr
Rehema Iddi Mrutu, Kabir Mustapha Umar, Adnan Abdulhamid, Morris Agaba and Abdussamad Muhammad Abdussamad
Microbial Engineering to Mitigate Methane Emissions in Ruminant Livestock -- A Review
null
null
null
null
q-bio.QM
http://creativecommons.org/publicdomain/zero/1.0/
The most recent and promising strategies for mitigating methane emissions in ruminants are reviewed, highlighting the potential of reductive acetogenesis as a viable alternative to methanogenesis. The emergence of Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) technology, and its exceptional precision in genome editing, further enhances the prospects of exploring this avenue. Indeed, research in ruminant methane mitigation has been extensive, and over the years has resulted in the development of a wide variety of mitigation strategies. There is no doubt that the concepts of meat alternatives like lab-meat, microbial proteins and plant proteins may produce equivalent emissions. Reducing methane intensity through breeding and diet has been limited by our inability to phenotype ruminants in a high-throughput manner and the intensification of feed-food competition. Although chemical inhibitors have demonstrated effectiveness in manipulating the rumen microbiota to reduce net emissions, their success is constrained in terms of duration and feasibility in grazing systems. Progress in making acetogenesis the dominant hydrogen sink in the rumen has been hampered by the thermodynamic cost of the reaction and the limited hydrogen levels in the rumen environment. We believe that CRISPR may allow the dominance of acetogenesis by converting methanogens into acetogens. We propose Methanobrevibacter ruminantium to be targeted because it is the dominant methane producer in the rumen.
[ { "created": "Tue, 25 Jul 2023 14:09:26 GMT", "version": "v1" } ]
2023-07-28
[ [ "Mrutu", "Rehema Iddi", "" ], [ "Umar", "Kabir Mustapha", "" ], [ "Abdulhamid", "Adnan", "" ], [ "Agaba", "Morris", "" ], [ "Abdussamad", "Abdussamad Muhammad", "" ] ]
The most recent and promising strategies for mitigating methane emissions in ruminants are reviewed, highlighting the potential of reductive acetogenesis as a viable alternative to methanogenesis. The emergence of Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) technology, and its exceptional precision in genome editing, further enhances the prospects of exploring this avenue. Indeed, research in ruminant methane mitigation has been extensive, and over the years has resulted in the development of a wide variety of mitigation strategies. There is no doubt that the concepts of meat alternatives like lab-meat, microbial proteins and plant proteins may produce equivalent emissions. Reducing methane intensity through breeding and diet has been limited by our inability to phenotype ruminants in a high-throughput manner and the intensification of feed-food competition. Although chemical inhibitors have demonstrated effectiveness in manipulating the rumen microbiota to reduce net emissions, their success is constrained in terms of duration and feasibility in grazing systems. Progress in making acetogenesis the dominant hydrogen sink in the rumen has been hampered by the thermodynamic cost of the reaction and the limited hydrogen levels in the rumen environment. We believe that CRISPR may allow the dominance of acetogenesis by converting methanogens into acetogens. We propose Methanobrevibacter ruminantium to be targeted because it is the dominant methane producer in the rumen.
2304.04777
Hemant Kumar Professor
Kamal Kishor Rajak, Pavan Pahilani, Harsh Patel, Bhavtosh Kikani, Rucha Desai, Hemant Kumar
Green synthesis of silver nanoparticles using Curcuma longa flower extract and antibacterial activity
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Silver nanoparticles (AgNP's) possess inherent biological potential that has prompted an alternative, eco-friendly, sustainable approach to "Green Synthesis." In the present study, we synthesized Green Silver Nanoparticles (GAgNP's) using Curcuma longa L. (C. longa) flower extract as a reducing and capping agent. The synthesized GAgNP's were characterized using UV-Visible spectroscopy, X-ray diffraction (XRD), and High-resolution transmission electron microscopy (HR-TEM), which confirmed their homogeneity and physical characteristics. The GAgNP's were found to contain crystalline silver through XRD, and the particles were confirmed to be homogeneous and spherical with a size of approximately 5 nm, as evidenced by UV-Visible spectroscopy, XRD, and HR-TEM. In addition, the biological potential of GAgNP's was evaluated for their antibacterial activities. GAgNP's showed significant activity and formed different sizes of inhibition zones against all selected bacteria: Mycobacterium smegmatis (M. smegmatis) (26 mm), Mycobacterium phlei (M. phlei) and Staphylococcus aureus (S. aureus) (22 mm), Staphylococcus epidermidis (S. epidermidis) and Klebsiella pneumoniae (K. pneumoniae) (18 mm), and Escherichia coli (E. coli) (13 mm). The MIC value of GAgNP's was found to be between 625 ug/mL and 39.06 ug/mL for the different microbes tested. With further research, the green synthesis of GAgNP's using C. longa flower extracts could lead to the development of effective antibacterial treatments in the medical field.
[ { "created": "Mon, 10 Apr 2023 16:42:57 GMT", "version": "v1" } ]
2023-04-12
[ [ "Rajak", "Kamal Kishor", "" ], [ "Pahilani", "Pavan", "" ], [ "Patel", "Harsh", "" ], [ "Kikani", "Bhavtosh", "" ], [ "Desai", "Rucha", "" ], [ "Kumar", "Hemant", "" ] ]
Silver nanoparticles (AgNP's) possess inherent biological potentials that have obliged an alternative, eco-friendly, sustainable approach to "Green Synthesis." In the present study, we synthesized Green Silver Nanoparticles (GAgNP's) using Curcuma longa L. (C. longa) flower extract as a reducing and capping agent. The synthesized GAgNP's were characterized using UV-Visible spectroscopy, X-ray diffraction (XRD), and High-resolution transmission electron microscopy (HR-TEM), which confirmed their homogeneity and physical characteristics. The GAgNP's were found to contain crystalline silver through XRD, and the particles were confirmed to be homogeneous and spherical with a size of approximately 5 nm, as evidenced by UV-Visible spectroscopy, XRD, and HR-TEM. In addition, the biological potential of GAgNP's was evaluated for their antibacterial activities. GAgNP's showed significant activity and formed different sizes of inhibition zones against all selected bacteria: Mycobacterium smegmatis (M. smegmatis) (26 mm), Mycobacterium phlei (M. phlei), and Staphylococcus aureus (S. aureus) (22 mm), Staphylococcus epidermidis (S. epidermidis) and Klebsiella pneumoniae (K. pneumoniae) (18 mm), and Escherichia coli (E. coli) (13 mm). The MIC value of GAgNP's was found to be between 625 ug/mL-39.06 ug/mL for different microbes tested. With further research, the green synthesis of GAgNP's using C. longa flower extracts could lead to the development of effective antibacterial treatments in the medical field.
1610.07549
Filippo Disanto
Filippo Disanto and Noah A. Rosenberg
Enumeration of ancestral configurations for matching gene trees and species trees
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a gene tree and a species tree, ancestral configurations represent the combinatorially distinct sets of gene lineages that can reach a given node of the species tree. They have been introduced as a data structure for use in the recursive computation of the conditional probability under the multispecies coalescent model of a gene tree topology given a species tree, the cost of this computation being affected by the number of ancestral configurations of the gene tree in the species tree. For matching gene trees and species trees, we obtain enumerative results on ancestral configurations. We study ancestral configurations in balanced and unbalanced families of trees determined by a given seed tree, showing that for seed trees with more than one taxon, the number of ancestral configurations increases for both families exponentially in the number of taxa $n$. For fixed $n$, the maximal number of ancestral configurations tabulated at the species tree root node and the largest number of labeled histories possible for a labeled topology occur for trees with precisely the same unlabeled shape. For ancestral configurations at the root, the maximum increases with $k_0^n$, where $k_0 \approx 1.5028$ is a quadratic recurrence constant. Under a uniform distribution over the set of labeled trees of given size, the mean number of root ancestral configurations grows with $\sqrt{3/2}(4/3)^n$ and the variance with approximately $1.4048(1.8215)^n$. The results provide a contribution to the combinatorial study of gene trees and species trees.
[ { "created": "Mon, 24 Oct 2016 18:59:27 GMT", "version": "v1" } ]
2016-10-25
[ [ "Disanto", "Filippo", "" ], [ "Rosenberg", "Noah A.", "" ] ]
Given a gene tree and a species tree, ancestral configurations represent the combinatorially distinct sets of gene lineages that can reach a given node of the species tree. They have been introduced as a data structure for use in the recursive computation of the conditional probability under the multispecies coalescent model of a gene tree topology given a species tree, the cost of this computation being affected by the number of ancestral configurations of the gene tree in the species tree. For matching gene trees and species trees, we obtain enumerative results on ancestral configurations. We study ancestral configurations in balanced and unbalanced families of trees determined by a given seed tree, showing that for seed trees with more than one taxon, the number of ancestral configurations increases for both families exponentially in the number of taxa $n$. For fixed $n$, the maximal number of ancestral configurations tabulated at the species tree root node and the largest number of labeled histories possible for a labeled topology occur for trees with precisely the same unlabeled shape. For ancestral configurations at the root, the maximum increases with $k_0^n$, where $k_0 \approx 1.5028$ is a quadratic recurrence constant. Under a uniform distribution over the set of labeled trees of given size, the mean number of root ancestral configurations grows with $\sqrt{3/2}(4/3)^n$ and the variance with approximately $1.4048(1.8215)^n$. The results provide a contribution to the combinatorial study of gene trees and species trees.
q-bio/0411031
Marek Czachor
Diederik Aerts and Marek Czachor
Abstract DNA-type systems
version accepted in Nonlinearity
Nonlinearity 19, 575-589 (2006)
10.1088/0951-7715/19/3/003
null
q-bio.PE nlin.PS q-bio.QM quant-ph
null
An abstract DNA-type system is defined by a set of nonlinear kinetic equations with polynomial nonlinearities that admit soliton solutions associated with helical geometry. The set of equations allows for two different Lax representations: A von Neumann form and a Darboux-covariant Lax pair. We explain why non-Kolmogorovian probability models occurring in soliton kinetics are naturally associated with chemical reactions. The most general known characterization of soliton kinetic equations is given and a class of explicit soliton solutions is discussed. Switching between open and closed states is a generic behaviour of the helices. The effect does not crucially depend on the order of nonlinearity (i.e. types of reactions), a fact that may explain why simplified models possess properties occurring in realistic systems. We also explain why fluctuations based on Darboux transformations will not destroy the dynamics but only switch between a finite number of helical structures.
[ { "created": "Mon, 15 Nov 2004 19:40:40 GMT", "version": "v1" }, { "created": "Thu, 22 Dec 2005 12:19:12 GMT", "version": "v2" } ]
2009-11-10
[ [ "Aerts", "Diederik", "" ], [ "Czachor", "Marek", "" ] ]
An abstract DNA-type system is defined by a set of nonlinear kinetic equations with polynomial nonlinearities that admit soliton solutions associated with helical geometry. The set of equations allows for two different Lax representations: A von Neumann form and a Darboux-covariant Lax pair. We explain why non-Kolmogorovian probability models occurring in soliton kinetics are naturally associated with chemical reactions. The most general known characterization of soliton kinetic equations is given and a class of explicit soliton solutions is discussed. Switching between open and closed states is a generic behaviour of the helices. The effect does not crucially depend on the order of nonlinearity (i.e. types of reactions), a fact that may explain why simplified models possess properties occurring in realistic systems. We also explain why fluctuations based on Darboux transformations will not destroy the dynamics but only switch between a finite number of helical structures.
1804.03744
W. A. Zuniga-Galindo
W. A. Z\'u\~niga-Galindo
Non-Archimedean Replicator Dynamics and Eigen's Paradox
Some errors and typos were corrected. The introduction was shortened. Some references were deleted
null
10.1088/1751-8121/aaebb1
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new non-Archimedean model of evolutionary dynamics, in which the genomes are represented by p-adic numbers. In this model the genomes have a variable length, not necessarily bounded, in contrast with the classical models where the length is fixed. The time evolution of the concentration of a given genome is controlled by a p-adic evolution equation. This equation depends on a fitness function f and on a mutation measure Q. By choosing a mutation measure of Gibbs type, and by using a p-adic version of the Maynard Smith Ansatz, we show the existence of a threshold function M_{c}(f,Q), such that the long term survival of a genome requires that its length grows faster than M_{c}(f,Q). This implies that Eigen's paradox does not occur if the complexity of genomes grows at the right pace. About twenty years ago, Scheuring and Poole, Jeffares, Penny proposed a hypothesis to explain Eigen's paradox. Our mathematical model shows that this biological hypothesis is feasible, but it requires p-adic analysis instead of real analysis. More exactly, the Darwin-Eigen cycle proposed by Poole et al. takes place if the length of the genomes exceeds M_{c}(f,Q).
[ { "created": "Tue, 10 Apr 2018 22:46:55 GMT", "version": "v1" }, { "created": "Tue, 2 Oct 2018 21:21:00 GMT", "version": "v2" } ]
2018-12-05
[ [ "Zúñiga-Galindo", "W. A.", "" ] ]
We present a new non-Archimedean model of evolutionary dynamics, in which the genomes are represented by p-adic numbers. In this model the genomes have a variable length, not necessarily bounded, in contrast with the classical models where the length is fixed. The time evolution of the concentration of a given genome is controlled by a p-adic evolution equation. This equation depends on a fitness function f and on a mutation measure Q. By choosing a mutation measure of Gibbs type, and by using a p-adic version of the Maynard Smith Ansatz, we show the existence of a threshold function M_{c}(f,Q), such that the long term survival of a genome requires that its length grows faster than M_{c}(f,Q). This implies that Eigen's paradox does not occur if the complexity of genomes grows at the right pace. About twenty years ago, Scheuring and Poole, Jeffares, Penny proposed a hypothesis to explain Eigen's paradox. Our mathematical model shows that this biological hypothesis is feasible, but it requires p-adic analysis instead of real analysis. More exactly, the Darwin-Eigen cycle proposed by Poole et al. takes place if the length of the genomes exceeds M_{c}(f,Q).
1503.00350
Michael Assaf
Shay Be'er, Michael Assaf and Baruch Meerson
Colonization of a territory by a stochastic population under a strong Allee effect and a low immigration pressure
8 pages, 4 figures
Phys. Rev. E 91, 062126 (2015)
null
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the dynamics of colonization of a territory by a stochastic population at low immigration pressure. We assume a sufficiently strong Allee effect that introduces, in deterministic theory, a large critical population size for colonization. At low immigration rates, the average pre-colonization population size is small thus invalidating the WKB approximation to the master equation. We circumvent this difficulty by deriving an exact zero-flux solution of the master equation and matching it with an approximate non-zero-flux solution of the pertinent Fokker-Planck equation in a small region around the critical population size. This procedure provides an accurate evaluation of the quasi-stationary probability distribution of population sizes in the pre-colonization state, and of the mean time to colonization, for a wide range of immigration rates. At sufficiently high immigration rates our results agree with WKB results obtained previously. At low immigration rates the results can be very different.
[ { "created": "Sun, 1 Mar 2015 21:03:42 GMT", "version": "v1" }, { "created": "Thu, 4 Jun 2015 06:18:19 GMT", "version": "v2" }, { "created": "Thu, 29 Sep 2016 10:31:12 GMT", "version": "v3" } ]
2016-09-30
[ [ "Be'er", "Shay", "" ], [ "Assaf", "Michael", "" ], [ "Meerson", "Baruch", "" ] ]
We study the dynamics of colonization of a territory by a stochastic population at low immigration pressure. We assume a sufficiently strong Allee effect that introduces, in deterministic theory, a large critical population size for colonization. At low immigration rates, the average pre-colonization population size is small thus invalidating the WKB approximation to the master equation. We circumvent this difficulty by deriving an exact zero-flux solution of the master equation and matching it with an approximate non-zero-flux solution of the pertinent Fokker-Planck equation in a small region around the critical population size. This procedure provides an accurate evaluation of the quasi-stationary probability distribution of population sizes in the pre-colonization state, and of the mean time to colonization, for a wide range of immigration rates. At sufficiently high immigration rates our results agree with WKB results obtained previously. At low immigration rates the results can be very different.
q-bio/0312028
Monsieur Jean-Marc Allain
Jean-Marc Allain, Martine Ben Amar
Bi-Phasic Vesicles: instability induced by adsorption of proteins
19 Apr. 2004
null
10.1016/j.physa.2003.12.058
null
q-bio.SC
null
The recent discovery of a lateral organization in cell membranes due to small structures called 'rafts' has motivated a lot of biological and physico-chemical studies. A new experiment on a model system has shown a spectacular budding process with the expulsion of one or two rafts when one introduces proteins on the membrane. In this paper, we give a physical interpretation of the budding of the raft phase. An approach based on the energy of the system including the presence of proteins is used to derive a shape equation and to study possible instabilities. This model shows two different situations which are strongly dependent on the nature of the proteins: a regime of easy budding when the proteins are strongly coupled to the membrane and a regime of difficult budding.
[ { "created": "Thu, 18 Dec 2003 21:25:57 GMT", "version": "v1" }, { "created": "Mon, 19 Apr 2004 12:37:08 GMT", "version": "v2" } ]
2009-11-10
[ [ "Allain", "Jean-Marc", "" ], [ "Amar", "Martine Ben", "" ] ]
The recent discovery of a lateral organization in cell membranes due to small structures called 'rafts' has motivated a lot of biological and physico-chemical studies. A new experiment on a model system has shown a spectacular budding process with the expulsion of one or two rafts when one introduces proteins on the membrane. In this paper, we give a physical interpretation of the budding of the raft phase. An approach based on the energy of the system including the presence of proteins is used to derive a shape equation and to study possible instabilities. This model shows two different situations which are strongly dependent on the nature of the proteins: a regime of easy budding when the proteins are strongly coupled to the membrane and a regime of difficult budding.
2112.12550
Bob Eisenberg
Bob Eisenberg
Setting Boundaries for Statistical Mechanics
null
null
null
null
q-bio.OT cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Statistical mechanics has grown without bounds in space. Statistical mechanics of point particles in an unbounded perfect gas is commonly accepted as a foundation for understanding many systems, including liquids like the concentrated salt solutions of life and electrochemical technology, from batteries to nanodevices. Liquids, however, are not gases. Liquids are filled with interacting molecules and so the model of a perfect gas is imperfect. Here we show that statistical mechanics without bounds (in space) is impossible as well as imperfect, if the molecules interact as charged particles, as nearly all atoms do. The behavior of charged particles is not defined until boundary structures and values are defined because charges are governed by the Maxwell partial differential equations. Partial differential equations require boundary conditions to be computable or well defined. The Maxwell equations require boundary conditions on finite sized spatial boundaries (i.e., structures). Boundary conditions cannot be defined 'at infinity' in a general (i.e., unique) way because the limiting process that defines infinity includes such a wide variety of behavior, from light waves that never decay, to fields from dipole and multipolar charges that decay steeply, to Coulomb fields that decay but not so steeply. Statistical mechanics involving charges thus involves spatial boundaries and boundary conditions of finite size. Nearly all matter involves charges, thus nearly all statistical mechanics requires structures and boundary conditions on those structures. Boundaries and boundary conditions are not prominent in classical statistical mechanics. Including boundaries is a challenge to mathematicians. Statistical mechanics must describe bounded systems if it is to provide a proper foundation for studying matter.
[ { "created": "Tue, 21 Dec 2021 16:01:37 GMT", "version": "v1" } ]
2021-12-24
[ [ "Eisenberg", "Bob", "" ] ]
Statistical mechanics has grown without bounds in space. Statistical mechanics of point particles in an unbounded perfect gas is commonly accepted as a foundation for understanding many systems, including liquids like the concentrated salt solutions of life and electrochemical technology, from batteries to nanodevices. Liquids, however, are not gases. Liquids are filled with interacting molecules and so the model of a perfect gas is imperfect. Here we show that statistical mechanics without bounds (in space) is impossible as well as imperfect, if the molecules interact as charged particles, as nearly all atoms do. The behavior of charged particles is not defined until boundary structures and values are defined because charges are governed by the Maxwell partial differential equations. Partial differential equations require boundary conditions to be computable or well defined. The Maxwell equations require boundary conditions on finite sized spatial boundaries (i.e., structures). Boundary conditions cannot be defined 'at infinity' in a general (i.e., unique) way because the limiting process that defines infinity includes such a wide variety of behavior, from light waves that never decay, to fields from dipole and multipolar charges that decay steeply, to Coulomb fields that decay but not so steeply. Statistical mechanics involving charges thus involves spatial boundaries and boundary conditions of finite size. Nearly all matter involves charges, thus nearly all statistical mechanics requires structures and boundary conditions on those structures. Boundaries and boundary conditions are not prominent in classical statistical mechanics. Including boundaries is a challenge to mathematicians. Statistical mechanics must describe bounded systems if it is to provide a proper foundation for studying matter.
1907.05363
Gianluca Truda
Gianluca Truda, Patrick Marais
Warfarin dose estimation on multiple datasets with automated hyperparameter optimisation and a novel software framework
19 pages, 4 tables, 3 figures
Journal of Biomedical Informatics, 2020
10.1016/j.jbi.2020.103634
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Warfarin is an effective preventative treatment for arterial and venous thromboembolism, but requires individualised dosing due to its narrow therapeutic range and high individual variation. Many machine learning techniques have been demonstrated in this domain. This study evaluated the accuracy of the most promising algorithms on the International Warfarin Pharmacogenetics Consortium dataset and a novel clinical dataset of South African patients. Support vectors and linear regression were amongst the top performers in both datasets and performed comparably to recent stacked ensemble approaches, whilst neural networks were one of the worst performers in both datasets. We also introduced genetic programming to automatically optimise model architectures and hyperparameters without human guidance. Remarkably, the generated models were found to match the performance of the best models hand-crafted by human experts. Finally, we present a novel software framework (Warfit-learn) for warfarin dosing research. It leverages the most successful techniques in preprocessing, imputation, and parallel evaluation, with the goal of accelerating research and making results in this domain more reproducible.
[ { "created": "Thu, 11 Jul 2019 16:35:28 GMT", "version": "v1" }, { "created": "Sat, 23 Nov 2019 14:48:30 GMT", "version": "v2" }, { "created": "Thu, 23 Jul 2020 15:01:04 GMT", "version": "v3" }, { "created": "Mon, 26 Oct 2020 20:38:08 GMT", "version": "v4" } ]
2020-12-02
[ [ "Truda", "Gianluca", "" ], [ "Marais", "Patrick", "" ] ]
Warfarin is an effective preventative treatment for arterial and venous thromboembolism, but requires individualised dosing due to its narrow therapeutic range and high individual variation. Many machine learning techniques have been demonstrated in this domain. This study evaluated the accuracy of the most promising algorithms on the International Warfarin Pharmacogenetics Consortium dataset and a novel clinical dataset of South African patients. Support vectors and linear regression were amongst the top performers in both datasets and performed comparably to recent stacked ensemble approaches, whilst neural networks were one of the worst performers in both datasets. We also introduced genetic programming to automatically optimise model architectures and hyperparameters without human guidance. Remarkably, the generated models were found to match the performance of the best models hand-crafted by human experts. Finally, we present a novel software framework (Warfit-learn) for warfarin dosing research. It leverages the most successful techniques in preprocessing, imputation, and parallel evaluation, with the goal of accelerating research and making results in this domain more reproducible.
1108.4715
Brian Williams Dr
Brian G Williams
Determinants of sexual transmission of HIV: implications for control
14 pages
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The extent to which ART (anti-retroviral therapy) reduces HIV transmission has received attention in recent years. Using data on the relationship between transmission and viral load we show that transmission saturates at high viral loads. We fit a power-law model and an exponential converging to an asymptote. Using data on the viral load in HIV-positive people we show that ART is likely to reduce transmission by 91.6% (81.7%-96.2%) under the first and 99.5% (98.5%-99.8%) under the second model. The role of the acute phase in HIV transmission is still debated. High levels of transmission during the acute phase have been used to argue that failure to identify people in the acute phase of HIV may compromise the impact of treatment on preventing new infections and that having concurrent sexual partners during the acute phase is an important driver of the epidemic. We show that the acute phase probably accounts for less than 1% of overall transmission. We also show that even if a significant proportion of infections are transmitted during the acute phase, this will not compromise the impact of treatment on population levels of transmission given the constraint implied by the doubling time of the epidemic. This analysis leads to other relevant conclusions. First, it is likely that discordant-couple studies significantly underestimate the risk of infection. Second, attention should be paid to the variability in set point viral load which determines both the infectiousness of HIV-positive people and the variability in the susceptibility of HIV-negative people. Third, if ART drugs are in short supply those with the highest viral load should be given priority, other things including age, gender and opportunistic infections being equal, but to reduce transmission ART should be offered to all those with a viral load above about 10k/mm.3
[ { "created": "Tue, 23 Aug 2011 22:49:09 GMT", "version": "v1" } ]
2011-08-25
[ [ "Williams", "Brian G", "" ] ]
The extent to which ART (anti-retroviral therapy) reduces HIV transmission has received attention in recent years. Using data on the relationship between transmission and viral load we show that transmission saturates at high viral loads. We fit a power-law model and an exponential converging to an asymptote. Using data on the viral load in HIV-positive people we show that ART is likely to reduce transmission by 91.6% (81.7%-96.2%) under the first and 99.5% (98.5%-99.8%) under the second model. The role of the acute phase in HIV transmission is still debated. High levels of transmission during the acute phase have been used to argue that failure to identify people in the acute phase of HIV may compromise the impact of treatment on preventing new infections and that having concurrent sexual partners during the acute phase is an important driver of the epidemic. We show that the acute phase probably accounts for less than 1% of overall transmission. We also show that even if a significant proportion of infections are transmitted during the acute phase, this will not compromise the impact of treatment on population levels of transmission given the constraint implied by the doubling time of the epidemic. This analysis leads to other relevant conclusions. First, it is likely that discordant-couple studies significantly underestimate the risk of infection. Second, attention should be paid to the variability in set point viral load which determines both the infectiousness of HIV-positive people and the variability in the susceptibility of HIV-negative people. Third, if ART drugs are in short supply those with the highest viral load should be given priority, other things including age, gender and opportunistic infections being equal, but to reduce transmission ART should be offered to all those with a viral load above about 10k/mm.3
2212.14491
Ya-Chen Tsai
Ya-Chen Tsai, Wei-Yang Weng, Yu-Tong Yeh, Jun-Chau Chien
Dual-aptamer Drift Cancelling Techniques to Improve Long-term Stability of Real-Time Structure-Switching Aptasensors
null
null
null
null
q-bio.BM physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents a dual-aptamer scheme to cancel the signal drifts from structure-switching aptamers during long-term monitoring. Electrochemical aptamer-based (E-AB) biosensors recently demonstrated their great potential for in vivo continuous monitoring. Nevertheless, the detection accuracy is often limited by the signaling drifts. Conventionally, these drifts are removed by the kinetic differential measurements (KDM) when coupled with square-wave voltammetry. Yet we discover that KDM does not apply to every aptamer as the responses at different SWV frequencies heavily depend on its structure-switching characteristics and the redox reporters' electron transfer (ET) kinetics. To this end, we present a "dual-aptamer" scheme that uses two aptamers responding differentially to the same molecular target for drift cancellation. We identify these paired aptamers through (1) screening from the existing aptamer pool and (2) engineering the signaling behavior of the redox reporters. We demonstrate their differential signaling to ampicillin and ATP molecules and show that the aptamer pair bears common drifts in undiluted goat serum. Through cancellation, sensor drift is reduced by 370-fold. Benefiting from the "differential" signaling, the recording throughput is also doubled using differential readout electronics. The authors believe the proposed technique is beneficial for long-term in vivo monitoring.
[ { "created": "Fri, 30 Dec 2022 00:16:22 GMT", "version": "v1" } ]
2023-01-02
[ [ "Tsai", "Ya-Chen", "" ], [ "Weng", "Wei-Yang", "" ], [ "Yeh", "Yu-Tong", "" ], [ "Chien", "Jun-Chau", "" ] ]
This paper presents a dual-aptamer scheme to cancel the signal drifts from structure-switching aptamers during long-term monitoring. Electrochemical aptamer-based (E-AB) biosensors recently demonstrated their great potential for in vivo continuous monitoring. Nevertheless, the detection accuracy is often limited by the signaling drifts. Conventionally, these drifts are removed by the kinetic differential measurements (KDM) when coupled with square-wave voltammetry. Yet we discover that KDM does not apply to every aptamer as the responses at different SWV frequencies heavily depend on its structure-switching characteristics and the redox reporters' electron transfer (ET) kinetics. To this end, we present a "dual-aptamer" scheme that uses two aptamers responding differentially to the same molecular target for drift cancellation. We identify these paired aptamers through (1) screening from the existing aptamer pool and (2) engineering the signaling behavior of the redox reporters. We demonstrate their differential signaling to ampicillin and ATP molecules and show that the aptamer pair bears common drifts in undiluted goat serum. Through cancellation, sensor drift is reduced by 370-fold. Benefiting from the "differential" signaling, the recording throughput is also doubled using differential readout electronics. The authors believe the proposed technique is beneficial for long-term in vivo monitoring.
2104.11624
Jerneja \v{S}tor
Jerneja \v{S}tor, David E. Ruckerbauer, Diana Sz\'eliova, J\"urgen Zanghellini, Nicole Borth
Towards rational glyco-engineering in CHO: from data to predictive models
15 pages, 2 figures, 63 references
null
null
null
q-bio.MN q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Metabolic modeling strives to develop modeling approaches that are robust and highly predictive. To achieve this, various modeling designs, including hybrid models, and parameter estimation methods that define the type and number of parameters used in the model, are adapted. Accurate input data play an important role so that the selection of experimental methods that provide input data of the required precision with low measurement errors is crucial. For the biopharmaceutically relevant protein glycosylation, the most prominent available models are kinetic models which are able to capture the dynamic nature of protein N-glycosylation. In this review we focus on how to choose the most suitable model for a specific research question, as well as on parameters and considerations to take into account before planning relevant experiments.
[ { "created": "Fri, 23 Apr 2021 14:21:52 GMT", "version": "v1" } ]
2021-04-26
[ [ "Štor", "Jerneja", "" ], [ "Ruckerbauer", "David E.", "" ], [ "Széliova", "Diana", "" ], [ "Zanghellini", "Jürgen", "" ], [ "Borth", "Nicole", "" ] ]
Metabolic modeling strives to develop modeling approaches that are robust and highly predictive. To achieve this, various modeling designs, including hybrid models, and parameter estimation methods that define the type and number of parameters used in the model, are adapted. Accurate input data play an important role so that the selection of experimental methods that provide input data of the required precision with low measurement errors is crucial. For the biopharmaceutically relevant protein glycosylation, the most prominent available models are kinetic models which are able to capture the dynamic nature of protein N-glycosylation. In this review we focus on how to choose the most suitable model for a specific research question, as well as on parameters and considerations to take into account before planning relevant experiments.
2405.03913
Wei Xie
Fuqiang Cheng, Wei Xie, Hua Zheng
Digital Twin Calibration for Biological System-of-Systems: Cell Culture Manufacturing Process
11 pages, 5 figures
null
null
null
q-bio.QM cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Biomanufacturing innovation relies on an efficient Design of Experiments (DoEs) to optimize processes and product quality. Traditional DoE methods, ignoring the underlying bioprocessing mechanisms, often suffer from a lack of interpretability and sample efficiency. This limitation motivates us to create a new optimal learning approach for digital twin model calibration. In this study, we consider the cell culture process multi-scale mechanistic model, also known as Biological System-of-Systems (Bio-SoS). This model with a modular design, composed of sub-models, allows us to integrate data across various production processes. To calibrate the Bio-SoS digital twin, we evaluate the mean squared error of model prediction and develop a computational approach to quantify the impact of parameter estimation error of individual sub-models on the prediction accuracy of digital twin, which can guide sample-efficient and interpretable DoEs.
[ { "created": "Tue, 7 May 2024 00:22:13 GMT", "version": "v1" }, { "created": "Fri, 28 Jun 2024 15:13:15 GMT", "version": "v2" } ]
2024-07-01
[ [ "Cheng", "Fuqiang", "" ], [ "Xie", "Wei", "" ], [ "Zheng", "Hua", "" ] ]
Biomanufacturing innovation relies on an efficient Design of Experiments (DoEs) to optimize processes and product quality. Traditional DoE methods, ignoring the underlying bioprocessing mechanisms, often suffer from a lack of interpretability and sample efficiency. This limitation motivates us to create a new optimal learning approach for digital twin model calibration. In this study, we consider the cell culture process multi-scale mechanistic model, also known as Biological System-of-Systems (Bio-SoS). This model with a modular design, composed of sub-models, allows us to integrate data across various production processes. To calibrate the Bio-SoS digital twin, we evaluate the mean squared error of model prediction and develop a computational approach to quantify the impact of parameter estimation error of individual sub-models on the prediction accuracy of the digital twin, which can guide sample-efficient and interpretable DoEs.
1905.00751
Patrik Bachtiger
Heather Mattie, Patrick Reidy, Patrik Bachtiger, Emily Lindemer, Mohammad Jouni, Trishan Panch
A Framework for Predicting Impactability of Healthcare Interventions Using Machine Learning Methods, Administrative Claims, Sociodemographic and App Generated Data
null
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is not clear how to target patients who are most likely to benefit from digital care management programs ex-ante, a shortcoming of current risk score based approaches. This study focuses on defining impactability by identifying those patients most likely to benefit from technology enabled care management, delivered through a digital health platform, including a mobile app and clinician web dashboard. Anonymized insurance claims data were used from a commercially insured population across several U.S. states and combined with inferred sociodemographic data and data derived from the patient-held mobile application itself. Our approach involves the creation of two models and the comparative analysis of the methodologies and performances therein. We first train a cost prediction model to calculate the differences in predicted (without intervention) versus actual (with onboarding onto digital health platform) healthcare expenditure for patients (N = 1,242). This enables the classification of impactability if differences in predicted versus actual costs meet a predetermined threshold. A random forest machine learning model was then trained to accurately categorize new patients as impactable versus not impactable, reaching an overall accuracy of 71.9%. We then modify these parameters through grid search to define the parameters that deliver optimal performance. A roadmap is proposed to iteratively improve the performance of the model. As the number of newly onboarded patients and length of use continues to increase, the accuracy of predicting impactability will improve commensurately as more advanced machine learning techniques such as deep learning become relevant. This approach is generalizable to analyzing the impactability of any intervention and is a key component of realising closed loop feedback systems for continuous improvement in healthcare.
[ { "created": "Fri, 19 Apr 2019 16:06:29 GMT", "version": "v1" }, { "created": "Wed, 15 May 2019 13:12:59 GMT", "version": "v2" } ]
2019-05-16
[ [ "Mattie", "Heather", "" ], [ "Reidy", "Patrick", "" ], [ "Bachtiger", "Patrik", "" ], [ "Lindemer", "Emily", "" ], [ "Jouni", "Mohammad", "" ], [ "Panch", "Trishan", "" ] ]
It is not clear how to target patients who are most likely to benefit from digital care management programs ex-ante, a shortcoming of current risk score based approaches. This study focuses on defining impactability by identifying those patients most likely to benefit from technology enabled care management, delivered through a digital health platform, including a mobile app and clinician web dashboard. Anonymized insurance claims data were used from a commercially insured population across several U.S. states and combined with inferred sociodemographic data and data derived from the patient-held mobile application itself. Our approach involves the creation of two models and the comparative analysis of the methodologies and performances therein. We first train a cost prediction model to calculate the differences in predicted (without intervention) versus actual (with onboarding onto digital health platform) healthcare expenditure for patients (N = 1,242). This enables the classification of impactability if differences in predicted versus actual costs meet a predetermined threshold. A random forest machine learning model was then trained to accurately categorize new patients as impactable versus not impactable, reaching an overall accuracy of 71.9%. We then modify these parameters through grid search to define the parameters that deliver optimal performance. A roadmap is proposed to iteratively improve the performance of the model. As the number of newly onboarded patients and length of use continues to increase, the accuracy of predicting impactability will improve commensurately as more advanced machine learning techniques such as deep learning become relevant. This approach is generalizable to analyzing the impactability of any intervention and is a key component of realising closed loop feedback systems for continuous improvement in healthcare.
1006.4281
Shay Tzur
Shay Tzur (1,8), Saharon Rosset (2,8), Revital Shemer (1), Guennady Yudkovsky (1), Sara Selig (1,3), Ayele Tarekegn (4,5), Endashaw Bekele (5), Neil Bradman (4), Walter G Wasser (6), Doron M Behar (3,7), Karl Skorecki (1,3,9).
Preliminary Report: Missense mutations in the APOL gene family are associated with end stage kidney disease risk previously attributed to the MYH9 gene
25 pages, 6 figures
Human Genetics, July 16 2010
10.1007/s00439-010-0861-0
20635188
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
MYH9 has been proposed as a major genetic risk locus for a spectrum of non-diabetic end stage kidney disease (ESKD). We use recently released sequences from the 1000 Genomes Project to identify two western African specific missense mutations (S342G and I384M) in the neighbouring APOL1 gene, and demonstrate that these are more strongly associated with ESKD than previously reported MYH9 variants. We also show that the distribution of these risk variants in African populations is consistent with the pattern of African ancestry ESKD risk previously attributed to the MYH9 gene. Additional associations were also found among other members of the APOL gene family, and we propose that ESKD risk is caused by western African variants in members of the APOL gene family, which evolved to confer protection against pathogens, such as Trypanosoma.
[ { "created": "Tue, 22 Jun 2010 13:09:53 GMT", "version": "v1" } ]
2010-08-02
[ [ "Tzur", "Shay", "" ], [ "Rosset", "Saharon", "" ], [ "Shemer", "Revital", "" ], [ "Yudkovsky", "Guennady", "" ], [ "Selig", "Sara", "" ], [ "Tarekegn", "Ayele", "" ], [ "Bekele", "Endashaw", "" ], [ "Bradman", "Neil", "" ], [ "Wasser", "Walter G", "" ], [ "Behar", "Doron M", "" ], [ "Skorecki", "Karl", "" ], [ ".", "", "" ] ]
MYH9 has been proposed as a major genetic risk locus for a spectrum of non-diabetic end stage kidney disease (ESKD). We use recently released sequences from the 1000 Genomes Project to identify two western African specific missense mutations (S342G and I384M) in the neighbouring APOL1 gene, and demonstrate that these are more strongly associated with ESKD than previously reported MYH9 variants. We also show that the distribution of these risk variants in African populations is consistent with the pattern of African ancestry ESKD risk previously attributed to the MYH9 gene. Additional associations were also found among other members of the APOL gene family, and we propose that ESKD risk is caused by western African variants in members of the APOL gene family, which evolved to confer protection against pathogens, such as Trypanosoma.
1305.7194
Ekaterina Kosheleva
Katya Kosheleva, Michael Desai
The Dynamics of Genetic Draft in Rapidly Adapting Populations
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The accumulation of beneficial mutations on many competing genetic backgrounds in rapidly adapting populations has a striking impact on evolutionary dynamics. This effect, known as clonal interference, causes erratic fluctuations in the frequencies of observed mutations, randomizes the fixation times of successful mutations, and leaves distinct signatures on patterns of genetic variation. Here, we show how this form of `genetic draft' affects the forward-time dynamics of site frequencies in rapidly adapting asexual populations. We calculate the probability that mutations at individual sites shift in frequency over a characteristic timescale, extending Gillespie's original model of draft to the case where many strongly selected beneficial mutations segregate simultaneously. We then derive the sojourn time of mutant alleles, the expected fixation time of successful mutants, and the site frequency spectrum of beneficial and neutral mutations. We show how this form of draft affects inferences in the McDonald-Kreitman test, and how it relates to recent observations that some aspects of genetic diversity are described by the Bolthausen-Sznitman coalescent in the limit of very rapid adaptation. Finally, we describe how our method can be extended to model evolution on fitness landscapes that include some forms of epistasis, such as landscapes that are partitioned into two or more incompatible evolutionary trajectories.
[ { "created": "Thu, 30 May 2013 18:32:37 GMT", "version": "v1" } ]
2013-05-31
[ [ "Kosheleva", "Katya", "" ], [ "Desai", "Michael", "" ] ]
The accumulation of beneficial mutations on many competing genetic backgrounds in rapidly adapting populations has a striking impact on evolutionary dynamics. This effect, known as clonal interference, causes erratic fluctuations in the frequencies of observed mutations, randomizes the fixation times of successful mutations, and leaves distinct signatures on patterns of genetic variation. Here, we show how this form of `genetic draft' affects the forward-time dynamics of site frequencies in rapidly adapting asexual populations. We calculate the probability that mutations at individual sites shift in frequency over a characteristic timescale, extending Gillespie's original model of draft to the case where many strongly selected beneficial mutations segregate simultaneously. We then derive the sojourn time of mutant alleles, the expected fixation time of successful mutants, and the site frequency spectrum of beneficial and neutral mutations. We show how this form of draft affects inferences in the McDonald-Kreitman test, and how it relates to recent observations that some aspects of genetic diversity are described by the Bolthausen-Sznitman coalescent in the limit of very rapid adaptation. Finally, we describe how our method can be extended to model evolution on fitness landscapes that include some forms of epistasis, such as landscapes that are partitioned into two or more incompatible evolutionary trajectories.
2407.20055
Aine Byrne
\'Aine Byrne
Phase transformation and synchrony for a network of coupled Izhikevich neurons
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A number of recent articles have employed the Lorentz ansatz to reduce a network of Izhikevich neurons to a tractable mean-field description. In this letter, we construct an equivalent phase model for the Izhikevich model and apply the Ott-Antonsen ansatz to derive the mean field dynamics in terms of the Kuramoto order parameter. In addition, we show that by defining an appropriate order parameter in the voltage-firing rate framework, the conformal mapping of Montbri\'o et al., which relates the two mean-field descriptions, remains valid.
[ { "created": "Mon, 29 Jul 2024 14:41:20 GMT", "version": "v1" } ]
2024-07-30
[ [ "Byrne", "Áine", "" ] ]
A number of recent articles have employed the Lorentz ansatz to reduce a network of Izhikevich neurons to a tractable mean-field description. In this letter, we construct an equivalent phase model for the Izhikevich model and apply the Ott-Antonsen ansatz to derive the mean field dynamics in terms of the Kuramoto order parameter. In addition, we show that by defining an appropriate order parameter in the voltage-firing rate framework, the conformal mapping of Montbri\'o et al., which relates the two mean-field descriptions, remains valid.
1502.05169
Jorrit Montijn
Jorrit S. Montijn, Pieter M. Goltstein, Cyriel M.A. Pennartz
Heterogeneity and dynamics of cortical populations coding visual detection
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in fMRI, EEG or MEG studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca2+ responses to visual stimuli recorded with in vivo 2-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity rather than overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during non-detections. Contrary to models relying on temporally stable networks or bulk-signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations.
[ { "created": "Wed, 18 Feb 2015 10:13:59 GMT", "version": "v1" }, { "created": "Wed, 2 Dec 2015 11:55:48 GMT", "version": "v2" } ]
2015-12-03
[ [ "Montijn", "Jorrit S.", "" ], [ "Goltstein", "Pieter M.", "" ], [ "Pennartz", "Cyriel M. A.", "" ] ]
Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in fMRI, EEG or MEG studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca2+ responses to visual stimuli recorded with in vivo 2-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity rather than overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during non-detections. Contrary to models relying on temporally stable networks or bulk-signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations.
1812.02384
Zi-Gang Huang
Yu-Xiang Yao, Zhi-Tong Bing, Liang Huang, Zi-Gang Huang, Ying-Cheng Lai
A network approach to quantifying radiotherapy effect on cancer: Radiosensitive gene group centrality
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Radiotherapy plays a vital role in cancer treatment, for which accurate prognosis is important for guiding sequential treatment and improving the curative effect for patients. An issue of great significance in radiotherapy is to assess tumor radiosensitivity for devising the optimal treatment strategy. Previous studies focused on gene expression in cells closely associated with radiosensitivity, but factors such as the response of a cancer patient to irradiation and the patient survival time are largely ignored. For clinical cancer treatment, a specific pre-treatment indicator taking into account cancer cell type and patient radiosensitivity is of great value but it has been missing. Here, we propose an effective indicator for radiosensitivity: radiosensitive gene group centrality (RSGGC), which characterizes the importance of the group of genes that are radiosensitive in the whole gene correlation network. We demonstrate, using both clinical patient data and experimental cancer cell lines, that RSGGC can provide a quantitative estimate of the effect of radiotherapy, with factors such as the patient survival time and the survived fraction of cancer cell lines under radiotherapy fully taken into account. Our main finding is that, for patients with a higher RSGGC score before radiotherapy, cancer treatment tends to be more effective. The RSGGC can have significant applications in clinical prognosis, serving as a key measure for classifying radiosensitive and radioresistant patients.
[ { "created": "Thu, 6 Dec 2018 07:23:49 GMT", "version": "v1" } ]
2018-12-07
[ [ "Yao", "Yu-Xiang", "" ], [ "Bing", "Zhi-Tong", "" ], [ "Huang", "Liang", "" ], [ "Huang", "Zi-Gang", "" ], [ "Lai", "Ying-Cheng", "" ] ]
Radiotherapy plays a vital role in cancer treatment, for which accurate prognosis is important for guiding sequential treatment and improving the curative effect for patients. An issue of great significance in radiotherapy is to assess tumor radiosensitivity for devising the optimal treatment strategy. Previous studies focused on gene expression in cells closely associated with radiosensitivity, but factors such as the response of a cancer patient to irradiation and the patient survival time are largely ignored. For clinical cancer treatment, a specific pre-treatment indicator taking into account cancer cell type and patient radiosensitivity is of great value but it has been missing. Here, we propose an effective indicator for radiosensitivity: radiosensitive gene group centrality (RSGGC), which characterizes the importance of the group of genes that are radiosensitive in the whole gene correlation network. We demonstrate, using both clinical patient data and experimental cancer cell lines, that RSGGC can provide a quantitative estimate of the effect of radiotherapy, with factors such as the patient survival time and the survived fraction of cancer cell lines under radiotherapy fully taken into account. Our main finding is that, for patients with a higher RSGGC score before radiotherapy, cancer treatment tends to be more effective. The RSGGC can have significant applications in clinical prognosis, serving as a key measure for classifying radiosensitive and radioresistant patients.
1609.07461
Khem Raj Ghusinga
Khem Raj Ghusinga, Abhyudai Singh
Effect of gene-expression bursts on stochastic timing of cellular events
submitted to American Control Conference 2017
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene expression is inherently a noisy process which manifests as cell-to-cell variability in time evolution of proteins. Consequently, events that trigger at critical threshold levels of regulatory proteins exhibit stochasticity in their timing. An important contributor to the noise in gene expression is translation bursts which correspond to randomness in number of proteins produced in a single mRNA lifetime. Modeling timing of an event as a first-passage time (FPT) problem, we explore the effect of burst size distribution on event timing. Towards this end, the probability density function of FPT is computed for a gene expression model with burst size drawn from a generic non-negative distribution. Analytical formulas for FPT moments are provided in terms of known vectors and inverse of a matrix. The effect of burst size distribution is investigated by looking at how the feedback regulation strategy that minimizes noise in timing around a given time deviates from the case when burst is deterministic. Interestingly, results show that the feedback strategy for deterministic burst case is quite robust to change in burst size distribution, and deviations from it are confined to about 20% of the optimal value. These findings facilitate an improved understanding of noise regulation in event timing.
[ { "created": "Fri, 23 Sep 2016 18:51:38 GMT", "version": "v1" } ]
2016-09-26
[ [ "Ghusinga", "Khem Raj", "" ], [ "Singh", "Abhyudai", "" ] ]
Gene expression is inherently a noisy process which manifests as cell-to-cell variability in time evolution of proteins. Consequently, events that trigger at critical threshold levels of regulatory proteins exhibit stochasticity in their timing. An important contributor to the noise in gene expression is translation bursts which correspond to randomness in number of proteins produced in a single mRNA lifetime. Modeling timing of an event as a first-passage time (FPT) problem, we explore the effect of burst size distribution on event timing. Towards this end, the probability density function of FPT is computed for a gene expression model with burst size drawn from a generic non-negative distribution. Analytical formulas for FPT moments are provided in terms of known vectors and inverse of a matrix. The effect of burst size distribution is investigated by looking at how the feedback regulation strategy that minimizes noise in timing around a given time deviates from the case when burst is deterministic. Interestingly, results show that the feedback strategy for deterministic burst case is quite robust to change in burst size distribution, and deviations from it are confined to about 20% of the optimal value. These findings facilitate an improved understanding of noise regulation in event timing.
1407.6598
Petter Holme
Petter Holme, Naoki Masuda
The basic reproduction number as a predictor for epidemic outbreaks in temporal networks
null
PLoS ONE 10, e0120567 (2015)
10.1371/journal.pone.0120567
null
q-bio.PE cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The basic reproduction number R0 -- the number of individuals directly infected by an infectious person in an otherwise susceptible population -- is arguably the most widely used estimator of how severe an epidemic outbreak can be. This severity can be more directly measured as the fraction of people infected once the outbreak is over, {\Omega}. In traditional mathematical epidemiology and common formulations of static network epidemiology, there is a deterministic relationship between R0 and {\Omega}. However, if one considers disease spreading on a temporal contact network -- where one knows when contacts happen, not only between whom -- then larger R0 does not necessarily imply larger {\Omega}. In this paper, we numerically investigate the relationship between R0 and {\Omega} for a set of empirical temporal networks of human contacts. Among 31 explanatory descriptors of temporal network structure, we identify those that make R0 an imperfect predictor of {\Omega}. We find that descriptors related to both temporal and topological aspects affect the relationship between R0 and {\Omega}, but in different ways.
[ { "created": "Thu, 24 Jul 2014 14:37:29 GMT", "version": "v1" } ]
2016-03-18
[ [ "Holme", "Petter", "" ], [ "Masuda", "Naoki", "" ] ]
The basic reproduction number R0 -- the number of individuals directly infected by an infectious person in an otherwise susceptible population -- is arguably the most widely used estimator of how severe an epidemic outbreak can be. This severity can be more directly measured as the fraction of people infected once the outbreak is over, {\Omega}. In traditional mathematical epidemiology and common formulations of static network epidemiology, there is a deterministic relationship between R0 and {\Omega}. However, if one considers disease spreading on a temporal contact network -- where one knows when contacts happen, not only between whom -- then larger R0 does not necessarily imply larger {\Omega}. In this paper, we numerically investigate the relationship between R0 and {\Omega} for a set of empirical temporal networks of human contacts. Among 31 explanatory descriptors of temporal network structure, we identify those that make R0 an imperfect predictor of {\Omega}. We find that descriptors related to both temporal and topological aspects affect the relationship between R0 and {\Omega}, but in different ways.
0712.2981
Razvan Radulescu M.D.
Razvan Tudor Radulescu
Oncoprotein metastasis disjoined
6 pages
null
null
null
q-bio.SC q-bio.BM
null
As the past decade barely dawned, a fundamentally novel view of cancer relating to signal transduction through intracellular hormones/growth factors and their subunits began to unfold. Further along, it gained additional substance with the advent of the interdisciplinary fields of particle biology and peptide strings which explain (onco)protein dynamics in spacetime, for instance insulin-driven sub- and trans-cellular carcinogenesis, by physical principles. Here, this new understanding is expanded to introduce the concept of "oncoprotein metastasis" preceding cancer cell spread and, thereby, a particular emphasis is placed on its potential role in the emergence of the pre-metastatic niche. Consistent with this perception, yet unlike currently advocated treatments that target cancer cells only, future antineoplastic strategies should aim to mimic natural tumor suppressors as well as involve both (morphologically) normal and malignant cells. If validated in human patients with advanced cancer disease, its otherwise frequently lethal course may be halted and reversed just in time.
[ { "created": "Tue, 18 Dec 2007 15:14:30 GMT", "version": "v1" } ]
2007-12-19
[ [ "Radulescu", "Razvan Tudor", "" ] ]
As the past decade barely dawned, a fundamentally novel view of cancer relating to signal transduction through intracellular hormones/growth factors and their subunits began to unfold. Further along, it gained additional substance with the advent of the interdisciplinary fields of particle biology and peptide strings which explain (onco)protein dynamics in spacetime, for instance insulin-driven sub- and trans-cellular carcinogenesis, by physical principles. Here, this new understanding is expanded to introduce the concept of "oncoprotein metastasis" preceding cancer cell spread and, thereby, a particular emphasis is placed on its potential role in the emergence of the pre-metastatic niche. Consistent with this perception, yet unlike currently advocated treatments that target cancer cells only, future antineoplastic strategies should aim to mimic natural tumor suppressors as well as involve both (morphologically) normal and malignant cells. If validated in human patients with advanced cancer disease, its otherwise frequently lethal course may be halted and reversed just in time.
2406.08140
Dong Soo Lee
Dong Soo Lee, Hyun Joo Kim, Youngmin Huh, Yeon Koo Kang, Wonseok Whi, Hyekyoung Lee, Hyejin Kang
Functional voxel hierarchy and afferent capacity revealed mental state transition on dynamic correlation resting-state fMRI
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Voxel hierarchy on dynamic brain graphs is produced by k core percolation on functional dynamic amplitude correlation of resting-state fMRI. Directed graphs and their afferent/efferent capacities are produced by Markov modeling of the universal cover of undirected graphs simultaneously with the calculation of volume entropy. Positive and unsigned negative brain graphs were analyzed separately on sliding-window representation to underpin the visualization and quantitation of mental dynamic states with their transitions. Voxel hierarchy animation maps of positive graphs revealed abrupt changes in coreness k and kmaxcore, which we called mental state transitions. Afferent voxel capacities of the positive graphs also revealed transient modules composed of dominating voxels/independent components and their exchanges representing mental state transitions. Animation and quantification plots of voxel hierarchy and afferent capacity corroborated each other in underpinning mental state transitions and afferent module exchange on the positive directed functional connectivity graphs. We propose the use of spatiotemporal trajectories of voxels on positive dynamic graphs to construct hierarchical structures by k core percolation and quantified in- and out-flows of information of voxels by volume entropy/directed graphs to subserve diverse resting mental state transitions on resting-state fMRI graphs in normal human individuals.
[ { "created": "Wed, 12 Jun 2024 12:28:58 GMT", "version": "v1" } ]
2024-06-13
[ [ "Lee", "Dong Soo", "" ], [ "Kim", "Hyun Joo", "" ], [ "Huh", "Youngmin", "" ], [ "Kang", "Yeon Koo", "" ], [ "Whi", "Wonseok", "" ], [ "Lee", "Hyekyoung", "" ], [ "Kang", "Hyejin", "" ] ]
Voxel hierarchy on dynamic brain graphs is produced by k core percolation on functional dynamic amplitude correlation of resting-state fMRI. Directed graphs and their afferent/efferent capacities are produced by Markov modeling of the universal cover of undirected graphs simultaneously with the calculation of volume entropy. Positive and unsigned negative brain graphs were analyzed separately on sliding-window representation to underpin the visualization and quantitation of mental dynamic states with their transitions. Voxel hierarchy animation maps of positive graphs revealed abrupt changes in coreness k and kmaxcore, which we called mental state transitions. Afferent voxel capacities of the positive graphs also revealed transient modules composed of dominating voxels/independent components and their exchanges representing mental state transitions. Animation and quantification plots of voxel hierarchy and afferent capacity corroborated each other in underpinning mental state transitions and afferent module exchange on the positive directed functional connectivity graphs. We propose the use of spatiotemporal trajectories of voxels on positive dynamic graphs to construct hierarchical structures by k core percolation and quantified in- and out-flows of information of voxels by volume entropy/directed graphs to subserve diverse resting mental state transitions on resting-state fMRI graphs in normal human individuals.
2007.03190
George Lykotrafitis
Yihao Zhang, Zhaojie Chai, Yubing Sun and George Lykotrafitis
A deep reinforcement learning model based on deterministic policy gradient for collective neural crest cell migration
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling cell interactions such as co-attraction and contact-inhibition of locomotion is essential for understanding collective cell migration. Here, we propose a novel deep reinforcement learning model for collective neural crest cell migration. We apply the deep deterministic policy gradient algorithm in association with a particle dynamics simulation environment to train agents to determine the migration path. Because of the different migration mechanisms of leader and follower neural crest cells, we train two types of agents (leaders and followers) to learn the collective cell migration behavior. For a leader agent, we consider a linear combination of a global task, resulting in the shortest path to the target source, and a local task, resulting in a coordinated motion along the local chemoattractant gradient. For a follower agent, we consider only the local task. First, we show that the self-driven forces learned by the leader cell point approximately to the placode, which means that the agent is able to learn to follow the shortest path to the target. To validate our method, we compare the total time elapsed for agents to reach the placode computed using the proposed method and the time computed using an agent-based model. The distributions of the migration time intervals calculated using the two methods are shown to not differ significantly. We then study the effect of co-attraction and contact-inhibition of locomotion to the collective leader cell migration. We show that the overall leader cell migration for the case with co-attraction is slower because the co-attraction mitigates the source-driven effect. In addition, we find that the leader and follower agents learn to follow a similar migration behavior as in experimental observations. Overall, our proposed method provides useful insight on how to apply reinforcement learning techniques to simulate collective cell migration.
[ { "created": "Tue, 7 Jul 2020 04:06:45 GMT", "version": "v1" } ]
2020-07-08
[ [ "Zhang", "Yihao", "" ], [ "Chai", "Zhaojie", "" ], [ "Sun", "Yubing", "" ], [ "Lykotrafitis", "George", "" ] ]
Modeling cell interactions such as co-attraction and contact-inhibition of locomotion is essential for understanding collective cell migration. Here, we propose a novel deep reinforcement learning model for collective neural crest cell migration. We apply the deep deterministic policy gradient algorithm in association with a particle dynamics simulation environment to train agents to determine the migration path. Because of the different migration mechanisms of leader and follower neural crest cells, we train two types of agents (leaders and followers) to learn the collective cell migration behavior. For a leader agent, we consider a linear combination of a global task, resulting in the shortest path to the target source, and a local task, resulting in a coordinated motion along the local chemoattractant gradient. For a follower agent, we consider only the local task. First, we show that the self-driven forces learned by the leader cell point approximately to the placode, which means that the agent is able to learn to follow the shortest path to the target. To validate our method, we compare the total time elapsed for agents to reach the placode computed using the proposed method and the time computed using an agent-based model. The distributions of the migration time intervals calculated using the two methods are shown to not differ significantly. We then study the effect of co-attraction and contact-inhibition of locomotion on collective leader cell migration. We show that the overall leader cell migration for the case with co-attraction is slower because the co-attraction mitigates the source-driven effect. In addition, we find that the leader and follower agents learn to follow a similar migration behavior as in experimental observations. Overall, our proposed method provides useful insight into how to apply reinforcement learning techniques to simulate collective cell migration.
1002.0470
Gian Marco Palamara Mr
Gian Marco Palamara (1), Vinko Zlatic (2,3), Antonio Scala (2), Guido Caldarelli (2) ((1) Institute of Evolutionary Biology and Environmental Studies, University of Zurich, Wintherthurerstrasse, Zurich, (2) ISC-CNR Dip. Fisica Universita' Sapienza Rome Italy, (3) Theor. Phys. Div., Rudjer Boskovic Institute, Zagreb Croatia)
Population Dynamics on Complex Food Webs
11 Pages, 5 Figures, styles enclosed in the submission
Advances in complex Systems, Vol. 14, No 4. (2011) 635-647
10.1142/S0219525911003116
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we analyse the topological and dynamical properties of a simple model of complex food webs, namely the niche model. In order to underline competition among species, we introduce "prey" and "predators" weighted overlap graphs derived from the niche model and compare synthetic food webs with real data. Doing so, we find new tests for the goodness of synthetic food web models and indicate a possible direction of improvement for existing ones. We then exploit the weighted overlap graphs to define a competition kernel for Lotka-Volterra population dynamics and find that for such a model the stability of food webs decreases with their ecological complexity.
[ { "created": "Tue, 2 Feb 2010 12:13:48 GMT", "version": "v1" }, { "created": "Wed, 12 Sep 2012 09:26:01 GMT", "version": "v2" } ]
2012-09-13
[ [ "Palamara", "Gian Marco", "" ], [ "Zlatic", "Vinko", "" ], [ "Scala", "Antonio", "" ], [ "Caldarelli", "Guido", "" ] ]
In this work we analyse the topological and dynamical properties of a simple model of complex food webs, namely the niche model. In order to underline competition among species, we introduce "prey" and "predators" weighted overlap graphs derived from the niche model and compare synthetic food webs with real data. Doing so, we find new tests for the goodness of synthetic food web models and indicate a possible direction of improvement for existing ones. We then exploit the weighted overlap graphs to define a competition kernel for Lotka-Volterra population dynamics and find that for such a model the stability of food webs decreases with their ecological complexity.
1301.1876
Debashish Chowdhury
Ajeet K. Sharma and Debashish Chowdhury
First-passage problems in DNA replication: effects of template tension on stepping and exonuclease activities of a DNA polymerase motor
24 pages, including 5 figures; final version accepted for publication in the Special Issue on "physics of protein motility and motor proteins" in Journal of Physics: Condensed Matter
Journal of Physics: Condensed Matter, vol.25, 374105 (2013)
10.1088/0953-8984/25/37/374105
null
q-bio.SC cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A DNA polymerase (DNAP) replicates a template DNA strand. It also exploits the template as the track for its own motor-like mechanical movement. In the polymerase mode it elongates the nascent DNA by one nucleotide in each step. But, whenever it commits an error by misincorporating an incorrect nucleotide, it can switch to an exonuclease mode. In the latter mode it excises the wrong nucleotide before switching back to its polymerase mode. We develop a stochastic kinetic model of DNA replication that mimics an {\it in-vitro} experiment where a single-stranded DNA, subjected to a mechanical tension $F$, is converted to a double-stranded DNA by a single DNAP. The $F$-dependence of the average rate of replication, which depends on the rates of both polymerase and exonuclease activities of the DNAP, is in good qualitative agreement with the corresponding experimental results. We introduce 9 novel distinct {\it conditional dwell times} of a DNAP. Using the methods of first-passage times, we also derive the exact analytical expressions for the probability distributions of these conditional dwell times. The predicted $F$-dependence of these distributions are, in principle, accessible to single-molecule experiments.
[ { "created": "Wed, 9 Jan 2013 14:55:19 GMT", "version": "v1" }, { "created": "Thu, 24 Jan 2013 12:05:28 GMT", "version": "v2" }, { "created": "Thu, 11 Jul 2013 19:40:52 GMT", "version": "v3" } ]
2013-10-16
[ [ "Sharma", "Ajeet K.", "" ], [ "Chowdhury", "Debashish", "" ] ]
A DNA polymerase (DNAP) replicates a template DNA strand. It also exploits the template as the track for its own motor-like mechanical movement. In the polymerase mode it elongates the nascent DNA by one nucleotide in each step. But, whenever it commits an error by misincorporating an incorrect nucleotide, it can switch to an exonuclease mode. In the latter mode it excises the wrong nucleotide before switching back to its polymerase mode. We develop a stochastic kinetic model of DNA replication that mimics an {\it in-vitro} experiment where a single-stranded DNA, subjected to a mechanical tension $F$, is converted to a double-stranded DNA by a single DNAP. The $F$-dependence of the average rate of replication, which depends on the rates of both polymerase and exonuclease activities of the DNAP, is in good qualitative agreement with the corresponding experimental results. We introduce 9 novel distinct {\it conditional dwell times} of a DNAP. Using the methods of first-passage times, we also derive the exact analytical expressions for the probability distributions of these conditional dwell times. The predicted $F$-dependence of these distributions are, in principle, accessible to single-molecule experiments.
1307.7084
Sayak Mukherjee
Sayak Mukherjee, Stephanie Rigaud, Sang-Cheol Seok, Guo Fu, Agnieszka Prochenka, Michael Dworkin, Nicholas R. J. Gascoigne, Veronica J. Vieland, Karsten Sauer and Jayajit Das
In silico Modeling of Itk Activation Kinetics in Thymocytes Suggests Competing Positive and Negative IP4 Mediated Feedbacks Increase Robustness
null
PLOS ONE, Vol 8, Issue 9, Page e73937, September 2013
10.1371/journal.pone.0073937
null
q-bio.CB q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The inositol-phosphate messenger inositol(1,3,4,5)tetrakisphosphate (IP4) is essential for thymocyte positive selection by regulating plasma-membrane association of the protein tyrosine kinase Itk downstream of the T cell receptor (TCR). IP4 can act as a soluble analog of the phosphoinositide 3-kinase (PI3K) membrane lipid product phosphatidylinositol(3,4,5)trisphosphate (PIP3). PIP3 recruits signaling proteins such as Itk to cellular membranes by binding to PH and other domains. In thymocytes, low-dose IP4 binding to the Itk PH domain surprisingly promoted and high-dose IP4 inhibited PIP3 binding of Itk PH domains. However, the mechanisms that underlie the regulation of membrane recruitment of Itk by IP4 and PIP3 remain unclear. The distinct Itk PH domain ability to oligomerize is consistent with a cooperative-allosteric mode of IP4 action. However, other possibilities cannot be ruled out due to difficulties in quantitatively measuring the interactions between Itk, IP4 and PIP3, and in generating non-oligomerizing Itk PH domain mutants. This has hindered a full mechanistic understanding of how IP4 controls Itk function. By combining experimentally measured kinetics of PLC{\gamma}1 phosphorylation by Itk with in silico modeling of multiple Itk signaling circuits and a maximum entropy (MaxEnt) based computational approach, we show that those in silico models which are most robust against variations of protein and lipid expression levels and kinetic rates at the single cell level share a cooperative-allosteric mode of Itk regulation by IP4 involving oligomeric Itk PH domains at the plasma membrane. This identifies MaxEnt as an excellent tool for quantifying robustness for complex TCR signaling circuits and provides testable predictions to further elucidate a controversial mechanism of PIP3 signaling.
[ { "created": "Fri, 26 Jul 2013 16:13:57 GMT", "version": "v1" } ]
2014-03-05
[ [ "Mukherjee", "Sayak", "" ], [ "Rigaud", "Stephanie", "" ], [ "Seok", "Sang-Cheol", "" ], [ "Fu", "Guo", "" ], [ "Prochenka", "Agnieszka", "" ], [ "Dworkin", "Michael", "" ], [ "Gascoigne", "Nicholas R. J.", "" ], [ "Vieland", "Veronica J.", "" ], [ "Sauer", "Karsten", "" ], [ "Das", "Jayajit", "" ] ]
The inositol-phosphate messenger inositol(1,3,4,5)tetrakisphosphate (IP4) is essential for thymocyte positive selection by regulating plasma-membrane association of the protein tyrosine kinase Itk downstream of the T cell receptor (TCR). IP4 can act as a soluble analog of the phosphoinositide 3-kinase (PI3K) membrane lipid product phosphatidylinositol(3,4,5)trisphosphate (PIP3). PIP3 recruits signaling proteins such as Itk to cellular membranes by binding to PH and other domains. In thymocytes, low-dose IP4 binding to the Itk PH domain surprisingly promoted and high-dose IP4 inhibited PIP3 binding of Itk PH domains. However, the mechanisms that underlie the regulation of membrane recruitment of Itk by IP4 and PIP3 remain unclear. The distinct Itk PH domain ability to oligomerize is consistent with a cooperative-allosteric mode of IP4 action. However, other possibilities cannot be ruled out due to difficulties in quantitatively measuring the interactions between Itk, IP4 and PIP3, and in generating non-oligomerizing Itk PH domain mutants. This has hindered a full mechanistic understanding of how IP4 controls Itk function. By combining experimentally measured kinetics of PLC{\gamma}1 phosphorylation by Itk with in silico modeling of multiple Itk signaling circuits and a maximum entropy (MaxEnt) based computational approach, we show that those in silico models which are most robust against variations of protein and lipid expression levels and kinetic rates at the single cell level share a cooperative-allosteric mode of Itk regulation by IP4 involving oligomeric Itk PH domains at the plasma membrane. This identifies MaxEnt as an excellent tool for quantifying robustness for complex TCR signaling circuits and provides testable predictions to further elucidate a controversial mechanism of PIP3 signaling.
1604.06111
Lei Meng
Lei Meng, Joseph Crawford, Aaron Striegel, Tijana Milenkovic
IGLOO: Integrating global and local biological network alignment
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analogous to genomic sequence alignment, biological network alignment (NA) aims to find regions of similarities between molecular networks (rather than sequences) of different species. NA can be either local (LNA) or global (GNA). LNA aims to identify highly conserved common subnetworks, which are typically small, while GNA aims to identify large common subnetworks, which are typically suboptimally conserved. We recently showed that LNA and GNA yield complementary results: LNA has high functional but low topological alignment quality, while GNA has high topological but low functional alignment quality. Thus, we propose IGLOO, a new approach that integrates GNA and LNA in hope to reconcile the two. We evaluate IGLOO against state-of-the-art LNA (NetworkBLAST, NetAligner, AlignNemo, and AlignMCL) and GNA (GHOST, NETAL, GEDEVO, MAGNA++, WAVE, and L-GRAAL) methods. We show that IGLOO allows for a trade-off between topological and functional alignment quality better than the existing LNA and GNA methods considered in our study.
[ { "created": "Wed, 20 Apr 2016 20:05:49 GMT", "version": "v1" }, { "created": "Mon, 6 Jun 2016 01:24:12 GMT", "version": "v2" } ]
2016-06-07
[ [ "Meng", "Lei", "" ], [ "Crawford", "Joseph", "" ], [ "Striegel", "Aaron", "" ], [ "Milenkovic", "Tijana", "" ] ]
Analogous to genomic sequence alignment, biological network alignment (NA) aims to find regions of similarities between molecular networks (rather than sequences) of different species. NA can be either local (LNA) or global (GNA). LNA aims to identify highly conserved common subnetworks, which are typically small, while GNA aims to identify large common subnetworks, which are typically suboptimally conserved. We recently showed that LNA and GNA yield complementary results: LNA has high functional but low topological alignment quality, while GNA has high topological but low functional alignment quality. Thus, we propose IGLOO, a new approach that integrates GNA and LNA in hope to reconcile the two. We evaluate IGLOO against state-of-the-art LNA (NetworkBLAST, NetAligner, AlignNemo, and AlignMCL) and GNA (GHOST, NETAL, GEDEVO, MAGNA++, WAVE, and L-GRAAL) methods. We show that IGLOO allows for a trade-off between topological and functional alignment quality better than the existing LNA and GNA methods considered in our study.
1911.02595
Wenping Cui
Wenping Cui, Robert Marsland III, Pankaj Mehta
Effect of resource dynamics on species packing in diverse ecosystems
17 pages, 8 figures
Phys. Rev. Lett. 125, 048101 (2020)
10.1103/PhysRevLett.125.048101
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The competitive exclusion principle asserts that coexisting species must occupy distinct ecological niches (i.e. the number of surviving species cannot exceed the number of resources). An open question is to understand if and how different resource dynamics affect this bound. Here, we analyze a generalized consumer resource model with externally supplied resources and show that -- in contrast to self-renewing resources -- species can occupy only half of all available environmental niches. This motivates us to construct a new schema for classifying ecosystems based on species packing properties.
[ { "created": "Wed, 6 Nov 2019 19:11:06 GMT", "version": "v1" }, { "created": "Tue, 21 Jul 2020 14:52:26 GMT", "version": "v2" } ]
2020-07-29
[ [ "Cui", "Wenping", "" ], [ "Marsland", "Robert", "III" ], [ "Mehta", "Pankaj", "" ] ]
The competitive exclusion principle asserts that coexisting species must occupy distinct ecological niches (i.e. the number of surviving species cannot exceed the number of resources). An open question is to understand if and how different resource dynamics affect this bound. Here, we analyze a generalized consumer resource model with externally supplied resources and show that -- in contrast to self-renewing resources -- species can occupy only half of all available environmental niches. This motivates us to construct a new schema for classifying ecosystems based on species packing properties.
0710.1048
Phil Nelson
Philip C Nelson
Colloidal particle motion as a diagnostic of DNA conformational transitions
19pp. Accepted for publication in Curr Op Colloid Interf Science
null
null
null
q-bio.QM q-bio.BM
null
Tethered particle motion is an experimental technique to monitor conformational changes in single molecules of DNA in real time, by observing the position fluctuations of a micrometer-size particle attached to the DNA. This article reviews some recent work on theoretical problems inherent in the interpretation of TPM experiments, both in equilibrium and dynamical aspects.
[ { "created": "Thu, 4 Oct 2007 16:35:06 GMT", "version": "v1" } ]
2007-10-05
[ [ "Nelson", "Philip C", "" ] ]
Tethered particle motion is an experimental technique to monitor conformational changes in single molecules of DNA in real time, by observing the position fluctuations of a micrometer-size particle attached to the DNA. This article reviews some recent work on theoretical problems inherent in the interpretation of TPM experiments, both in equilibrium and dynamical aspects.
2001.02287
Manoj Srinivasan
Geoffrey L. Brown, Nidhi Seethapathi, and Manoj Srinivasan
A unified energy optimality criterion predicts human navigation paths and speeds
null
Proceedings of the National Academy of Sciences 118.29 (2021)
10.1073/pnas.2020327118
null
q-bio.NC nlin.AO
http://creativecommons.org/licenses/by/4.0/
Navigating our physical environment requires changing directions and turning. Despite its ecological importance, we do not have a unified theoretical account of non-straight-line human movement. Here, we present a unified optimality criterion that predicts disparate non-straight-line walking phenomena, with straight-line walking as a special case. We first characterized the metabolic cost of turning, deriving the cost landscape as a function of turning radius and rate. We then generalized this cost landscape to arbitrarily complex trajectories, allowing the velocity direction to deviate from body orientation (holonomic walking). We used this generalized optimality criterion to mathematically predict movement patterns in multiple contexts of varying complexity: walking on prescribed paths, turning in place, navigating an angled corridor, navigating freely with end-point constraints, walking through doors, and navigating around obstacles. In these tasks, humans moved at speeds and paths predicted by our optimality criterion, slowing down to turn and never using sharp turns. We show that the shortest path between two points is, counterintuitively, often not energy optimal, and indeed, humans do not use the shortest path in such cases. Thus, we have obtained a unified theoretical account that predicts human walking paths and speeds in diverse contexts. Our model focuses on walking in healthy adults; future work could generalize this model to other human populations, other animals, and other locomotor tasks.
[ { "created": "Tue, 7 Jan 2020 21:50:28 GMT", "version": "v1" }, { "created": "Mon, 17 Aug 2020 18:58:41 GMT", "version": "v2" }, { "created": "Mon, 1 Nov 2021 21:52:25 GMT", "version": "v3" } ]
2021-11-03
[ [ "Brown", "Geoffrey L.", "" ], [ "Seethapathi", "Nidhi", "" ], [ "Srinivasan", "Manoj", "" ] ]
Navigating our physical environment requires changing directions and turning. Despite its ecological importance, we do not have a unified theoretical account of non-straight-line human movement. Here, we present a unified optimality criterion that predicts disparate non-straight-line walking phenomena, with straight-line walking as a special case. We first characterized the metabolic cost of turning, deriving the cost landscape as a function of turning radius and rate. We then generalized this cost landscape to arbitrarily complex trajectories, allowing the velocity direction to deviate from body orientation (holonomic walking). We used this generalized optimality criterion to mathematically predict movement patterns in multiple contexts of varying complexity: walking on prescribed paths, turning in place, navigating an angled corridor, navigating freely with end-point constraints, walking through doors, and navigating around obstacles. In these tasks, humans moved at speeds and paths predicted by our optimality criterion, slowing down to turn and never using sharp turns. We show that the shortest path between two points is, counterintuitively, often not energy optimal, and indeed, humans do not use the shortest path in such cases. Thus, we have obtained a unified theoretical account that predicts human walking paths and speeds in diverse contexts. Our model focuses on walking in healthy adults; future work could generalize this model to other human populations, other animals, and other locomotor tasks.
2101.10356
Muyuan Chen
Muyuan Chen and Steven Ludtke
Deep learning based mixed-dimensional GMM for characterizing variability in CryoEM
31 pages, 5 main figures and 8 supplementary figures
Nature Methods 18, 930-936 (2021)
10.1038/s41592-021-01220-5
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
Structural flexibility and/or dynamic interactions with other molecules is a critical aspect of protein function. CryoEM provides direct visualization of individual macromolecules sampling different conformational and compositional states. While numerous methods are available for computational classification of discrete states, characterization of continuous conformational changes or large numbers of discrete states without human supervision remains challenging. Here we present e2gmm, a machine learning algorithm to determine a conformational landscape for proteins or complexes using a 3-D Gaussian mixture model mapped onto 2-D particle images in known orientations. Using a deep neural network architecture, e2gmm can automatically resolve the structural heterogeneity within the protein complex and map particles onto a small latent space describing conformational and compositional changes. This system presents a more intuitive and flexible representation than other manifold methods currently in use. We demonstrate this method on both simulated data as well as three biological systems, to explore compositional and conformational changes at a range of scales. The software is distributed as part of EMAN2.
[ { "created": "Mon, 25 Jan 2021 19:05:23 GMT", "version": "v1" }, { "created": "Sun, 23 May 2021 14:06:07 GMT", "version": "v2" } ]
2021-08-04
[ [ "Chen", "Muyuan", "" ], [ "Ludtke", "Steven", "" ] ]
Structural flexibility and/or dynamic interactions with other molecules is a critical aspect of protein function. CryoEM provides direct visualization of individual macromolecules sampling different conformational and compositional states. While numerous methods are available for computational classification of discrete states, characterization of continuous conformational changes or large numbers of discrete states without human supervision remains challenging. Here we present e2gmm, a machine learning algorithm to determine a conformational landscape for proteins or complexes using a 3-D Gaussian mixture model mapped onto 2-D particle images in known orientations. Using a deep neural network architecture, e2gmm can automatically resolve the structural heterogeneity within the protein complex and map particles onto a small latent space describing conformational and compositional changes. This system presents a more intuitive and flexible representation than other manifold methods currently in use. We demonstrate this method on both simulated data as well as three biological systems, to explore compositional and conformational changes at a range of scales. The software is distributed as part of EMAN2.
1908.01646
Max Falkenberg
Alberto Ciacci, Max Falkenberg, Kishan A. Manani, Tim S. Evans, Nicholas S. Peters, Kim Christensen
Understanding the transition from paroxysmal to persistent atrial fibrillation from micro-anatomical re-entry in a simple model
null
Phys. Rev. Research 2, 023311 (2020)
10.1103/PhysRevResearch.2.023311
Imperial/TP/19/TSE/2
q-bio.TO cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
Atrial fibrillation (AF) is the most common cardiac arrhythmia, characterised by the chaotic motion of electrical wavefronts in the atria. In clinical practice, AF is classified under two primary categories: paroxysmal AF, short intermittent episodes separated by periods of normal electrical activity, and persistent AF, longer uninterrupted episodes of chaotic electrical activity. However, the precise reasons why AF in a given patient is paroxysmal or persistent is poorly understood. Recently, we have introduced the percolation-based Christensen-Manani-Peters (CMP) model of AF which naturally exhibits both paroxysmal and persistent AF, but precisely how these differences emerge in the model is unclear. In this paper, we dissect the CMP model to identify the cause of these different AF classifications. Starting from a mean-field model where we describe AF as a simple birth-death process, we add layers of complexity to the model and show that persistent AF arises from re-entrant circuits which exhibit an asymmetry in their probability of activation relative to deactivation. As a result, different simulations generated at identical model parameters can exhibit fibrillatory episodes spanning several orders of magnitude from a few seconds to months. These findings demonstrate that diverse, complex fibrillatory dynamics can emerge from very simple dynamics in models of AF.
[ { "created": "Mon, 5 Aug 2019 14:34:34 GMT", "version": "v1" }, { "created": "Wed, 13 May 2020 10:16:26 GMT", "version": "v2" } ]
2020-07-01
[ [ "Ciacci", "Alberto", "" ], [ "Falkenberg", "Max", "" ], [ "Manani", "Kishan A.", "" ], [ "Evans", "Tim S.", "" ], [ "Peters", "Nicholas S.", "" ], [ "Christensen", "Kim", "" ] ]
Atrial fibrillation (AF) is the most common cardiac arrhythmia, characterised by the chaotic motion of electrical wavefronts in the atria. In clinical practice, AF is classified under two primary categories: paroxysmal AF, short intermittent episodes separated by periods of normal electrical activity, and persistent AF, longer uninterrupted episodes of chaotic electrical activity. However, the precise reasons why AF in a given patient is paroxysmal or persistent is poorly understood. Recently, we have introduced the percolation-based Christensen-Manani-Peters (CMP) model of AF which naturally exhibits both paroxysmal and persistent AF, but precisely how these differences emerge in the model is unclear. In this paper, we dissect the CMP model to identify the cause of these different AF classifications. Starting from a mean-field model where we describe AF as a simple birth-death process, we add layers of complexity to the model and show that persistent AF arises from re-entrant circuits which exhibit an asymmetry in their probability of activation relative to deactivation. As a result, different simulations generated at identical model parameters can exhibit fibrillatory episodes spanning several orders of magnitude from a few seconds to months. These findings demonstrate that diverse, complex fibrillatory dynamics can emerge from very simple dynamics in models of AF.
1609.06547
Frederic Bois
Frederic Y. Bois, Nazanin Golbamaki-Bakhtyari, Simona Kovarich, Cleo Tebby, Henry A. Gabb, Emmanuel Lemazurier
A high-throughput analysis of ovarian cycle disruption by mixtures of aromatase inhibitors
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Combining computational toxicology with ExpoCast exposure estimates and ToxCast assay data gives us access to predictions of human health risks stemming from exposures to chemical mixtures. Objectives: To explore, through mathematical modeling and simulations, the size of potential effects of random mixtures of aromatase inhibitors on the dynamics of women's menstrual cycles. Methods: We simulated random exposures to millions of potential mixtures of 86 aromatase inhibitors. A pharmacokinetic model of intake and disposition of the chemicals predicted their internal concentration as a function of time (up to two years). A ToxCast aromatase assay provided concentration-inhibition relationships for each chemical. The resulting total aromatase inhibition was input to a mathematical model of the hormonal hypothalamus-pituitary-ovarian control of ovulation in women. Results: Above 10% inhibition of estradiol synthesis by aromatase inhibitors, noticeable (eventually reversible) effects on ovulation were predicted. Exposures to individual chemicals never led to such effects. In our best estimate, about 10% of the combined exposures simulated had mild to catastrophic impacts on ovulation. A lower bound on that figure, obtained using an optimistic exposure scenario, was 0.3%. Conclusions: These results demonstrate the possibility to predict large-scale mixture effects for endocrine disrupters with a predictive toxicology approach, suitable for high-throughput ranking and risk assessment. The size of the effects predicted is consistent with an increased risk of infertility in women from everyday exposures to our chemical environment.
[ { "created": "Tue, 13 Sep 2016 13:30:25 GMT", "version": "v1" }, { "created": "Tue, 2 May 2017 09:38:39 GMT", "version": "v2" } ]
2017-05-03
[ [ "Bois", "Frederic Y.", "" ], [ "Golbamaki-Bakhtyari", "Nazanin", "" ], [ "Kovarich", "Simona", "" ], [ "Tebby", "Cleo", "" ], [ "Gabb", "Henry A.", "" ], [ "Lemazurier", "Emmanuel", "" ] ]
Background: Combining computational toxicology with ExpoCast exposure estimates and ToxCast assay data gives us access to predictions of human health risks stemming from exposures to chemical mixtures. Objectives: To explore, through mathematical modeling and simulations, the size of potential effects of random mixtures of aromatase inhibitors on the dynamics of women's menstrual cycles. Methods: We simulated random exposures to millions of potential mixtures of 86 aromatase inhibitors. A pharmacokinetic model of intake and disposition of the chemicals predicted their internal concentration as a function of time (up to two years). A ToxCast aromatase assay provided concentration-inhibition relationships for each chemical. The resulting total aromatase inhibition was input to a mathematical model of the hormonal hypothalamus-pituitary-ovarian control of ovulation in women. Results: Above 10% inhibition of estradiol synthesis by aromatase inhibitors, noticeable (eventually reversible) effects on ovulation were predicted. Exposures to individual chemicals never led to such effects. In our best estimate, about 10% of the combined exposures simulated had mild to catastrophic impacts on ovulation. A lower bound on that figure, obtained using an optimistic exposure scenario, was 0.3%. Conclusions: These results demonstrate the possibility to predict large-scale mixture effects for endocrine disrupters with a predictive toxicology approach, suitable for high-throughput ranking and risk assessment. The size of the effects predicted is consistent with an increased risk of infertility in women from everyday exposures to our chemical environment.
1407.1642
Christoph von der Malsburg
Christoph von der Malsburg
A Vision Architecture
6 pages, 2 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are offering a particular interpretation (well within the range of experimentally and theoretically accepted notions) of neural connectivity and dynamics and discuss it as the data-and-process architecture of the visual system. In this interpretation the permanent connectivity of cortex is an overlay of well-structured networks, nets, which are formed on the slow time-scale of learning by self-interaction of the network under the influence of sensory input, and which are selectively activated on the fast perceptual time-scale. Nets serve as an explicit, hierarchically structured representation of visual structure in the various sub-modalities, as constraint networks favouring mutually consistent sets of latent variables and as projection mappings to deal with invariance.
[ { "created": "Mon, 7 Jul 2014 09:41:47 GMT", "version": "v1" } ]
2014-07-08
[ [ "von der Malsburg", "Christoph", "" ] ]
We are offering a particular interpretation (well within the range of experimentally and theoretically accepted notions) of neural connectivity and dynamics and discuss it as the data-and-process architecture of the visual system. In this interpretation the permanent connectivity of cortex is an overlay of well-structured networks, nets, which are formed on the slow time-scale of learning by self-interaction of the network under the influence of sensory input, and which are selectively activated on the fast perceptual time-scale. Nets serve as an explicit, hierarchically structured representation of visual structure in the various sub-modalities, as constraint networks favouring mutually consistent sets of latent variables and as projection mappings to deal with invariance.
1907.09377
Abigail Plummer
Giorgia Guccione, Roberto Benzi, Abigail Plummer and Federico Toschi
Discrete Eulerian model for population genetics and dynamics under flow
9 pages, 9 figures
Phys. Rev. E 100, 062105 (2019)
10.1103/PhysRevE.100.062105
null
q-bio.PE nlin.CD physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Marine species reproduce and compete while being advected by turbulent flows. It is largely unknown, both theoretically and experimentally, how population dynamics and genetics are changed by the presence of fluid flows. Discrete agent-based simulations in continuous space allow for accurate treatment of advection and number fluctuations, but can be computationally expensive for even modest organism densities. In this report, we propose an algorithm to overcome some of these challenges. We first provide a thorough validation of the algorithm in one and two dimensions without flow. Next, we focus on the case of weakly compressible flows in two dimensions. This models organisms such as phytoplankton living at a specific depth in the three-dimensional, incompressible ocean experiencing upwelling and/or downwelling events. We show that organisms born at sources in a two-dimensional time-independent flow experience an increase in fixation probability.
[ { "created": "Thu, 18 Jul 2019 15:33:36 GMT", "version": "v1" }, { "created": "Fri, 26 Jul 2019 17:14:17 GMT", "version": "v2" }, { "created": "Sat, 21 Dec 2019 22:40:19 GMT", "version": "v3" } ]
2019-12-24
[ [ "Guccione", "Giorgia", "" ], [ "Benzi", "Roberto", "" ], [ "Plummer", "Abigail", "" ], [ "Toschi", "Federico", "" ] ]
Marine species reproduce and compete while being advected by turbulent flows. It is largely unknown, both theoretically and experimentally, how population dynamics and genetics are changed by the presence of fluid flows. Discrete agent-based simulations in continuous space allow for accurate treatment of advection and number fluctuations, but can be computationally expensive for even modest organism densities. In this report, we propose an algorithm to overcome some of these challenges. We first provide a thorough validation of the algorithm in one and two dimensions without flow. Next, we focus on the case of weakly compressible flows in two dimensions. This models organisms such as phytoplankton living at a specific depth in the three-dimensional, incompressible ocean experiencing upwelling and/or downwelling events. We show that organisms born at sources in a two-dimensional time-independent flow experience an increase in fixation probability.
q-bio/0310019
Michael Deem
Michael W. Deem
Complexity in the Immune System
25 pages, 12 figures
null
null
null
q-bio.CB cond-mat q-bio.PE
null
The immune system is a real-time example of an evolving system that navigates the essentially infinite complexity of protein sequence space. How this system responds to disease and vaccination is discussed. Of particular focus is the case when vaccination leads to increased susceptibility to disease, a phenomenon termed original antigenic sin. A physical theory of protein evolution to explain limitations in the immune system response to vaccination and disease is discussed, and original antigenic sin is explained as stemming from localization of the immune system response in antibody sequence space. This localization is a result of the roughness in sequence space of the evolved antibody affinity constant for antigen and is observed for diseases with high year-to-year mutation rates, such as influenza.
[ { "created": "Wed, 15 Oct 2003 20:13:49 GMT", "version": "v1" } ]
2007-05-23
[ [ "Deem", "Michael W.", "" ] ]
The immune system is a real-time example of an evolving system that navigates the essentially infinite complexity of protein sequence space. How this system responds to disease and vaccination is discussed. Of particular focus is the case when vaccination leads to increased susceptibility to disease, a phenomenon termed original antigenic sin. A physical theory of protein evolution to explain limitations in the immune system response to vaccination and disease is discussed, and original antigenic sin is explained as stemming from localization of the immune system response in antibody sequence space. This localization is a result of the roughness in sequence space of the evolved antibody affinity constant for antigen and is observed for diseases with high year-to-year mutation rates, such as influenza.
1801.08216
Edward D Lee
Edward D. Lee, Bryan C Daniels
Convenient Interface to Inverse Ising (ConIII): A Python 3 Package for Solving Ising-Type Maximum Entropy Models
null
null
null
null
q-bio.QM cond-mat.stat-mech physics.comp-ph
http://creativecommons.org/licenses/by/4.0/
ConIII (pronounced CON-ee) is an open-source Python project providing a simple interface to solving the pairwise and higher order Ising model and a base for extension to other maximum entropy models. We describe the maximum entropy problem and give an overview of the algorithms that are implemented as part of ConIII (https://github.com/eltrompetero/coniii) including Monte Carlo histogram, pseudolikelihood, minimum probability flow, a regularized mean field method, and a cluster expansion method. Our goal is to make a variety of maximum entropy techniques accessible to those unfamiliar with the techniques and accelerate workflow for users.
[ { "created": "Wed, 24 Jan 2018 22:10:43 GMT", "version": "v1" }, { "created": "Mon, 11 Mar 2019 01:42:19 GMT", "version": "v2" } ]
2019-03-12
[ [ "Lee", "Edward D.", "" ], [ "Daniels", "Bryan C", "" ] ]
ConIII (pronounced CON-ee) is an open-source Python project providing a simple interface to solving the pairwise and higher order Ising model and a base for extension to other maximum entropy models. We describe the maximum entropy problem and give an overview of the algorithms that are implemented as part of ConIII (https://github.com/eltrompetero/coniii) including Monte Carlo histogram, pseudolikelihood, minimum probability flow, a regularized mean field method, and a cluster expansion method. Our goal is to make a variety of maximum entropy techniques accessible to those unfamiliar with the techniques and accelerate workflow for users.
2401.05342
Max Burg
Max F. Burg, Thomas Zenkel, Michaela Vystr\v{c}ilov\'a, Jonathan Oesterle, Larissa H\"ofling, Konstantin F. Willeke, Jan Lause, Sarah M\"uller, Paul G. Fahey, Zhiwei Ding, Kelli Restivo, Shashwat Sridhar, Tim Gollisch, Philipp Berens, Andreas S. Tolias, Thomas Euler, Matthias Bethge, Alexander S. Ecker
Most discriminative stimuli for functional cell type clustering
null
null
null
null
q-bio.NC cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Identifying cell types and understanding their functional properties is crucial for unraveling the mechanisms underlying perception and cognition. In the retina, functional types can be identified by carefully selected stimuli, but this requires expert domain knowledge and biases the procedure towards previously known cell types. In the visual cortex, it is still unknown what functional types exist and how to identify them. Thus, for unbiased identification of the functional cell types in retina and visual cortex, new approaches are needed. Here we propose an optimization-based clustering approach using deep predictive models to obtain functional clusters of neurons using Most Discriminative Stimuli (MDS). Our approach alternates between stimulus optimization with cluster reassignment akin to an expectation-maximization algorithm. The algorithm recovers functional clusters in mouse retina, marmoset retina and macaque visual area V4. This demonstrates that our approach can successfully find discriminative stimuli across species, stages of the visual system and recording techniques. The resulting most discriminative stimuli can be used to assign functional cell types fast and on the fly, without the need to train complex predictive models or show a large natural scene dataset, paving the way for experiments that were previously limited by experimental time. Crucially, MDS are interpretable: they visualize the distinctive stimulus patterns that most unambiguously identify a specific type of neuron.
[ { "created": "Wed, 29 Nov 2023 12:58:42 GMT", "version": "v1" }, { "created": "Thu, 14 Mar 2024 15:40:01 GMT", "version": "v2" } ]
2024-03-15
[ [ "Burg", "Max F.", "" ], [ "Zenkel", "Thomas", "" ], [ "Vystrčilová", "Michaela", "" ], [ "Oesterle", "Jonathan", "" ], [ "Höfling", "Larissa", "" ], [ "Willeke", "Konstantin F.", "" ], [ "Lause", "Jan", "" ], [ "Müller", "Sarah", "" ], [ "Fahey", "Paul G.", "" ], [ "Ding", "Zhiwei", "" ], [ "Restivo", "Kelli", "" ], [ "Sridhar", "Shashwat", "" ], [ "Gollisch", "Tim", "" ], [ "Berens", "Philipp", "" ], [ "Tolias", "Andreas S.", "" ], [ "Euler", "Thomas", "" ], [ "Bethge", "Matthias", "" ], [ "Ecker", "Alexander S.", "" ] ]
Identifying cell types and understanding their functional properties is crucial for unraveling the mechanisms underlying perception and cognition. In the retina, functional types can be identified by carefully selected stimuli, but this requires expert domain knowledge and biases the procedure towards previously known cell types. In the visual cortex, it is still unknown what functional types exist and how to identify them. Thus, for unbiased identification of the functional cell types in retina and visual cortex, new approaches are needed. Here we propose an optimization-based clustering approach using deep predictive models to obtain functional clusters of neurons using Most Discriminative Stimuli (MDS). Our approach alternates between stimulus optimization with cluster reassignment akin to an expectation-maximization algorithm. The algorithm recovers functional clusters in mouse retina, marmoset retina and macaque visual area V4. This demonstrates that our approach can successfully find discriminative stimuli across species, stages of the visual system and recording techniques. The resulting most discriminative stimuli can be used to assign functional cell types fast and on the fly, without the need to train complex predictive models or show a large natural scene dataset, paving the way for experiments that were previously limited by experimental time. Crucially, MDS are interpretable: they visualize the distinctive stimulus patterns that most unambiguously identify a specific type of neuron.
1908.01445
Victor Solovyev
Igor Seledtsov (1), Jaroslav Efremov (1 and 2), Vladimir Molodtsov (1 and 2), Victor Solovyev (1)
ReadsMap: a new tool for high precision mapping of DNAseq and RNAseq read sequences
18 pages, 5 figures and 14 tables
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are currently plenty of programs available for mapping short sequences (reads) to a genome. Most of them, however, including such popular and actively developed programs as Bowtie, BWA, TopHat and many others, are based on the Burrows-Wheeler Transform (BWT) algorithm. This approach is very effective for mapping high-homology reads, but runs into problems when mapping reads with high levels of errors or SNPs. It also has problems with mapping spliced RNASeq reads (reads that align with gaps corresponding to intron sequences), the kind that is essential for finding introns and alternatively spliced gene isoforms. Meanwhile, finding intron positions is the most important task for determining gene structure, and especially alternatively spliced variants of genes. In this paper, we propose a new algorithm that involves hashing the reference genome. The ReadsMap program, implementing this algorithm, demonstrates very high-accuracy mapping of large numbers of short reads to one or more genomic contigs. This is achieved mostly by better alignment of very short parts of reads separated by long introns, taking into account information from mapping other reads containing the same intron inserted between bigger blocks. Availability and implementation: ReadsMap is implemented in C. It is incorporated in the Fgenesh++ gene identification pipeline and is freely available to academic users at the Softberry web server www.softberry.com.
[ { "created": "Mon, 5 Aug 2019 02:29:04 GMT", "version": "v1" } ]
2019-08-06
[ [ "Seledtsov", "Igor", "", "1 and 2" ], [ "Efremov", "Jaroslav", "", "1 and 2" ], [ "Molodtsov", "Vladimir", "", "1\n and2" ], [ "Solovyev", "Victor", "" ] ]
There are currently plenty of programs available for mapping short sequences (reads) to a genome. Most of them, however, including such popular and actively developed programs as Bowtie, BWA, TopHat and many others, are based on the Burrows-Wheeler Transform (BWT) algorithm. This approach is very effective for mapping high-homology reads, but runs into problems when mapping reads with high levels of errors or SNPs. It also has problems with mapping spliced RNASeq reads (reads that align with gaps corresponding to intron sequences), the kind that is essential for finding introns and alternatively spliced gene isoforms. Meanwhile, finding intron positions is the most important task for determining gene structure, and especially alternatively spliced variants of genes. In this paper, we propose a new algorithm that involves hashing the reference genome. The ReadsMap program, implementing this algorithm, demonstrates very high-accuracy mapping of large numbers of short reads to one or more genomic contigs. This is achieved mostly by better alignment of very short parts of reads separated by long introns, taking into account information from mapping other reads containing the same intron inserted between bigger blocks. Availability and implementation: ReadsMap is implemented in C. It is incorporated in the Fgenesh++ gene identification pipeline and is freely available to academic users at the Softberry web server www.softberry.com.
1604.06591
Alan D. Rendall
Stefan Disselnkoetter, Alan D. Rendall
Stability of stationary solutions in models of the Calvin cycle
18 pages
null
null
null
q-bio.SC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper results are obtained concerning the number of positive stationary solutions in simple models of the Calvin cycle of photosynthesis and the stability of these solutions. It is proved that there are open sets of parameters in a model of Zhu et al. for which there exist two positive stationary solutions. There are never more than two isolated positive stationary solutions, but under certain explicit special conditions on the parameters there is a whole continuum of positive stationary solutions. It is also shown that in the set of parameter values for which two isolated positive stationary solutions exist there is an open subset where one of the solutions is asymptotically stable and the other is unstable. In related models, for which it was known that more than one positive stationary solution exists, it is proved that there are parameter values for which one of these solutions is asymptotically stable and the other unstable. A key technical aspect of the proofs is to exploit the fact that there is a bifurcation where the centre manifold is one-dimensional.
[ { "created": "Fri, 22 Apr 2016 09:51:26 GMT", "version": "v1" } ]
2016-04-25
[ [ "Disselnkoetter", "Stefan", "" ], [ "Rendall", "Alan D.", "" ] ]
In this paper results are obtained concerning the number of positive stationary solutions in simple models of the Calvin cycle of photosynthesis and the stability of these solutions. It is proved that there are open sets of parameters in a model of Zhu et al. for which there exist two positive stationary solutions. There are never more than two isolated positive stationary solutions, but under certain explicit special conditions on the parameters there is a whole continuum of positive stationary solutions. It is also shown that in the set of parameter values for which two isolated positive stationary solutions exist there is an open subset where one of the solutions is asymptotically stable and the other is unstable. In related models, for which it was known that more than one positive stationary solution exists, it is proved that there are parameter values for which one of these solutions is asymptotically stable and the other unstable. A key technical aspect of the proofs is to exploit the fact that there is a bifurcation where the centre manifold is one-dimensional.
q-bio/0609007
Francesco Romeo
F. Romeo
A simple model of energy expenditure in human locomotion
13 pages, 5 figures, submitted to EPJ B
Revista Brasileira de Ensino de Fisica, v. 31, n. 4, 4306 (2009)
null
null
q-bio.QM
null
A simple harmonic oscillator model is proposed to describe the mechanical power involved in human locomotion. In this framework, by taking into account the anthropometric parameters of a standard individual, we are able to calculate the speed-power curves in human walking. The proposed model accounts for the well-known Margaria's law, in which the cost of human running (independent of the speed) is fixed at 1 Kcal/(Kg Km). The model includes the effects of a gentle slope (either positive or negative) and the effect due to the mechanical response of the walking surface. The model results obtained in the presence of a slope are in qualitative agreement with the experimental data obtained by A. Leonardi et al.
[ { "created": "Wed, 6 Sep 2006 09:10:45 GMT", "version": "v1" } ]
2010-10-05
[ [ "Romeo", "F.", "" ] ]
A simple harmonic oscillator model is proposed to describe the mechanical power involved in human locomotion. In this framework, by taking into account the anthropometric parameters of a standard individual, we are able to calculate the speed-power curves in human walking. The proposed model accounts for the well-known Margaria's law, in which the cost of human running (independent of the speed) is fixed at 1 Kcal/(Kg Km). The model includes the effects of a gentle slope (either positive or negative) and the effect due to the mechanical response of the walking surface. The model results obtained in the presence of a slope are in qualitative agreement with the experimental data obtained by A. Leonardi et al.
1711.07446
Avgoustinos Vouros
Avgoustinos Vouros, Tiago V. Gehring, Kinga Szydlowska, Artur Janusz, Mike Croucher, Katarzyna Lukasiuk, Witold Konopka, Carmen Sandi, Zehai Tu, Eleni Vasilaki
A generalised framework for detailed classification of swimming paths inside the Morris Water Maze
null
Scientific Reports volume 8, Article number: 15089 (2018)
10.1038/s41598-018-33456-1
null
q-bio.QM cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Morris Water Maze is commonly used in behavioural neuroscience for the study of spatial learning with rodents. Over the years, various methods of analysing rodent data collected in this task have been proposed. These methods span from classical performance measurements (e.g. escape latency, rodent speed, quadrant preference) to more sophisticated methods of categorisation which classify the animal swimming path into behavioural classes known as strategies. Classification techniques provide additional insight in relation to the actual animal behaviours, but still only a limited number of studies utilise them, mainly because they depend heavily on machine learning knowledge. We have previously demonstrated that the animals implement various strategies and that classifying whole trajectories can lead to the loss of important information. In this work, we developed a generalised and robust classification methodology which implements majority voting to boost the classification performance and successfully eliminate the need for manual tuning. Based on this framework, we built a complete software package, capable of performing the full analysis described in this paper. The software provides an easy-to-use graphical user interface (GUI) through which users can enter their trajectory data, segment and label them and finally generate reports and figures of the results.
[ { "created": "Mon, 20 Nov 2017 18:12:19 GMT", "version": "v1" }, { "created": "Mon, 18 Dec 2017 18:25:17 GMT", "version": "v2" } ]
2018-10-11
[ [ "Vouros", "Avgoustinos", "" ], [ "Gehring", "Tiago V.", "" ], [ "Szydlowska", "Kinga", "" ], [ "Janusz", "Artur", "" ], [ "Croucher", "Mike", "" ], [ "Lukasiuk", "Katarzyna", "" ], [ "Konopka", "Witold", "" ], [ "Sandi", "Carmen", "" ], [ "Tu", "Zehai", "" ], [ "Vasilaki", "Eleni", "" ] ]
The Morris Water Maze is commonly used in behavioural neuroscience for the study of spatial learning with rodents. Over the years, various methods of analysing rodent data collected in this task have been proposed. These methods span from classical performance measurements (e.g. escape latency, rodent speed, quadrant preference) to more sophisticated methods of categorisation which classify the animal swimming path into behavioural classes known as strategies. Classification techniques provide additional insight in relation to the actual animal behaviours, but still only a limited number of studies utilise them, mainly because they depend heavily on machine learning knowledge. We have previously demonstrated that the animals implement various strategies and that classifying whole trajectories can lead to the loss of important information. In this work, we developed a generalised and robust classification methodology which implements majority voting to boost the classification performance and successfully eliminate the need for manual tuning. Based on this framework, we built a complete software package, capable of performing the full analysis described in this paper. The software provides an easy-to-use graphical user interface (GUI) through which users can enter their trajectory data, segment and label them and finally generate reports and figures of the results.
1309.0265
Cameron Browne
Cameron Browne, Lydia Bourouiba, Robert Smith
From regional pulse vaccination to global disease eradication: insights from a mathematical model of Poliomyelitis
Added section 6.1, made other revisions, changed title
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mass-vaccination campaigns are an important strategy in the global fight against poliomyelitis and measles. The large-scale logistics required for these mass immunisation campaigns magnifies the need for research into the effectiveness and optimal deployment of pulse vaccination. In order to better understand this control strategy, we propose a mathematical model accounting for the disease dynamics in connected regions, incorporating seasonality, environmental reservoirs and independent periodic pulse vaccination schedules in each region. The effective reproduction number, $R_e$, is defined and proved to be a global threshold for persistence of the disease. Analytical and numerical calculations show the importance of synchronising the pulse vaccinations in connected regions and the timing of the pulses with respect to the pathogen circulation seasonality. Our results indicate that it may be crucial for mass-vaccination programs, such as national immunisation days, to be synchronised across different regions. In addition, simulations show that a migration imbalance can increase $R_e$ and alter how pulse vaccination should be optimally distributed among the patches, similar to results found with constant-rate vaccination. Furthermore, contrary to the case of constant-rate vaccination, the fraction of environmental transmission affects the value of $R_e$ when pulse vaccination is present.
[ { "created": "Sun, 1 Sep 2013 21:15:41 GMT", "version": "v1" }, { "created": "Wed, 13 Nov 2013 18:38:42 GMT", "version": "v2" }, { "created": "Tue, 27 May 2014 17:30:48 GMT", "version": "v3" } ]
2014-05-28
[ [ "Browne", "Cameron", "" ], [ "Bourouiba", "Lydia", "" ], [ "Smith", "Robert", "" ] ]
Mass-vaccination campaigns are an important strategy in the global fight against poliomyelitis and measles. The large-scale logistics required for these mass immunisation campaigns magnifies the need for research into the effectiveness and optimal deployment of pulse vaccination. In order to better understand this control strategy, we propose a mathematical model accounting for the disease dynamics in connected regions, incorporating seasonality, environmental reservoirs and independent periodic pulse vaccination schedules in each region. The effective reproduction number, $R_e$, is defined and proved to be a global threshold for persistence of the disease. Analytical and numerical calculations show the importance of synchronising the pulse vaccinations in connected regions and the timing of the pulses with respect to the pathogen circulation seasonality. Our results indicate that it may be crucial for mass-vaccination programs, such as national immunisation days, to be synchronised across different regions. In addition, simulations show that a migration imbalance can increase $R_e$ and alter how pulse vaccination should be optimally distributed among the patches, similar to results found with constant-rate vaccination. Furthermore, contrary to the case of constant-rate vaccination, the fraction of environmental transmission affects the value of $R_e$ when pulse vaccination is present.
2203.07063
Giorgio Guzzetta
Mattia Manica, Maria Litvinova, Alfredo De Bellis, Giorgio Guzzetta, Pamela Mancuso, Massimo Vicentini, Francesco Venturelli, Eufemia Bisaccia, Ana I. Bento, Piero Poletti, Valentina Marziano, Agnese Zardini, Valeria d'Andrea, Filippo Trentini, Antonino Bella, Flavia Riccardo, Patrizio Pezzotti, Marco Ajelli, Paolo Giorgi Rossi, Stefano Merler and the Reggio Emilia COVID-19 Working Group
Estimation of the incubation period and generation time of SARS-CoV-2 Alpha and Delta variants from contact tracing data
11 pages, 3 figures, 2 tables, supplementary material
null
null
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Background. During 2021, the COVID-19 pandemic was characterized by the emergence of lineages with increased fitness. For most of these variants, quantitative information is scarce on epidemiological quantities such as the incubation period and generation time, which are critical for both public health decisions and scientific research. Method. We analyzed a dataset collected during contact tracing activities in the province of Reggio Emilia, Italy, throughout 2021. We determined the distributions of the incubation period using information on negative PCR tests and the date of last exposure from 282 symptomatic cases. We estimated the distributions of the intrinsic generation time (the time between the infection dates of an infector and its secondary cases under a fully susceptible population) using a Bayesian inference approach applied to 4,435 SARS-CoV-2 cases clustered in 1,430 households where at least one secondary case was recorded. Results. We estimated a mean incubation period of 4.9 days (95% credible intervals, CrI, 4.4-5.4; 95 percentile of the mean distribution: 1-12) for Alpha and 4.5 days (95%CrI 4.0-5.0; 95 percentile: 1-10) for Delta. The intrinsic generation time was estimated to have a mean of 6.0 days (95% CrI 5.6-6.4; 95 percentile: 1-15) for Alpha and of 6.6 days (95%CrI 6.0-7.3; 95 percentile: 1-18) for Delta. The household serial interval was 2.6 days (95%CrI 2.4-2.7) for Alpha and 2.4 days (95%CrI 2.2-2.6) for Delta, and the estimated proportion of pre-symptomatic transmission was 54-55% for both variants. Conclusions. These results indicate limited differences in the incubation period and intrinsic generation time of SARS-CoV-2 variants Alpha and Delta compared to ancestral lineages.
[ { "created": "Fri, 11 Mar 2022 14:59:21 GMT", "version": "v1" } ]
2022-03-15
[ [ "Manica", "Mattia", "" ], [ "Litvinova", "Maria", "" ], [ "De Bellis", "Alfredo", "" ], [ "Guzzetta", "Giorgio", "" ], [ "Mancuso", "Pamela", "" ], [ "Vicentini", "Massimo", "" ], [ "Venturelli", "Francesco", "" ], [ "Bisaccia", "Eufemia", "" ], [ "Bento", "Ana I.", "" ], [ "Poletti", "Piero", "" ], [ "Marziano", "Valentina", "" ], [ "Zardini", "Agnese", "" ], [ "d'Andrea", "Valeria", "" ], [ "Trentini", "Filippo", "" ], [ "Bella", "Antonino", "" ], [ "Riccardo", "Flavia", "" ], [ "Pezzotti", "Patrizio", "" ], [ "Ajelli", "Marco", "" ], [ "Rossi", "Paolo Giorgi", "" ], [ "Merler", "Stefano", "" ], [ "Group", "the Reggio Emilia COVID-19 Working", "" ] ]
Background. During 2021, the COVID-19 pandemic was characterized by the emergence of lineages with increased fitness. For most of these variants, quantitative information is scarce on epidemiological quantities such as the incubation period and generation time, which are critical for both public health decisions and scientific research. Method. We analyzed a dataset collected during contact tracing activities in the province of Reggio Emilia, Italy, throughout 2021. We determined the distributions of the incubation period using information on negative PCR tests and the date of last exposure from 282 symptomatic cases. We estimated the distributions of the intrinsic generation time (the time between the infection dates of an infector and its secondary cases under a fully susceptible population) using a Bayesian inference approach applied to 4,435 SARS-CoV-2 cases clustered in 1,430 households where at least one secondary case was recorded. Results. We estimated a mean incubation period of 4.9 days (95% credible intervals, CrI, 4.4-5.4; 95 percentile of the mean distribution: 1-12) for Alpha and 4.5 days (95%CrI 4.0-5.0; 95 percentile: 1-10) for Delta. The intrinsic generation time was estimated to have a mean of 6.0 days (95% CrI 5.6-6.4; 95 percentile: 1-15) for Alpha and of 6.6 days (95%CrI 6.0-7.3; 95 percentile: 1-18) for Delta. The household serial interval was 2.6 days (95%CrI 2.4-2.7) for Alpha and 2.4 days (95%CrI 2.2-2.6) for Delta, and the estimated proportion of pre-symptomatic transmission was 54-55% for both variants. Conclusions. These results indicate limited differences in the incubation period and intrinsic generation time of SARS-CoV-2 variants Alpha and Delta compared to ancestral lineages.
2010.14699
Luke Rast
Luke Rast and Jan Drugowitsch
Adaptation Properties Allow Identification of Optimized Neural Codes
v2: Corrected typos and issues displaying the figures. Edits to supplementary material
Advances in Neural Information Processing Systems 33 (2020)
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The adaptation of neural codes to the statistics of their environment is well captured by efficient coding approaches. Here we solve an inverse problem: characterizing the objective and constraint functions that efficient codes appear to be optimal for, on the basis of how they adapt to different stimulus distributions. We formulate a general efficient coding problem, with flexible objective and constraint functions and minimal parametric assumptions. Solving special cases of this model, we provide solutions to broad classes of Fisher information-based efficient coding problems, generalizing a wide range of previous results. We show that different objective function types impose qualitatively different adaptation behaviors, while constraints enforce characteristic deviations from classic efficient coding signatures. Despite interaction between these effects, clear signatures emerge for both unconstrained optimization problems and information-maximizing objective functions. Asking for a fixed-point of the neural code adaptation, we find an objective-independent characterization of constraints on the neural code. We use this result to propose an experimental paradigm that can characterize both the objective and constraint functions that an observed code appears to be optimized for.
[ { "created": "Wed, 28 Oct 2020 02:03:05 GMT", "version": "v1" }, { "created": "Mon, 22 Feb 2021 21:57:14 GMT", "version": "v2" } ]
2021-02-25
[ [ "Rast", "Luke", "" ], [ "Drugowitsch", "Jan", "" ] ]
The adaptation of neural codes to the statistics of their environment is well captured by efficient coding approaches. Here we solve an inverse problem: characterizing the objective and constraint functions that efficient codes appear to be optimal for, on the basis of how they adapt to different stimulus distributions. We formulate a general efficient coding problem, with flexible objective and constraint functions and minimal parametric assumptions. Solving special cases of this model, we provide solutions to broad classes of Fisher information-based efficient coding problems, generalizing a wide range of previous results. We show that different objective function types impose qualitatively different adaptation behaviors, while constraints enforce characteristic deviations from classic efficient coding signatures. Despite interaction between these effects, clear signatures emerge for both unconstrained optimization problems and information-maximizing objective functions. Asking for a fixed-point of the neural code adaptation, we find an objective-independent characterization of constraints on the neural code. We use this result to propose an experimental paradigm that can characterize both the objective and constraint functions that an observed code appears to be optimized for.
1611.00732
Matthew Betti
Matthew Betti and Lindi Wahl and Mair Zamir
Reproduction Number And Asymptotic Stability For The Dynamics of a Honey Bee Colony with Continuous Age Structure
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A system of partial differential equations is derived as a model for the dynamics of a honey bee colony with a continuous age distribution, and the system is then extended to include the effects of a simplified infectious disease. In the disease-free case we analytically derive the equilibrium age distribution within the colony and propose a novel approach for determining the global asymptotic stability of a reduced model. Furthermore, we present a method for determining the basic reproduction number $R_0$ of the infection; the method can be applied to other age-structured disease models with interacting susceptible classes. The results of asymptotic stability indicate that a honey bee colony suffering losses will recover naturally so long as the cause of the losses is removed before the colony collapses. Our expression for $R_0$ has potential uses in the tracking and control of an infectious disease within a bee colony.
[ { "created": "Wed, 2 Nov 2016 19:12:25 GMT", "version": "v1" } ]
2016-11-03
[ [ "Betti", "Matthew", "" ], [ "Wahl", "Lindi", "" ], [ "Zamir", "Mair", "" ] ]
A system of partial differential equations is derived as a model for the dynamics of a honey bee colony with a continuous age distribution, and the system is then extended to include the effects of a simplified infectious disease. In the disease-free case we analytically derive the equilibrium age distribution within the colony and propose a novel approach for determining the global asymptotic stability of a reduced model. Furthermore, we present a method for determining the basic reproduction number $R_0$ of the infection; the method can be applied to other age-structured disease models with interacting susceptible classes. The results of asymptotic stability indicate that a honey bee colony suffering losses will recover naturally so long as the cause of the losses is removed before the colony collapses. Our expression for $R_0$ has potential uses in the tracking and control of an infectious disease within a bee colony.
1806.05785
Xiaochang Leng
Ting Miao, Liqiong Tian, Xiaochang Leng, Zhamgmu Miao, Chengjun Xu
A Comparative Study of Cohesive Zone Models for Predicting Delamination Behaviors of Arterial Wall
17 pages, 6 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Arterial tissue delamination, manifested as failure between arterial layers, is a critical process in the rupture of atherosclerotic plaque, leading to potentially life-threatening clinical consequences. Numerous models have been used to characterize arterial tissue delamination, but few have investigated the effect of Cohesive Zone Model (CZM) shape on predicting the delamination behavior of the arterial wall. In this study, four types of cohesive zone models (triangular, trapezoidal, linear-exponential and exponential-linear) were investigated to compare their ability to predict arterial wall failure. The Holzapfel-Gasser-Ogden (HGO) model was adopted for modelling the mechanical behavior of the aortic bulk material. The simulation results using CZMs for aortic media delamination were also compared with results for mouse plaque delamination and human fibrous cap delamination. The results show that: 1) the simulation results based on the four shapes of CZMs match well with the experimental results, 2) the triangular and exponential-linear CZMs are in good agreement with the experimental force-displacement curves of mouse plaque delamination, and 3) considering the viscoelastic effect of the arterial tissue, the triangular and exponential-linear CZMs match well with the experimental force-displacement curves of human fibrous cap delamination. Thus, triangular and exponential-linear CZMs can capture the arterial tissue failure response well.
[ { "created": "Fri, 15 Jun 2018 02:06:13 GMT", "version": "v1" } ]
2018-06-18
[ [ "Miao", "Ting", "" ], [ "Tian", "Liqiong", "" ], [ "Leng", "Xiaochang", "" ], [ "Miao", "Zhamgmu", "" ], [ "Xu", "Chengjun", "" ] ]
Arterial tissue delamination, manifested as failure between arterial layers, is a critical process in the rupture of atherosclerotic plaque, leading to potentially life-threatening clinical consequences. Numerous models have been used to characterize arterial tissue delamination, but few have investigated the effect of Cohesive Zone Model (CZM) shape on predicting the delamination behavior of the arterial wall. In this study, four types of cohesive zone models (triangular, trapezoidal, linear-exponential and exponential-linear) were investigated to compare their ability to predict arterial wall failure. The Holzapfel-Gasser-Ogden (HGO) model was adopted for modelling the mechanical behavior of the aortic bulk material. The simulation results using CZMs for aortic media delamination were also compared with results for mouse plaque delamination and human fibrous cap delamination. The results show that: 1) the simulation results based on the four shapes of CZMs match well with the experimental results, 2) the triangular and exponential-linear CZMs are in good agreement with the experimental force-displacement curves of mouse plaque delamination, and 3) considering the viscoelastic effect of the arterial tissue, the triangular and exponential-linear CZMs match well with the experimental force-displacement curves of human fibrous cap delamination. Thus, triangular and exponential-linear CZMs can capture the arterial tissue failure response well.
1601.05340
Serguei Saavedra
Serguei Saavedra, Rudolf P. Rohr, Jens M. Olesen, Jordi Bascompte
Nested species interactions promote feasibility over stability during the assembly of a pollinator community
null
null
10.1002/ece3.1930
null
q-bio.PE physics.pop-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The foundational concepts behind the persistence of ecological communities have been based on two ecological properties: dynamical stability and feasibility. The former is typically regarded as the capacity of a community to return to an original equilibrium state after a perturbation in species abundances, and is usually linked to the strength of interspecific interactions. The latter is the capacity to sustain positive abundances on all its constituent species, and is linked to both interspecific interactions and species demographic characteristics. Over the last 40 years, theoretical research in ecology has emphasized the search for conditions leading to the dynamical stability of ecological communities, while the conditions leading to feasibility have been overlooked. However, thus far, we have no evidence of whether species interactions are more conditioned by the community's need to be stable or feasible. Here, we introduce novel quantitative methods and use empirical data to investigate the consequences of species interactions on the dynamical stability and feasibility of mutualistic communities. First, we demonstrate that the more nested the species interactions in a community are, the lower the mutualistic strength that the community can tolerate without losing dynamical stability. Second, we show that high feasibility in a community can be reached either with high mutualistic strength or with highly nested species interactions. Third, we find that during the assembly process of a seasonal pollinator community located at The Zackenberg Research Station (NE Greenland), a high feasibility is reached through the nested species interactions established between newcomer and resident species. Our findings imply that nested mutualistic communities promote feasibility over stability, which may suggest that the former can be key for community persistence.
[ { "created": "Wed, 20 Jan 2016 17:28:42 GMT", "version": "v1" } ]
2016-01-21
[ [ "Saavedra", "Serguei", "" ], [ "Rohr", "Rudolf P.", "" ], [ "Olesen", "Jens M.", "" ], [ "Bascompte", "Jordi", "" ] ]
The foundational concepts behind the persistence of ecological communities have been based on two ecological properties: dynamical stability and feasibility. The former is typically regarded as the capacity of a community to return to an original equilibrium state after a perturbation in species abundances, and is usually linked to the strength of interspecific interactions. The latter is the capacity to sustain positive abundances on all its constituent species, and is linked to both interspecific interactions and species demographic characteristics. Over the last 40 years, theoretical research in ecology has emphasized the search for conditions leading to the dynamical stability of ecological communities, while the conditions leading to feasibility have been overlooked. However, thus far, we have no evidence of whether species interactions are more conditioned by the community's need to be stable or feasible. Here, we introduce novel quantitative methods and use empirical data to investigate the consequences of species interactions on the dynamical stability and feasibility of mutualistic communities. First, we demonstrate that the more nested the species interactions in a community are, the lower the mutualistic strength that the community can tolerate without losing dynamical stability. Second, we show that high feasibility in a community can be reached either with high mutualistic strength or with highly nested species interactions. Third, we find that during the assembly process of a seasonal pollinator community located at The Zackenberg Research Station (NE Greenland), a high feasibility is reached through the nested species interactions established between newcomer and resident species. Our findings imply that nested mutualistic communities promote feasibility over stability, which may suggest that the former can be key for community persistence.
1612.02491
Mihai Nadin
Mihai Nadin
Rethinking the Experiment
4 figures
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The crisis in the reproducibility of experiments invites a re-evaluation of methods of inquiry and validation procedures. The text challenges current assumptions of knowledge acquisition and introduces G-complexity for defining decidable vs. non-decidable knowledge domains. A "second Cartesian revolution," informed by and in awareness of anticipatory processes, should result in scientific methods that transcend determinism and reductionism. Physics and physics-based disciplines convincingly ascertained themselves by adequately describing the non-living. A complementary perspective should account for the specific causality characteristic of life by integrating past, present, and future. Knowledge about anticipatory processes facilitates attainment of this goal. Society cannot afford the dead-end street of reductionism. Science, itself an expression of anticipatory activity, makes possible in our days alternative understandings of reality and its dynamics.
[ { "created": "Wed, 7 Dec 2016 23:59:57 GMT", "version": "v1" } ]
2016-12-09
[ [ "Nadin", "Mihai", "" ] ]
The crisis in the reproducibility of experiments invites a re-evaluation of methods of inquiry and validation procedures. The text challenges current assumptions of knowledge acquisition and introduces G-complexity for defining decidable vs. non-decidable knowledge domains. A "second Cartesian revolution," informed by and in awareness of anticipatory processes, should result in scientific methods that transcend determinism and reductionism. Physics and physics-based disciplines convincingly ascertained themselves by adequately describing the non-living. A complementary perspective should account for the specific causality characteristic of life by integrating past, present, and future. Knowledge about anticipatory processes facilitates attainment of this goal. Society cannot afford the dead-end street of reductionism. Science, itself an expression of anticipatory activity, makes possible in our days alternative understandings of reality and its dynamics.
2304.04770
Takeshi Ishida
Takeshi Ishida
Emergence simulation of cell-like morphologies with evolutionary potential by virtual molecular interactions
arXiv admin note: text overlap with arXiv:2204.09680
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-nd/4.0/
This study explores the emergence of life through a simulation model approach. The model, a "Multi-set chemical lattice model", allows virtual molecules of multiple types to be placed in each lattice cell on a two-dimensional space. The model can describe a wide variety of states and interactions, such as diffusion, chemical reaction, and polymerization of virtual molecules, even in a limited lattice cell space of 100 x 100 cells. Furthermore, energy metabolism and the energy resource environment were considered. The model was able to reproduce "evolution", in which cell-like shapes adapted to the environment survive under conditions of decreasing amounts of energy resources in the environment. This enabled the emergence of cell-like shapes with the four minimum cellular requirements: boundary, metabolism, replication, and evolution, based solely on the interaction of virtual molecules.
[ { "created": "Mon, 10 Apr 2023 02:39:56 GMT", "version": "v1" } ]
2023-04-12
[ [ "Ishida", "Takeshi", "" ] ]
This study explores the emergence of life through a simulation model approach. The model, a "Multi-set chemical lattice model", allows virtual molecules of multiple types to be placed in each lattice cell on a two-dimensional space. The model can describe a wide variety of states and interactions, such as diffusion, chemical reaction, and polymerization of virtual molecules, even in a limited lattice cell space of 100 x 100 cells. Furthermore, energy metabolism and the energy resource environment were considered. The model was able to reproduce "evolution", in which cell-like shapes adapted to the environment survive under conditions of decreasing amounts of energy resources in the environment. This enabled the emergence of cell-like shapes with the four minimum cellular requirements: boundary, metabolism, replication, and evolution, based solely on the interaction of virtual molecules.
2301.00992
Louxin Zhang
Louxin Zhang, Niloufar Abhari, Caroline Colijn, Yufeng Wu
A Fast and Scalable Method for Inferring Phylogenetic Networks from Trees by Aligning Lineage Taxon Strings
44 pages, 15 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The reconstruction of phylogenetic networks is an important but challenging problem in phylogenetics and genome evolution, as the space of phylogenetic networks is vast and cannot be sampled well. One approach to the problem is to solve the minimum phylogenetic network problem, in which phylogenetic trees are first inferred, then the smallest phylogenetic network that displays all the trees is computed. The approach takes advantage of the fact that the theory of phylogenetic trees is mature and there are excellent tools available for inferring phylogenetic trees from a large number of biomolecular sequences. A tree-child network is a phylogenetic network satisfying the condition that every non-leaf node has at least one child that is of indegree one. Here, we develop a new method that infers the minimum tree-child network by aligning lineage taxon strings in the phylogenetic trees. This algorithmic innovation enables us to get around the limitations of the existing programs for phylogenetic network inference. Our new program, named ALTS, is fast enough to infer a tree-child network with a large number of reticulations for a set of up to 50 phylogenetic trees with 50 taxa that have only trivial common clusters in about a quarter of an hour on average.
[ { "created": "Tue, 3 Jan 2023 07:54:01 GMT", "version": "v1" }, { "created": "Thu, 13 Apr 2023 02:09:25 GMT", "version": "v2" } ]
2023-04-14
[ [ "Zhang", "Louxin", "" ], [ "Abhari", "Niloufar", "" ], [ "Colijn", "Caroline", "" ], [ "Wu", "Yufeng", "" ] ]
The reconstruction of phylogenetic networks is an important but challenging problem in phylogenetics and genome evolution, as the space of phylogenetic networks is vast and cannot be sampled well. One approach to the problem is to solve the minimum phylogenetic network problem, in which phylogenetic trees are first inferred, then the smallest phylogenetic network that displays all the trees is computed. The approach takes advantage of the fact that the theory of phylogenetic trees is mature and there are excellent tools available for inferring phylogenetic trees from a large number of biomolecular sequences. A tree-child network is a phylogenetic network satisfying the condition that every non-leaf node has at least one child that is of indegree one. Here, we develop a new method that infers the minimum tree-child network by aligning lineage taxon strings in the phylogenetic trees. This algorithmic innovation enables us to get around the limitations of the existing programs for phylogenetic network inference. Our new program, named ALTS, is fast enough to infer a tree-child network with a large number of reticulations for a set of up to 50 phylogenetic trees with 50 taxa that have only trivial common clusters in about a quarter of an hour on average.
1208.4611
Jacob Oppenheim
Jacob N. Oppenheim and Marcelo O. Magnasco
Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle
4 pages, 2 figures; Accepted at PRL
null
10.1103/PhysRevLett.110.044301
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The time-frequency uncertainty principle states that the product of the temporal and frequency extents of a signal cannot be smaller than $1/(4\pi)$. We study human ability to simultaneously judge the frequency and the timing of a sound. Our subjects often exceeded the uncertainty limit, sometimes by more than tenfold, mostly through remarkable timing acuity. Our results establish a lower bound for the nonlinearity and complexity of the algorithms employed by our brains in parsing transient sounds, rule out simple "linear filter" models of early auditory processing, and highlight timing acuity as a central feature in auditory object processing.
[ { "created": "Wed, 22 Aug 2012 20:12:32 GMT", "version": "v1" }, { "created": "Thu, 3 Jan 2013 16:14:03 GMT", "version": "v2" } ]
2015-03-11
[ [ "Oppenheim", "Jacob N.", "" ], [ "Magnasco", "Marcelo O.", "" ] ]
The time-frequency uncertainty principle states that the product of the temporal and frequency extents of a signal cannot be smaller than $1/(4\pi)$. We study human ability to simultaneously judge the frequency and the timing of a sound. Our subjects often exceeded the uncertainty limit, sometimes by more than tenfold, mostly through remarkable timing acuity. Our results establish a lower bound for the nonlinearity and complexity of the algorithms employed by our brains in parsing transient sounds, rule out simple "linear filter" models of early auditory processing, and highlight timing acuity as a central feature in auditory object processing.
2309.16994
Viorel Munteanu
Viorel Munteanu, Victor Gordeev, Michael Saldana, Eva A{\ss}mann, Justin Maine Su, Nicolae Drabcinski, Oksana Zlenko, Maryna Kit, Felicia Iordachi, Khooshbu Kantibhai Patel, Abdullah Al Nahid, Likhitha Chittampalli, Yidian Xu, Pavel Skums, Shelesh Agrawal, Martin H\"olzer, Adam Smith, Alex Zelikovsky, Serghei Mangul
A rigorous benchmarking of methods for SARS-CoV-2 lineage abundance estimation in wastewater
For correspondence: serghei.mangul@gmail.com
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
In light of the continuous transmission and evolution of SARS-CoV-2 coupled with a significant decline in clinical testing, there is a pressing need for scalable, cost-effective, long-term, passive surveillance tools to effectively monitor viral variants circulating in the population. Wastewater genomic surveillance of SARS-CoV-2 has emerged as an alternative to clinical genomic surveillance, allowing continuous monitoring of the prevalence of viral lineages in communities of various sizes at a fraction of the time, cost, and logistic effort, and serving as an early warning system for emerging variants, critical for developed communities and especially for underserved ones. Importantly, lineage prevalence estimates obtained with this approach are not distorted by biases related to clinical testing accessibility and participation. However, the relative performance of bioinformatics methods used to measure relative lineage abundances from wastewater sequencing data is unknown, preventing both the research community and public health authorities from making informed decisions regarding computational tool selection. Here, we perform comprehensive benchmarking of 18 bioinformatics methods for estimating the relative abundance of SARS-CoV-2 (sub)lineages in wastewater by using data from 36 in vitro mixtures of synthetic lineage and sublineage genomes. In addition, we use simulated data from 78 mixtures of lineages and sublineages co-occurring in the clinical setting, with proportions mirroring their prevalence ratios observed in real data. Importantly, we investigate how the accuracy of the evaluated methods is impacted by the sequencing technology used, the associated error rate, the read length, the read depth, and also by the exposure of the synthetic RNA mixtures to wastewater, with the goal of capturing the effects induced by the wastewater matrix, including RNA fragmentation and degradation.
[ { "created": "Fri, 29 Sep 2023 05:47:02 GMT", "version": "v1" }, { "created": "Tue, 28 Nov 2023 17:42:29 GMT", "version": "v2" }, { "created": "Mon, 22 Jan 2024 03:59:53 GMT", "version": "v3" } ]
2024-01-23
[ [ "Munteanu", "Viorel", "" ], [ "Gordeev", "Victor", "" ], [ "Saldana", "Michael", "" ], [ "Aßmann", "Eva", "" ], [ "Su", "Justin Maine", "" ], [ "Drabcinski", "Nicolae", "" ], [ "Zlenko", "Oksana", "" ], [ "Kit", "Maryna", "" ], [ "Iordachi", "Felicia", "" ], [ "Patel", "Khooshbu Kantibhai", "" ], [ "Nahid", "Abdullah Al", "" ], [ "Chittampalli", "Likhitha", "" ], [ "Xu", "Yidian", "" ], [ "Skums", "Pavel", "" ], [ "Agrawal", "Shelesh", "" ], [ "Hölzer", "Martin", "" ], [ "Smith", "Adam", "" ], [ "Zelikovsky", "Alex", "" ], [ "Mangul", "Serghei", "" ] ]
In light of the continuous transmission and evolution of SARS-CoV-2 coupled with a significant decline in clinical testing, there is a pressing need for scalable, cost-effective, long-term, passive surveillance tools to effectively monitor viral variants circulating in the population. Wastewater genomic surveillance of SARS-CoV-2 has emerged as an alternative to clinical genomic surveillance, allowing continuous monitoring of the prevalence of viral lineages in communities of various sizes at a fraction of the time, cost, and logistic effort, and serving as an early warning system for emerging variants, critical for developed communities and especially for underserved ones. Importantly, lineage prevalence estimates obtained with this approach are not distorted by biases related to clinical testing accessibility and participation. However, the relative performance of bioinformatics methods used to measure relative lineage abundances from wastewater sequencing data is unknown, preventing both the research community and public health authorities from making informed decisions regarding computational tool selection. Here, we perform comprehensive benchmarking of 18 bioinformatics methods for estimating the relative abundance of SARS-CoV-2 (sub)lineages in wastewater by using data from 36 in vitro mixtures of synthetic lineage and sublineage genomes. In addition, we use simulated data from 78 mixtures of lineages and sublineages co-occurring in the clinical setting, with proportions mirroring their prevalence ratios observed in real data. Importantly, we investigate how the accuracy of the evaluated methods is impacted by the sequencing technology used, the associated error rate, the read length, the read depth, and also by the exposure of the synthetic RNA mixtures to wastewater, with the goal of capturing the effects induced by the wastewater matrix, including RNA fragmentation and degradation.
2207.12475
Thomaz Bastiaanssen
Thomaz F. S. Bastiaanssen, Thomas P. Quinn and Amy Loughman
Bugs as Features (Part I): Concepts and Foundations for the Compositional Data Analysis of the Microbiome-Gut-Brain Axis
For main text: 23 pages, 3 figures; for supplementary demonstration analysis: 31 pages and 12 figures. Supplementary demonstration analysis generated using Rmarkdown by Thomaz F. S. Bastiaanssen. Part I of a two-part piece
null
10.1038/s44220-023-00148-3
null
q-bio.GN stat.AP
http://creativecommons.org/licenses/by-nc-sa/4.0/
There has been a growing acknowledgement of the involvement of the gut microbiome - the collection of microbes that reside in our gut - in regulating our mood and behaviour. This phenomenon is referred to as the microbiome-gut-brain axis. While our techniques to measure the presence and abundance of these microbes have been steadily improving, the analysis of microbiome data is non-trivial. Here, we present a perspective on the concepts and foundations of data analysis and interpretation of microbiome experiments with a focus on the microbiome-gut-brain axis domain. We give an overview of foundational considerations prior to commencing analysis alongside the core microbiome analysis approaches of alpha diversity, beta diversity, differential feature abundance and functional inference. We emphasize the compositional data analysis (CoDA) paradigm. Further, this perspective features an extensive and heavily annotated microbiome analysis in R in the supplementary materials, as a resource for new and experienced bioinformaticians alike.
[ { "created": "Mon, 25 Jul 2022 18:58:56 GMT", "version": "v1" }, { "created": "Fri, 19 May 2023 12:43:06 GMT", "version": "v2" }, { "created": "Tue, 25 Jul 2023 16:48:33 GMT", "version": "v3" } ]
2023-12-11
[ [ "Bastiaanssen", "Thomaz F. S.", "" ], [ "Quinn", "Thomas P.", "" ], [ "Loughman", "Amy", "" ] ]
There has been a growing acknowledgement of the involvement of the gut microbiome - the collection of microbes that reside in our gut - in regulating our mood and behaviour. This phenomenon is referred to as the microbiome-gut-brain axis. While our techniques to measure the presence and abundance of these microbes have been steadily improving, the analysis of microbiome data is non-trivial. Here, we present a perspective on the concepts and foundations of data analysis and interpretation of microbiome experiments with a focus on the microbiome-gut-brain axis domain. We give an overview of foundational considerations prior to commencing analysis alongside the core microbiome analysis approaches of alpha diversity, beta diversity, differential feature abundance and functional inference. We emphasize the compositional data analysis (CoDA) paradigm. Further, this perspective features an extensive and heavily annotated microbiome analysis in R in the supplementary materials, as a resource for new and experienced bioinformaticians alike.
1409.6338
Julie Dethier
Julie Dethier, Guillaume Drion, Alessio Franci, Rodolphe Sepulchre
A positive feedback at the cellular level promotes robustness and modulation at the circuit level
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper highlights the role of a positive feedback gating mechanism at the cellular level in the robustness and modulation properties of rhythmic activities at the circuit level. The results are presented in the context of half-center oscillators, which are simple rhythmic circuits composed of two reciprocally connected inhibitory neuronal populations. Specifically, we focus on rhythms that rely on a particular excitability property, the post-inhibitory rebound, an intrinsic cellular property that elicits transient membrane depolarization when released from hyperpolarization. Two distinct ionic currents can evoke this transient depolarization: a hyperpolarization-activated cation current and a low-threshold T-type calcium current. The presence of a slow activation is specific to the T-type calcium current and provides a slow positive feedback at the cellular level that is absent in the cation current. We show that this slow positive feedback is necessary and sufficient to endow the network rhythm with physiological modulation and robustness properties. This study thereby identifies an essential cellular property to be retained at the network level in modeling network robustness and modulation.
[ { "created": "Mon, 22 Sep 2014 20:42:10 GMT", "version": "v1" }, { "created": "Fri, 19 Dec 2014 19:21:18 GMT", "version": "v2" } ]
2014-12-22
[ [ "Dethier", "Julie", "" ], [ "Drion", "Guillaume", "" ], [ "Franci", "Alessio", "" ], [ "Sepulchre", "Rodolphe", "" ] ]
The paper highlights the role of a positive feedback gating mechanism at the cellular level in the robustness and modulation properties of rhythmic activities at the circuit level. The results are presented in the context of half-center oscillators, which are simple rhythmic circuits composed of two reciprocally connected inhibitory neuronal populations. Specifically, we focus on rhythms that rely on a particular excitability property, the post-inhibitory rebound, an intrinsic cellular property that elicits transient membrane depolarization when released from hyperpolarization. Two distinct ionic currents can evoke this transient depolarization: a hyperpolarization-activated cation current and a low-threshold T-type calcium current. The presence of a slow activation is specific to the T-type calcium current and provides a slow positive feedback at the cellular level that is absent in the cation current. We show that this slow positive feedback is necessary and sufficient to endow the network rhythm with physiological modulation and robustness properties. This study thereby identifies an essential cellular property to be retained at the network level in modeling network robustness and modulation.
2108.05827
Hata Katsuhiko
Yuto Takeda, Katsuhiko Hata, Tokio Yamasaki, Masaki Kaneko, Osamu Yokoi, Chengta Tsai, Kazuo Umemura, Tetsuro Nikuni
Fluctuation in background synaptic activity controls synaptic plasticity
9 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synaptic plasticity is vital for learning and memory in the brain. It consists of long-term potentiation (LTP) and long-term depression (LTD). Spike frequency is one of the major components of synaptic plasticity in the brain, a noisy environment. Recently, we mathematically analysed frequency-dependent synaptic plasticity (FDP) in vivo and found that LTP is more likely to occur with an increase in the frequency of background synaptic activity. Previous studies suggest that the amplitude of background synaptic activity fluctuates. However, little is understood about the relationship between synaptic plasticity and this fluctuation in background synaptic activity. To address this issue, we performed numerical simulations of a calcium-based synapse model. We found that increasing the fluctuation of background synaptic activity attenuates the tendency toward LTD, leading to an enhancement of synaptic weight. Our result suggests that such fluctuations affect synaptic plasticity in the brain.
[ { "created": "Thu, 12 Aug 2021 16:15:16 GMT", "version": "v1" } ]
2021-08-13
[ [ "Takeda", "Yuto", "" ], [ "Hata", "Katsuhiko", "" ], [ "Yamasaki", "Tokio", "" ], [ "Kaneko", "Masaki", "" ], [ "Yokoi", "Osamu", "" ], [ "Tsai", "Chengta", "" ], [ "Umemura", "Kazuo", "" ], [ "Nikuni", "Tetsuro", "" ] ]
Synaptic plasticity is vital for learning and memory in the brain. It consists of long-term potentiation (LTP) and long-term depression (LTD). Spike frequency is one of the major components of synaptic plasticity in the brain, a noisy environment. Recently, we mathematically analysed frequency-dependent synaptic plasticity (FDP) in vivo and found that LTP is more likely to occur with an increase in the frequency of background synaptic activity. Previous studies suggest that the amplitude of background synaptic activity fluctuates. However, little is understood about the relationship between synaptic plasticity and this fluctuation in background synaptic activity. To address this issue, we performed numerical simulations of a calcium-based synapse model. We found that increasing the fluctuation of background synaptic activity attenuates the tendency toward LTD, leading to an enhancement of synaptic weight. Our result suggests that such fluctuations affect synaptic plasticity in the brain.
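The simulation approach described in this record (a calcium-based synapse model in which fluctuation of the background drive shifts the LTD/LTP balance) can be sketched with a minimal threshold rule in the spirit of Graupner-Brunel-style models. This is a hypothetical illustration, not the authors' model: the thresholds, rates, and background-drive statistics below are all made-up illustrative values.

```python
import random

def simulate_synapse(noise_sd, seed=0, steps=20000, dt=0.001):
    """Minimal calcium-threshold plasticity sketch (illustrative parameters).

    Calcium decays exponentially and is driven by noisy background input.
    The weight w potentiates while calcium exceeds theta_p and depresses
    while calcium sits between theta_d and theta_p. The mean drive is
    chosen so the average calcium level lies in the depression band;
    only fluctuations can push it above the potentiation threshold.
    """
    rng = random.Random(seed)
    tau_ca = 0.02               # calcium decay time constant (s)
    theta_d, theta_p = 1.0, 1.3 # depression / potentiation thresholds
    gamma_d, gamma_p = 2.0, 3.2 # depression / potentiation drift rates (1/s)
    mean_drive = 60.0           # mean background calcium influx rate
    ca, w = 0.0, 0.5
    for _ in range(steps):
        drive = max(0.0, mean_drive + noise_sd * rng.gauss(0.0, 1.0))
        ca += dt * (-ca / tau_ca + drive)
        if ca > theta_p:
            w += dt * gamma_p * (1.0 - w)   # potentiation toward w = 1
        elif ca > theta_d:
            w -= dt * gamma_d * w           # depression toward w = 0
        w = min(max(w, 0.0), 1.0)
    return w
```

With zero fluctuation, calcium settles at mean_drive * tau_ca = 1.2, inside the depression band, and the weight decays; with large fluctuation, excursions above theta_p occur often enough that the weight equilibrates at a higher value, mirroring the qualitative effect reported in the abstract.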
2401.03376
Kohitij Kar
Greta Tuckute, Dawn Finzi, Eshed Margalit, Joel Zylberberg, SueYeon Chung, Alona Fyshe, Evelina Fedorenko, Nikolaus Kriegeskorte, Jacob Yates, Kalanit Grill-Spector, and Kohitij Kar
How to optimize neuroscience data utilization and experiment design for advancing primate visual and linguistic brain models?
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
In recent years, neuroscience has made significant progress in building large-scale artificial neural network (ANN) models of brain activity and behavior. However, there is no consensus on the most efficient ways to collect data and design experiments to develop the next generation of models. This article explores the controversial opinions that have emerged on this topic in the domain of vision and language. Specifically, we address two critical points. First, we weigh the pros and cons of using qualitative insights from empirical results versus raw experimental data to train models. Second, we consider model-free (intuition-based) versus model-based approaches for data collection, specifically experimental design and stimulus selection, for optimal model development. Finally, we consider the challenges of developing a synergistic approach to experimental design and model building, including encouraging data and model sharing and the implications of iterative additions to existing models. The goal of the paper is to discuss decision points and propose directions for both experimenters and model developers in the quest to understand the brain.
[ { "created": "Sun, 7 Jan 2024 02:56:04 GMT", "version": "v1" } ]
2024-01-31
[ [ "Tuckute", "Greta", "" ], [ "Finzi", "Dawn", "" ], [ "Margalit", "Eshed", "" ], [ "Zylberberg", "Joel", "" ], [ "Chung", "SueYeon", "" ], [ "Fyshe", "Alona", "" ], [ "Fedorenko", "Evelina", "" ], [ "Kriegeskorte", "Nikolaus", "" ], [ "Yates", "Jacob", "" ], [ "Grill-Spector", "Kalanit", "" ], [ "Kar", "Kohitij", "" ] ]
In recent years, neuroscience has made significant progress in building large-scale artificial neural network (ANN) models of brain activity and behavior. However, there is no consensus on the most efficient ways to collect data and design experiments to develop the next generation of models. This article explores the controversial opinions that have emerged on this topic in the domain of vision and language. Specifically, we address two critical points. First, we weigh the pros and cons of using qualitative insights from empirical results versus raw experimental data to train models. Second, we consider model-free (intuition-based) versus model-based approaches for data collection, specifically experimental design and stimulus selection, for optimal model development. Finally, we consider the challenges of developing a synergistic approach to experimental design and model building, including encouraging data and model sharing and the implications of iterative additions to existing models. The goal of the paper is to discuss decision points and propose directions for both experimenters and model developers in the quest to understand the brain.