Dataset schema (field: type, value-length or class statistics):

id: stringlengths (9 to 13)
submitter: stringlengths (4 to 48)
authors: stringlengths (4 to 9.62k)
title: stringlengths (4 to 343)
comments: stringlengths (2 to 480)
journal-ref: stringlengths (9 to 309)
doi: stringlengths (12 to 138)
report-no: stringclasses (277 values)
categories: stringlengths (8 to 87)
license: stringclasses (9 values)
orig_abstract: stringlengths (27 to 3.76k)
versions: listlengths (1 to 15)
update_date: stringlengths (10 to 10)
authors_parsed: listlengths (1 to 147)
abstract: stringlengths (24 to 3.75k)
2310.11121
Mintu Karmakar
Mintu Karmakar
Quarantine as a delay, not a definitive solution
10 pages
null
null
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
In the realm of pandemic dynamics, understanding the intricate interplay between disease transmission, interventions, and immunity is pivotal for effective control strategies. Through a rigorous agent-based simulation, we embarked on a comprehensive exploration, traversing unmitigated spread, lockdown scenarios, and the transformative potential of vaccination. we unveil a paradoxical trend: while quarantine unquestionably delays the pandemic peak, it does not act as an impenetrable barrier to halt the progression of infectious diseases. Vaccination factor revealed a potent weapon against outbreaks higher vaccination percentage not only delayed infection peaks but also substantially curtailed their impact. Our investigation into bond dilution below the percolation threshold presents an additional dimension to pandemic control. We observed that localized isolation through bond dilution offers a targeted control strategy that can be more resource-efficient compared to blanket lockdowns or large-scale vaccination campaigns.
[ { "created": "Tue, 17 Oct 2023 10:14:03 GMT", "version": "v1" } ]
2023-10-18
[ [ "Karmakar", "Mintu", "" ] ]
In the realm of pandemic dynamics, understanding the intricate interplay between disease transmission, interventions, and immunity is pivotal for effective control strategies. Through a rigorous agent-based simulation, we embarked on a comprehensive exploration, traversing unmitigated spread, lockdown scenarios, and the transformative potential of vaccination. We unveil a paradoxical trend: while quarantine unquestionably delays the pandemic peak, it does not act as an impenetrable barrier to halt the progression of infectious diseases. Vaccination proved a potent weapon against outbreaks: a higher vaccination percentage not only delayed infection peaks but also substantially curtailed their impact. Our investigation into bond dilution below the percolation threshold presents an additional dimension to pandemic control. We observed that localized isolation through bond dilution offers a targeted control strategy that can be more resource-efficient than blanket lockdowns or large-scale vaccination campaigns.
1910.09491
Kabir Husain
Kabir Husain, Arvind Murugan
Physical constraints on epistasis
null
null
null
null
q-bio.PE physics.bio-ph q-bio.BM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Living systems evolve one mutation at a time, but a single mutation can alter the effect of subsequent mutations. The underlying mechanistic determinants of such epistasis are unclear. Here, we demonstrate that the physical dynamics of a biological system can generically constrain epistasis. We analyze models and experimental data on proteins and regulatory networks. In each, we find that if the long-time physical dynamics is dominated by a slow, collective mode, then the dimensionality of mutational effects is reduced. Consequently, epistatic coefficients for different combinations of mutations are no longer independent, even if individually strong. Such epistasis can be summarized as resulting from a global non-linearity applied to an underlying linear trait, i.e., as global epistasis. This constraint, in turn, reduces the ruggedness of the sequence-to-function map. By providing a generic mechanistic origin for experimentally observed global epistasis, our work suggests that slow collective physical modes can make biological systems evolvable.
[ { "created": "Mon, 21 Oct 2019 16:28:38 GMT", "version": "v1" } ]
2019-10-22
[ [ "Husain", "Kabir", "" ], [ "Murugan", "Arvind", "" ] ]
Living systems evolve one mutation at a time, but a single mutation can alter the effect of subsequent mutations. The underlying mechanistic determinants of such epistasis are unclear. Here, we demonstrate that the physical dynamics of a biological system can generically constrain epistasis. We analyze models and experimental data on proteins and regulatory networks. In each, we find that if the long-time physical dynamics is dominated by a slow, collective mode, then the dimensionality of mutational effects is reduced. Consequently, epistatic coefficients for different combinations of mutations are no longer independent, even if individually strong. Such epistasis can be summarized as resulting from a global non-linearity applied to an underlying linear trait, i.e., as global epistasis. This constraint, in turn, reduces the ruggedness of the sequence-to-function map. By providing a generic mechanistic origin for experimentally observed global epistasis, our work suggests that slow collective physical modes can make biological systems evolvable.
1912.12553
Yang Shen
Mostafa Karimi, Di Wu, Zhangyang Wang, Yang Shen
Explainable Deep Relational Networks for Predicting Compound-Protein Affinities and Contacts
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Predicting compound-protein affinity is critical for accelerating drug discovery. Recent progress made by machine learning focuses on accuracy but leaves much to be desired for interpretability. Through molecular contacts underlying affinities, our large-scale interpretability assessment finds commonly-used attention mechanisms inadequate. We thus formulate a hierarchical multi-objective learning problem whose predicted contacts form the basis for predicted affinities. We further design a physics-inspired deep relational network, DeepRelations, with intrinsically explainable architecture. Specifically, various atomic-level contacts or "relations" lead to molecular-level affinity prediction. And the embedded attentions are regularized with predicted structural contexts and supervised with partially available training contacts. DeepRelations shows superior interpretability to the state-of-the-art: without compromising affinity prediction, it boosts the AUPRC of contact prediction 9.5, 16.9, 19.3 and 5.7-fold for the test, compound-unique, protein-unique, and both-unique sets, respectively. Our study represents the first dedicated model development and systematic model assessment for interpretable machine learning of compound-protein affinity.
[ { "created": "Sun, 29 Dec 2019 00:14:07 GMT", "version": "v1" } ]
2020-01-01
[ [ "Karimi", "Mostafa", "" ], [ "Wu", "Di", "" ], [ "Wang", "Zhangyang", "" ], [ "Shen", "Yang", "" ] ]
Predicting compound-protein affinity is critical for accelerating drug discovery. Recent progress made by machine learning focuses on accuracy but leaves much to be desired for interpretability. Through molecular contacts underlying affinities, our large-scale interpretability assessment finds commonly-used attention mechanisms inadequate. We thus formulate a hierarchical multi-objective learning problem whose predicted contacts form the basis for predicted affinities. We further design a physics-inspired deep relational network, DeepRelations, with intrinsically explainable architecture. Specifically, various atomic-level contacts or "relations" lead to molecular-level affinity prediction. And the embedded attentions are regularized with predicted structural contexts and supervised with partially available training contacts. DeepRelations shows superior interpretability to the state-of-the-art: without compromising affinity prediction, it boosts the AUPRC of contact prediction 9.5, 16.9, 19.3 and 5.7-fold for the test, compound-unique, protein-unique, and both-unique sets, respectively. Our study represents the first dedicated model development and systematic model assessment for interpretable machine learning of compound-protein affinity.
2110.03339
Johann Summhammer
Johann Summhammer
Morphology and high frequency bio-electric fields
13 pages, including 4 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate possible shapes of the electric field, which oscillating dipoles in a certain region of biological tissue can produce in a neighboring region, or outside the tissue boundaries. We find that a wide range of shapes, including the typical morphology of limbs and appendages, can be generated as a zone of extremely low field amplitudes embedded in a zone of much larger field amplitudes. Neutral molecules with a resonance close to the frequency of the oscillating field may be attracted to this zone or be repelled from it, while the driving effect on molecules with an electric charge is only extremely weak. The forces would be sufficient for the controlled deposition of molecules during growth or regeneration. They could also serve as a method of information transfer.
[ { "created": "Thu, 7 Oct 2021 11:08:43 GMT", "version": "v1" }, { "created": "Tue, 1 Feb 2022 10:21:04 GMT", "version": "v2" } ]
2022-02-02
[ [ "Summhammer", "Johann", "" ] ]
We investigate possible shapes of the electric field that oscillating dipoles in a certain region of biological tissue can produce in a neighboring region, or outside the tissue boundaries. We find that a wide range of shapes, including the typical morphology of limbs and appendages, can be generated as a zone of extremely low field amplitudes embedded in a zone of much larger field amplitudes. Neutral molecules with a resonance close to the frequency of the oscillating field may be attracted to this zone or be repelled from it, while the driving effect on molecules with an electric charge is only extremely weak. The forces would be sufficient for the controlled deposition of molecules during growth or regeneration. They could also serve as a method of information transfer.
1706.10106
Tanja Stadler
Tanja Stadler, Alexandra Gavryushkina, Rachel C.M. Warnock, Alexei J. Drummond, Tracy A. Heath
The fossilized birth-death model for the analysis of stratigraphic range data under different speciation concepts
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A birth-death-sampling model gives rise to phylogenetic trees with samples from the past and the present. Interpreting "birth" as branching speciation, "death" as extinction, and "sampling" as fossil preservation and recovery, this model -- also referred to as the fossilized birth-death (FBD) model -- gives rise to phylogenetic trees on extant and fossil samples. The model has been mathematically analyzed and successfully applied to a range of datasets on different taxonomic levels, such as penguins, plants, and insects. However, the current mathematical treatment of this model does not allow for a group of temporally distinct fossil specimens to be assigned to the same species. In this paper, we provide a general mathematical FBD modeling framework that explicitly takes "stratigraphic ranges" into account, with a stratigraphic range being defined as the lineage interval associated with a single species, ranging through time from the first to the last fossil appearance of the species. To assign a sequence of fossil samples in the phylogenetic tree to the same species, i.e., to specify a stratigraphic range, we need to define the mode of speciation. We provide expressions to account for three common speciation modes: budding (or asymmetric) speciation, bifurcating (or symmetric) speciation, and anagenetic speciation. Our equations allow for flexible joint Bayesian analysis of paleontological and neontological data. Furthermore, our framework is directly applicable to epidemiology, where a stratigraphic range is the observed duration of infection of a single patient, "birth" via budding is transmission, "death" is recovery, and "sampling" is sequencing the pathogen of a patient. Thus, we present a model that allows for incorporation of multiple observations through time from a single patient.
[ { "created": "Fri, 30 Jun 2017 10:29:04 GMT", "version": "v1" }, { "created": "Fri, 9 Mar 2018 13:19:33 GMT", "version": "v2" } ]
2018-03-12
[ [ "Stadler", "Tanja", "" ], [ "Gavryushkina", "Alexandra", "" ], [ "Warnock", "Rachel C. M.", "" ], [ "Drummond", "Alexei J.", "" ], [ "Heath", "Tracy A.", "" ] ]
A birth-death-sampling model gives rise to phylogenetic trees with samples from the past and the present. Interpreting "birth" as branching speciation, "death" as extinction, and "sampling" as fossil preservation and recovery, this model -- also referred to as the fossilized birth-death (FBD) model -- gives rise to phylogenetic trees on extant and fossil samples. The model has been mathematically analyzed and successfully applied to a range of datasets on different taxonomic levels, such as penguins, plants, and insects. However, the current mathematical treatment of this model does not allow for a group of temporally distinct fossil specimens to be assigned to the same species. In this paper, we provide a general mathematical FBD modeling framework that explicitly takes "stratigraphic ranges" into account, with a stratigraphic range being defined as the lineage interval associated with a single species, ranging through time from the first to the last fossil appearance of the species. To assign a sequence of fossil samples in the phylogenetic tree to the same species, i.e., to specify a stratigraphic range, we need to define the mode of speciation. We provide expressions to account for three common speciation modes: budding (or asymmetric) speciation, bifurcating (or symmetric) speciation, and anagenetic speciation. Our equations allow for flexible joint Bayesian analysis of paleontological and neontological data. Furthermore, our framework is directly applicable to epidemiology, where a stratigraphic range is the observed duration of infection of a single patient, "birth" via budding is transmission, "death" is recovery, and "sampling" is sequencing the pathogen of a patient. Thus, we present a model that allows for incorporation of multiple observations through time from a single patient.
2008.05903
Stefan Klus
Kateryna Melnyk, Stefan Klus, Gr\'egoire Montavon, Tim Conrad
GraphKKE: Graph Kernel Koopman Embedding for Human Microbiome Analysis
null
null
10.1007/s41109-020-00339-2
null
q-bio.QM cs.LG q-bio.GN stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
More and more diseases have been found to be strongly correlated with disturbances in the microbiome constitution, e.g., obesity, diabetes, or some cancer types. Thanks to modern high-throughput omics technologies, it becomes possible to directly analyze human microbiome and its influence on the health status. Microbial communities are monitored over long periods of time and the associations between their members are explored. These relationships can be described by a time-evolving graph. In order to understand responses of the microbial community members to a distinct range of perturbations such as antibiotics exposure or diseases and general dynamical properties, the time-evolving graph of the human microbial communities has to be analyzed. This becomes especially challenging due to dozens of complex interactions among microbes and metastable dynamics. The key to solving this problem is the representation of the time-evolving graphs as fixed-length feature vectors preserving the original dynamics. We propose a method for learning the embedding of the time-evolving graph that is based on the spectral analysis of transfer operators and graph kernels. We demonstrate that our method can capture temporary changes in the time-evolving graph on both created synthetic data and real-world data. Our experiments demonstrate the efficacy of the method. Furthermore, we show that our method can be applied to human microbiome data to study dynamic processes.
[ { "created": "Wed, 12 Aug 2020 10:57:02 GMT", "version": "v1" }, { "created": "Mon, 7 Sep 2020 09:35:33 GMT", "version": "v2" }, { "created": "Thu, 19 Nov 2020 12:06:13 GMT", "version": "v3" } ]
2021-04-06
[ [ "Melnyk", "Kateryna", "" ], [ "Klus", "Stefan", "" ], [ "Montavon", "Grégoire", "" ], [ "Conrad", "Tim", "" ] ]
More and more diseases have been found to be strongly correlated with disturbances in the microbiome constitution, e.g., obesity, diabetes, or some cancer types. Thanks to modern high-throughput omics technologies, it becomes possible to directly analyze the human microbiome and its influence on health status. Microbial communities are monitored over long periods of time and the associations between their members are explored. These relationships can be described by a time-evolving graph. In order to understand the responses of the microbial community members to a distinct range of perturbations, such as antibiotics exposure or diseases, as well as general dynamical properties, the time-evolving graph of the human microbial communities has to be analyzed. This becomes especially challenging due to dozens of complex interactions among microbes and metastable dynamics. The key to solving this problem is the representation of the time-evolving graphs as fixed-length feature vectors preserving the original dynamics. We propose a method for learning the embedding of the time-evolving graph that is based on the spectral analysis of transfer operators and graph kernels. We demonstrate that our method can capture temporary changes in the time-evolving graph on both synthetic and real-world data. Our experiments demonstrate the efficacy of the method. Furthermore, we show that our method can be applied to human microbiome data to study dynamic processes.
2310.04317
Alexander Fleischmann
Andrea Pierr\'e, Tuan Pham, Jonah Pearl, Sandeep Robert Datta, Jason T. Ritt, and Alexander Fleischmann
A perspective on neuroscience data standardization with Neurodata Without Borders
32 pages, 9 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Neuroscience research has evolved to generate increasingly large and complex experimental data sets, and advanced data science tools are taking on central roles in neuroscience research. Neurodata Without Borders (NWB), a standard language for neurophysiology data, has recently emerged as a powerful solution for data management, analysis, and sharing. We here discuss our efforts to implement NWB data science pipelines. We describe general principles and specific use cases that illustrate successes, challenges, and non-trivial decisions in software engineering. We hope that our experience can provide guidance for the neuroscience community and help bridge the gap between experimental neuroscience and data science.
[ { "created": "Fri, 6 Oct 2023 15:28:51 GMT", "version": "v1" }, { "created": "Mon, 22 Jan 2024 17:45:24 GMT", "version": "v2" } ]
2024-01-23
[ [ "Pierré", "Andrea", "" ], [ "Pham", "Tuan", "" ], [ "Pearl", "Jonah", "" ], [ "Datta", "Sandeep Robert", "" ], [ "Ritt", "Jason T.", "" ], [ "Fleischmann", "Alexander", "" ] ]
Neuroscience research has evolved to generate increasingly large and complex experimental data sets, and advanced data science tools are taking on central roles in neuroscience research. Neurodata Without Borders (NWB), a standard language for neurophysiology data, has recently emerged as a powerful solution for data management, analysis, and sharing. We here discuss our efforts to implement NWB data science pipelines. We describe general principles and specific use cases that illustrate successes, challenges, and non-trivial decisions in software engineering. We hope that our experience can provide guidance for the neuroscience community and help bridge the gap between experimental neuroscience and data science.
1007.2689
Alistair Forrest
A. R. R. Forrest, M. Kanamori-Katayama, Y. Tomaru, T. Lassmann, N. Ninomiya, Y. Takahashi, M. J. L. de Hoon, A. Kubosaki, A. Kaiho, M. Suzuki, J. Yasuda, J. Kawai, Y. Hayashizaki, D. A. Hume and H. Suzuki
Induction of microRNAs, mir-155, mir-222, mir-424 and mir-503, promotes monocytic differentiation through combinatorial regulation
45 pages 5 figures
Leukemia. 24 (2010) 460-6
10.1038/leu.2009.246
null
q-bio.GN q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Acute myeloid leukemia (AML) involves a block in terminal differentiation of the myeloid lineage and uncontrolled proliferation of a progenitor state. Using phorbol myristate acetate (PMA), it is possible to overcome this block in THP-1 cells (an M5-AML containing the MLL-MLLT3 fusion), resulting in differentiation to an adherent monocytic phenotype. As part of FANTOM4, we used microarrays to identify 23 microRNAs that are regulated by PMA. We identify four PMA-induced micro- RNAs (mir-155, mir-222, mir-424 and mir-503) that when overexpressed cause cell-cycle arrest and partial differentiation and when used in combination induce additional changes not seen by any individual microRNA. We further characterize these prodifferentiative microRNAs and show that mir-155 and mir-222 induce G2 arrest and apoptosis, respectively. We find mir-424 and mir-503 are derived from a polycistronic precursor mir-424-503 that is under repression by the MLL-MLLT3 leukemogenic fusion. Both of these microRNAs directly target cell-cycle regulators and induce G1 cell-cycle arrest when overexpressed in THP-1. We also find that the pro-differentiative mir-424 and mir-503 downregulate the anti-differentiative mir-9 by targeting a site in its primary transcript. Our study highlights the combinatorial effects of multiple microRNAs within cellular systems.
[ { "created": "Fri, 16 Jul 2010 02:22:59 GMT", "version": "v1" } ]
2010-07-19
[ [ "Forrest", "A. R. R.", "" ], [ "Kanamori-Katayama", "M.", "" ], [ "Tomaru", "Y.", "" ], [ "Lassmann", "T.", "" ], [ "Ninomiya", "N.", "" ], [ "Takahashi", "Y.", "" ], [ "de Hoon", "M. J. L.", "" ], [ "Kubosaki", "A.", "" ], [ "Kaiho", "A.", "" ], [ "Suzuki", "M.", "" ], [ "Yasuda", "J.", "" ], [ "Kawai", "J.", "" ], [ "Hayashizaki", "Y.", "" ], [ "Hume", "D. A.", "" ], [ "Suzuki", "H.", "" ] ]
Acute myeloid leukemia (AML) involves a block in terminal differentiation of the myeloid lineage and uncontrolled proliferation of a progenitor state. Using phorbol myristate acetate (PMA), it is possible to overcome this block in THP-1 cells (an M5-AML containing the MLL-MLLT3 fusion), resulting in differentiation to an adherent monocytic phenotype. As part of FANTOM4, we used microarrays to identify 23 microRNAs that are regulated by PMA. We identify four PMA-induced microRNAs (mir-155, mir-222, mir-424 and mir-503) that, when overexpressed, cause cell-cycle arrest and partial differentiation, and when used in combination induce additional changes not seen with any individual microRNA. We further characterize these pro-differentiative microRNAs and show that mir-155 and mir-222 induce G2 arrest and apoptosis, respectively. We find mir-424 and mir-503 are derived from a polycistronic precursor, mir-424-503, that is under repression by the MLL-MLLT3 leukemogenic fusion. Both of these microRNAs directly target cell-cycle regulators and induce G1 cell-cycle arrest when overexpressed in THP-1. We also find that the pro-differentiative mir-424 and mir-503 downregulate the anti-differentiative mir-9 by targeting a site in its primary transcript. Our study highlights the combinatorial effects of multiple microRNAs within cellular systems.
2303.11494
Patricia Suriana
Patricia Suriana, Joseph M. Paggi, Ron O. Dror
FlexVDW: A machine learning approach to account for protein flexibility in ligand docking
Published at the MLDD workshop, International Conference on Learning Representations (ICLR) 2023
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Most widely used ligand docking methods assume a rigid protein structure. This leads to problems when the structure of the target protein deforms upon ligand binding. In particular, the ligand's true binding pose is often scored very unfavorably due to apparent clashes between ligand and protein atoms, which lead to extremely high values of the calculated van der Waals energy term. Traditionally, this problem has been addressed by explicitly searching for receptor conformations to account for the flexibility of the receptor in ligand binding. Here we present a deep learning model trained to take receptor flexibility into account implicitly when predicting van der Waals energy. We show that incorporating this machine-learned energy term into a state-of-the-art physics-based scoring function improves small molecule ligand pose prediction results in cases with substantial protein deformation, without degrading performance in cases with minimal protein deformation. This work demonstrates the feasibility of learning effects of protein flexibility on ligand binding without explicitly modeling changes in protein structure.
[ { "created": "Mon, 20 Mar 2023 23:19:05 GMT", "version": "v1" } ]
2023-03-22
[ [ "Suriana", "Patricia", "" ], [ "Paggi", "Joseph M.", "" ], [ "Dror", "Ron O.", "" ] ]
Most widely used ligand docking methods assume a rigid protein structure. This leads to problems when the structure of the target protein deforms upon ligand binding. In particular, the ligand's true binding pose is often scored very unfavorably due to apparent clashes between ligand and protein atoms, which lead to extremely high values of the calculated van der Waals energy term. Traditionally, this problem has been addressed by explicitly searching for receptor conformations to account for the flexibility of the receptor in ligand binding. Here we present a deep learning model trained to take receptor flexibility into account implicitly when predicting van der Waals energy. We show that incorporating this machine-learned energy term into a state-of-the-art physics-based scoring function improves small molecule ligand pose prediction results in cases with substantial protein deformation, without degrading performance in cases with minimal protein deformation. This work demonstrates the feasibility of learning effects of protein flexibility on ligand binding without explicitly modeling changes in protein structure.
2208.11293
Dexuan Xie
Dexuan Xie
An Extension of Goldman-Hodgkin-Katz Equations by Charges from Ionic Solution and Ion Channel Protein
18 pages, 3 figures, two tables
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Goldman-Hodgkin-Katz (GHK) equations have been widely applied to ion channel studies, simulations, and model developments. However, they are constructed under a constant electric field, causing them to have a low degree of approximation in the prediction of ionic fluxes, electric currents, and membrane potentials. In this paper, the equations are extended from the constant electric field to the nonlinear electric field induced by charges from an ionic solution and an ion channel protein. Furthermore, a novel numerical quadrature scheme is developed to estimate one major parameter, called the extension parameter, of the extended GHK equations in terms of a set of electrostatic potential values. To this end, the extended GHK equations become a bridge between the "macroscopic" ion channel kinetics and the "microscopic" electrostatic potential values across a cell membrane. To generate a set of required electrostatic potential values, a nonlinear finite element iterative scheme for solving a one-dimensional Poisson-Nernst-Planck ion channel model is developed and implemented as a Python software package. This package is then used to do numerical studies on the extended GHK equations, the numerical quadrature scheme, and the nonlinear iterative scheme. Numerical results confirm the importance of considering charge effects in the calculation of ionic fluxes. They also validate the high numerical accuracy of the numerical quadrature scheme, the fast convergence rate of the nonlinear iterative scheme, and the high performance of the software package.
[ { "created": "Wed, 24 Aug 2022 04:12:18 GMT", "version": "v1" } ]
2022-08-25
[ [ "Xie", "Dexuan", "" ] ]
The Goldman-Hodgkin-Katz (GHK) equations have been widely applied to ion channel studies, simulations, and model developments. However, they are constructed under a constant electric field, causing them to have a low degree of approximation in the prediction of ionic fluxes, electric currents, and membrane potentials. In this paper, the equations are extended from the constant electric field to the nonlinear electric field induced by charges from an ionic solution and an ion channel protein. Furthermore, a novel numerical quadrature scheme is developed to estimate one major parameter, called the extension parameter, of the extended GHK equations in terms of a set of electrostatic potential values. To this end, the extended GHK equations become a bridge between the "macroscopic" ion channel kinetics and the "microscopic" electrostatic potential values across a cell membrane. To generate a set of required electrostatic potential values, a nonlinear finite element iterative scheme for solving a one-dimensional Poisson-Nernst-Planck ion channel model is developed and implemented as a Python software package. This package is then used to do numerical studies on the extended GHK equations, the numerical quadrature scheme, and the nonlinear iterative scheme. Numerical results confirm the importance of considering charge effects in the calculation of ionic fluxes. They also validate the high numerical accuracy of the numerical quadrature scheme, the fast convergence rate of the nonlinear iterative scheme, and the high performance of the software package.
2202.01682
James Whittington
James C.R. Whittington, David McCaffary, Jacob J.W. Bakermans, Timothy E.J. Behrens
How to build a cognitive map: insights from models of the hippocampal formation
null
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Learning and interpreting the structure of the environment is an innate feature of biological systems, and is integral to guiding flexible behaviours for evolutionary viability. The concept of a cognitive map has emerged as one of the leading metaphors for these capacities, and unravelling the learning and neural representation of such a map has become a central focus of neuroscience. While experimentalists are providing a detailed picture of the neural substrate of cognitive maps in hippocampus and beyond, theorists have been busy building models to bridge the divide between neurons, computation, and behaviour. These models can account for a variety of known representations and neural phenomena, but often provide a differing understanding of not only the underlying principles of cognitive maps, but also the respective roles of hippocampus and cortex. In this Perspective, we bring many of these models into a common language, distil their underlying principles of constructing cognitive maps, provide novel (re)interpretations for neural phenomena, suggest how the principles can be extended to account for prefrontal cortex representations and, finally, speculate on the role of cognitive maps in higher cognitive capacities.
[ { "created": "Thu, 3 Feb 2022 16:49:37 GMT", "version": "v1" } ]
2022-02-04
[ [ "Whittington", "James C. R.", "" ], [ "McCaffary", "David", "" ], [ "Bakermans", "Jacob J. W.", "" ], [ "Behrens", "Timothy E. J.", "" ] ]
Learning and interpreting the structure of the environment is an innate feature of biological systems, and is integral to guiding flexible behaviours for evolutionary viability. The concept of a cognitive map has emerged as one of the leading metaphors for these capacities, and unravelling the learning and neural representation of such a map has become a central focus of neuroscience. While experimentalists are providing a detailed picture of the neural substrate of cognitive maps in hippocampus and beyond, theorists have been busy building models to bridge the divide between neurons, computation, and behaviour. These models can account for a variety of known representations and neural phenomena, but often provide a differing understanding of not only the underlying principles of cognitive maps, but also the respective roles of hippocampus and cortex. In this Perspective, we bring many of these models into a common language, distil their underlying principles of constructing cognitive maps, provide novel (re)interpretations for neural phenomena, suggest how the principles can be extended to account for prefrontal cortex representations and, finally, speculate on the role of cognitive maps in higher cognitive capacities.
1002.2745
Jose A. Cuesta
Susanna C. Manrubia and Jose A. Cuesta
Neutral networks of genotypes: Evolution behind the curtain
7 pages, 7 color figures, uses a modification of pnastwo.cls called pnastwo-modified.cls (included)
ARBOR, 186, 1051-1064 (2010)
10.3989/arbor.2010.746n1253
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our understanding of the evolutionary process has gone a long way since the publication, 150 years ago, of "On the origin of species" by Charles R. Darwin. The XXth Century witnessed great efforts to embrace replication, mutation, and selection within the framework of a formal theory, able eventually to predict the dynamics and fate of evolving populations. However, a large body of empirical evidence collected over the last decades strongly suggests that some of the assumptions of those classical models necessitate a deep revision. The viability of organisms is not dependent on a unique and optimal genotype. The discovery of huge sets of genotypes (or neutral networks) yielding the same phenotype --ultimately, the same organism--, reveals that, most likely, very different functional solutions can be found, accessed and fixed in a population through a low-cost exploration of the space of genomes. The 'evolution behind the curtain' may be the answer to some of the current puzzles that evolutionary theory faces, like the fast speciation process that is observed in the fossil record after very long stasis periods.
[ { "created": "Sun, 14 Feb 2010 02:49:09 GMT", "version": "v1" } ]
2012-02-02
[ [ "Manrubia", "Susanna C.", "" ], [ "Cuesta", "Jose A.", "" ] ]
Our understanding of the evolutionary process has gone a long way since the publication, 150 years ago, of "On the origin of species" by Charles R. Darwin. The XXth Century witnessed great efforts to embrace replication, mutation, and selection within the framework of a formal theory, able eventually to predict the dynamics and fate of evolving populations. However, a large body of empirical evidence collected over the last decades strongly suggests that some of the assumptions of those classical models necessitate a deep revision. The viability of organisms is not dependent on a unique and optimal genotype. The discovery of huge sets of genotypes (or neutral networks) yielding the same phenotype --ultimately, the same organism--, reveals that, most likely, very different functional solutions can be found, accessed and fixed in a population through a low-cost exploration of the space of genomes. The 'evolution behind the curtain' may be the answer to some of the current puzzles that evolutionary theory faces, like the fast speciation process that is observed in the fossil record after very long stasis periods.
2207.05335
Dimitri Loutchko
Dimitri Loutchko
An algebraic characterization of self-generating chemical reaction networks using semigroup models
33 pages, 6 figures
null
10.1007/s00285-023-01899-4
null
q-bio.MN math.CO math.RA physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
The ability of a chemical reaction network to generate itself by catalyzed reactions from constantly present environmental food sources is considered a fundamental property in origin-of-life research. Based on Kauffman's autocatalytic sets, Hordijk and Steel have constructed the versatile formalism of catalytic reaction systems (CRS) to model and to analyze such self-generating networks, which they named reflexively autocatalytic and food generated (RAF). Previously, it was established that the subsequent and simultaneous catalytic functions of the chemicals of a CRS give rise to an algebraic structure, termed a semigroup model. The semigroup model allows one to naturally consider the function of any subset of chemicals on the whole CRS. This gives rise to a generative dynamics by iteratively applying the function of a subset to the externally supplied food set. The fixed point of this dynamics yields the maximal self-generating set of chemicals. Moreover, the lattice of all functionally closed self-generating sets of chemicals is discussed and a structure theorem for this lattice is proven. It is also shown that a CRS which contains self-generating sets of chemicals cannot be nilpotent, and thus a useful link to the combinatorial theory of finite semigroups is established. The main technical tool introduced and utilized in this work is the representation of the semigroup elements as decorated rooted trees, allowing the generation of chemicals from a given set of resources to be translated into the semigroup language.
[ { "created": "Tue, 12 Jul 2022 06:27:22 GMT", "version": "v1" } ]
2023-08-16
[ [ "Loutchko", "Dimitri", "" ] ]
The ability of a chemical reaction network to generate itself by catalyzed reactions from constantly present environmental food sources is considered a fundamental property in origin-of-life research. Based on Kauffman's autocatalytic sets, Hordijk and Steel have constructed the versatile formalism of catalytic reaction systems (CRS) to model and to analyze such self-generating networks, which they named reflexively autocatalytic and food generated (RAF). Previously, it was established that the subsequent and simultaneous catalytic functions of the chemicals of a CRS give rise to an algebraic structure, termed a semigroup model. The semigroup model allows one to naturally consider the function of any subset of chemicals on the whole CRS. This gives rise to a generative dynamics by iteratively applying the function of a subset to the externally supplied food set. The fixed point of this dynamics yields the maximal self-generating set of chemicals. Moreover, the lattice of all functionally closed self-generating sets of chemicals is discussed and a structure theorem for this lattice is proven. It is also shown that a CRS which contains self-generating sets of chemicals cannot be nilpotent, and thus a useful link to the combinatorial theory of finite semigroups is established. The main technical tool introduced and utilized in this work is the representation of the semigroup elements as decorated rooted trees, allowing the generation of chemicals from a given set of resources to be translated into the semigroup language.
1707.09046
Ali Yousefi
Ali Yousefi, Theodore W. Berger
Time Divergence-Convergence Learning Scheme in Multi-Layer Dynamic Synapse Neural Networks
13 pages
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new learning scheme called time divergence-convergence (TDC) is proposed for two-layer dynamic synapse neural networks (DSNN). A DSNN is an artificial neural network model in which synaptic transmission is modeled as a dynamic process and information between neurons is transmitted through spike timing. In TDC, the intra-layer neurons of a DSNN are trained to map input spike trains to a higher dimension of spike trains called a feature domain, and the output neurons are trained to build the desired spike trains by processing the spike timing of the intra-layer neurons. The DSNN performance was examined in a jittered spike train classification task, achieving more than 92\% accuracy in classifying different spike trains. The DSNN performance is comparable with recurrent multi-layer neural networks and surpasses a single-layer DSNN by a 22\% margin. Synaptic dynamics have been proposed as the neural substrate for sub-second temporal processing; TDC can be used to train a DSNN to perform diverse forms of sub-second temporal processing. The TDC learning proposed here is scalable in terms of the synaptic adaptation of deeper layers of multi-layer DSNNs. The DSNN together with the TDC learning proposed here can be used to replicate the processing observed in neural circuitry and in pattern recognition tasks.
[ { "created": "Tue, 25 Jul 2017 23:35:17 GMT", "version": "v1" } ]
2017-07-31
[ [ "Yousefi", "Ali", "" ], [ "Berger", "Theodore W.", "" ] ]
A new learning scheme called time divergence-convergence (TDC) is proposed for two-layer dynamic synapse neural networks (DSNN). A DSNN is an artificial neural network model in which synaptic transmission is modeled as a dynamic process and information between neurons is transmitted through spike timing. In TDC, the intra-layer neurons of a DSNN are trained to map input spike trains to a higher dimension of spike trains called a feature domain, and the output neurons are trained to build the desired spike trains by processing the spike timing of the intra-layer neurons. The DSNN performance was examined in a jittered spike train classification task, achieving more than 92\% accuracy in classifying different spike trains. The DSNN performance is comparable with recurrent multi-layer neural networks and surpasses a single-layer DSNN by a 22\% margin. Synaptic dynamics have been proposed as the neural substrate for sub-second temporal processing; TDC can be used to train a DSNN to perform diverse forms of sub-second temporal processing. The TDC learning proposed here is scalable in terms of the synaptic adaptation of deeper layers of multi-layer DSNNs. The DSNN together with the TDC learning proposed here can be used to replicate the processing observed in neural circuitry and in pattern recognition tasks.
q-bio/0402021
William Chen
William W. Chen, Jeremy L. England, Eugene I. Shakhnovich
An Exact Model of Fluctuations in Gene Expression
15 pages, 2 figures, RevTeX4
null
null
null
q-bio.MN cond-mat.soft
null
Fluctuations in the measured mRNA levels of unperturbed cells under fixed conditions have often been viewed as an impediment to the extraction of information from expression profiles. Here, we argue that such expression fluctuations should themselves be studied as a source of valuable information about the underlying dynamics of genetic networks. By analyzing microarray data taken from Saccharomyces cerevisiae, we demonstrate that correlations in expression fluctuations have a highly statistically significant dependence on gene function, and furthermore exhibit a remarkable scale-free network structure. We therefore present what we view to be the simplest phenomenological model of a genetic network which can account for the presence of biological information in transcript level fluctuations. We proceed to exactly solve this model using a path integral technique and derive several quantitative predictions. Finally, we propose several experiments by which these predictions might be rigorously tested.
[ { "created": "Tue, 10 Feb 2004 20:21:47 GMT", "version": "v1" } ]
2007-05-23
[ [ "Chen", "William W.", "" ], [ "England", "Jeremy L.", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
Fluctuations in the measured mRNA levels of unperturbed cells under fixed conditions have often been viewed as an impediment to the extraction of information from expression profiles. Here, we argue that such expression fluctuations should themselves be studied as a source of valuable information about the underlying dynamics of genetic networks. By analyzing microarray data taken from Saccharomyces cerevisiae, we demonstrate that correlations in expression fluctuations have a highly statistically significant dependence on gene function, and furthermore exhibit a remarkable scale-free network structure. We therefore present what we view to be the simplest phenomenological model of a genetic network which can account for the presence of biological information in transcript level fluctuations. We proceed to exactly solve this model using a path integral technique and derive several quantitative predictions. Finally, we propose several experiments by which these predictions might be rigorously tested.
1604.01308
Carsten Mehring
Luke Bashford, Jing Wu, Devapratim Sarma, Kelly Collins, Jeff Ojemann, Carsten Mehring
Natural movement with concurrent brain-computer interface control induces persistent dissociation of neural activity
2 pages, 3 figures, submitted to the annual BCI research award 2016 http://www.bci-award.com/
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As brain-computer interface (BCI) technology develops, it is likely to be incorporated into protocols that complement and supplement the existing movements of the user. Two possible scenarios for such control are: the increasing interest in controlling artificial supernumerary prosthetics, and cases following brain injury where a BCI can be incorporated alongside residual movements to recover ability. In this study we explore the extent to which the human motor cortex is able to concurrently control movements via a BCI and overtly executed movements. Crucially, both movement types are driven from the same cortical site. With this we aim to dissociate the activity at this cortical site from the movements being made and instead allow the representation and control for the BCI to develop alongside motor cortex activity. We investigated both BCI performance and its effect on the movement evoked potentials originally associated with overt execution.
[ { "created": "Tue, 5 Apr 2016 15:53:28 GMT", "version": "v1" } ]
2016-04-06
[ [ "Bashford", "Luke", "" ], [ "Wu", "Jing", "" ], [ "Sarma", "Devapratim", "" ], [ "Collins", "Kelly", "" ], [ "Ojemann", "Jeff", "" ], [ "Mehring", "Carsten", "" ] ]
As brain-computer interface (BCI) technology develops, it is likely to be incorporated into protocols that complement and supplement the existing movements of the user. Two possible scenarios for such control are: the increasing interest in controlling artificial supernumerary prosthetics, and cases following brain injury where a BCI can be incorporated alongside residual movements to recover ability. In this study we explore the extent to which the human motor cortex is able to concurrently control movements via a BCI and overtly executed movements. Crucially, both movement types are driven from the same cortical site. With this we aim to dissociate the activity at this cortical site from the movements being made and instead allow the representation and control for the BCI to develop alongside motor cortex activity. We investigated both BCI performance and its effect on the movement evoked potentials originally associated with overt execution.
1405.5993
Kohaku H. Z. So
Kohaku H. Z. So, Hisashi Ohtsuki, Takeo Kato
Spatial effect on stochastic dynamics of bistable evolutionary games
null
J. Stat. Mech. (2014) P10020
10.1088/1742-5468/2014/10/P10020
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the lifetimes of metastable states in bistable evolutionary games (coordination games), and examine how they are affected by spatial structure. A semiclassical approximation based on a path integral method is applied to stochastic evolutionary game dynamics with and without spatial structure, and the lifetimes of the metastable states are evaluated. It is shown that the population dependence of the lifetimes is qualitatively different in these two models. Our result indicates that spatial structure can accelerate the transitions between metastable states.
[ { "created": "Fri, 23 May 2014 09:06:36 GMT", "version": "v1" }, { "created": "Mon, 18 Aug 2014 03:57:36 GMT", "version": "v2" } ]
2014-12-01
[ [ "So", "Kohaku H. Z.", "" ], [ "Ohtsuki", "Hisashi", "" ], [ "Kato", "Takeo", "" ] ]
We consider the lifetimes of metastable states in bistable evolutionary games (coordination games), and examine how they are affected by spatial structure. A semiclassical approximation based on a path integral method is applied to stochastic evolutionary game dynamics with and without spatial structure, and the lifetimes of the metastable states are evaluated. It is shown that the population dependence of the lifetimes is qualitatively different in these two models. Our result indicates that spatial structure can accelerate the transitions between metastable states.
1211.0194
Chaitanya A. Athale
Saurabh Mahajan and Chaitanya A. Athale
Spatial and Temporal Sensing Limits of Microtubule Polarization in Neuronal Growth Cones by Intracellular Gradients and Forces
7 figures and supplementary material
Biophys. J. 103(12) 2432-2445, 19 December 2012
10.1016/j.bpj.2012.10.021
null
q-bio.SC physics.bio-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuronal growth cones are among the most sensitive of eukaryotic cells in responding to directional chemical cues. Although a dynamic microtubule cytoskeleton has been shown to be essential for growth cone turning, the precise nature of the coupling between the spatial cue and microtubule polarization is less understood. Here we present a computational model of microtubule polarization in a turning neuronal growth cone (GC). We explore the limits of directional cues in modifying the spatial polarization of microtubules by testing the role of microtubule dynamics, gradients of regulators and retrograde forces along filopodia. We analyze the steady-state and transition behavior of microtubules on being presented with a directional stimulus. The model makes novel predictions about the minimal angular spread of the chemical signal at the growth cone and the fastest polarization times. A regulatory reaction-diffusion network based on the cyclic phosphorylation-dephosphorylation of a regulator predicts that it is the receptor signal magnitude, and not feedback loops or amplifications in the network, that can generate the maximal polarization of microtubules. Using both the phenomenological and network models we have demonstrated some of the physical limits within which the MT polarization system works in turning neurons.
[ { "created": "Thu, 1 Nov 2012 14:41:41 GMT", "version": "v1" } ]
2019-05-16
[ [ "Mahajan", "Saurabh", "" ], [ "Athale", "Chaitanya A.", "" ] ]
Neuronal growth cones are among the most sensitive of eukaryotic cells in responding to directional chemical cues. Although a dynamic microtubule cytoskeleton has been shown to be essential for growth cone turning, the precise nature of the coupling between the spatial cue and microtubule polarization is less understood. Here we present a computational model of microtubule polarization in a turning neuronal growth cone (GC). We explore the limits of directional cues in modifying the spatial polarization of microtubules by testing the role of microtubule dynamics, gradients of regulators and retrograde forces along filopodia. We analyze the steady-state and transition behavior of microtubules on being presented with a directional stimulus. The model makes novel predictions about the minimal angular spread of the chemical signal at the growth cone and the fastest polarization times. A regulatory reaction-diffusion network based on the cyclic phosphorylation-dephosphorylation of a regulator predicts that it is the receptor signal magnitude, and not feedback loops or amplifications in the network, that can generate the maximal polarization of microtubules. Using both the phenomenological and network models we have demonstrated some of the physical limits within which the MT polarization system works in turning neurons.
2405.10993
Thomas Li
Thomas Z. Li, Kaiwen Xu, Aravind Krishnan, Riqiang Gao, Michael N. Kammer, Sanja Antic, David Xiao, Michael Knight, Yency Martinez, Rafael Paez, Robert J. Lentz, Stephen Deppen, Eric L. Grogan, Thomas A. Lasko, Kim L. Sandler, Fabien Maldonado, Bennett A. Landman
No winners: Performance of lung cancer prediction models depends on screening-detected, incidental, and biopsied pulmonary nodule use cases
Submitted to Radiology: AI
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Statistical models for predicting lung cancer have the potential to facilitate earlier diagnosis of malignancy and avoid invasive workup of benign disease. Many models have been published, but comparative studies of their utility in different clinical settings in which patients would arguably most benefit are scarce. This study retrospectively evaluated promising predictive models for lung cancer prediction in three clinical settings: lung cancer screening with low-dose computed tomography, incidentally detected pulmonary nodules, and nodules deemed suspicious enough to warrant a biopsy. We leveraged 9 cohorts (n=898, 896, 882, 219, 364, 117, 131, 115, 373) from multiple institutions to assess the area under the receiver operating characteristic curve (AUC) of validated models including logistic regressions on clinical variables and radiologist nodule characterizations, artificial intelligence on chest CTs, longitudinal imaging AI, and multi-modal approaches. We implemented each model from their published literature, re-training the models if necessary, and curated each cohort from primary data sources. We observed that model performance varied greatly across clinical use cases. No single predictive model emerged as a clear winner across all cohorts, but certain models excelled in specific clinical contexts. Single timepoint chest CT AI performed well in lung screening, but struggled to generalize to other clinical settings. Longitudinal imaging and multimodal models demonstrated comparatively promising performance on incidentally-detected nodules. However, when applied to nodules that underwent biopsy, all models underperformed. These results underscore the strengths and limitations of 8 validated predictive models and highlight promising directions towards personalized, noninvasive lung cancer diagnosis.
[ { "created": "Thu, 16 May 2024 14:16:47 GMT", "version": "v1" } ]
2024-05-21
[ [ "Li", "Thomas Z.", "" ], [ "Xu", "Kaiwen", "" ], [ "Krishnan", "Aravind", "" ], [ "Gao", "Riqiang", "" ], [ "Kammer", "Michael N.", "" ], [ "Antic", "Sanja", "" ], [ "Xiao", "David", "" ], [ "Knight", "Michael", "" ], [ "Martinez", "Yency", "" ], [ "Paez", "Rafael", "" ], [ "Lentz", "Robert J.", "" ], [ "Deppen", "Stephen", "" ], [ "Grogan", "Eric L.", "" ], [ "Lasko", "Thomas A.", "" ], [ "Sandler", "Kim L.", "" ], [ "Maldonado", "Fabien", "" ], [ "Landman", "Bennett A.", "" ] ]
Statistical models for predicting lung cancer have the potential to facilitate earlier diagnosis of malignancy and avoid invasive workup of benign disease. Many models have been published, but comparative studies of their utility in different clinical settings in which patients would arguably most benefit are scarce. This study retrospectively evaluated promising predictive models for lung cancer prediction in three clinical settings: lung cancer screening with low-dose computed tomography, incidentally detected pulmonary nodules, and nodules deemed suspicious enough to warrant a biopsy. We leveraged 9 cohorts (n=898, 896, 882, 219, 364, 117, 131, 115, 373) from multiple institutions to assess the area under the receiver operating characteristic curve (AUC) of validated models including logistic regressions on clinical variables and radiologist nodule characterizations, artificial intelligence on chest CTs, longitudinal imaging AI, and multi-modal approaches. We implemented each model from their published literature, re-training the models if necessary, and curated each cohort from primary data sources. We observed that model performance varied greatly across clinical use cases. No single predictive model emerged as a clear winner across all cohorts, but certain models excelled in specific clinical contexts. Single timepoint chest CT AI performed well in lung screening, but struggled to generalize to other clinical settings. Longitudinal imaging and multimodal models demonstrated comparatively promising performance on incidentally-detected nodules. However, when applied to nodules that underwent biopsy, all models underperformed. These results underscore the strengths and limitations of 8 validated predictive models and highlight promising directions towards personalized, noninvasive lung cancer diagnosis.
1111.5165
Burak Erman
Nazan B. Walpoth and Burak Erman
The effect of point mutations on energy conduction pathways in proteins
14 pages, 8 figures, a supplementary section
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Energetically responsive residues of the 173-amino-acid N-terminal domain of the cardiac ryanodine receptor RyR2 are identified by a simple elastic net model. Residues that respond in a correlated way to fluctuations of spatially neighboring residues specify a hydrogen-bonded path through the protein. The evolutionarily conserved residues of the protein are all located on this path or in its close proximity. All of the residues of the path are either located on the two Mir domains of the protein or are hydrogen bonded to them. Two calcium-binding residues, E171 and E173, are proposed as a potential binding region, based on insights gained from the elastic net analysis of another calcium channel receptor, the inositol 1,4,5-triphosphate receptor, IP3R. Analysis of the disease-causing A77V-mutated RyR2 showed that the path is disrupted by the loss of energy responsiveness of certain residues.
[ { "created": "Tue, 22 Nov 2011 11:51:49 GMT", "version": "v1" } ]
2011-11-23
[ [ "Walpoth", "Nazan B.", "" ], [ "Erman", "Burak", "" ] ]
Energetically responsive residues of the 173-amino-acid N-terminal domain of the cardiac ryanodine receptor RyR2 are identified by a simple elastic net model. Residues that respond in a correlated way to fluctuations of spatially neighboring residues specify a hydrogen-bonded path through the protein. The evolutionarily conserved residues of the protein are all located on this path or in its close proximity. All of the residues of the path are either located on the two Mir domains of the protein or are hydrogen bonded to them. Two calcium-binding residues, E171 and E173, are proposed as a potential binding region, based on insights gained from the elastic net analysis of another calcium channel receptor, the inositol 1,4,5-triphosphate receptor, IP3R. Analysis of the disease-causing A77V-mutated RyR2 showed that the path is disrupted by the loss of energy responsiveness of certain residues.
2210.05649
Suman Bhowmick
Suman Bhowmick, Igor M. Sokolov, Hartmut H. K. Lentz
Decoding the double trouble: A mathematical modelling of co-infection dynamics of SARS-CoV-2 and influenza-like illness
null
null
null
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
After the detection of coronavirus disease 2019 (Covid-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), in Wuhan, Hubei Province, China in late December, cases of Covid-19 have spiralled around the globe. Due to the clinical similarity of Covid-19 with other flu-like syndromes, patients are assayed for other pathogens of influenza-like illness. There have been reported cases of co-infection amongst patients with Covid-19. Bacteria such as Streptococcus pneumoniae, Staphylococcus aureus, Klebsiella pneumoniae, Mycoplasma pneumoniae, Chlamydia pneumoniae and Legionella pneumophila, and viruses such as influenza, coronavirus, rhinovirus/enterovirus, parainfluenza, metapneumovirus and influenza B virus, have been identified as co-pathogens. In this work, we develop and analyse a compartment-based ordinary differential equation (ODE) model to understand the co-infection dynamics of Covid-19 and other influenza-type illnesses. We incorporate a saturated treatment rate to account for the impact of limited treatment resources on controlling possible Covid-19 cases. As a result, we formulate the basic reproduction number of the model system. Finally, we perform numerical simulations of the co-infection model to examine the solutions in different zones of parameter space.
[ { "created": "Tue, 11 Oct 2022 17:48:27 GMT", "version": "v1" } ]
2022-10-12
[ [ "Bhowmick", "Suman", "" ], [ "Sokolov", "Igor M.", "" ], [ "Lentz", "Hartmut H. K.", "" ] ]
After the detection of coronavirus disease 2019 (Covid-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), in Wuhan, Hubei Province, China in late December, cases of Covid-19 have spiralled around the globe. Due to the clinical similarity of Covid-19 with other flu-like syndromes, patients are assayed for other pathogens of influenza-like illness. There have been reported cases of co-infection amongst patients with Covid-19. Bacteria such as Streptococcus pneumoniae, Staphylococcus aureus, Klebsiella pneumoniae, Mycoplasma pneumoniae, Chlamydia pneumoniae and Legionella pneumophila, and viruses such as influenza, coronavirus, rhinovirus/enterovirus, parainfluenza, metapneumovirus and influenza B virus, have been identified as co-pathogens. In this work, we develop and analyse a compartment-based ordinary differential equation (ODE) model to understand the co-infection dynamics of Covid-19 and other influenza-type illnesses. We incorporate a saturated treatment rate to account for the impact of limited treatment resources on controlling possible Covid-19 cases. As a result, we formulate the basic reproduction number of the model system. Finally, we perform numerical simulations of the co-infection model to examine the solutions in different zones of parameter space.
1310.7226
Ricardo V\^encio
Ricardo R. Silva, Fabien Jourdan, Diego M. Salvanha, Fabien Letisse, Emilien L. Jamin, Simone Guidetti-Gonzalez, Carlos A. Labate, Ricardo Z.N. V\^encio
ProbMetab: an R package for Bayesian probabilistic annotation of LC-MS based metabolomics
Manuscript to be submitted very soon. 7 pages, 3 color figures. There is a companion material, the two case studies, which are going to be posted here together with the main text in next updated version
null
10.1093/bioinformatics/btu019
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present ProbMetab, an R package which substantially improves automatic probabilistic annotation of LC-MS based metabolomes. The core inference engine is based on a Bayesian model implemented to: (i) allow diverse sources of experimental data and metadata to be systematically incorporated into the model, with alternative ways to calculate the likelihood function; and (ii) allow sensitive selection of biologically meaningful biochemical reaction databases as a Dirichlet-categorical prior distribution. Additionally, to ensure result interpretation by systems biologists, we display the annotation in a network where observed mass peaks are connected if their candidate metabolites are substrates/products of known biochemical reactions. This graph can be overlaid with other graph-based analyses, such as partial correlation networks, in a visualization scheme exported to Cytoscape, with web and stand-alone versions. ProbMetab was implemented in a modular fashion to fit together with established upstream (xcms, CAMERA, AStream, mzMatch.R, etc.) and downstream R package tools (GeneNet, RCytoscape, DiffCorr, etc.). ProbMetab, along with extensive documentation and case studies, is freely available under the GNU license at: http://labpib.fmrp.usp.br/methods/probmetab/.
[ { "created": "Sun, 27 Oct 2013 18:31:30 GMT", "version": "v1" } ]
2014-02-05
[ [ "Silva", "Ricardo R.", "" ], [ "Jourdan", "Fabien", "" ], [ "Salvanha", "Diego M.", "" ], [ "Letisse", "Fabien", "" ], [ "Jamin", "Emilien L.", "" ], [ "Guidetti-Gonzalez", "Simone", "" ], [ "Labate", "Carlos A.", "" ], [ "Vêncio", "Ricardo Z. N.", "" ] ]
We present ProbMetab, an R package which substantially improves automatic probabilistic annotation of LC-MS based metabolomes. The core inference engine is based on a Bayesian model implemented to: (i) allow diverse sources of experimental data and metadata to be systematically incorporated into the model, with alternative ways to calculate the likelihood function; and (ii) allow sensitive selection of biologically meaningful biochemical reaction databases as a Dirichlet-categorical prior distribution. Additionally, to ensure result interpretation by systems biologists, we display the annotation in a network where observed mass peaks are connected if their candidate metabolites are substrates/products of known biochemical reactions. This graph can be overlaid with other graph-based analyses, such as partial correlation networks, in a visualization scheme exported to Cytoscape, with web and stand-alone versions. ProbMetab was implemented in a modular fashion to fit together with established upstream (xcms, CAMERA, AStream, mzMatch.R, etc.) and downstream R package tools (GeneNet, RCytoscape, DiffCorr, etc.). ProbMetab, along with extensive documentation and case studies, is freely available under the GNU license at: http://labpib.fmrp.usp.br/methods/probmetab/.
1209.4254
Daniela Delneri
Elzbieta M. Piatkowska, David Knight and Daniela Delneri
Chimeric protein complexes in hybrid species generate novel evolutionary phenotypes
20 pages, 4 figures, for supplementary files email d.delneri@manchester.ac.uk
PLoS Genet. 2013 Oct;9(10):e1003836
10.1371/journal.pgen.1003836
null
q-bio.GN q-bio.PE
http://creativecommons.org/licenses/by/3.0/
Hybridization between species is an important mechanism for the origin of novel lineages and adaptation to new environments. Increased allelic variation and modification of the transcriptional network are the two recognized forces currently deemed to be responsible for the phenotypic properties seen in hybrids. However, since the majority of the biological functions in a cell are carried out by protein complexes, inter-specific protein assemblies represent another important source of natural variation upon which evolutionary forces can act. Here we studied the composition of six protein complexes in two different Saccharomyces "sensu stricto" hybrids, to understand whether chimeric interactions can be freely formed in the cell in spite of species-specific co-evolutionary forces, and whether the different types of complexes cause a change in hybrid fitness. The protein assemblies were isolated from the hybrids via affinity chromatography and identified via mass spectrometry. We found evidence of spontaneous chimericity for four of the six protein assemblies tested and we showed that different types of complexes can cause a variety of phenotypes in selected environments. In the case of the TRP2/TRP3 complex, the effect of such chimeric formation resulted in a fitness advantage of the hybrid in an environment lacking tryptophan, while only one type of parental combination of the MBF complex could confer viability to the hybrid under respiratory conditions. This study provides empirical evidence that chimeric protein complexes can freely assemble in cells and reveals a new mechanism to generate phenotypic novelty and plasticity in hybrids to complement the genomic innovation resulting from gene duplication. The ability to exchange orthologous members also has important implications for the adaptation and subsequent genome evolution of the hybrids in terms of patterns of gene loss.
[ { "created": "Wed, 19 Sep 2012 14:24:04 GMT", "version": "v1" } ]
2013-10-24
[ [ "Piatkowska", "Elzbieta M.", "" ], [ "Knight", "David", "" ], [ "Delneri", "Daniela", "" ] ]
Hybridization between species is an important mechanism for the origin of novel lineages and adaptation to new environments. Increased allelic variation and modification of the transcriptional network are the two recognized forces currently deemed to be responsible for the phenotypic properties seen in hybrids. However, since the majority of the biological functions in a cell are carried out by protein complexes, inter-specific protein assemblies represent another important source of natural variation upon which evolutionary forces can act. Here we studied the composition of six protein complexes in two different Saccharomyces "sensu stricto" hybrids, to understand whether chimeric interactions can be freely formed in the cell in spite of species-specific co-evolutionary forces, and whether the different types of complexes cause a change in hybrid fitness. The protein assemblies were isolated from the hybrids via affinity chromatography and identified via mass spectrometry. We found evidence of spontaneous chimericity for four of the six protein assemblies tested and we showed that different types of complexes can cause a variety of phenotypes in selected environments. In the case of the TRP2/TRP3 complex, the effect of such chimeric formation resulted in a fitness advantage of the hybrid in an environment lacking tryptophan, while only one type of parental combination of the MBF complex could confer viability to the hybrid under respiratory conditions. This study provides empirical evidence that chimeric protein complexes can freely assemble in cells and reveals a new mechanism to generate phenotypic novelty and plasticity in hybrids to complement the genomic innovation resulting from gene duplication. The ability to exchange orthologous members also has important implications for the adaptation and subsequent genome evolution of the hybrids in terms of patterns of gene loss.
2105.13582
Yanzheng Meng
Yanzheng Meng, Lei Li
Cysteine post-translational modifications: ten years from chemical proteomics to bioinformatics
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
As the only thiol-bearing amino acid, cysteine (Cys) carries a reactive thiol side chain that is susceptible to a series of post-translational modifications (PTMs). These PTMs participate in a wide range of biological activities, including the alteration of enzymatic reactions, protein-protein interactions, and protein stability. Here we summarize advances in cysteine PTM identification technologies and the features of the various kinds of PTMs. We also discuss in silico approaches for the prediction of the different types of cysteine-modified sites, giving directions for future study.
[ { "created": "Fri, 28 May 2021 04:25:54 GMT", "version": "v1" } ]
2021-05-31
[ [ "Meng", "Yanzheng", "" ], [ "Li", "Lei", "" ] ]
As the only thiol-bearing amino acid, cysteine (Cys) carries a reactive thiol side chain that is susceptible to a series of post-translational modifications (PTMs). These PTMs participate in a wide range of biological activities, including the alteration of enzymatic reactions, protein-protein interactions, and protein stability. Here we summarize advances in cysteine PTM identification technologies and the features of the various kinds of PTMs. We also discuss in silico approaches for the prediction of the different types of cysteine-modified sites, giving directions for future study.
1105.4242
Uwe C. T\"auber
Uwe C. Tauber (Virginia Tech)
Stochastic population oscillations in spatial predator-prey models
14 pages, 6 figures, submitted to J. Phys C: Conf. Ser. (2011)
J. Phys.: Conf. Ser. 319 (2011) 012019
10.1088/1742-6596/319/1/012019
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is well-established that including spatial structure and stochastic noise in models for predator-prey interactions invalidates the classical deterministic Lotka-Volterra picture of neutral population cycles. In contrast, stochastic models yield long-lived, but ultimately decaying erratic population oscillations, which can be understood through a resonant amplification mechanism for density fluctuations. In Monte Carlo simulations of spatial stochastic predator-prey systems, one observes striking complex spatio-temporal structures. These spreading activity fronts induce persistent correlations between predators and prey. In the presence of local particle density restrictions (finite prey carrying capacity), there exists an extinction threshold for the predator population. The accompanying continuous non-equilibrium phase transition is governed by the directed-percolation universality class. We employ field-theoretic methods based on the Doi-Peliti representation of the master equation for stochastic particle interaction models to (i) map the ensuing action in the vicinity of the absorbing state phase transition to Reggeon field theory, and (ii) to quantitatively address fluctuation-induced renormalizations of the population oscillation frequency, damping, and diffusion coefficients in the species coexistence phase.
[ { "created": "Sat, 21 May 2011 09:31:08 GMT", "version": "v1" } ]
2011-09-20
[ [ "Tauber", "Uwe C.", "", "Virginia Tech" ] ]
It is well-established that including spatial structure and stochastic noise in models for predator-prey interactions invalidates the classical deterministic Lotka-Volterra picture of neutral population cycles. In contrast, stochastic models yield long-lived, but ultimately decaying erratic population oscillations, which can be understood through a resonant amplification mechanism for density fluctuations. In Monte Carlo simulations of spatial stochastic predator-prey systems, one observes striking complex spatio-temporal structures. These spreading activity fronts induce persistent correlations between predators and prey. In the presence of local particle density restrictions (finite prey carrying capacity), there exists an extinction threshold for the predator population. The accompanying continuous non-equilibrium phase transition is governed by the directed-percolation universality class. We employ field-theoretic methods based on the Doi-Peliti representation of the master equation for stochastic particle interaction models to (i) map the ensuing action in the vicinity of the absorbing state phase transition to Reggeon field theory, and (ii) to quantitatively address fluctuation-induced renormalizations of the population oscillation frequency, damping, and diffusion coefficients in the species coexistence phase.
0706.2516
Bhalchandra Thatte
Bhalchandra D. Thatte and Mike Steel
Reconstructing pedigrees: a stochastic perspective
20 pages, 3 figures
null
null
null
q-bio.PE
null
A pedigree is a directed graph that describes how individuals are related through ancestry in a sexually-reproducing population. In this paper we explore the question of whether one can reconstruct a pedigree by just observing sequence data for present day individuals. This is motivated by the increasing availability of genomic sequences, but in this paper we take a more theoretical approach and consider what models of sequence evolution might allow pedigree reconstruction (given sufficiently long sequences). Our results complement recent work that showed that pedigree reconstruction may be fundamentally impossible if one uses just the degrees of relatedness between different extant individuals. We find that for certain stochastic processes, pedigrees can be recovered up to isomorphism from sufficiently long sequences.
[ { "created": "Mon, 18 Jun 2007 00:16:14 GMT", "version": "v1" } ]
2007-06-19
[ [ "Thatte", "Bhalchandra D.", "" ], [ "Steel", "Mike", "" ] ]
A pedigree is a directed graph that describes how individuals are related through ancestry in a sexually-reproducing population. In this paper we explore the question of whether one can reconstruct a pedigree by just observing sequence data for present day individuals. This is motivated by the increasing availability of genomic sequences, but in this paper we take a more theoretical approach and consider what models of sequence evolution might allow pedigree reconstruction (given sufficiently long sequences). Our results complement recent work that showed that pedigree reconstruction may be fundamentally impossible if one uses just the degrees of relatedness between different extant individuals. We find that for certain stochastic processes, pedigrees can be recovered up to isomorphism from sufficiently long sequences.
1602.04258
Christophe Dessimoz
Oscar Robinson, David Dylus, Christophe Dessimoz
Phylo.io: interactive viewing and comparison of large phylogenetic trees on the web
null
null
10.1093/molbev/msw080
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic trees are pervasively used to depict evolutionary relationships. Increasingly, researchers need to visualize large trees and compare multiple large trees inferred for the same set of taxa (reflecting uncertainty in the tree inference or genuine discordance among the loci analysed). Existing tree visualization tools are however not well suited to these tasks. In particular, side-by-side comparison of trees can prove challenging beyond a few dozen taxa. Here, we introduce Phylo.io, a web application to visualize and compare phylogenetic trees side-by-side. Its distinctive features are: highlighting of similarities and differences between two trees, automatic identification of the best matching rooting and leaf order, scalability to very large trees, high usability, multiplatform support via standard HTML5 implementation, and possibility to store and share visualisations. The tool can be freely accessed at http://phylo.io. The code for the associated JavaScript library is available at https://github.com/DessimozLab/phylo-io under an MIT open source license.
[ { "created": "Fri, 12 Feb 2016 23:01:54 GMT", "version": "v1" } ]
2016-04-21
[ [ "Robinson", "Oscar", "" ], [ "Dylus", "David", "" ], [ "Dessimoz", "Christophe", "" ] ]
Phylogenetic trees are pervasively used to depict evolutionary relationships. Increasingly, researchers need to visualize large trees and compare multiple large trees inferred for the same set of taxa (reflecting uncertainty in the tree inference or genuine discordance among the loci analysed). Existing tree visualization tools are however not well suited to these tasks. In particular, side-by-side comparison of trees can prove challenging beyond a few dozen taxa. Here, we introduce Phylo.io, a web application to visualize and compare phylogenetic trees side-by-side. Its distinctive features are: highlighting of similarities and differences between two trees, automatic identification of the best matching rooting and leaf order, scalability to very large trees, high usability, multiplatform support via standard HTML5 implementation, and possibility to store and share visualisations. The tool can be freely accessed at http://phylo.io. The code for the associated JavaScript library is available at https://github.com/DessimozLab/phylo-io under an MIT open source license.
1105.5816
Polina Kurbatova
Stephan Fischer (LIRIS / INRIA Grenoble Rh\^one-Alpes / INSA Lyon / UCB Lyon), Polina Kurbatova (ICJ), Nikolai Bessonov (IPME), Olivier Gandrillon, Vitaly Volpert (ICJ), Fabien Crauste (ICJ)
Modelling Erythroblastic Islands: Using a Hybrid Model to Assess the Function of Central Macrophage
null
null
null
UMR5208, UMR5208
q-bio.QM physics.bio-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The production and regulation of red blood cells, erythropoiesis, occurs in the bone marrow, where erythroid cells proliferate and differentiate within particular structures called erythroblastic islands. A typical island consists of a macrophage (white cell) surrounded by immature erythroid cells (progenitors), with more mature cells on the periphery of the island, ready to leave the bone marrow and enter the bloodstream. A hybrid model, coupling a continuous model (ordinary differential equations) describing intracellular regulation through competition of two key proteins to a discrete spatial model describing cell-cell interactions, with growth factor diffusion in the medium described by a continuous model (partial differential equations), is proposed to investigate the role of the central macrophage in normal erythropoiesis. Intracellular competition of the two proteins leads the erythroid cell to either proliferation, differentiation, or death by apoptosis. This approach allows consideration of spatial aspects of erythropoiesis, involved for instance in the occurrence of cellular interactions or the access to external factors, as well as the dynamics of the intracellular and extracellular scales of this complex cellular process, accounting for stochasticity in cell cycle durations and in the orientation of the mitotic spindle. The analysis of the model shows a strong effect of the central macrophage on the stability of an erythroblastic island, when assuming the macrophage releases pro-survival cytokines. Even though it is not clear whether erythroblastic island stability is required, investigation of the model concludes that stability improves the model's responsiveness, stressing the potential relevance of the central macrophage in normal erythropoiesis.
[ { "created": "Sun, 29 May 2011 19:19:59 GMT", "version": "v1" } ]
2011-06-01
[ [ "Fischer", "Stephan", "", "LIRIS / INRIA Grenoble Rhône-Alpes / INSA Lyon /\n UCB Lyon" ], [ "Kurbatova", "Polina", "", "ICJ" ], [ "Bessonov", "Nikolai", "", "IPME" ], [ "Gandrillon", "Olivier", "", "ICJ" ], [ "Volpert", "Vitaly", "", "ICJ" ], [ "Crauste", "Fabien", "", "ICJ" ] ]
The production and regulation of red blood cells, erythropoiesis, occurs in the bone marrow, where erythroid cells proliferate and differentiate within particular structures called erythroblastic islands. A typical island consists of a macrophage (white cell) surrounded by immature erythroid cells (progenitors), with more mature cells on the periphery of the island, ready to leave the bone marrow and enter the bloodstream. A hybrid model, coupling a continuous model (ordinary differential equations) describing intracellular regulation through competition of two key proteins to a discrete spatial model describing cell-cell interactions, with growth factor diffusion in the medium described by a continuous model (partial differential equations), is proposed to investigate the role of the central macrophage in normal erythropoiesis. Intracellular competition of the two proteins leads the erythroid cell to either proliferation, differentiation, or death by apoptosis. This approach allows consideration of spatial aspects of erythropoiesis, involved for instance in the occurrence of cellular interactions or the access to external factors, as well as the dynamics of the intracellular and extracellular scales of this complex cellular process, accounting for stochasticity in cell cycle durations and in the orientation of the mitotic spindle. The analysis of the model shows a strong effect of the central macrophage on the stability of an erythroblastic island, when assuming the macrophage releases pro-survival cytokines. Even though it is not clear whether erythroblastic island stability is required, investigation of the model concludes that stability improves the model's responsiveness, stressing the potential relevance of the central macrophage in normal erythropoiesis.
2312.17506
Giulia Di Teodoro
Giulia Di Teodoro, Federico Siciliano, Valerio Guarrasi, Anne-Mieke Vandamme, Valeria Ghisetti, Anders S\"onnerborg, Maurizio Zazzi, Fabrizio Silvestri, Laura Palagi
A graph neural network-based model with Out-of-Distribution Robustness for enhancing Antiretroviral Therapy Outcome Prediction for HIV-1
32 pages, 2 figures
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
Predicting the outcome of antiretroviral therapies for HIV-1 is a pressing clinical challenge, especially when the treatment regimen includes drugs for which limited effectiveness data is available. This scarcity of data can arise either due to the introduction of a new drug to the market or due to limited use in clinical settings. To tackle this issue, we introduce a novel joint fusion model, which combines features from a Fully Connected (FC) Neural Network and a Graph Neural Network (GNN). The FC network employs tabular data with a feature vector made up of viral mutations identified in the most recent genotypic resistance test, along with the drugs used in therapy. Conversely, the GNN leverages knowledge derived from Stanford drug-resistance mutation tables, which serve as benchmark references for deducing in-vivo treatment efficacy based on the viral genetic sequence, to build informative graphs. We evaluated these models' robustness against Out-of-Distribution drugs in the test set, with a specific focus on the GNN's role in handling such scenarios. Our comprehensive analysis demonstrates that the proposed model consistently outperforms the FC model, especially when considering Out-of-Distribution drugs. These results underscore the advantage of integrating Stanford scores in the model, thereby enhancing its generalizability and robustness, but also extending its utility in real-world applications with limited data availability. This research highlights the potential of our approach to inform antiretroviral therapy outcome prediction and contribute to more informed clinical decisions.
[ { "created": "Fri, 29 Dec 2023 08:02:13 GMT", "version": "v1" } ]
2024-01-01
[ [ "Di Teodoro", "Giulia", "" ], [ "Siciliano", "Federico", "" ], [ "Guarrasi", "Valerio", "" ], [ "Vandamme", "Anne-Mieke", "" ], [ "Ghisetti", "Valeria", "" ], [ "Sönnerborg", "Anders", "" ], [ "Zazzi", "Maurizio", "" ], [ "Silvestri", "Fabrizio", "" ], [ "Palagi", "Laura", "" ] ]
Predicting the outcome of antiretroviral therapies for HIV-1 is a pressing clinical challenge, especially when the treatment regimen includes drugs for which limited effectiveness data is available. This scarcity of data can arise either due to the introduction of a new drug to the market or due to limited use in clinical settings. To tackle this issue, we introduce a novel joint fusion model, which combines features from a Fully Connected (FC) Neural Network and a Graph Neural Network (GNN). The FC network employs tabular data with a feature vector made up of viral mutations identified in the most recent genotypic resistance test, along with the drugs used in therapy. Conversely, the GNN leverages knowledge derived from Stanford drug-resistance mutation tables, which serve as benchmark references for deducing in-vivo treatment efficacy based on the viral genetic sequence, to build informative graphs. We evaluated these models' robustness against Out-of-Distribution drugs in the test set, with a specific focus on the GNN's role in handling such scenarios. Our comprehensive analysis demonstrates that the proposed model consistently outperforms the FC model, especially when considering Out-of-Distribution drugs. These results underscore the advantage of integrating Stanford scores in the model, thereby enhancing its generalizability and robustness, but also extending its utility in real-world applications with limited data availability. This research highlights the potential of our approach to inform antiretroviral therapy outcome prediction and contribute to more informed clinical decisions.
1805.12491
Christopher Lynn
Christopher W. Lynn, Ari E. Kahn, Nathaniel Nyema, and Danielle S. Bassett
Abstract representations of events arise from mental errors in learning and memory
73 pages, 11 figures, 11 tables
null
null
null
q-bio.NC physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans are adept at uncovering abstract associations in the world around them, yet the underlying mechanisms remain poorly understood. Intuitively, learning the higher-order structure of statistical relationships should involve complex mental processes. Here we propose an alternative perspective: that higher-order associations instead arise from natural errors in learning and memory. Combining ideas from information theory and reinforcement learning, we derive a maximum entropy (or minimum complexity) model of people's internal representations of the transitions between stimuli. Importantly, our model (i) affords a concise analytic form, (ii) qualitatively explains the effects of transition network structure on human expectations, and (iii) quantitatively predicts human reaction times in probabilistic sequential motor tasks. Together, these results suggest that mental errors influence our abstract representations of the world in significant and predictable ways, with direct implications for the study and design of optimally learnable information sources.
[ { "created": "Thu, 31 May 2018 14:30:40 GMT", "version": "v1" }, { "created": "Thu, 31 Jan 2019 15:47:29 GMT", "version": "v2" }, { "created": "Wed, 25 Mar 2020 17:41:13 GMT", "version": "v3" } ]
2020-03-26
[ [ "Lynn", "Christopher W.", "" ], [ "Kahn", "Ari E.", "" ], [ "Nyema", "Nathaniel", "" ], [ "Bassett", "Danielle S.", "" ] ]
Humans are adept at uncovering abstract associations in the world around them, yet the underlying mechanisms remain poorly understood. Intuitively, learning the higher-order structure of statistical relationships should involve complex mental processes. Here we propose an alternative perspective: that higher-order associations instead arise from natural errors in learning and memory. Combining ideas from information theory and reinforcement learning, we derive a maximum entropy (or minimum complexity) model of people's internal representations of the transitions between stimuli. Importantly, our model (i) affords a concise analytic form, (ii) qualitatively explains the effects of transition network structure on human expectations, and (iii) quantitatively predicts human reaction times in probabilistic sequential motor tasks. Together, these results suggest that mental errors influence our abstract representations of the world in significant and predictable ways, with direct implications for the study and design of optimally learnable information sources.
1803.04085
Andrew Leifer
Mochi Liu, Anuj K Sharma, Joshua W Shaevitz, Andrew M Leifer
Temporal processing and context dependency in C. elegans mechanosensation
40 pages, 8 main figures, 19 supplementary figures
eLife 2018;7:e36419
10.7554/eLife.36419
null
q-bio.NC physics.bio-ph
http://creativecommons.org/licenses/by-sa/4.0/
A quantitative understanding of how sensory signals are transformed into motor outputs places useful constraints on brain function and helps reveal the brain's underlying computations. We investigate how the nematode C. elegans responds to time-varying mechanosensory signals using a high-throughput optogenetic assay and automated behavior quantification. In the prevailing picture of the touch circuit, the animal's behavior is determined by which neurons are stimulated and by the stimulus amplitude. In contrast, we find that the behavioral response is tuned to temporal properties of mechanosensory signals, like its integral and derivative, that extend over many seconds. Mechanosensory signals, even in the same neurons, can be tailored to elicit different behavioral responses. Moreover, we find that the animal's response also depends on its behavioral context. Most dramatically, the animal ignores all tested mechanosensory stimuli during turns. Finally, we present a linear-nonlinear model that predicts the animal's behavioral response to stimuli.
[ { "created": "Mon, 12 Mar 2018 01:42:26 GMT", "version": "v1" }, { "created": "Tue, 20 Mar 2018 16:30:34 GMT", "version": "v2" } ]
2020-08-18
[ [ "Liu", "Mochi", "" ], [ "Sharma", "Anuj K", "" ], [ "Shaevitz", "Joshua W", "" ], [ "Leifer", "Andrew M", "" ] ]
A quantitative understanding of how sensory signals are transformed into motor outputs places useful constraints on brain function and helps reveal the brain's underlying computations. We investigate how the nematode C. elegans responds to time-varying mechanosensory signals using a high-throughput optogenetic assay and automated behavior quantification. In the prevailing picture of the touch circuit, the animal's behavior is determined by which neurons are stimulated and by the stimulus amplitude. In contrast, we find that the behavioral response is tuned to temporal properties of mechanosensory signals, like its integral and derivative, that extend over many seconds. Mechanosensory signals, even in the same neurons, can be tailored to elicit different behavioral responses. Moreover, we find that the animal's response also depends on its behavioral context. Most dramatically, the animal ignores all tested mechanosensory stimuli during turns. Finally, we present a linear-nonlinear model that predicts the animal's behavioral response to stimuli.
1610.09696
Natalie Sanborn
Natalie E. Sanborn, N. Robert Hayre, Rajiv R.P Singh, and Daniel L. Cox
All-atom Molecular Dynamics Simulations of the Projection Domain of the Intrinsically Disordered htau40 Protein
12 pages with 8 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have performed all-atom molecular dynamics simulations on the projection domain of the intrinsically disordered htau40 protein. After generating a suitable ensemble of starting conformations at high temperatures, we have used an adaptive box algorithm at room temperature to generate histograms for the radius of gyration, secondary structure time series, model small-angle x-ray scattering intensities, and model chemical shift plots for comparison to nuclear magnetic resonance data for solvated and filamentous tau. Significantly, we find that the chemical shift spectrum is more consistent with filamentous tau than full-length solution-based tau. We have also carried out principal component analysis and find three basic groups: compact globules, tadpoles, and extended hinging structures. To validate the adaptive box and our force field choice, we have run limited simulations in a large conventional box with varying force fields and find that our essential results are unchanged. We also performed two simulations with the TIP4P-D water model, the effects of which depended on whether the initial configuration was compact or extended.
[ { "created": "Sun, 30 Oct 2016 19:32:45 GMT", "version": "v1" } ]
2016-11-01
[ [ "Sanborn", "Natalie E.", "" ], [ "Hayre", "N. Robert", "" ], [ "Singh", "Rajiv R. P", "" ], [ "Cox", "Daniel L.", "" ] ]
We have performed all-atom molecular dynamics simulations on the projection domain of the intrinsically disordered htau40 protein. After generating a suitable ensemble of starting conformations at high temperatures, we have used an adaptive box algorithm at room temperature to generate histograms for the radius of gyration, secondary structure time series, model small-angle x-ray scattering intensities, and model chemical shift plots for comparison to nuclear magnetic resonance data for solvated and filamentous tau. Significantly, we find that the chemical shift spectrum is more consistent with filamentous tau than full-length solution-based tau. We have also carried out principal component analysis and find three basic groups: compact globules, tadpoles, and extended hinging structures. To validate the adaptive box and our force field choice, we have run limited simulations in a large conventional box with varying force fields and find that our essential results are unchanged. We also performed two simulations with the TIP4P-D water model, the effects of which depended on whether the initial configuration was compact or extended.
2007.06150
Teresa Head-Gordon
Meili Liu, Akshaya K. Das, James Lincoff, Sukanya Sasmal, Sara Y. Cheng, Robert Vernon, Julie Forman-Kay, Teresa Head-Gordon
Configurational Entropy of Folded Proteins and its Importance for Intrinsically Disordered Proteins
null
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many pairwise additive force fields are in active use for intrinsically disordered proteins (IDPs) and regions (IDRs), some of which modify energetic terms to improve description of IDPs/IDRs, but are largely in disagreement with solution experiments for the disordered states. We have evaluated representative pairwise and many-body protein and water force fields against experimental data on representative IDPs and IDRs, a peptide that undergoes a disorder-to-order transition, and for seven globular proteins ranging in size from 130-266 amino acids. We find that force fields with the largest statistical fluctuations consistent with the radius of gyration and universal Lindemann values for folded states simultaneously better describe IDPs and IDRs and disorder to order transitions. Hence the crux of what a force field should exhibit to well describe IDRs/IDPs is not just the balance between protein and water energetics, but the balance between energetic effects and configurational entropy of folded states of globular proteins.
[ { "created": "Mon, 13 Jul 2020 02:00:40 GMT", "version": "v1" }, { "created": "Fri, 31 Jul 2020 14:42:14 GMT", "version": "v2" }, { "created": "Thu, 12 Nov 2020 01:48:24 GMT", "version": "v3" } ]
2020-11-13
[ [ "Liu", "Meili", "" ], [ "Das", "Akshaya K.", "" ], [ "Lincoff", "James", "" ], [ "Sasmal", "Sukanya", "" ], [ "Cheng", "Sara Y.", "" ], [ "Vernon", "Robert", "" ], [ "Forman-Kay", "Julie", "" ], [ "Head-Gordon", "Teresa", "" ] ]
Many pairwise additive force fields are in active use for intrinsically disordered proteins (IDPs) and regions (IDRs), some of which modify energetic terms to improve description of IDPs/IDRs, but are largely in disagreement with solution experiments for the disordered states. We have evaluated representative pairwise and many-body protein and water force fields against experimental data on representative IDPs and IDRs, a peptide that undergoes a disorder-to-order transition, and for seven globular proteins ranging in size from 130-266 amino acids. We find that force fields with the largest statistical fluctuations consistent with the radius of gyration and universal Lindemann values for folded states simultaneously better describe IDPs and IDRs and disorder to order transitions. Hence the crux of what a force field should exhibit to well describe IDRs/IDPs is not just the balance between protein and water energetics, but the balance between energetic effects and configurational entropy of folded states of globular proteins.
2112.09244
Joseph Johnson
Joseph D. Johnson, Nathan L. White, Alain Kangabire and Daniel M. Abrams
A Dynamical Model for the Origin of Anisogamy
8 pages, 8 figures
Journal of Theoretical Biology, 521, 110669 (2021)
10.1016/j.jtbi.2021.110669
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The vast majority of multi-cellular organisms are anisogamous, meaning that male and female sex cells differ in size. It remains an open question how this asymmetric state evolved, presumably from the symmetric isogamous state where all gametes are roughly the same size (drawn from the same distribution). Here, we use tools from the study of nonlinear dynamical systems to develop a simple mathematical model for this phenomenon. Using theoretical analysis and numerical simulation, we demonstrate that competition between individuals that is linked to the mean gamete size will almost inevitably result in a stable anisogamous equilibrium, and thus isogamy may naturally lead to anisogamy.
[ { "created": "Thu, 16 Dec 2021 23:00:50 GMT", "version": "v1" } ]
2021-12-20
[ [ "Johnson", "Joseph D.", "" ], [ "White", "Nathan L.", "" ], [ "Kangabire", "Alain", "" ], [ "Abrams", "Daniel M.", "" ] ]
The vast majority of multi-cellular organisms are anisogamous, meaning that male and female sex cells differ in size. It remains an open question how this asymmetric state evolved, presumably from the symmetric isogamous state where all gametes are roughly the same size (drawn from the same distribution). Here, we use tools from the study of nonlinear dynamical systems to develop a simple mathematical model for this phenomenon. Using theoretical analysis and numerical simulation, we demonstrate that competition between individuals that is linked to the mean gamete size will almost inevitably result in a stable anisogamous equilibrium, and thus isogamy may naturally lead to anisogamy.
1602.03046
Deniz Akdemir
Deniz Akdemir, Julio Isidro Sanchez
Efficient Breeding by Genomic Mating
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we propose an approach to breeding which focuses on mating instead of truncation selection; our method uses genome-wide marker information in a similar fashion to genomic selection, so we refer to it as genomic mating. Using concepts of estimated breeding values, risk (usefulness) and inbreeding, an efficient mating approach is formulated for improvement of breeding values in the long run. We have used a genetic algorithm to find solutions to this optimization problem. Results from our simulations point to the efficiency of genomic mating for breeding complex traits compared to truncation selection.
[ { "created": "Mon, 8 Feb 2016 20:43:39 GMT", "version": "v1" }, { "created": "Mon, 27 Jun 2016 15:47:07 GMT", "version": "v2" } ]
2016-06-28
[ [ "Akdemir", "Deniz", "" ], [ "Sanchez", "Julio Isidro", "" ] ]
In this article, we propose an approach to breeding which focuses on mating instead of truncation selection; our method uses genome-wide marker information in a similar fashion to genomic selection, so we refer to it as genomic mating. Using concepts of estimated breeding values, risk (usefulness) and inbreeding, an efficient mating approach is formulated for improvement of breeding values in the long run. We have used a genetic algorithm to find solutions to this optimization problem. Results from our simulations point to the efficiency of genomic mating for breeding complex traits compared to truncation selection.
1702.00768
Xerxes D. Arsiwalla
Riccardo Zucca, Xerxes D. Arsiwalla, Hoang Le, Mikail Rubinov, Paul Verschure
Scaling Properties of Human Brain Functional Networks
International Conference on Artificial Neural Networks - ICANN 2016
Artificial Neural Networks and Machine Learning, Lecture Notes in Computer Science, vol 9886, 2016
10.1007/978-3-319-44778-0_13
null
q-bio.NC cond-mat.dis-nn cs.NE physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate scaling properties of human brain functional networks in the resting-state. Analyzing network degree distributions, we statistically test whether their tails scale as power-law or not. Initial studies, based on least-squares fitting, were shown to be inadequate for precise estimation of power-law distributions. Subsequently, methods based on maximum-likelihood estimators have been proposed and applied to address this question. Nevertheless, no clear consensus has emerged, mainly because results have shown substantial variability depending on the data-set used or its resolution. In this study, we work with high-resolution data (10K nodes) from the Human Connectome Project and take into account network weights. We test for the power-law, exponential, log-normal and generalized Pareto distributions. Our results show that the statistics generally do not support a power-law, but instead these degree distributions tend towards the thin-tail limit of the generalized Pareto model. This may have implications for the number of hubs in human brain functional networks.
[ { "created": "Thu, 2 Feb 2017 18:01:07 GMT", "version": "v1" } ]
2017-02-03
[ [ "Zucca", "Riccardo", "" ], [ "Arsiwalla", "Xerxes D.", "" ], [ "Le", "Hoang", "" ], [ "Rubinov", "Mikail", "" ], [ "Verschure", "Paul", "" ] ]
We investigate scaling properties of human brain functional networks in the resting-state. Analyzing network degree distributions, we statistically test whether their tails scale as power-law or not. Initial studies, based on least-squares fitting, were shown to be inadequate for precise estimation of power-law distributions. Subsequently, methods based on maximum-likelihood estimators have been proposed and applied to address this question. Nevertheless, no clear consensus has emerged, mainly because results have shown substantial variability depending on the data-set used or its resolution. In this study, we work with high-resolution data (10K nodes) from the Human Connectome Project and take into account network weights. We test for the power-law, exponential, log-normal and generalized Pareto distributions. Our results show that the statistics generally do not support a power-law, but instead these degree distributions tend towards the thin-tail limit of the generalized Pareto model. This may have implications for the number of hubs in human brain functional networks.
1312.3028
Daniel Balick
Daniel J. Balick, Ron Do, David Reich, and Shamil R. Sunyaev
Response to a population bottleneck can be used to infer recessive selection
35 pages including supplement, 7 figures including supplement
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here we present the first genome wide statistical test for recessive selection. This test uses explicitly non-equilibrium demographic differences between populations to infer the mode of selection. By analyzing the transient response to a population bottleneck and subsequent re-expansion, we qualitatively distinguish between alleles under additive and recessive selection. We analyze the response of the average number of deleterious mutations per haploid individual and describe time dependence of this quantity. We introduce a statistic, $B_R$, to compare the number of mutations in different populations and detail its functional dependence on the strength of selection and the intensity of the population bottleneck. This test can be used to detect the predominant mode of selection on the genome wide or regional level, as well as among a sufficiently large set of medically or functionally relevant alleles.
[ { "created": "Wed, 11 Dec 2013 03:22:54 GMT", "version": "v1" }, { "created": "Fri, 21 Mar 2014 04:05:00 GMT", "version": "v2" } ]
2014-03-24
[ [ "Balick", "Daniel J.", "" ], [ "Do", "Ron", "" ], [ "Reich", "David", "" ], [ "Sunyaev", "Shamil R.", "" ] ]
Here we present the first genome wide statistical test for recessive selection. This test uses explicitly non-equilibrium demographic differences between populations to infer the mode of selection. By analyzing the transient response to a population bottleneck and subsequent re-expansion, we qualitatively distinguish between alleles under additive and recessive selection. We analyze the response of the average number of deleterious mutations per haploid individual and describe time dependence of this quantity. We introduce a statistic, $B_R$, to compare the number of mutations in different populations and detail its functional dependence on the strength of selection and the intensity of the population bottleneck. This test can be used to detect the predominant mode of selection on the genome wide or regional level, as well as among a sufficiently large set of medically or functionally relevant alleles.
1608.00985
Artem Kaznatcheev
Artem Kaznatcheev, Robert Vander Velde, Jacob G. Scott, David Basanta
Cancer treatment scheduling and dynamic heterogeneity in social dilemmas of tumour acidity and vasculature
14 main pages (+10 pg appendix), 3 figures
null
null
null
q-bio.PE q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Tumours are diverse ecosystems with persistent heterogeneity in various cancer hallmarks like self-sufficiency of growth factor production for angiogenesis and reprogramming of energy-metabolism for aerobic glycolysis. This heterogeneity has consequences for diagnosis, treatment, and disease progression. Methods: We introduce the double goods game to study the dynamics of these traits using evolutionary game theory. We model glycolytic acid production as a public good for all tumour cells and oxygen from vascularization via VEGF production as a club good benefiting non-glycolytic tumour cells. This results in three viable phenotypic strategies: glycolytic, angiogenic, and aerobic non-angiogenic. Results: We classify the dynamics into three qualitatively distinct regimes: (1) fully glycolytic, (2) fully angiogenic, or (3) polyclonal in all three cell types. The third regime allows for dynamic heterogeneity even with linear goods, something that was not possible in prior public good models that considered glycolysis or growth-factor production in isolation. Conclusion: The cyclic dynamics of the polyclonal regime stress the importance of timing for anti-glycolysis treatments like lonidamine. The existence of qualitatively different dynamic regimes highlights the order effects of treatments. In particular, we consider the potential of vascular renormalization as a neoadjuvant therapy before follow up with interventions like buffer therapy.
[ { "created": "Tue, 2 Aug 2016 20:07:48 GMT", "version": "v1" } ]
2016-08-04
[ [ "Kaznatcheev", "Artem", "" ], [ "Velde", "Robert Vander", "" ], [ "Scott", "Jacob G.", "" ], [ "Basanta", "David", "" ] ]
Background: Tumours are diverse ecosystems with persistent heterogeneity in various cancer hallmarks like self-sufficiency of growth factor production for angiogenesis and reprogramming of energy-metabolism for aerobic glycolysis. This heterogeneity has consequences for diagnosis, treatment, and disease progression. Methods: We introduce the double goods game to study the dynamics of these traits using evolutionary game theory. We model glycolytic acid production as a public good for all tumour cells and oxygen from vascularization via VEGF production as a club good benefiting non-glycolytic tumour cells. This results in three viable phenotypic strategies: glycolytic, angiogenic, and aerobic non-angiogenic. Results: We classify the dynamics into three qualitatively distinct regimes: (1) fully glycolytic, (2) fully angiogenic, or (3) polyclonal in all three cell types. The third regime allows for dynamic heterogeneity even with linear goods, something that was not possible in prior public good models that considered glycolysis or growth-factor production in isolation. Conclusion: The cyclic dynamics of the polyclonal regime stress the importance of timing for anti-glycolysis treatments like lonidamine. The existence of qualitatively different dynamic regimes highlights the order effects of treatments. In particular, we consider the potential of vascular renormalization as a neoadjuvant therapy before follow up with interventions like buffer therapy.
1503.08324
Corey S. O'Hern
Manuel Mai, Kun Wang, Greg Huber, Michael Kirby, Mark D. Shattuck, and Corey S. O'Hern
Outcome prediction in mathematical models of immune response to infection
14 pages, 7 figures
PLoS ONE 10 (2015) e0135861
10.1371/journal.pone.0135861
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of `virtual patients', each with a given set of model parameters, and obtain many time points during the course of the infection. We implement patient-to-patient variability $v$ in the ODE models by randomly selecting the model parameters from Gaussian distributions with variance $v$ that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near $100\%$ accuracy for $v=0$, and the accuracy decreases with increasing $v$ for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for $v>0$. Increasing the elapsed time of the variables used to train and test the classifier, increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
[ { "created": "Sat, 28 Mar 2015 16:36:04 GMT", "version": "v1" } ]
2016-02-17
[ [ "Mai", "Manuel", "" ], [ "Wang", "Kun", "" ], [ "Huber", "Greg", "" ], [ "Kirby", "Michael", "" ], [ "Shattuck", "Mark D.", "" ], [ "O'Hern", "Corey S.", "" ] ]
Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of `virtual patients', each with a given set of model parameters, and obtain many time points during the course of the infection. We implement patient-to-patient variability $v$ in the ODE models by randomly selecting the model parameters from Gaussian distributions with variance $v$ that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near $100\%$ accuracy for $v=0$, and the accuracy decreases with increasing $v$ for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for $v>0$. Increasing the elapsed time of the variables used to train and test the classifier, increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
2407.16721
Ashad Kabir
Sheikh Mohammed Shariful Islam, Moloud Abrar, Teketo Tegegne, Liliana Loranjo, Chandan Karmakar, Md Abdul Awal, Md. Shahadat Hossain, Muhammad Ashad Kabir, Mufti Mahmud, Abbas Khosravi, George Siopis, Jeban C Moses, Ralph Maddison
Machine Learning Models for the Identification of Cardiovascular Diseases Using UK Biobank Data
19 pages, 3 figures
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Machine learning models have the potential to identify cardiovascular diseases (CVDs) early and accurately in primary healthcare settings, which is crucial for delivering timely treatment and management. Although population-based CVD risk models have been used traditionally, these models often do not consider variations in lifestyles, socioeconomic conditions, or genetic predispositions. Therefore, we aimed to develop machine learning models for CVD detection using primary healthcare data, compare the performance of different models, and identify the best models. We used data from the UK Biobank study, which included over 500,000 middle-aged participants from different primary healthcare centers in the UK. Data collected at baseline (2006--2010) and during imaging visits after 2014 were used in this study. Baseline characteristics, including sex, age, and the Townsend Deprivation Index, were included. Participants were classified as having CVD if they reported at least one of the following conditions: heart attack, angina, stroke, or high blood pressure. Cardiac imaging data such as electrocardiogram and echocardiography data, including left ventricular size and function, cardiac output, and stroke volume, were also used. We used 9 machine learning models (LSVM, RBFSVM, GP, DT, RF, NN, AdaBoost, NB, and QDA), which are explainable and easily interpretable. We reported the accuracy, precision, recall, and F-1 scores; confusion matrices; and area under the curve (AUC) curves.
[ { "created": "Tue, 23 Jul 2024 11:05:20 GMT", "version": "v1" } ]
2024-07-25
[ [ "Islam", "Sheikh Mohammed Shariful", "" ], [ "Abrar", "Moloud", "" ], [ "Tegegne", "Teketo", "" ], [ "Loranjo", "Liliana", "" ], [ "Karmakar", "Chandan", "" ], [ "Awal", "Md Abdul", "" ], [ "Hossain", "Md. Shahadat", "" ], [ "Kabir", "Muhammad Ashad", "" ], [ "Mahmud", "Mufti", "" ], [ "Khosravi", "Abbas", "" ], [ "Siopis", "George", "" ], [ "Moses", "Jeban C", "" ], [ "Maddison", "Ralph", "" ] ]
Machine learning models have the potential to identify cardiovascular diseases (CVDs) early and accurately in primary healthcare settings, which is crucial for delivering timely treatment and management. Although population-based CVD risk models have been used traditionally, these models often do not consider variations in lifestyles, socioeconomic conditions, or genetic predispositions. Therefore, we aimed to develop machine learning models for CVD detection using primary healthcare data, compare the performance of different models, and identify the best models. We used data from the UK Biobank study, which included over 500,000 middle-aged participants from different primary healthcare centers in the UK. Data collected at baseline (2006--2010) and during imaging visits after 2014 were used in this study. Baseline characteristics, including sex, age, and the Townsend Deprivation Index, were included. Participants were classified as having CVD if they reported at least one of the following conditions: heart attack, angina, stroke, or high blood pressure. Cardiac imaging data such as electrocardiogram and echocardiography data, including left ventricular size and function, cardiac output, and stroke volume, were also used. We used 9 machine learning models (LSVM, RBFSVM, GP, DT, RF, NN, AdaBoost, NB, and QDA), which are explainable and easily interpretable. We reported the accuracy, precision, recall, and F-1 scores; confusion matrices; and area under the curve (AUC) curves.
q-bio/0702010
Michael Meyer-Hermann
Michael Meyer-Hermann
The electrophysiology of the betacell based on single transmembrane protein characteristics
28 pages, 5 figures, 54 references, 14 pages supplementary material
null
null
null
q-bio.CB q-bio.QM
null
The electrophysiology of betacells is at the origin of insulin secretion. Betacells exhibit a complex behaviour upon stimulation with glucose including repeated and uninterrupted bursting. Mathematical modelling is most suitable to improve knowledge about the function of various transmembrane currents provided the model is based on reliable data. This is the first attempt to build a mathematical model for the betacell-electrophysiology in a bottom-up approach which relies on single protein conductivity data. The results of previous whole-cell-based models are reconsidered. The full simulation including all prominent transmembrane proteins in betacells is used to provide a functional interpretation of their role in betacell-bursting and an updated vantage point of betacell-electrophysiology. As a result of a number of in silico knock-out- and block-experiments the novel model makes some unexpected predictions: Single-channel conductivity data imply that calcium-gated potassium currents are rather small. Thus, their role in burst interruption has to be revisited. An alternative role in high calcium level oscillations is proposed and an alternative burst interruption model is presented. It also turns out that sodium currents are more relevant than expected so far. Experiments are proposed to verify these predictions.
[ { "created": "Wed, 7 Feb 2007 13:10:05 GMT", "version": "v1" } ]
2007-05-23
[ [ "Meyer-Hermann", "Michael", "" ] ]
The electrophysiology of betacells is at the origin of insulin secretion. Betacells exhibit a complex behaviour upon stimulation with glucose including repeated and uninterrupted bursting. Mathematical modelling is most suitable to improve knowledge about the function of various transmembrane currents provided the model is based on reliable data. This is the first attempt to build a mathematical model for the betacell-electrophysiology in a bottom-up approach which relies on single protein conductivity data. The results of previous whole-cell-based models are reconsidered. The full simulation including all prominent transmembrane proteins in betacells is used to provide a functional interpretation of their role in betacell-bursting and an updated vantage point of betacell-electrophysiology. As a result of a number of in silico knock-out- and block-experiments the novel model makes some unexpected predictions: Single-channel conductivity data imply that calcium-gated potassium currents are rather small. Thus, their role in burst interruption has to be revisited. An alternative role in high calcium level oscillations is proposed and an alternative burst interruption model is presented. It also turns out that sodium currents are more relevant than expected so far. Experiments are proposed to verify these predictions.
0708.3627
Walter Nadler
Walter Nadler, Ulrich H. E. Hansmann
Optimizing Replica Exchange Moves For Molecular Dynamics
4 pages, 3 figures; revised version (1 figure added), PRE in press
null
10.1103/PhysRevE.76.057102
null
q-bio.QM cond-mat.stat-mech physics.comp-ph q-bio.BM
null
In this short note we sketch the statistical physics framework of the replica exchange technique when applied to molecular dynamics simulations. In particular, we draw attention to generalized move sets that allow a variety of optimizations as well as new applications of the method.
[ { "created": "Mon, 27 Aug 2007 15:18:03 GMT", "version": "v1" }, { "created": "Tue, 28 Aug 2007 09:18:00 GMT", "version": "v2" }, { "created": "Tue, 16 Oct 2007 09:26:27 GMT", "version": "v3" } ]
2009-11-13
[ [ "Nadler", "Walter", "" ], [ "Hansmann", "Ulrich H. E.", "" ] ]
In this short note we sketch the statistical physics framework of the replica exchange technique when applied to molecular dynamics simulations. In particular, we draw attention to generalized move sets that allow a variety of optimizations as well as new applications of the method.
1902.08357
Xiaotao Li
Xiaotao Li, Xuejing Chen, Fangfang Fan, Li Ning, Kangguang Lin, Zan Chen, Zhenyun Qin, Albert S. Yeung, Liping Wang, Xiaojian Li, Kwok-Fai So
Cognitive computation of brain disorders based primarily on ocular responses
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
The present review presents multiple techniques in which ocular assessments may serve as a noninvasive approach for the early diagnoses of various cognitive and psychiatric disorders, such as Alzheimer's disease (AD), autism spectrum disorder (ASD), schizophrenia (SZ), and major depressive disorder (MDD). Real-time ocular responses are tightly associated with emotional and cognitive processing within the central nervous system. Patterns seen in saccades, pupillary responses, and blinking, as well as retinal microvasculature and morphology visualized via office-based ophthalmic imaging, are potential biomarkers for the screening and evaluation of cognitive and psychiatric disorders. Additionally, rapid advances in artificial intelligence (AI) present a growing opportunity to use machine-learning-based AI, especially deep-learning neural networks, to shed new light on the field of cognitive neuroscience, which may lead to novel evaluations and interventions via ocular approaches for cognitive and psychiatric disorders.
[ { "created": "Fri, 22 Feb 2019 04:18:16 GMT", "version": "v1" }, { "created": "Wed, 20 Mar 2019 14:40:24 GMT", "version": "v2" }, { "created": "Fri, 3 Apr 2020 17:23:03 GMT", "version": "v3" } ]
2020-04-06
[ [ "Li", "Xiaotao", "" ], [ "Chen", "Xuejing", "" ], [ "Fan", "Fangfang", "" ], [ "Ning", "Li", "" ], [ "Lin", "Kangguang", "" ], [ "Chen", "Zan", "" ], [ "Qin", "Zhenyun", "" ], [ "Yeung", "Albert S.", "" ], [ "Wang", "Liping", "" ], [ "Li", "Xiaojian", "" ], [ "So", "Kwok-Fai", "" ] ]
The present review presents multiple techniques in which ocular assessments may serve as a noninvasive approach for the early diagnoses of various cognitive and psychiatric disorders, such as Alzheimer's disease (AD), autism spectrum disorder (ASD), schizophrenia (SZ), and major depressive disorder (MDD). Real-time ocular responses are tightly associated with emotional and cognitive processing within the central nervous system. Patterns seen in saccades, pupillary responses, and blinking, as well as retinal microvasculature and morphology visualized via office-based ophthalmic imaging, are potential biomarkers for the screening and evaluation of cognitive and psychiatric disorders. Additionally, rapid advances in artificial intelligence (AI) present a growing opportunity to use machine-learning-based AI, especially deep-learning neural networks, to shed new light on the field of cognitive neuroscience, which may lead to novel evaluations and interventions via ocular approaches for cognitive and psychiatric disorders.
2005.12390
Constantinos Siettos
Ioannis Gallos, Evangelos Galaris, Constantinos Siettos
Construction of embedded fMRI resting state functional connectivity networks using manifold learning
null
Cogn Neurodyn 15, 585-608 (2021)
10.1007/s11571-020-09645-y
null
q-bio.NC cs.LG cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We construct embedded functional connectivity networks (FCN) from benchmark resting-state functional magnetic resonance imaging (rsfMRI) data acquired from patients with schizophrenia and healthy controls based on linear and nonlinear manifold learning algorithms, namely, Multidimensional Scaling (MDS), Isometric Feature Mapping (ISOMAP) and Diffusion Maps. Furthermore, based on key global graph-theoretical properties of the embedded FCN, we compare their classification potential using machine learning techniques. We also assess the performance of two metrics that are widely used for the construction of FCN from fMRI, namely the Euclidean distance and the lagged cross-correlation metric. We show that the FCN constructed with Diffusion Maps and the lagged cross-correlation metric outperform the other combinations.
[ { "created": "Mon, 25 May 2020 20:39:29 GMT", "version": "v1" } ]
2023-03-24
[ [ "Gallos", "Ioannis", "" ], [ "Galaris", "Evangelos", "" ], [ "Siettos", "Constantinos", "" ] ]
We construct embedded functional connectivity networks (FCN) from benchmark resting-state functional magnetic resonance imaging (rsfMRI) data acquired from patients with schizophrenia and healthy controls based on linear and nonlinear manifold learning algorithms, namely, Multidimensional Scaling (MDS), Isometric Feature Mapping (ISOMAP) and Diffusion Maps. Furthermore, based on key global graph-theoretical properties of the embedded FCN, we compare their classification potential using machine learning techniques. We also assess the performance of two metrics that are widely used for the construction of FCN from fMRI, namely the Euclidean distance and the lagged cross-correlation metric. We show that the FCN constructed with Diffusion Maps and the lagged cross-correlation metric outperform the other combinations.
2108.01765
John Rhodes
Elizabeth S. Allman, Hector Ba\~nos, John A. Rhodes
Identifiability of species network topologies from genomic sequences using the logDet distance
25 pages
null
null
null
q-bio.PE math.ST stat.TH
http://creativecommons.org/licenses/by/4.0/
Inference of network-like evolutionary relationships between species from genomic data must address the interwoven signals from both gene flow and incomplete lineage sorting. The heavy computational demands of standard approaches to this problem severely limit the size of datasets that may be analyzed, in both the number of species and the number of genetic loci. Here we provide a theoretical pointer to more efficient methods, by showing that logDet distances computed from genomic-scale sequences retain sufficient information to recover network relationships in the level-1 ultrametric case. This result is obtained under the Network Multispecies Coalescent model combined with a mixture of General Time-Reversible sequence evolution models across individual gene trees, but does not depend on partitioning sequences by genes. Thus under standard stochastic models statistically justifiable inference of network relationships from sequences can be accomplished without consideration of individual genes or gene trees.
[ { "created": "Tue, 3 Aug 2021 21:58:19 GMT", "version": "v1" } ]
2021-08-05
[ [ "Allman", "Elizabeth S.", "" ], [ "Baños", "Hector", "" ], [ "Rhodes", "John A.", "" ] ]
Inference of network-like evolutionary relationships between species from genomic data must address the interwoven signals from both gene flow and incomplete lineage sorting. The heavy computational demands of standard approaches to this problem severely limit the size of datasets that may be analyzed, in both the number of species and the number of genetic loci. Here we provide a theoretical pointer to more efficient methods, by showing that logDet distances computed from genomic-scale sequences retain sufficient information to recover network relationships in the level-1 ultrametric case. This result is obtained under the Network Multispecies Coalescent model combined with a mixture of General Time-Reversible sequence evolution models across individual gene trees, but does not depend on partitioning sequences by genes. Thus under standard stochastic models statistically justifiable inference of network relationships from sequences can be accomplished without consideration of individual genes or gene trees.
1806.05547
Erik Doty
E. Doty, N. McCague, D.J. Stone, L.A. Celi
Analyzing counterintuitive data
11 pages, 2 figures, 4 tables
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: To explore the issue of counterintuitive data via analysis of a representative case and further discussion of those situations in which the data appear to be inconsistent with current knowledge. Case: 844 postoperative CABG patients, who were extubated within 24 hours of surgery were identified in a critical care database (MIMIC-III). Nurse elicited pain scores were documented throughout their hospital stay on a scale of 0 to 10. Levels were tracked as mean, median, and maximum values, and categorized as no (0/10), mild (1-3), moderate (4-6) and severe pain (7-10). Regression analysis was employed to analyze the relationship between pain scores and outcomes of interest (mortality and hospital LOS). After covariate adjustment, increased levels of pain were found to be associated with lower mortality rates and reduced hospital LOS. Conclusion: These counterintuitive results for post-CABG pain related outcomes have not been previously reported. While not representing strong enough evidence to alter clinical practice, confirmed and reliable results such as these should serve as a research trigger and prompt further studies into unexpected associations between pain and patient outcomes. With the advent of frequent secondary analysis of electronic health records, such counterintuitive data results are likely to become more frequent. We discuss the issue of counterintuitive data in extended fashion, including possible reasons for, and approaches to, this phenomenon.
[ { "created": "Tue, 12 Jun 2018 23:40:07 GMT", "version": "v1" } ]
2018-06-15
[ [ "Doty", "E.", "" ], [ "McCague", "N.", "" ], [ "Stone", "D. J.", "" ], [ "Celi", "L. A.", "" ] ]
Purpose: To explore the issue of counterintuitive data via analysis of a representative case and further discussion of those situations in which the data appear to be inconsistent with current knowledge. Case: 844 postoperative CABG patients, who were extubated within 24 hours of surgery were identified in a critical care database (MIMIC-III). Nurse elicited pain scores were documented throughout their hospital stay on a scale of 0 to 10. Levels were tracked as mean, median, and maximum values, and categorized as no (0/10), mild (1-3), moderate (4-6) and severe pain (7-10). Regression analysis was employed to analyze the relationship between pain scores and outcomes of interest (mortality and hospital LOS). After covariate adjustment, increased levels of pain were found to be associated with lower mortality rates and reduced hospital LOS. Conclusion: These counterintuitive results for post-CABG pain related outcomes have not been previously reported. While not representing strong enough evidence to alter clinical practice, confirmed and reliable results such as these should serve as a research trigger and prompt further studies into unexpected associations between pain and patient outcomes. With the advent of frequent secondary analysis of electronic health records, such counterintuitive data results are likely to become more frequent. We discuss the issue of counterintuitive data in extended fashion, including possible reasons for, and approaches to, this phenomenon.
q-bio/0402008
Tom Chou
Kevin Klapstein, Tom Chou, and Robijn Bruinsma
Physics of RecA-mediated homologous recognition
12pp, 10 figures
null
10.1529/biophysj.104.039578
null
q-bio.BM cond-mat.stat-mech q-bio.GN
null
Most proteins involved in processing DNA accomplish their activities as a monomer or as a component of a multimer containing a relatively small number of other elements. They generally act locally, binding to one or a few small regions of the DNA substrate. Striking exceptions are the \textit{E. coli} protein RecA and its homologues in other species, whose activities are associated with homologous DNA recombination. The active form of RecA in DNA recombination is a stiff nucleoprotein filament formed by RecA and DNA, within which the DNA is extended by 50%. Invoking physical and geometrical ideas, we show that the filamentary organization greatly enhances the rate of homologous recognition while preventing the formation of topological traps originating from multi-site recognition.
[ { "created": "Wed, 4 Feb 2004 21:47:42 GMT", "version": "v1" } ]
2009-11-10
[ [ "Klapstein", "Kevin", "" ], [ "Chou", "Tom", "" ], [ "Bruinsma", "Robijn", "" ] ]
Most proteins involved in processing DNA accomplish their activities as a monomer or as a component of a multimer containing a relatively small number of other elements. They generally act locally, binding to one or a few small regions of the DNA substrate. Striking exceptions are the \textit{E. coli} protein RecA and its homologues in other species, whose activities are associated with homologous DNA recombination. The active form of RecA in DNA recombination is a stiff nucleoprotein filament formed by RecA and DNA, within which the DNA is extended by 50%. Invoking physical and geometrical ideas, we show that the filamentary organization greatly enhances the rate of homologous recognition while preventing the formation of topological traps originating from multi-site recognition.
1202.6505
Ralf Metzler
Leila Esmaeili Sereshki, Michael A. Lomholt, and Ralf Metzler
A solution to the subdiffusion-efficiency paradox: Inactive states enhance reaction efficiency at subdiffusion conditions in living cells
6 plus epsilon pages, 6 figures
EPL 97, 20008 (2012)
null
null
q-bio.SC cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Macromolecular crowding in living biological cells effects subdiffusion of larger biomolecules such as proteins and enzymes. Mimicking this subdiffusion in terms of random walks on a critical percolation cluster, we here present a case study of EcoRV restriction enzymes involved in vital cellular defence. We show that due to its so far elusive propensity to an inactive state the enzyme avoids non-specific binding and remains well-distributed in the bulk cytoplasm of the cell. Despite the reduced volume exploration capability of subdiffusion processes, this mechanism guarantees a high efficiency of the enzyme. By variation of the non-specific binding constant and the bond occupation probability on the percolation network, we demonstrate that reduced non-specific binding is beneficial for efficient subdiffusive enzyme activity even in relatively small bacterial cells. Our results corroborate a more local picture of cellular regulation.
[ { "created": "Wed, 29 Feb 2012 10:22:55 GMT", "version": "v1" }, { "created": "Thu, 8 Mar 2012 17:03:58 GMT", "version": "v2" } ]
2012-03-09
[ [ "Sereshki", "Leila Esmaeili", "" ], [ "Lomholt", "Michael A.", "" ], [ "Metzler", "Ralf", "" ] ]
Macromolecular crowding in living biological cells effects subdiffusion of larger biomolecules such as proteins and enzymes. Mimicking this subdiffusion in terms of random walks on a critical percolation cluster, we here present a case study of EcoRV restriction enzymes involved in vital cellular defence. We show that due to its so far elusive propensity to an inactive state the enzyme avoids non-specific binding and remains well-distributed in the bulk cytoplasm of the cell. Despite the reduced volume exploration capability of subdiffusion processes, this mechanism guarantees a high efficiency of the enzyme. By variation of the non-specific binding constant and the bond occupation probability on the percolation network, we demonstrate that reduced non-specific binding is beneficial for efficient subdiffusive enzyme activity even in relatively small bacterial cells. Our results corroborate a more local picture of cellular regulation.
1106.6344
Joel Miller
Joel C. Miller and Erik M. Volz
Edge-Based Compartmental Modeling for Infectious Disease Spread Part III: Disease and Population Structure
null
PLoS ONE 8(8): e69162. 2013
10.1371/journal.pone.0069162
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the edge-based compartmental models for infectious disease spread introduced in Part I. These models allow us to consider standard SIR diseases spreading in random populations. In this paper we show how to handle deviations of the disease or population from the simplistic assumptions of Part I. We allow the population to have structure due to effects such as demographic detail or multiple types of risk behavior, and the disease to have a more complicated natural history. We introduce these modifications in the static network context, though it is straightforward to incorporate them into dynamic networks. We also consider serosorting, which requires using the dynamic network models. The basic methods we use to derive these generalizations are widely applicable, and so it is straightforward to introduce many other generalizations not considered here.
[ { "created": "Thu, 30 Jun 2011 19:11:29 GMT", "version": "v1" } ]
2015-09-03
[ [ "Miller", "Joel C.", "" ], [ "Volz", "Erik M.", "" ] ]
We consider the edge-based compartmental models for infectious disease spread introduced in Part I. These models allow us to consider standard SIR diseases spreading in random populations. In this paper we show how to handle deviations of the disease or population from the simplistic assumptions of Part I. We allow the population to have structure due to effects such as demographic detail or multiple types of risk behavior, and the disease to have a more complicated natural history. We introduce these modifications in the static network context, though it is straightforward to incorporate them into dynamic networks. We also consider serosorting, which requires using the dynamic network models. The basic methods we use to derive these generalizations are widely applicable, and so it is straightforward to introduce many other generalizations not considered here.
2407.09355
Aaron Ge
Aaron Ge, Jeya Balasubramanian, Xueyao Wu, Peter Kraft, and Jonas S. Almeida
FastImpute: A Baseline for Open-source, Reference-Free Genotype Imputation Methods -- A Case Study in PRS313
This paper is 16 pages long and contains 7 figures. For more information and to access related resources: * Web application: https://aaronge-2020.github.io/DeepImpute/ * Code repository: https://github.com/aaronge-2020/DeepImpute
null
null
null
q-bio.GN cs.AI
http://creativecommons.org/licenses/by/4.0/
Genotype imputation enhances genetic data by predicting missing SNPs using reference haplotype information. Traditional methods leverage linkage disequilibrium (LD) to infer untyped SNP genotypes, relying on the similarity of LD structures between genotyped target sets and fully sequenced reference panels. Recently, reference-free deep learning-based methods have emerged, offering a promising alternative by predicting missing genotypes without external databases, thereby enhancing privacy and accessibility. However, these methods often produce models with tens of millions of parameters, leading to challenges such as the need for substantial computational resources to train and inefficiency for client-sided deployment. Our study addresses these limitations by introducing a baseline for a novel genotype imputation pipeline that supports client-sided imputation models generalizable across any genotyping chip and genomic region. This approach enhances patient privacy by performing imputation directly on edge devices. As a case study, we focus on PRS313, a polygenic risk score comprising 313 SNPs used for breast cancer risk prediction. Utilizing consumer genetic panels such as 23andMe, our model democratizes access to personalized genetic insights by allowing 23andMe users to obtain their PRS313 score. We demonstrate that simple linear regression can significantly improve the accuracy of PRS313 scores when calculated using SNPs imputed from consumer gene panels, such as 23andMe. Our linear regression model achieved an R^2 of 0.86, compared to 0.33 without imputation and 0.28 with simple imputation (substituting missing SNPs with the minor allele frequency). These findings suggest that popular SNP analysis libraries could benefit from integrating linear regression models for genotype imputation, providing a viable and light-weight alternative to reference based imputation.
[ { "created": "Fri, 12 Jul 2024 15:28:13 GMT", "version": "v1" } ]
2024-07-15
[ [ "Ge", "Aaron", "" ], [ "Balasubramanian", "Jeya", "" ], [ "Wu", "Xueyao", "" ], [ "Kraft", "Peter", "" ], [ "Almeida", "Jonas S.", "" ] ]
Genotype imputation enhances genetic data by predicting missing SNPs using reference haplotype information. Traditional methods leverage linkage disequilibrium (LD) to infer untyped SNP genotypes, relying on the similarity of LD structures between genotyped target sets and fully sequenced reference panels. Recently, reference-free deep learning-based methods have emerged, offering a promising alternative by predicting missing genotypes without external databases, thereby enhancing privacy and accessibility. However, these methods often produce models with tens of millions of parameters, leading to challenges such as the need for substantial computational resources to train and inefficiency for client-sided deployment. Our study addresses these limitations by introducing a baseline for a novel genotype imputation pipeline that supports client-sided imputation models generalizable across any genotyping chip and genomic region. This approach enhances patient privacy by performing imputation directly on edge devices. As a case study, we focus on PRS313, a polygenic risk score comprising 313 SNPs used for breast cancer risk prediction. Utilizing consumer genetic panels such as 23andMe, our model democratizes access to personalized genetic insights by allowing 23andMe users to obtain their PRS313 score. We demonstrate that simple linear regression can significantly improve the accuracy of PRS313 scores when calculated using SNPs imputed from consumer gene panels, such as 23andMe. Our linear regression model achieved an R^2 of 0.86, compared to 0.33 without imputation and 0.28 with simple imputation (substituting missing SNPs with the minor allele frequency). These findings suggest that popular SNP analysis libraries could benefit from integrating linear regression models for genotype imputation, providing a viable and light-weight alternative to reference based imputation.
1212.2658
Etienne Rajon
Etienne Rajon and Joanna Masel
Compensatory evolution and the origins of innovations
null
Genetics 193 (2013) 1209-1220
10.1534/genetics.112.148627
null
q-bio.PE q-bio.GN q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cryptic genetic sequences have attenuated effects on phenotypes. In the classic view, relaxed selection allows cryptic genetic diversity to build up across individuals in a population, providing alleles that may later contribute to adaptation when co-opted - e.g. following a mutation increasing expression from a low, attenuated baseline. This view is described, for example, by the metaphor of the spread of a population across a neutral network in genotype space. As an alternative view, consider the fact that most phenotypic traits are affected by multiple sequences, including cryptic ones. Even in a strictly clonal population, the co-option of cryptic sequences at different loci may have different phenotypic effects and offer the population multiple adaptive possibilities. Here, we model the evolution of quantitative phenotypic characters encoded by cryptic sequences, and compare the relative contributions of genetic diversity and of variation across sites to the phenotypic potential of a population. We show that most of the phenotypic variation accessible through co-option would exist even in populations with no polymorphism. This is made possible by a history of compensatory evolution, whereby the phenotypic effect of a cryptic mutation at one site was balanced by mutations elsewhere in the genome, leading to a diversity of cryptic effect sizes across sites rather than across individuals. Cryptic sequences might accelerate adaptation and facilitate large phenotypic changes even in the absence of genetic diversity, as traditionally defined in terms of alternative alleles.
[ { "created": "Tue, 11 Dec 2012 21:48:57 GMT", "version": "v1" } ]
2013-05-23
[ [ "Rajon", "Etienne", "" ], [ "Masel", "Joanna", "" ] ]
Cryptic genetic sequences have attenuated effects on phenotypes. In the classic view, relaxed selection allows cryptic genetic diversity to build up across individuals in a population, providing alleles that may later contribute to adaptation when co-opted - e.g. following a mutation increasing expression from a low, attenuated baseline. This view is described, for example, by the metaphor of the spread of a population across a neutral network in genotype space. As an alternative view, consider the fact that most phenotypic traits are affected by multiple sequences, including cryptic ones. Even in a strictly clonal population, the co-option of cryptic sequences at different loci may have different phenotypic effects and offer the population multiple adaptive possibilities. Here, we model the evolution of quantitative phenotypic characters encoded by cryptic sequences, and compare the relative contributions of genetic diversity and of variation across sites to the phenotypic potential of a population. We show that most of the phenotypic variation accessible through co-option would exist even in populations with no polymorphism. This is made possible by a history of compensatory evolution, whereby the phenotypic effect of a cryptic mutation at one site was balanced by mutations elsewhere in the genome, leading to a diversity of cryptic effect sizes across sites rather than across individuals. Cryptic sequences might accelerate adaptation and facilitate large phenotypic changes even in the absence of genetic diversity, as traditionally defined in terms of alternative alleles.
0708.0572
Eduardo Candelario-Jalil
E. Candelario-Jalil, N. H. Mhadu, S. M. Al-Dalain, G. Martinez, O. S. Leon
Time course of oxidative damage in different brain regions following transient cerebral ischemia in gerbils
null
Neuroscience Research 41(3): 233-241 (2001)
null
null
q-bio.TO
null
The time course of oxidative damage in different brain regions was investigated in the gerbil model of transient cerebral ischemia. Animals were subjected to both common carotid arteries occlusion for 5 min. After the end of ischemia and at different reperfusion times (2, 6, 12, 24, 48, 72, 96 h and 7 days), markers of lipid peroxidation, reduced and oxidized glutathione levels, glutathione peroxidase, glutathione reductase, manganese-dependent superoxide dismutase (MnSOD) and copper/zinc containing SOD (Cu/ZnSOD) activities were measured in hippocampus, cortex and striatum. Oxidative damage in hippocampus was maximal at late stages after ischemia (48-96 h) coincident with a significant impairment in glutathione homeostasis. MnSOD increased in hippocampus at 24, 48 and 72 h after ischemia, coincident with the marked reduction in the activity of glutathione-related enzymes. The late disturbance in oxidant-antioxidant balance corresponds with the time course of delayed neuronal loss in the hippocampal CA1 sector. Cerebral cortex showed early changes in oxidative damage with no significant impairment in antioxidant capacity. Striatal lipid peroxidation significantly increased as early as 2 h after ischemia and persisted until 48 h with respect to the sham-operated group. These results contribute significant information on the timing and factors that influence free radical formation following ischemic brain injury, an essential step in determining effective antioxidant intervention.
[ { "created": "Fri, 3 Aug 2007 19:42:03 GMT", "version": "v1" } ]
2007-08-06
[ [ "Candelario-Jalil", "E.", "" ], [ "Mhadu", "N. H.", "" ], [ "Al-Dalain", "S. M.", "" ], [ "Martinez", "G.", "" ], [ "Leon", "O. S.", "" ] ]
The time course of oxidative damage in different brain regions was investigated in the gerbil model of transient cerebral ischemia. Animals were subjected to both common carotid arteries occlusion for 5 min. After the end of ischemia and at different reperfusion times (2, 6, 12, 24, 48, 72, 96 h and 7 days), markers of lipid peroxidation, reduced and oxidized glutathione levels, glutathione peroxidase, glutathione reductase, manganese-dependent superoxide dismutase (MnSOD) and copper/zinc containing SOD (Cu/ZnSOD) activities were measured in hippocampus, cortex and striatum. Oxidative damage in hippocampus was maximal at late stages after ischemia (48-96 h) coincident with a significant impairment in glutathione homeostasis. MnSOD increased in hippocampus at 24, 48 and 72 h after ischemia, coincident with the marked reduction in the activity of glutathione-related enzymes. The late disturbance in oxidant-antioxidant balance corresponds with the time course of delayed neuronal loss in the hippocampal CA1 sector. Cerebral cortex showed early changes in oxidative damage with no significant impairment in antioxidant capacity. Striatal lipid peroxidation significantly increased as early as 2 h after ischemia and persisted until 48 h with respect to the sham-operated group. These results contribute significant information on the timing and factors that influence free radical formation following ischemic brain injury, an essential step in determining effective antioxidant intervention.
2405.19159
Hideaki Yamamoto
Hakuba Murota, Hideaki Yamamoto, Nobuaki Monma, Shigeo Sato, Ayumi Hirano-Iwata
Precision microfluidic control of neuronal ensembles in cultured cortical networks
30 pages, 6 figures, 6 supplementary figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
In vitro neuronal culture is an important research platform in cellular and network neuroscience. However, neurons cultured on a homogeneous scaffold form dense, randomly connected networks and display excessively synchronized activity; this phenomenon has limited their applications in network-level studies, such as studies of neuronal ensembles, or coordinated activity by a group of neurons. Herein, we develop polydimethylsiloxane-based microfluidic devices to create small neuronal networks exhibiting a hierarchically modular structure resembling the connectivity observed in the mammalian cortex. The strength of intermodular coupling was manipulated by varying the width and height of the microchannels that connect the modules. Using fluorescent calcium imaging, we observe that the spontaneous activity in networks with smaller microchannels (2.2$-$5.5 $\mu$m$^2$) had lower synchrony and exhibited a threefold variety of neuronal ensembles. Optogenetic stimulation demonstrates that a reduction in intermodular coupling enriches evoked neuronal activity patterns and that repeated stimulation induces plasticity in neuronal ensembles in these networks. These findings suggest that cell engineering technologies based on microfluidic devices enable in vitro reconstruction of the intricate dynamics of neuronal ensembles, thus providing a robust platform for studying neuronal ensembles in a well-defined physicochemical environment.
[ { "created": "Wed, 29 May 2024 15:02:28 GMT", "version": "v1" } ]
2024-05-30
[ [ "Murota", "Hakuba", "" ], [ "Yamamoto", "Hideaki", "" ], [ "Monma", "Nobuaki", "" ], [ "Sato", "Shigeo", "" ], [ "Hirano-Iwata", "Ayumi", "" ] ]
In vitro neuronal culture is an important research platform in cellular and network neuroscience. However, neurons cultured on a homogeneous scaffold form dense, randomly connected networks and display excessively synchronized activity; this phenomenon has limited their applications in network-level studies, such as studies of neuronal ensembles, or coordinated activity by a group of neurons. Herein, we develop polydimethylsiloxane-based microfluidic devices to create small neuronal networks exhibiting a hierarchically modular structure resembling the connectivity observed in the mammalian cortex. The strength of intermodular coupling was manipulated by varying the width and height of the microchannels that connect the modules. Using fluorescent calcium imaging, we observe that the spontaneous activity in networks with smaller microchannels (2.2$-$5.5 $\mu$m$^2$) had lower synchrony and exhibited a threefold variety of neuronal ensembles. Optogenetic stimulation demonstrates that a reduction in intermodular coupling enriches evoked neuronal activity patterns and that repeated stimulation induces plasticity in neuronal ensembles in these networks. These findings suggest that cell engineering technologies based on microfluidic devices enable in vitro reconstruction of the intricate dynamics of neuronal ensembles, thus providing a robust platform for studying neuronal ensembles in a well-defined physicochemical environment.
1308.1252
Marc H. E. de Lussanet PhD
Marc H.E. de Lussanet
The human and mammalian cerebrum scale by computational power and information resistance
There are some crucial flaws in the calculations
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cerebrum of mammals spans a vast range of sizes and yet has a very regular structure. The amount of folding of the cortical surface and the proportion of white matter gradually increase with size, but the underlying mechanisms remain elusive. Here, two laws are derived to fully explain these cerebral scaling relations. The first law holds that the long-range information flow in the cerebrum is determined by the total cortical surface (i.e., the number of neurons) and the increasing information resistance of long-range connections. Despite having just one free parameter, the first law fits the mammalian cerebrum better than any existing function, both across species and within humans. According to the second law, the white matter volume scales, with a few minor corrections, to the cortical surface area. It follows from the first law that large cerebrums have much local processing and little global information flow. Moreover, paradoxically, a further increase in long-range connections would decrease the efficiency of information flow.
[ { "created": "Tue, 6 Aug 2013 12:13:18 GMT", "version": "v1" }, { "created": "Fri, 10 Jun 2022 19:52:09 GMT", "version": "v2" } ]
2022-06-14
[ [ "de Lussanet", "Marc H. E.", "" ] ]
The cerebrum of mammals spans a vast range of sizes and yet has a very regular structure. The amount of folding of the cortical surface and the proportion of white matter gradually increase with size, but the underlying mechanisms remain elusive. Here, two laws are derived to fully explain these cerebral scaling relations. The first law holds that the long-range information flow in the cerebrum is determined by the total cortical surface (i.e., the number of neurons) and the increasing information resistance of long-range connections. Despite having just one free parameter, the first law fits the mammalian cerebrum better than any existing function, both across species and within humans. According to the second law, the white matter volume scales, with a few minor corrections, to the cortical surface area. It follows from the first law that large cerebrums have much local processing and little global information flow. Moreover, paradoxically, a further increase in long-range connections would decrease the efficiency of information flow.
2308.12585
Il Memming Park
Il Memming Park and \'Abel S\'agodi and Piotr Aleksander Sok\'o\l
Persistent learning signals and working memory without continuous attractors
null
null
null
null
q-bio.NC cs.LG cs.NE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural dynamical systems with stable attractor structures, such as point attractors and continuous attractors, are hypothesized to underlie meaningful temporal behavior that requires working memory. However, working memory may not support useful learning signals necessary to adapt to changes in the temporal structure of the environment. We show that in addition to the continuous attractors that are widely implicated, periodic and quasi-periodic attractors can also support learning arbitrarily long temporal relationships. Unlike the continuous attractors that suffer from the fine-tuning problem, the less explored quasi-periodic attractors are uniquely qualified for learning to produce temporally structured behavior. Our theory has broad implications for the design of artificial learning systems and makes predictions about observable signatures of biological neural dynamics that can support temporal dependence learning and working memory. Based on our theory, we developed a new initialization scheme for artificial recurrent neural networks that outperforms standard methods for tasks that require learning temporal dynamics. Moreover, we propose a robust recurrent memory mechanism for integrating and maintaining head direction without a ring attractor.
[ { "created": "Thu, 24 Aug 2023 06:12:41 GMT", "version": "v1" } ]
2023-08-25
[ [ "Park", "Il Memming", "" ], [ "Ságodi", "Ábel", "" ], [ "Sokół", "Piotr Aleksander", "" ] ]
Neural dynamical systems with stable attractor structures, such as point attractors and continuous attractors, are hypothesized to underlie meaningful temporal behavior that requires working memory. However, working memory may not support useful learning signals necessary to adapt to changes in the temporal structure of the environment. We show that in addition to the continuous attractors that are widely implicated, periodic and quasi-periodic attractors can also support learning arbitrarily long temporal relationships. Unlike the continuous attractors that suffer from the fine-tuning problem, the less explored quasi-periodic attractors are uniquely qualified for learning to produce temporally structured behavior. Our theory has broad implications for the design of artificial learning systems and makes predictions about observable signatures of biological neural dynamics that can support temporal dependence learning and working memory. Based on our theory, we developed a new initialization scheme for artificial recurrent neural networks that outperforms standard methods for tasks that require learning temporal dynamics. Moreover, we propose a robust recurrent memory mechanism for integrating and maintaining head direction without a ring attractor.
0912.2366
Rafael Dias Vilela
Rafael D. Vilela and Benjamin Lindner
Are the input parameters of white-noise-driven integrate-and-fire neurons uniquely determined by rate and CV?
null
J. Theor. Biol. 257, 90 (2009)
null
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Integrate-and-fire (IF) neurons have found widespread applications in computational neuroscience. Particularly important are stochastic versions of these models where the driving consists of a synaptic input modeled as white Gaussian noise with mean $\mu$ and noise intensity $D$. Different IF models have been proposed, the firing statistics of which depends nontrivially on the input parameters $\mu$ and $D$. In order to compare these models among each other, one must first specify the correspondence between their parameters. This can be done by determining which set of parameters ($\mu$, $D$) of each model is associated to a given set of basic firing statistics as, for instance, the firing rate and the coefficient of variation (CV) of the interspike interval (ISI). However, it is not clear {\em a priori} whether for a given firing rate and CV there is only one unique choice of input parameters for each model. Here we review the dependence of rate and CV on input parameters for the perfect, leaky, and quadratic IF neuron models and show analytically that indeed in these three models the firing rate and the CV uniquely determine the input parameters.
[ { "created": "Fri, 11 Dec 2009 22:00:45 GMT", "version": "v1" } ]
2009-12-15
[ [ "Vilela", "Rafael D.", "" ], [ "Lindner", "Benjamin", "" ] ]
Integrate-and-fire (IF) neurons have found widespread applications in computational neuroscience. Particularly important are stochastic versions of these models where the driving consists of a synaptic input modeled as white Gaussian noise with mean $\mu$ and noise intensity $D$. Different IF models have been proposed, the firing statistics of which depends nontrivially on the input parameters $\mu$ and $D$. In order to compare these models among each other, one must first specify the correspondence between their parameters. This can be done by determining which set of parameters ($\mu$, $D$) of each model is associated to a given set of basic firing statistics as, for instance, the firing rate and the coefficient of variation (CV) of the interspike interval (ISI). However, it is not clear {\em a priori} whether for a given firing rate and CV there is only one unique choice of input parameters for each model. Here we review the dependence of rate and CV on input parameters for the perfect, leaky, and quadratic IF neuron models and show analytically that indeed in these three models the firing rate and the CV uniquely determine the input parameters.
1609.00032
Qing Wan
Chang Jin Wan, Wei Wang, Li Qiang Zhu, Yang Hui Liu, Ping Feng, Zhao Ping Liu, Yi Shi, and Qing Wan
Flexible Metal Oxide/Graphene Oxide Hybrid Neuromorphic Devices on Flexible Conducting Graphene Substrates
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Flexible metal oxide/graphene oxide hybrid multi-gate neuron transistors were fabricated on flexible graphene substrates. Dendritic integrations in both spatial and temporal modes were successfully emulated, and spatiotemporal correlated logics were obtained. A proof-of-principle visual system model for emulating lobula giant motion detector neuron was investigated. Our results are of great interest for flexible neuromorphic cognitive systems.
[ { "created": "Mon, 7 Mar 2016 07:02:51 GMT", "version": "v1" } ]
2016-09-02
[ [ "Wan", "Chang Jin", "" ], [ "Wang", "Wei", "" ], [ "Zhu", "Li Qiang", "" ], [ "Liu", "Yang Hui", "" ], [ "Feng", "Ping", "" ], [ "Liu", "Zhao Ping", "" ], [ "Shi", "Yi", "" ], [ "Wan", "Qing", "" ] ]
Flexible metal oxide/graphene oxide hybrid multi-gate neuron transistors were fabricated on flexible graphene substrates. Dendritic integrations in both spatial and temporal modes were successfully emulated, and spatiotemporal correlated logics were obtained. A proof-of-principle visual system model for emulating lobula giant motion detector neuron was investigated. Our results are of great interest for flexible neuromorphic cognitive systems.
2211.03774
Puttipong Pongtanapaisan
Isabel K. Darcy, Garrett Jones and Puttipong Pongtanapaisan
Modeling knotted proteins with tangles
null
null
null
null
q-bio.BM math.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although rare, an increasing number of proteins have been observed to contain entanglements in their native structures. To gain more insight into the significance of protein knotting, researchers have been investigating protein knot formation using both experimental and theoretical methods. Motivated by the hypothesized folding pathway of $\alpha$-haloacid dehalogenase (DehI) protein, Flapan, He, and Wong proposed a theory of how protein knots form, which includes existing folding pathways described by Taylor and B\"olinger et al. as special cases. In their topological descriptions, two loops in an unknotted open protein chain containing at most two twists each come close together, and one end of the protein eventually passes through the two loops. In this paper, we build on Flapan, He, and Wong's theory where we pay attention to the crossing signs of the threading process and assume that the unknotted protein chain may arrange itself into a more complicated configuration before threading occurs. We then apply tangle calculus, originally developed by Ernst and Sumners to analyze the action of specific proteins on DNA, to give all possible knots or knotoids that may be discovered in the future according to our model and give recipes for engineering specific knots in proteins from simpler pieces. We show why twist knots are the most likely knots to occur in proteins. We use chirality to show that the most likely knots to occur in proteins via Taylor's twisted hairpin model are the knots $+3_1$, $4_1$, and $-5_2$.
[ { "created": "Mon, 7 Nov 2022 18:50:17 GMT", "version": "v1" } ]
2022-11-08
[ [ "Darcy", "Isabel K.", "" ], [ "Jones", "Garrett", "" ], [ "Pongtanapaisan", "Puttipong", "" ] ]
Although rare, an increasing number of proteins have been observed to contain entanglements in their native structures. To gain more insight into the significance of protein knotting, researchers have been investigating protein knot formation using both experimental and theoretical methods. Motivated by the hypothesized folding pathway of $\alpha$-haloacid dehalogenase (DehI) protein, Flapan, He, and Wong proposed a theory of how protein knots form, which includes existing folding pathways described by Taylor and B\"olinger et al. as special cases. In their topological descriptions, two loops in an unknotted open protein chain containing at most two twists each come close together, and one end of the protein eventually passes through the two loops. In this paper, we build on Flapan, He, and Wong's theory where we pay attention to the crossing signs of the threading process and assume that the unknotted protein chain may arrange itself into a more complicated configuration before threading occurs. We then apply tangle calculus, originally developed by Ernst and Sumners to analyze the action of specific proteins on DNA, to give all possible knots or knotoids that may be discovered in the future according to our model and give recipes for engineering specific knots in proteins from simpler pieces. We show why twist knots are the most likely knots to occur in proteins. We use chirality to show that the most likely knots to occur in proteins via Taylor's twisted hairpin model are the knots $+3_1$, $4_1$, and $-5_2$.
1509.08409
Alexander Gates
Alexander J. Gates and Luis M. Rocha
Control of complex networks requires both structure and dynamics
15 pages, 6 figures
Scientific Reports 6, Article number: 24456 (2016)
10.1038/srep24456
null
q-bio.MN cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The study of network structure has uncovered signatures of the organization of complex systems. However, there is also a need to understand how to control them; for example, identifying strategies to revert a diseased cell to a healthy state, or a mature cell to a pluripotent state. Two recent methodologies suggest that the controllability of complex systems can be predicted solely from the graph of interactions between variables, without considering their dynamics: structural controllability and minimum dominating sets. We demonstrate that such structure-only methods fail to characterize controllability when dynamics are introduced. We study Boolean network ensembles of network motifs as well as three models of biochemical regulation: the segment polarity network in Drosophila melanogaster, the cell cycle of budding yeast Saccharomyces cerevisiae, and the floral organ arrangement in Arabidopsis thaliana. We demonstrate that structure-only methods both undershoot and overshoot the number and which sets of critical variables best control the dynamics of these models, highlighting the importance of the actual system dynamics in determining control. Our analysis further shows that the logic of automata transition functions, namely how canalizing they are, plays an important role in the extent to which structure predicts dynamics.
[ { "created": "Mon, 28 Sep 2015 17:40:29 GMT", "version": "v1" }, { "created": "Mon, 18 Apr 2016 17:23:45 GMT", "version": "v2" } ]
2016-04-19
[ [ "Gates", "Alexander J.", "" ], [ "Rocha", "Luis M.", "" ] ]
The study of network structure has uncovered signatures of the organization of complex systems. However, there is also a need to understand how to control them; for example, identifying strategies to revert a diseased cell to a healthy state, or a mature cell to a pluripotent state. Two recent methodologies suggest that the controllability of complex systems can be predicted solely from the graph of interactions between variables, without considering their dynamics: structural controllability and minimum dominating sets. We demonstrate that such structure-only methods fail to characterize controllability when dynamics are introduced. We study Boolean network ensembles of network motifs as well as three models of biochemical regulation: the segment polarity network in Drosophila melanogaster, the cell cycle of budding yeast Saccharomyces cerevisiae, and the floral organ arrangement in Arabidopsis thaliana. We demonstrate that structure-only methods both undershoot and overshoot the number and which sets of critical variables best control the dynamics of these models, highlighting the importance of the actual system dynamics in determining control. Our analysis further shows that the logic of automata transition functions, namely how canalizing they are, plays an important role in the extent to which structure predicts dynamics.
1411.4980
David Sterratt
David C. Sterratt and Oksana Sorokina and J. Douglas Armstrong
Integration of rule-based models and compartmental models of neurons
Presented to the Third International Workshop on Hybrid Systems Biology Vienna, Austria, July 23-24, 2014 at the International Conference on Computer-Aided Verification 2014
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synaptic plasticity depends on the interaction between electrical activity in neurons and the synaptic proteome, the collection of over 1000 proteins in the post-synaptic density (PSD) of synapses. To construct models of synaptic plasticity with realistic numbers of proteins, we aim to combine rule-based models of molecular interactions in the synaptic proteome with compartmental models of the electrical activity of neurons. Rule-based models allow interactions between the combinatorially large number of protein complexes in the postsynaptic proteome to be expressed straightforwardly. Simulations of rule-based models are stochastic and thus can deal with the small copy numbers of proteins and complexes in the PSD. Compartmental models of neurons are expressed as systems of coupled ordinary differential equations and solved deterministically. We present an algorithm which incorporates stochastic rule-based models into deterministic compartmental models and demonstrate an implementation ("KappaNEURON") of this hybrid system using the SpatialKappa and NEURON simulators.
[ { "created": "Tue, 18 Nov 2014 19:43:09 GMT", "version": "v1" } ]
2014-11-19
[ [ "Sterratt", "David C.", "" ], [ "Sorokina", "Oksana", "" ], [ "Armstrong", "J. Douglas", "" ] ]
Synaptic plasticity depends on the interaction between electrical activity in neurons and the synaptic proteome, the collection of over 1000 proteins in the post-synaptic density (PSD) of synapses. To construct models of synaptic plasticity with realistic numbers of proteins, we aim to combine rule-based models of molecular interactions in the synaptic proteome with compartmental models of the electrical activity of neurons. Rule-based models allow interactions between the combinatorially large number of protein complexes in the postsynaptic proteome to be expressed straightforwardly. Simulations of rule-based models are stochastic and thus can deal with the small copy numbers of proteins and complexes in the PSD. Compartmental models of neurons are expressed as systems of coupled ordinary differential equations and solved deterministically. We present an algorithm which incorporates stochastic rule-based models into deterministic compartmental models and demonstrate an implementation ("KappaNEURON") of this hybrid system using the SpatialKappa and NEURON simulators.
2011.05859
Sitabhra Sinha
Richa Tripathi, Shakti N. Menon and Sitabhra Sinha
The nonlinearity of interactions drives networks of neural oscillators to decoherence at strong coupling
6 pages, 3 figures (+ 6 pages SI)
null
null
null
q-bio.NC nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While phase oscillators are often used to model neuronal populations, in contrast to the Kuramoto paradigm, strong interactions between brain areas can be associated with loss of synchrony. Using networks of coupled oscillators described by neural mass models, we find that a transition to decoherence at increased coupling strength results from the fundamental nonlinearity, e.g., arising from refractoriness, of the interactions between the nodes. The nonlinearity-driven transition also depends on the connection topology, underlining the role of network structure in shaping brain activity.
[ { "created": "Thu, 29 Oct 2020 17:58:47 GMT", "version": "v1" } ]
2020-11-12
[ [ "Tripathi", "Richa", "" ], [ "Menon", "Shakti N.", "" ], [ "Sinha", "Sitabhra", "" ] ]
While phase oscillators are often used to model neuronal populations, in contrast to the Kuramoto paradigm, strong interactions between brain areas can be associated with loss of synchrony. Using networks of coupled oscillators described by neural mass models, we find that a transition to decoherence at increased coupling strength results from the fundamental nonlinearity, e.g., arising from refractoriness, of the interactions between the nodes. The nonlinearity-driven transition also depends on the connection topology, underlining the role of network structure in shaping brain activity.
1312.7111
Michael Tress
Iakes Ezkurdia, David Juan, Jose Manuel Rodriguez, Adam Frankish, Mark Diekhans, Jennifer Harrow, Jesus Vazquez, Alfonso Valencia, Michael L. Tress
The shrinking human protein coding complement: are there now fewer than 20,000 genes?
Main article 34 pages, 1 1/2 spaced, five figures, one table. Supplementary info, 13 pages, 10 figures Update no 1: Version 4: small change in numbers between version 1 and 2 was because we removed the pseudoautosomal genes. The number 11,838 in version 2 is a typo
null
null
null
q-bio.GN q-bio.MN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining the full complement of protein-coding genes is a key goal of genome annotation. The most powerful approach for confirming protein coding potential is the detection of cellular protein expression through peptide mass spectrometry experiments. Here we map the peptides detected in 7 large-scale proteomics studies to almost 60% of the protein coding genes in the GENCODE annotation of the human genome. We find that conservation across vertebrate species and the age of the gene family are key indicators of whether a peptide will be detected in proteomics experiments. We find peptides for most highly conserved genes and for practically all genes that evolved before bilateria. At the same time there is almost no evidence of protein expression for genes that have appeared since primates, or for genes that do not have any protein-like features or cross-species conservation. We identify 19 non-protein-like features such as weak conservation, no protein features or ambiguous annotations in major databases that are indicators of low peptide detection rates. We use these features to describe a set of 2,001 genes that are potentially non-coding, and show that many of these genes behave more like non-coding genes than protein-coding genes. We detect peptides for just 3% of these genes. We suggest that many of these 2,001 genes do not code for proteins under normal circumstances and that they should not be included in the human protein coding gene catalogue. These potential non-coding genes will be revised as part of the ongoing human genome annotation effort.
[ { "created": "Thu, 26 Dec 2013 14:22:23 GMT", "version": "v1" }, { "created": "Wed, 1 Jan 2014 21:20:56 GMT", "version": "v2" }, { "created": "Thu, 23 Jan 2014 12:59:37 GMT", "version": "v3" }, { "created": "Tue, 11 Feb 2014 19:23:44 GMT", "version": "v4" } ]
2014-02-12
[ [ "Ezkurdia", "Iakes", "" ], [ "Juan", "David", "" ], [ "Rodriguez", "Jose Manuel", "" ], [ "Frankish", "Adam", "" ], [ "Diekhans", "Mark", "" ], [ "Harrow", "Jennifer", "" ], [ "Vazquez", "Jesus", "" ], [ "Valencia", "Alfonso", "" ], [ "Tress", "Michael L.", "" ] ]
Determining the full complement of protein-coding genes is a key goal of genome annotation. The most powerful approach for confirming protein coding potential is the detection of cellular protein expression through peptide mass spectrometry experiments. Here we map the peptides detected in 7 large-scale proteomics studies to almost 60% of the protein coding genes in the GENCODE annotation of the human genome. We find that conservation across vertebrate species and the age of the gene family are key indicators of whether a peptide will be detected in proteomics experiments. We find peptides for most highly conserved genes and for practically all genes that evolved before bilateria. At the same time there is almost no evidence of protein expression for genes that have appeared since primates, or for genes that do not have any protein-like features or cross-species conservation. We identify 19 non-protein-like features such as weak conservation, no protein features or ambiguous annotations in major databases that are indicators of low peptide detection rates. We use these features to describe a set of 2,001 genes that are potentially non-coding, and show that many of these genes behave more like non-coding genes than protein-coding genes. We detect peptides for just 3% of these genes. We suggest that many of these 2,001 genes do not code for proteins under normal circumstances and that they should not be included in the human protein coding gene catalogue. These potential non-coding genes will be revised as part of the ongoing human genome annotation effort.
0801.4301
Laurent Jacob
Laurent Jacob (CB), Brice Hoffmann (CB), V\'eronique Stoven (CB), Jean-Philippe Vert (CB)
Virtual screening of GPCRs: an in silico chemogenomics approach
null
null
null
null
q-bio.QM
null
The G-protein coupled receptor (GPCR) superfamily is currently the largest class of therapeutic targets. \textit{In silico} prediction of interactions between GPCRs and small molecules is therefore a crucial step in the drug discovery process, which remains a daunting task due to the difficulty to characterize the 3D structure of most GPCRs, and to the limited amount of known ligands for some members of the superfamily. Chemogenomics, which attempts to characterize interactions between all members of a target class and all small molecules simultaneously, has recently been proposed as an interesting alternative to traditional docking or ligand-based virtual screening strategies. We propose new methods for in silico chemogenomics and validate them on the virtual screening of GPCRs. The methods represent an extension of a recently proposed machine learning strategy, based on support vector machines (SVM), which provides a flexible framework to incorporate various information sources on the biological space of targets and on the chemical space of small molecules. We investigate the use of 2D and 3D descriptors for small molecules, and test a variety of descriptors for GPCRs. We show for instance that incorporating information about the known hierarchical classification of the target family and about key residues in their inferred binding pockets significantly improves the prediction accuracy of our model. In particular we are able to predict ligands of orphan GPCRs with an estimated accuracy of 78.1%.
[ { "created": "Mon, 28 Jan 2008 15:03:47 GMT", "version": "v1" } ]
2008-01-29
[ [ "Jacob", "Laurent", "", "CB" ], [ "Hoffmann", "Brice", "", "CB" ], [ "Stoven", "Véronique", "", "CB" ], [ "Vert", "Jean-Philippe", "", "CB" ] ]
The G-protein coupled receptor (GPCR) superfamily is currently the largest class of therapeutic targets. \textit{In silico} prediction of interactions between GPCRs and small molecules is therefore a crucial step in the drug discovery process, which remains a daunting task due to the difficulty to characterize the 3D structure of most GPCRs, and to the limited amount of known ligands for some members of the superfamily. Chemogenomics, which attempts to characterize interactions between all members of a target class and all small molecules simultaneously, has recently been proposed as an interesting alternative to traditional docking or ligand-based virtual screening strategies. We propose new methods for in silico chemogenomics and validate them on the virtual screening of GPCRs. The methods represent an extension of a recently proposed machine learning strategy, based on support vector machines (SVM), which provides a flexible framework to incorporate various information sources on the biological space of targets and on the chemical space of small molecules. We investigate the use of 2D and 3D descriptors for small molecules, and test a variety of descriptors for GPCRs. We show for instance that incorporating information about the known hierarchical classification of the target family and about key residues in their inferred binding pockets significantly improves the prediction accuracy of our model. In particular we are able to predict ligands of orphan GPCRs with an estimated accuracy of 78.1%.
2401.06629
Marianna Karapitta
Marianna Karapitta, Andreas Kasis, Charithea Stylianides, Kleanthis Malialis, Panayiotis Kolios
Pandemic infection forecasting through compartmental model and learning-based approaches
13 pages, 7 figures, 12 tables
null
null
null
q-bio.PE math.OC q-bio.QM
http://creativecommons.org/licenses/by-sa/4.0/
The emergence and spread of deadly pandemics have repeatedly occurred throughout history, causing widespread infections and loss of life. The rapid spread of pandemics has made governments across the world adopt a range of actions, including non-pharmaceutical measures to contain its impact. However, the dynamic nature of pandemics makes selecting intervention strategies challenging. Hence, the development of suitable monitoring and forecasting tools for tracking infected cases is crucial for designing and implementing effective measures. Motivated by this, we present a hybrid pandemic infection forecasting methodology that integrates compartmental model and learning-based approaches. In particular, we develop a compartmental model that includes time-varying infection rates, which are the key parameters that determine the pandemic's evolution. To identify the time-dependent infection rates, we establish a hybrid methodology that combines the developed compartmental model and tools from optimization and neural networks. Specifically, the proposed methodology estimates the infection rates by fitting the model to available data, regarding the COVID-19 pandemic in Cyprus, and then predicting their future values through either a) extrapolation, or b) feeding them to neural networks. The developed approach exhibits strong accuracy in predicting infections seven days in advance, achieving low average percentage errors both using the extrapolation (9.90%) and neural network (5.04%) approaches.
[ { "created": "Fri, 12 Jan 2024 15:23:07 GMT", "version": "v1" } ]
2024-01-15
[ [ "Karapitta", "Marianna", "" ], [ "Kasis", "Andreas", "" ], [ "Stylianides", "Charithea", "" ], [ "Malialis", "Kleanthis", "" ], [ "Kolios", "Panayiotis", "" ] ]
The emergence and spread of deadly pandemics have repeatedly occurred throughout history, causing widespread infections and loss of life. The rapid spread of pandemics has made governments across the world adopt a range of actions, including non-pharmaceutical measures to contain its impact. However, the dynamic nature of pandemics makes selecting intervention strategies challenging. Hence, the development of suitable monitoring and forecasting tools for tracking infected cases is crucial for designing and implementing effective measures. Motivated by this, we present a hybrid pandemic infection forecasting methodology that integrates compartmental model and learning-based approaches. In particular, we develop a compartmental model that includes time-varying infection rates, which are the key parameters that determine the pandemic's evolution. To identify the time-dependent infection rates, we establish a hybrid methodology that combines the developed compartmental model and tools from optimization and neural networks. Specifically, the proposed methodology estimates the infection rates by fitting the model to available data, regarding the COVID-19 pandemic in Cyprus, and then predicting their future values through either a) extrapolation, or b) feeding them to neural networks. The developed approach exhibits strong accuracy in predicting infections seven days in advance, achieving low average percentage errors both using the extrapolation (9.90%) and neural network (5.04%) approaches.
1711.05517
Alfred Anwander
Christa M\"uller-Axt, Alfred Anwander, Katharina von Kriegstein
Altered structural connectivity of the left visual thalamus in developmental dyslexia
31 pages, 5 figures, 2 tables
Current Biology (2017)
10.1016/j.cub.2017.10.034
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Developmental dyslexia is characterized by persistent reading and spelling deficits. Partly due to technical challenges with investigating subcortical sensory structures, current research on dyslexia in humans by-and-large focuses on the cerebral cortex. These studies found that dyslexia is typically associated with functional and structural alterations of a distributed left-hemispheric cerebral cortex network. However, findings from animal models and post-mortem studies in humans suggest that developmental dyslexia might also be associated with structural alterations in subcortical sensory pathways. Whether these alterations also exist in developmental dyslexia in-vivo and how they relate to dyslexia symptoms is currently unknown. Here we used ultra-high resolution structural magnetic resonance imaging (MRI), diffusion MRI and probabilistic tractography to investigate the structural connections of the visual sensory pathway in dyslexia in-vivo. We discovered that individuals with developmental dyslexia have reduced structural connections in the direct pathway between the left visual thalamus (LGN) and left middle temporal area V5/MT, but not between the left LGN and left primary visual cortex (V1). In addition, left V5/MT-LGN connectivity strength correlated with rapid naming abilities - a key deficit in dyslexia [14]. These findings provide the first evidence of specific structural alterations in the connections between the sensory thalamus and cortex in developmental dyslexia. The results challenge current standard models and provide novel evidence for the importance of cortico-thalamic interactions in explaining dyslexia.
[ { "created": "Wed, 15 Nov 2017 12:21:22 GMT", "version": "v1" }, { "created": "Thu, 16 Nov 2017 23:56:00 GMT", "version": "v2" } ]
2017-11-20
[ [ "Müller-Axt", "Christa", "" ], [ "Anwander", "Alfred", "" ], [ "von Kriegstein", "Katharina", "" ] ]
Developmental dyslexia is characterized by persistent reading and spelling deficits. Partly due to technical challenges with investigating subcortical sensory structures, current research on dyslexia in humans by-and-large focuses on the cerebral cortex. These studies found that dyslexia is typically associated with functional and structural alterations of a distributed left-hemispheric cerebral cortex network. However, findings from animal models and post-mortem studies in humans suggest that developmental dyslexia might also be associated with structural alterations in subcortical sensory pathways. Whether these alterations also exist in developmental dyslexia in-vivo and how they relate to dyslexia symptoms is currently unknown. Here we used ultra-high resolution structural magnetic resonance imaging (MRI), diffusion MRI and probabilistic tractography to investigate the structural connections of the visual sensory pathway in dyslexia in-vivo. We discovered that individuals with developmental dyslexia have reduced structural connections in the direct pathway between the left visual thalamus (LGN) and left middle temporal area V5/MT, but not between the left LGN and left primary visual cortex (V1). In addition, left V5/MT-LGN connectivity strength correlated with rapid naming abilities - a key deficit in dyslexia [14]. These findings provide the first evidence of specific structural alterations in the connections between the sensory thalamus and cortex in developmental dyslexia. The results challenge current standard models and provide novel evidence for the importance of cortico-thalamic interactions in explaining dyslexia.
1012.3124
Ernest Barreto
Ernest Barreto and John R. Cressman
Ion Concentration Dynamics as a Mechanism for Neuronal Bursting
The most recent version now contains citation information: Journal of Biological Physics 37, 361-373 (2011)
null
10.1007/s10867-010-9212-6
null
q-bio.CB q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a simple conductance-based model neuron that includes intra- and extra-cellular ion concentration dynamics and show that this model exhibits periodic bursting. The bursting arises as the fast spiking behavior of the neuron is modulated by the slow oscillatory behavior in the ion concentration variables, and vice versa. By separating these time scales and studying the bifurcation structure of the neuron, we catalog several qualitatively different bursting profiles that are strikingly similar to those seen in experimental preparations. Our work suggests that ion concentration dynamics may play an important role in modulating neuronal excitability in real biological systems.
[ { "created": "Tue, 14 Dec 2010 18:33:39 GMT", "version": "v1" }, { "created": "Thu, 30 Dec 2010 21:04:36 GMT", "version": "v2" }, { "created": "Wed, 21 Sep 2011 19:38:05 GMT", "version": "v3" } ]
2011-09-22
[ [ "Barreto", "Ernest", "" ], [ "Cressman", "John R.", "" ] ]
We describe a simple conductance-based model neuron that includes intra- and extra-cellular ion concentration dynamics and show that this model exhibits periodic bursting. The bursting arises as the fast spiking behavior of the neuron is modulated by the slow oscillatory behavior in the ion concentration variables, and vice versa. By separating these time scales and studying the bifurcation structure of the neuron, we catalog several qualitatively different bursting profiles that are strikingly similar to those seen in experimental preparations. Our work suggests that ion concentration dynamics may play an important role in modulating neuronal excitability in real biological systems.
1812.10602
Zheng Zhao
Zheng Zhao, Lei Xie, and Philip E. Bourne
Structural insights into characterizing binding sites in EGFR kinase mutants
32 pages, 7 figures
Journal of Chemical Information and Modeling, 2018
10.1021/acs.jcim.8b00458
null
q-bio.MN
http://creativecommons.org/licenses/by-sa/4.0/
Over the last two decades, epidermal growth factor receptor (EGFR) kinase has become an important target for treating non-small cell lung cancer (NSCLC). Currently, three generations of EGFR kinase-targeted small-molecule drugs have been FDA approved. They nominally produce a response at the start of treatment and lead to a substantial survival benefit for patients. However, long-term treatment results in acquired drug resistance and further vulnerability to NSCLC. Therefore, novel EGFR kinase inhibitors that specifically overcome acquired mutations are urgently needed. To this end, we carried out a comprehensive study of different EGFR kinase mutants using a structural systems pharmacology strategy. Our analysis shows that both wild-type and mutated structures exhibit multiple conformational states that have not been observed in solved crystal structures. We show that this conformational flexibility accommodates diverse types of ligands with multiple types of binding modes. These results provide insights for designing a new generation of EGFR kinase inhibitors that combat acquired drug-resistant mutations through a multi-conformation-based drug design strategy.
[ { "created": "Thu, 27 Dec 2018 02:51:34 GMT", "version": "v1" } ]
2018-12-31
[ [ "Zhao", "Zheng", "" ], [ "Xie", "Lei", "" ], [ "Bourne", "Philip E.", "" ] ]
Over the last two decades, epidermal growth factor receptor (EGFR) kinase has become an important target for treating non-small cell lung cancer (NSCLC). Currently, three generations of EGFR kinase-targeted small-molecule drugs have been FDA approved. They nominally produce a response at the start of treatment and lead to a substantial survival benefit for patients. However, long-term treatment results in acquired drug resistance and further vulnerability to NSCLC. Therefore, novel EGFR kinase inhibitors that specifically overcome acquired mutations are urgently needed. To this end, we carried out a comprehensive study of different EGFR kinase mutants using a structural systems pharmacology strategy. Our analysis shows that both wild-type and mutated structures exhibit multiple conformational states that have not been observed in solved crystal structures. We show that this conformational flexibility accommodates diverse types of ligands with multiple types of binding modes. These results provide insights for designing a new generation of EGFR kinase inhibitors that combat acquired drug-resistant mutations through a multi-conformation-based drug design strategy.
1906.07354
Huilin Wei
Huilin Wei, Amirhossein Jafarian, Peter Zeidman, Vladimir Litvak, Adeel Razi, Dewen Hu, Karl J. Friston
Bayesian fusion and multimodal DCM for EEG and fMRI
null
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper asks whether integrating multimodal EEG and fMRI data offers a better characterisation of functional brain architectures than either modality alone. This evaluation rests upon a dynamic causal model that generates both EEG and fMRI data from the same neuronal dynamics. We introduce the use of Bayesian fusion to provide informative (empirical) neuronal priors - derived from dynamic causal modelling (DCM) of EEG data - for subsequent DCM of fMRI data. To illustrate this procedure, we generated synthetic EEG and fMRI timeseries for a mismatch negativity (or auditory oddball) paradigm, using biologically plausible model parameters (i.e., posterior expectations from a DCM of empirical, open access, EEG data). Using model inversion, we found that Bayesian fusion provided a substantial improvement in marginal likelihood or model evidence, indicating a more efficient estimation of model parameters, in relation to inverting fMRI data alone. We quantified the benefits of multimodal fusion with the information gain pertaining to neuronal and haemodynamic parameters - as measured by the Kullback-Leibler divergence between their prior and posterior densities. Remarkably, this analysis suggested that EEG data can improve estimates of haemodynamic parameters; thereby furnishing proof-of-principle that Bayesian fusion of EEG and fMRI is necessary to resolve conditional dependencies between neuronal and haemodynamic estimators. These results suggest that Bayesian fusion may offer a useful approach that exploits the complementary temporal (EEG) and spatial (fMRI) precision of different data modalities. We envisage the procedure could be applied to any multimodal dataset that can be explained by a DCM with a common neuronal parameterisation.
[ { "created": "Tue, 18 Jun 2019 02:59:47 GMT", "version": "v1" } ]
2019-06-19
[ [ "Wei", "Huilin", "" ], [ "Jafarian", "Amirhossein", "" ], [ "Zeidman", "Peter", "" ], [ "Litvak", "Vladimir", "" ], [ "Razi", "Adeel", "" ], [ "Hu", "Dewen", "" ], [ "Friston", "Karl J.", "" ] ]
This paper asks whether integrating multimodal EEG and fMRI data offers a better characterisation of functional brain architectures than either modality alone. This evaluation rests upon a dynamic causal model that generates both EEG and fMRI data from the same neuronal dynamics. We introduce the use of Bayesian fusion to provide informative (empirical) neuronal priors - derived from dynamic causal modelling (DCM) of EEG data - for subsequent DCM of fMRI data. To illustrate this procedure, we generated synthetic EEG and fMRI timeseries for a mismatch negativity (or auditory oddball) paradigm, using biologically plausible model parameters (i.e., posterior expectations from a DCM of empirical, open access, EEG data). Using model inversion, we found that Bayesian fusion provided a substantial improvement in marginal likelihood or model evidence, indicating a more efficient estimation of model parameters, in relation to inverting fMRI data alone. We quantified the benefits of multimodal fusion with the information gain pertaining to neuronal and haemodynamic parameters - as measured by the Kullback-Leibler divergence between their prior and posterior densities. Remarkably, this analysis suggested that EEG data can improve estimates of haemodynamic parameters; thereby furnishing proof-of-principle that Bayesian fusion of EEG and fMRI is necessary to resolve conditional dependencies between neuronal and haemodynamic estimators. These results suggest that Bayesian fusion may offer a useful approach that exploits the complementary temporal (EEG) and spatial (fMRI) precision of different data modalities. We envisage the procedure could be applied to any multimodal dataset that can be explained by a DCM with a common neuronal parameterisation.
0807.1699
Swarnendu Tripathi
Swarnendu Tripathi and John J. Portman
Inherent flexibility determines the transition mechanisms of the EF-hands of Calmodulin
17 pages, 7 figures
null
10.1073/pnas.0806872106
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore how the inherent flexibility of a protein molecule influences the mechanism controlling the kinetics of allosteric transitions, using a variational model inspired by work in protein folding. The striking differences in the predicted transition mechanisms for the opening of the two domains of calmodulin (CaM) emphasize that inherent flexibility is key to understanding the complex conformational changes that occur in proteins. In particular, the C-terminal domain of CaM (cCaM), which is inherently less flexible than its N-terminal domain (nCaM), reveals "cracking" or local partial unfolding during the open/closed transition. This result is in harmony with the picture that cracking relieves local stresses due to conformational deformations of a sufficiently rigid protein. We also compare the conformational transition in a recently studied "even-odd" paired fragment of CaM. Our results rationalize the different relative binding affinities of the EF-hands in the engineered fragment compared to the intact "odd-even" paired EF-hands (nCaM and cCaM) in terms of changes in flexibility along the transition route. Aside from elucidating general theoretical ideas about the cracking mechanism, these studies also emphasize how the remarkable intrinsic plasticity of CaM underlies conformational dynamics essential for its diverse functions.
[ { "created": "Thu, 10 Jul 2008 16:44:12 GMT", "version": "v1" } ]
2009-11-13
[ [ "Tripathi", "Swarnendu", "" ], [ "Portman", "John J.", "" ] ]
We explore how the inherent flexibility of a protein molecule influences the mechanism controlling the kinetics of allosteric transitions, using a variational model inspired by work in protein folding. The striking differences in the predicted transition mechanisms for the opening of the two domains of calmodulin (CaM) emphasize that inherent flexibility is key to understanding the complex conformational changes that occur in proteins. In particular, the C-terminal domain of CaM (cCaM), which is inherently less flexible than its N-terminal domain (nCaM), reveals "cracking" or local partial unfolding during the open/closed transition. This result is in harmony with the picture that cracking relieves local stresses due to conformational deformations of a sufficiently rigid protein. We also compare the conformational transition in a recently studied "even-odd" paired fragment of CaM. Our results rationalize the different relative binding affinities of the EF-hands in the engineered fragment compared to the intact "odd-even" paired EF-hands (nCaM and cCaM) in terms of changes in flexibility along the transition route. Aside from elucidating general theoretical ideas about the cracking mechanism, these studies also emphasize how the remarkable intrinsic plasticity of CaM underlies conformational dynamics essential for its diverse functions.
0801.0797
Wang Weiming
Weiming Wang, Lei Zhang, Yakui Xue, Zhen Jin
Spatiotemporal pattern formation of Beddington-DeAngelis-type predator-prey model
null
null
null
null
q-bio.PE
null
In this paper, we investigate the emergence of spatiotemporal patterns in a predator-prey model with Beddington-DeAngelis-type functional response and reaction-diffusion. We derive the conditions for Hopf and Turing bifurcation on the spatial domain. Based on the stability and bifurcation analysis, we present the spatial pattern formation via numerical simulation, i.e., the evolution process of the model near the coexistence equilibrium point. We find that for the model we consider, pure Turing instability gives birth to the spotted pattern, pure Hopf instability gives birth to the spiral wave pattern, and combined Hopf and Turing instability gives birth to a stripe-like pattern. Our results show that the reaction-diffusion model is an appropriate tool for investigating the fundamental mechanisms of complex spatiotemporal dynamics. It will be useful for studying the dynamic complexity of ecosystems.
[ { "created": "Sat, 5 Jan 2008 09:36:45 GMT", "version": "v1" } ]
2008-01-08
[ [ "Wang", "Weiming", "" ], [ "Zhang", "Lei", "" ], [ "Xue", "Yakui", "" ], [ "Jin", "Zhen", "" ] ]
In this paper, we investigate the emergence of spatiotemporal patterns in a predator-prey model with Beddington-DeAngelis-type functional response and reaction-diffusion. We derive the conditions for Hopf and Turing bifurcation on the spatial domain. Based on the stability and bifurcation analysis, we present the spatial pattern formation via numerical simulation, i.e., the evolution process of the model near the coexistence equilibrium point. We find that for the model we consider, pure Turing instability gives birth to the spotted pattern, pure Hopf instability gives birth to the spiral wave pattern, and combined Hopf and Turing instability gives birth to a stripe-like pattern. Our results show that the reaction-diffusion model is an appropriate tool for investigating the fundamental mechanisms of complex spatiotemporal dynamics. It will be useful for studying the dynamic complexity of ecosystems.
q-bio/0701001
Edwin Wang Dr.
Qinghua Cui, Zhenbao Yu, Youlian Pan, Enrico Purisima and Edwin Wang
MicroRNAs preferentially target the genes with high transcriptional regulation complexity
supplementary data available at http://www.bri.nrc.ca/wang
Biochem Biophys Res Commun., 352:733-738, 2007
10.1016/j.bbrc.2006.11.080
null
q-bio.GN q-bio.MN
null
Over the past few years, microRNAs (miRNAs) have emerged as a prominent new class of gene regulatory factors that negatively regulate the expression of approximately one-third of the genes in animal genomes at the post-transcriptional level. However, it is still unclear why some genes are regulated by miRNAs but others are not, i.e., what principles govern miRNA regulation in animal genomes. In this study, we systematically analyzed the relationship between transcription factors (TFs) and miRNAs in gene regulation. We found that genes with more TF-binding sites have a higher probability of being targeted by miRNAs and have more miRNA-binding sites on average. This observation reveals that genes with higher cis-regulation complexity are more coordinately regulated by TFs at the transcriptional level and by miRNAs at the post-transcriptional level. This is a potentially novel mechanism for the coordinated regulation of gene expression. Gene ontology analysis further demonstrated that such coordinated regulation is more prevalent among developmental genes.
[ { "created": "Sat, 30 Dec 2006 04:20:26 GMT", "version": "v1" } ]
2007-05-23
[ [ "Cui", "Qinghua", "" ], [ "Yu", "Zhenbao", "" ], [ "Pan", "Youlian", "" ], [ "Purisima", "Enrico", "" ], [ "Wang", "Edwin", "" ] ]
Over the past few years, microRNAs (miRNAs) have emerged as a prominent new class of gene regulatory factors that negatively regulate the expression of approximately one-third of the genes in animal genomes at the post-transcriptional level. However, it is still unclear why some genes are regulated by miRNAs but others are not, i.e., what principles govern miRNA regulation in animal genomes. In this study, we systematically analyzed the relationship between transcription factors (TFs) and miRNAs in gene regulation. We found that genes with more TF-binding sites have a higher probability of being targeted by miRNAs and have more miRNA-binding sites on average. This observation reveals that genes with higher cis-regulation complexity are more coordinately regulated by TFs at the transcriptional level and by miRNAs at the post-transcriptional level. This is a potentially novel mechanism for the coordinated regulation of gene expression. Gene ontology analysis further demonstrated that such coordinated regulation is more prevalent among developmental genes.
1705.10001
Marco Kienzle
M.K. Broadhurst, M. Kienzle and J. Stewart
Natural mortality of Trachurus novaezelandiae and their size selection by purse seines off south-eastern Australia
null
null
10.1111/fme.12286
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The natural mortality (M) and purse-seine catchability and selectivity were estimated for Trachurus novaezelandiae, Richardson, 1843 (yellowtail scad), a small inshore pelagic species harvested off south-eastern Australia. Hazard functions were applied to two decades of data describing catches (mostly stable at a mean ± SE of 315 ± 14 t p.a.) and effort (declining from a maximum of 2289 to 642 boat days between 1999/00 and 2015/16), inter-dispersed (over nine years) with annual estimates of size-at-age (0+ to 18 years), to enable survival analysis. The data were best described by a model with eight parameters, including catchability (estimated at < 0.1 × 10^-7 boat day^-1), M (0.22 year^-1) and variable age-specific selection up to 6 years, with 50% retention among 5-year-olds (larger than the estimated age at maturation). The low catchability implied minimal fishing mortality by the purse-seine fleet. Ongoing monitoring and applied gear-based studies are required to validate purse-seine catchability and selectivity, but the data nevertheless imply T. novaezelandiae could incur substantial additional fishing effort and, in doing so, alleviate pressure on other regional small pelagics.
[ { "created": "Mon, 29 May 2017 00:00:17 GMT", "version": "v1" } ]
2018-07-10
[ [ "Broadhurst", "M. K.", "" ], [ "Kienzle", "M.", "" ], [ "Stewart", "J.", "" ] ]
The natural mortality (M) and purse-seine catchability and selectivity were estimated for Trachurus novaezelandiae, Richardson, 1843 (yellowtail scad), a small inshore pelagic species harvested off south-eastern Australia. Hazard functions were applied to two decades of data describing catches (mostly stable at a mean ± SE of 315 ± 14 t p.a.) and effort (declining from a maximum of 2289 to 642 boat days between 1999/00 and 2015/16), inter-dispersed (over nine years) with annual estimates of size-at-age (0+ to 18 years), to enable survival analysis. The data were best described by a model with eight parameters, including catchability (estimated at < 0.1 × 10^-7 boat day^-1), M (0.22 year^-1) and variable age-specific selection up to 6 years, with 50% retention among 5-year-olds (larger than the estimated age at maturation). The low catchability implied minimal fishing mortality by the purse-seine fleet. Ongoing monitoring and applied gear-based studies are required to validate purse-seine catchability and selectivity, but the data nevertheless imply T. novaezelandiae could incur substantial additional fishing effort and, in doing so, alleviate pressure on other regional small pelagics.
1707.08240
Peter Taylor
Peter N Taylor, Nishant Sinha, Yujiang Wang, Sjoerd B Vos, Jane de Tisi, Anna Miserocchi, Andrew W McEvoy, Gavin P Winston, John S Duncan
The impact of epilepsy surgery on the structural connectome and its relation to outcome
null
NeuroImage.Clinical 18 (2018) 202-214
10.1016/j.nicl.2018.01.028
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Temporal lobe surgical resection brings seizure remission in up to 80% of patients, with long-term complete seizure freedom in 41%. However, it is unclear how surgery impacts on the structural white matter network, and how the network changes relate to seizure outcome. We used white matter fibre tractography on preoperative diffusion MRI to generate a structural white matter network, and postoperative T1-weighted MRI to retrospectively infer the impact of surgical resection on this network. We then applied graph theory and machine learning to investigate the properties of change between the preoperative and predicted postoperative networks. Temporal lobe surgery had a modest impact on global network efficiency, despite the disruption caused. This was due to alternative shortest paths in the network leading to widespread increases in betweenness centrality post-surgery. Measurements of network change could retrospectively predict seizure outcomes with 79% accuracy and 65% specificity, which is twice as high as the empirical distribution. Fifteen connections which changed due to surgery were identified as useful for prediction of outcome, eight of which connected to the ipsilateral temporal pole. Our results suggest that the use of network change metrics may have clinical value for predicting seizure outcome. This approach could be used to prospectively predict outcomes given a suggested resection mask using preoperative data only.
[ { "created": "Tue, 25 Jul 2017 22:18:09 GMT", "version": "v1" }, { "created": "Fri, 23 Mar 2018 19:05:12 GMT", "version": "v2" } ]
2020-09-30
[ [ "Taylor", "Peter N", "" ], [ "Sinha", "Nishant", "" ], [ "Wang", "Yujiang", "" ], [ "Vos", "Sjoerd B", "" ], [ "de Tisi", "Jane", "" ], [ "Miserocchi", "Anna", "" ], [ "McEvoy", "Andrew W", "" ], [ "Winston", "Gavin P", "" ], [ "Duncan", "John S", "" ] ]
Temporal lobe surgical resection brings seizure remission in up to 80% of patients, with long-term complete seizure freedom in 41%. However, it is unclear how surgery impacts on the structural white matter network, and how the network changes relate to seizure outcome. We used white matter fibre tractography on preoperative diffusion MRI to generate a structural white matter network, and postoperative T1-weighted MRI to retrospectively infer the impact of surgical resection on this network. We then applied graph theory and machine learning to investigate the properties of change between the preoperative and predicted postoperative networks. Temporal lobe surgery had a modest impact on global network efficiency, despite the disruption caused. This was due to alternative shortest paths in the network leading to widespread increases in betweenness centrality post-surgery. Measurements of network change could retrospectively predict seizure outcomes with 79% accuracy and 65% specificity, which is twice as high as the empirical distribution. Fifteen connections which changed due to surgery were identified as useful for prediction of outcome, eight of which connected to the ipsilateral temporal pole. Our results suggest that the use of network change metrics may have clinical value for predicting seizure outcome. This approach could be used to prospectively predict outcomes given a suggested resection mask using preoperative data only.
q-bio/0309001
Eduardo D. Sontag
David Angeli and Eduardo D. Sontag
Monotone Systems with Inputs and Outputs
null
null
null
null
q-bio.QM q-bio.MN
null
Monotone systems constitute one of the most important classes of dynamical systems used in mathematical biology modeling. The objective of this paper is to extend the notion of monotonicity to systems with inputs and outputs, a necessary first step in trying to understand interconnections, especially including feedback loops, built up out of monotone components. Basic definitions and theorems are provided, as well as an application of a theorem regarding negative feedback loops to the study of a model of one of the cell's most important subsystems (MAPK cascades).
[ { "created": "Tue, 16 Sep 2003 13:59:15 GMT", "version": "v1" } ]
2007-05-23
[ [ "Angeli", "David", "" ], [ "Sontag", "Eduardo D.", "" ] ]
Monotone systems constitute one of the most important classes of dynamical systems used in mathematical biology modeling. The objective of this paper is to extend the notion of monotonicity to systems with inputs and outputs, a necessary first step in trying to understand interconnections, especially including feedback loops, built up out of monotone components. Basic definitions and theorems are provided, as well as an application of a theorem regarding negative feedback loops to the study of a model of one of the cell's most important subsystems (MAPK cascades).
2003.09403
Raj Abhijit Dandekar
Raj Dandekar and George Barbastathis
Neural Network aided quarantine control model estimation of COVID spread in Wuhan, China
9 pages including references. 8 figures
null
null
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a move described as unprecedented in public health history, starting 24 January 2020, China imposed quarantine and isolation restrictions in Wuhan, a city of more than 10 million people. This raised the question: is mass quarantine and isolation effective as a social tool in addition to its scientific use as a medical tool? In an effort to address this question, using an epidemiological model-driven approach augmented by machine learning, we show that the quarantine and isolation measures implemented in Wuhan brought down the effective reproduction number R(t) of the COVID-19 spread from R(t) > 1 to R(t) < 1 within a month after the imposition of quarantine control measures in Wuhan, China. This ultimately resulted in a stagnation phase in the infected case count in Wuhan. Our results indicate that the strict public health policies implemented in Wuhan may have played a crucial role in halting the spread of infection, and such measures should potentially be implemented in other highly affected countries such as South Korea, Italy and Iran to curtail the spread of the disease. Finally, our forecasting results predict a stagnation in the quarantine control measures implemented in Wuhan towards the end of March 2020; this would lead to a subsequent stagnation in the effective reproduction number at R(t) < 1. We warn that immediate relaxation of the quarantine measures in Wuhan may lead to a relapse in the infection spread and a subsequent increase in the effective reproduction number to R(t) > 1. Thus, it may be wise to relax quarantine measures only after sufficient time has elapsed, during which most of the quarantined/isolated individuals have recovered.
[ { "created": "Wed, 18 Mar 2020 16:28:18 GMT", "version": "v1" } ]
2020-03-23
[ [ "Dandekar", "Raj", "" ], [ "Barbastathis", "George", "" ] ]
In a move described as unprecedented in public health history, starting 24 January 2020, China imposed quarantine and isolation restrictions in Wuhan, a city of more than 10 million people. This raised the question: is mass quarantine and isolation effective as a social tool in addition to its scientific use as a medical tool? In an effort to address this question, using an epidemiological model-driven approach augmented by machine learning, we show that the quarantine and isolation measures implemented in Wuhan brought down the effective reproduction number R(t) of the COVID-19 spread from R(t) > 1 to R(t) < 1 within a month after the imposition of quarantine control measures in Wuhan, China. This ultimately resulted in a stagnation phase in the infected case count in Wuhan. Our results indicate that the strict public health policies implemented in Wuhan may have played a crucial role in halting the spread of infection, and such measures should potentially be implemented in other highly affected countries such as South Korea, Italy and Iran to curtail the spread of the disease. Finally, our forecasting results predict a stagnation in the quarantine control measures implemented in Wuhan towards the end of March 2020; this would lead to a subsequent stagnation in the effective reproduction number at R(t) < 1. We warn that immediate relaxation of the quarantine measures in Wuhan may lead to a relapse in the infection spread and a subsequent increase in the effective reproduction number to R(t) > 1. Thus, it may be wise to relax quarantine measures only after sufficient time has elapsed, during which most of the quarantined/isolated individuals have recovered.
1809.05969
Marie Li
Marie Li
Missing Value Estimation Algorithms on Cluster and Representativeness Preservation of Gene Expression Microarray Data
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Missing values are largely inevitable in gene expression microarray studies. Data sets often have significant omissions due to individuals dropping out of experiments, errors in data collection, image corruptions, and so on. Missing data could potentially undermine the validity of research results, leading to inaccurate predictive models and misleading conclusions. Imputation methods, a relatively flexible, general-purpose approach towards dealing with missing data, are now available in massive numbers, making it possible to handle missing data. While these estimation methods are becoming increasingly effective at resolving the discrepancies between true and estimated values, their effect on clustering outcomes is largely disregarded. This study seeks to reveal the vast differences in agglomerative hierarchical clustering outcomes that estimation methods can construct, contrasting the precision they exhibit in cluster preservation (presented through the cophenetic correlation coefficient) with their high efficiency and effectiveness in preserving true and imputed values (presented through the root-mean-squared error). We argue against the traditional approach towards the development of imputation methods and instead advocate for methods that reproduce a data set's original, natural clusters. By using a number of advanced imputation methods, we reveal extensive differences between original and reconstructed clusters that could significantly transform the interpretations of the data as a whole.
[ { "created": "Sun, 16 Sep 2018 22:21:46 GMT", "version": "v1" } ]
2018-09-18
[ [ "Li", "Marie", "" ] ]
Missing values are largely inevitable in gene expression microarray studies. Data sets often have significant omissions due to individuals dropping out of experiments, errors in data collection, image corruptions, and so on. Missing data could potentially undermine the validity of research results, leading to inaccurate predictive models and misleading conclusions. Imputation methods, a relatively flexible, general-purpose approach towards dealing with missing data, are now available in massive numbers, making it possible to handle missing data. While these estimation methods are becoming increasingly effective at resolving the discrepancies between true and estimated values, their effect on clustering outcomes is largely disregarded. This study seeks to reveal the vast differences in agglomerative hierarchical clustering outcomes that estimation methods can construct, contrasting the precision they exhibit in cluster preservation (presented through the cophenetic correlation coefficient) with their high efficiency and effectiveness in preserving true and imputed values (presented through the root-mean-squared error). We argue against the traditional approach towards the development of imputation methods and instead advocate for methods that reproduce a data set's original, natural clusters. By using a number of advanced imputation methods, we reveal extensive differences between original and reconstructed clusters that could significantly transform the interpretations of the data as a whole.
1111.0379
Jakub Truszkowski
Daniel G. Brown and Jakub Truszkowski
Fast reconstruction of phylogenetic trees using locality-sensitive hashing
null
null
null
null
q-bio.PE cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the first sub-quadratic time algorithm that with high probability correctly reconstructs phylogenetic trees for short sequences generated by a Markov model of evolution. Due to rapid expansion in sequence databases, such very fast algorithms are becoming necessary. Other fast heuristics have been developed for building trees from very large alignments (Price et al. and Brown et al.), but they lack theoretical performance guarantees. Our new algorithm runs in $O(n^{1+\gamma(g)}\log^2n)$ time, where $\gamma$ is an increasing function of an upper bound on the branch lengths in the phylogeny, the upper bound $g$ must be below $1/2-\sqrt{1/8} \approx 0.15$, and $\gamma(g)<1$ for all $g$. For phylogenies with very short branches, the running time of our algorithm is close to linear. For example, if all branch lengths correspond to a mutation probability of less than 0.02, the running time of our algorithm is roughly $O(n^{1.2}\log^2n)$. Via a prototype and a sequence of large-scale experiments, we show that many large phylogenies can be reconstructed fast, without compromising reconstruction accuracy.
[ { "created": "Wed, 2 Nov 2011 04:21:19 GMT", "version": "v1" }, { "created": "Thu, 31 May 2012 07:28:25 GMT", "version": "v2" } ]
2012-06-01
[ [ "Brown", "Daniel G.", "" ], [ "Truszkowski", "Jakub", "" ] ]
We present the first sub-quadratic time algorithm that with high probability correctly reconstructs phylogenetic trees for short sequences generated by a Markov model of evolution. Due to rapid expansion in sequence databases, such very fast algorithms are becoming necessary. Other fast heuristics have been developed for building trees from very large alignments (Price et al. and Brown et al.), but they lack theoretical performance guarantees. Our new algorithm runs in $O(n^{1+\gamma(g)}\log^2n)$ time, where $\gamma$ is an increasing function of an upper bound on the branch lengths in the phylogeny, the upper bound $g$ must be below $1/2-\sqrt{1/8} \approx 0.15$, and $\gamma(g)<1$ for all $g$. For phylogenies with very short branches, the running time of our algorithm is close to linear. For example, if all branch lengths correspond to a mutation probability of less than 0.02, the running time of our algorithm is roughly $O(n^{1.2}\log^2n)$. Via a prototype and a sequence of large-scale experiments, we show that many large phylogenies can be reconstructed fast, without compromising reconstruction accuracy.
2001.07841
Ye Lin
Ye Lin, Sean B. Andersson
Simultaneous Localization and Parameter Estimation for Single Particle Tracking via Sigma Points based EM
Accepted by 58th Conference on Decision and Control (CDC)
null
null
null
q-bio.BM math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single Particle Tracking (SPT) is a powerful class of tools for analyzing the dynamics of individual biological macromolecules moving inside living cells. The acquired data is typically in the form of a sequence of camera images that are then post-processed to reveal details about the motion. In this work, we develop an algorithm for jointly estimating both particle trajectory and motion model parameters from the data. Our approach uses Expectation Maximization (EM) combined with an Unscented Kalman filter (UKF) and an Unscented Rauch-Tung-Striebel smoother (URTSS), allowing us to use an accurate, nonlinear model of the observations acquired by the camera. Due to the shot noise characteristics of the photon generation process, this model uses a Poisson distribution to capture the measurement noise inherent in imaging. In order to apply a UKF, we first must transform the measurements into a model with additive Gaussian noise. We consider two approaches, one based on variance stabilizing transformations (where we compare the Anscombe and Freeman-Tukey transforms) and one on a Gaussian approximation to the Poisson distribution. Through simulations, we demonstrate the efficacy of the approach and explore the differences among these measurement transformations.
[ { "created": "Wed, 22 Jan 2020 01:42:01 GMT", "version": "v1" } ]
2020-01-23
[ [ "Lin", "Ye", "" ], [ "Andersson", "Sean B.", "" ] ]
Single Particle Tracking (SPT) is a powerful class of tools for analyzing the dynamics of individual biological macromolecules moving inside living cells. The acquired data is typically in the form of a sequence of camera images that are then post-processed to reveal details about the motion. In this work, we develop an algorithm for jointly estimating both particle trajectory and motion model parameters from the data. Our approach uses Expectation Maximization (EM) combined with an Unscented Kalman filter (UKF) and an Unscented Rauch-Tung-Striebel smoother (URTSS), allowing us to use an accurate, nonlinear model of the observations acquired by the camera. Due to the shot noise characteristics of the photon generation process, this model uses a Poisson distribution to capture the measurement noise inherent in imaging. In order to apply a UKF, we first must transform the measurements into a model with additive Gaussian noise. We consider two approaches, one based on variance stabilizing transformations (where we compare the Anscombe and Freeman-Tukey transforms) and one on a Gaussian approximation to the Poisson distribution. Through simulations, we demonstrate the efficacy of the approach and explore the differences among these measurement transformations.
2104.04567
Kaare Mikkelsen
Kaare B. Mikkelsen, Huy Phan, Mike L. Rank, Martin C. Hemmsen, Maarten de Vos, Preben Kidmose
Light-weight sleep monitoring: electrode distance matters more than placement for automatic scoring
8 pages, 8 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Modern sleep monitoring development is shifting towards the use of unobtrusive sensors combined with algorithms for automatic sleep scoring. Many different combinations of wet and dry electrodes, ear-centered, forehead-mounted or headband-inspired designs have been proposed, alongside an ever-growing variety of machine learning algorithms for automatic sleep scoring. In this paper, we compare 13 different, realistic sensor setups derived from the same data set and analysed with the same pipeline. We find that all setups which include both a lateral and an EOG derivation show similar, state-of-the-art performance, with average Cohen's kappa values of at least 0.80. This indicates that electrode distance, rather than position, is important for accurate sleep scoring. Finally, based on the results presented, we argue that with the current competitive performance of automated staging approaches, there is an urgent need for establishing an improved benchmark beyond current single human rater scoring.
[ { "created": "Fri, 9 Apr 2021 18:52:23 GMT", "version": "v1" }, { "created": "Tue, 13 Apr 2021 08:45:06 GMT", "version": "v2" } ]
2021-04-14
[ [ "Mikkelsen", "Kaare B.", "" ], [ "Phan", "Huy", "" ], [ "Rank", "Mike L.", "" ], [ "Hemmsen", "Martin C.", "" ], [ "de Vos", "Maarten", "" ], [ "Kidmose", "Preben", "" ] ]
Modern sleep monitoring development is shifting towards the use of unobtrusive sensors combined with algorithms for automatic sleep scoring. Many different combinations of wet and dry electrodes, ear-centered, forehead-mounted or headband-inspired designs have been proposed, alongside an ever-growing variety of machine learning algorithms for automatic sleep scoring. In this paper, we compare 13 different, realistic sensor setups derived from the same data set and analysed with the same pipeline. We find that all setups which include both a lateral and an EOG derivation show similar, state-of-the-art performance, with average Cohen's kappa values of at least 0.80. This indicates that electrode distance, rather than position, is important for accurate sleep scoring. Finally, based on the results presented, we argue that with the current competitive performance of automated staging approaches, there is an urgent need for establishing an improved benchmark beyond current single human rater scoring.
1311.5652
Kevin Liu
Kevin J. Liu, Ethan Steinberg, Alexander Yozzo, Ying Song, Michael H. Kohn, Luay Nakhleh
Interspecific Introgressive Origin of Genomic Diversity in the House Mouse
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We report on a genome-wide scan for introgression in the house mouse (Mus musculus domesticus) involving the Algerian mouse (Mus spretus), using samples from the ranges of sympatry and allopatry in Africa and Europe. Our analysis reveals wide variability in introgression signatures along the genomes, as well as across the samples. We find that fewer than half of the autosomes in each genome harbor all detectable introgression, while the X chromosome has none. Further, European mice carry more M. spretus alleles than the sympatric African ones. Using the length distribution and sharing patterns of introgressed genomic tracts across the samples, we infer, first, that at least three distinct hybridization events involving M. spretus have occurred, one of which is ancient, and the other two are recent (one presumably due to warfarin rodenticide selection). Second, several of the inferred introgressed tracts contain genes that are likely to confer adaptive advantage. Third, introgressed tracts might contain driver genes that determine the evolutionary fate of those tracts. Further, functional analysis revealed introgressed genes that are essential to fitness, including the Vkorc1 gene, which is implicated in rodenticide resistance, and olfactory receptor genes. Our findings highlight the extent and role of introgression in nature, and call for careful analysis and interpretation of house mouse data in evolutionary and genetic studies.
[ { "created": "Fri, 22 Nov 2013 04:35:47 GMT", "version": "v1" }, { "created": "Fri, 19 Sep 2014 16:41:34 GMT", "version": "v2" } ]
2014-09-22
[ [ "Liu", "Kevin J.", "" ], [ "Steinberg", "Ethan", "" ], [ "Yozzo", "Alexander", "" ], [ "Song", "Ying", "" ], [ "Kohn", "Michael H.", "" ], [ "Nakhleh", "Luay", "" ] ]
We report on a genome-wide scan for introgression in the house mouse (Mus musculus domesticus) involving the Algerian mouse (Mus spretus), using samples from the ranges of sympatry and allopatry in Africa and Europe. Our analysis reveals wide variability in introgression signatures along the genomes, as well as across the samples. We find that fewer than half of the autosomes in each genome harbor all detectable introgression, while the X chromosome has none. Further, European mice carry more M. spretus alleles than the sympatric African ones. Using the length distribution and sharing patterns of introgressed genomic tracts across the samples, we infer, first, that at least three distinct hybridization events involving M. spretus have occurred, one of which is ancient, and the other two are recent (one presumably due to warfarin rodenticide selection). Second, several of the inferred introgressed tracts contain genes that are likely to confer adaptive advantage. Third, introgressed tracts might contain driver genes that determine the evolutionary fate of those tracts. Further, functional analysis revealed introgressed genes that are essential to fitness, including the Vkorc1 gene, which is implicated in rodenticide resistance, and olfactory receptor genes. Our findings highlight the extent and role of introgression in nature, and call for careful analysis and interpretation of house mouse data in evolutionary and genetic studies.
2005.02211
Alessandro Salatiello
Alessandro Salatiello and Martin A. Giese
Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data
null
Artificial Neural Networks and Machine Learning - ICANN 2020. ICANN 2020. Lecture Notes in Computer Science, vol 12396. Springer, Cham.:874-86
10.1007/978-3-030-61609-0_69
null
q-bio.NC cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recurrent Neural Networks (RNNs) are popular models of brain function. The typical training strategy is to adjust their input-output behavior so that it matches that of the biological circuit of interest. Even though this strategy ensures that the biological and artificial networks perform the same computational task, it does not guarantee that their internal activity dynamics match. This suggests that the trained RNNs might end up performing the task employing a different internal computational mechanism, which would make them a suboptimal model of the biological circuit. In this work, we introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics, based on sparse neural recordings. We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model. Specifically, this model generates the multiphasic muscle-like activity patterns typically observed during the execution of reaching movements, based on the oscillatory activation patterns concurrently observed in the motor cortex. Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons sampled from the biological network. Furthermore, we show that training the RNNs with this method significantly improves their generalization performance. Overall, our results suggest that the proposed method is suitable for building powerful functional RNN models, which automatically capture important computational properties of the biological circuit of interest from sparse neural recordings.
[ { "created": "Tue, 5 May 2020 14:16:54 GMT", "version": "v1" } ]
2020-11-09
[ [ "Salatiello", "Alessandro", "" ], [ "Giese", "Martin A.", "" ] ]
Recurrent Neural Networks (RNNs) are popular models of brain function. The typical training strategy is to adjust their input-output behavior so that it matches that of the biological circuit of interest. Even though this strategy ensures that the biological and artificial networks perform the same computational task, it does not guarantee that their internal activity dynamics match. This suggests that the trained RNNs might end up performing the task employing a different internal computational mechanism, which would make them a suboptimal model of the biological circuit. In this work, we introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics, based on sparse neural recordings. We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model. Specifically, this model generates the multiphasic muscle-like activity patterns typically observed during the execution of reaching movements, based on the oscillatory activation patterns concurrently observed in the motor cortex. Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons sampled from the biological network. Furthermore, we show that training the RNNs with this method significantly improves their generalization performance. Overall, our results suggest that the proposed method is suitable for building powerful functional RNN models, which automatically capture important computational properties of the biological circuit of interest from sparse neural recordings.
1511.04769
Yun Kang
Yun Kang and Guy Theraulaz
Dynamical models of task organization in social insect colonies
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Organizational features of insect societies, such as division of labor, task allocation, collective regulation, and mass action responses, have been considered main reasons for their ecological success. In this article, we propose and study a general modeling framework that includes the following three features: (a) the average internal response threshold for each task (the internal factor); (b) social network communications that could lead to task switching (the environmental factor); and (c) dynamical changes of task demands (the external factor). Since workers in many social insect species exhibit \emph{age polyethism}, we also extend our model to incorporate \emph{age polyethism} in which worker task preferences change with age. We apply our general modeling framework to the cases of two task groups: the inside colony task versus the outside colony task. Our analytical study of the models provides important insights and predictions on the effects of colony size, social communication, and age related task preferences on task allocation and division of labor in the adaptive dynamical environment. Our study implies that smaller colonies invest their resources in colony growth and allocate more workers to risky tasks such as foraging, while larger colonies shift more workers to the safer tasks inside the colony. Social interactions among different task groups play an important role in shaping task allocation depending on the relative cost and demands of the tasks.
[ { "created": "Sun, 15 Nov 2015 21:30:26 GMT", "version": "v1" } ]
2015-11-17
[ [ "Kang", "Yun", "" ], [ "Theraulaz", "Guy", "" ] ]
Organizational features of insect societies, such as division of labor, task allocation, collective regulation, and mass action responses, have been considered main reasons for their ecological success. In this article, we propose and study a general modeling framework that includes the following three features: (a) the average internal response threshold for each task (the internal factor); (b) social network communications that could lead to task switching (the environmental factor); and (c) dynamical changes of task demands (the external factor). Since workers in many social insect species exhibit \emph{age polyethism}, we also extend our model to incorporate \emph{age polyethism} in which worker task preferences change with age. We apply our general modeling framework to the cases of two task groups: the inside colony task versus the outside colony task. Our analytical study of the models provides important insights and predictions on the effects of colony size, social communication, and age related task preferences on task allocation and division of labor in the adaptive dynamical environment. Our study implies that smaller colonies invest their resources in colony growth and allocate more workers to risky tasks such as foraging, while larger colonies shift more workers to the safer tasks inside the colony. Social interactions among different task groups play an important role in shaping task allocation depending on the relative cost and demands of the tasks.
1605.09682
Jose A Capitan
Jose A. Capitan, Sara Cuenda, Alejandro Ordo\~nez, David Alonso
A signal of competitive dominance in mid-latitude herbaceous plant communities
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the main determinants of species coexistence across space and time is a central question in ecology. However, ecologists still know little about the scales and conditions at which biotic interactions matter and how these interact with the environment to structure species assemblages. Here we use recent theory developments to analyze plant distribution and trait data across Europe and find that plant height clustering is related to both evapotranspiration and gross primary productivity. This clustering is a signal of interspecies competition between plants, which is most evident in mid-latitude ecoregions, where conditions for growth (reflected in actual evapotranspiration rates and gross primary productivities) are optimal. Away from this optimum, climate severity likely overrides the effect of competition, or other interactions become increasingly important. Our approach bridges the gap between species-rich competition theories and large-scale species distribution data analysis.
[ { "created": "Tue, 31 May 2016 15:45:50 GMT", "version": "v1" }, { "created": "Wed, 21 Sep 2016 09:48:56 GMT", "version": "v2" }, { "created": "Thu, 19 Aug 2021 08:49:22 GMT", "version": "v3" } ]
2021-08-20
[ [ "Capitan", "Jose A.", "" ], [ "Cuenda", "Sara", "" ], [ "Ordoñez", "Alejandro", "" ], [ "Alonso", "David", "" ] ]
Understanding the main determinants of species coexistence across space and time is a central question in ecology. However, ecologists still know little about the scales and conditions at which biotic interactions matter and how these interact with the environment to structure species assemblages. Here we use recent theory developments to analyze plant distribution and trait data across Europe and find that plant height clustering is related to both evapotranspiration and gross primary productivity. This clustering is a signal of interspecies competition between plants, which is most evident in mid-latitude ecoregions, where conditions for growth (reflected in actual evapotranspiration rates and gross primary productivities) are optimal. Away from this optimum, climate severity likely overrides the effect of competition, or other interactions become increasingly important. Our approach bridges the gap between species-rich competition theories and large-scale species distribution data analysis.
2111.06969
Reimann Stefan
Stefan Reimann
Computing with Cognitive States
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Basic experimental findings about human working memory can be described by an algebra built on high-dimensional binary states, representing information items, and two operations: multiplication for binding and addition for bundling. In contrast to common VSA algebras, bundling is not associative. Consequently bundling a sequence of items preserves their sequential ordering. The cognitive states representing a memorised list exhibit a primacy as well as a recency gradient. The typical concave-up and asymmetrically shaped serial position curve is derived as a linear combination of those gradients. Quantitative implications of the algebra are shown to agree well with empirical data from basic cognitive tasks including storage and retrieval of information in human working memory.
[ { "created": "Wed, 3 Nov 2021 20:23:58 GMT", "version": "v1" } ]
2021-11-16
[ [ "Reimann", "Stefan", "" ] ]
Basic experimental findings about human working memory can be described by an algebra built on high-dimensional binary states, representing information items, and two operations: multiplication for binding and addition for bundling. In contrast to common VSA algebras, bundling is not associative. Consequently bundling a sequence of items preserves their sequential ordering. The cognitive states representing a memorised list exhibit a primacy as well as a recency gradient. The typical concave-up and asymmetrically shaped serial position curve is derived as a linear combination of those gradients. Quantitative implications of the algebra are shown to agree well with empirical data from basic cognitive tasks including storage and retrieval of information in human working memory.
2305.14388
Zuzanna Szyma\'nska Ph.D.
Zuzanna Szyma\'nska and Miros{\l}aw Lachowicz and Nikolaos Sfakianakis and Mark A. J. Chaplain
Mathematical modelling of cancer invasion: Phenotypic transitioning provides insight into multifocal foci formation
null
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
The transition from the epithelial to mesenchymal phenotype and its reverse (from mesenchymal to epithelial) are crucial processes necessary for the progression and spread of cancer. In this paper, we investigate how phenotypic switching at the cancer cell level impacts on behaviour at the tissue level, specifically on the emergence of isolated foci of the invading solid tumour mass leading to a multifocal tumour. To this end, we propose a new mathematical model of cancer invasion that includes the influence of cancer cell phenotype on the rate of invasion and metastasis. The implications of the model are explored through numerical simulations revealing that the plasticity of tumour cell phenotypes appears to be crucial for disease progression and local invasive spread. The computational simulations show the progression of the invasive spread of a primary cancer reminiscent of in vivo multifocal breast carcinomas, where multiple, synchronous, ipsilateral neoplastic foci are frequently observed and are associated with a poorer patient prognosis.
[ { "created": "Mon, 22 May 2023 19:43:30 GMT", "version": "v1" } ]
2023-05-25
[ [ "Szymańska", "Zuzanna", "" ], [ "Lachowicz", "Mirosław", "" ], [ "Sfakianakis", "Nikolaos", "" ], [ "Chaplain", "Mark A. J.", "" ] ]
The transition from the epithelial to mesenchymal phenotype and its reverse (from mesenchymal to epithelial) are crucial processes necessary for the progression and spread of cancer. In this paper, we investigate how phenotypic switching at the cancer cell level impacts on behaviour at the tissue level, specifically on the emergence of isolated foci of the invading solid tumour mass leading to a multifocal tumour. To this end, we propose a new mathematical model of cancer invasion that includes the influence of cancer cell phenotype on the rate of invasion and metastasis. The implications of the model are explored through numerical simulations revealing that the plasticity of tumour cell phenotypes appears to be crucial for disease progression and local invasive spread. The computational simulations show the progression of the invasive spread of a primary cancer reminiscent of in vivo multifocal breast carcinomas, where multiple, synchronous, ipsilateral neoplastic foci are frequently observed and are associated with a poorer patient prognosis.
1604.04801
Ehtibar Dzhafarov
Ru Zhang and Ehtibar N. Dzhafarov
Testing Contextuality in Cyclic Psychophysical Systems of High Ranks
to appear in Lecture Notes in Computer Science, based on Quantum Interaction 2016 conference, version 2 is a minor revision
null
null
null
q-bio.NC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Contextuality-by-Default (CbD) theory allows one to separate contextuality from context-dependent errors and violations of selective influences (aka "no-signaling" or "no-disturbance" principles). This makes the theory especially applicable to behavioral systems, where violations of selective influences are ubiquitous. For cyclic systems with binary random variables, CbD provides necessary and sufficient conditions for noncontextuality, and these conditions are known to be breached in certain quantum systems. We apply the theory of cyclic systems to a psychophysical double-detection experiment, in which observers were asked to determine presence or absence of a signal property in each of two simultaneously presented stimuli. The results, as in all other behavioral and social systems previously analyzed, indicate lack of contextuality. The role of context in double-detection is confined to lack of selectiveness: the distribution of responses to one of the stimuli is influenced by the state of the other stimulus.
[ { "created": "Sat, 16 Apr 2016 21:13:26 GMT", "version": "v1" }, { "created": "Wed, 24 Aug 2016 02:55:04 GMT", "version": "v2" } ]
2016-08-25
[ [ "Zhang", "Ru", "" ], [ "Dzhafarov", "Ehtibar N.", "" ] ]
The Contextuality-by-Default (CbD) theory allows one to separate contextuality from context-dependent errors and violations of selective influences (aka "no-signaling" or "no-disturbance" principles). This makes the theory especially applicable to behavioral systems, where violations of selective influences are ubiquitous. For cyclic systems with binary random variables, CbD provides necessary and sufficient conditions for noncontextuality, and these conditions are known to be breached in certain quantum systems. We apply the theory of cyclic systems to a psychophysical double-detection experiment, in which observers were asked to determine presence or absence of a signal property in each of two simultaneously presented stimuli. The results, as in all other behavioral and social systems previously analyzed, indicate lack of contextuality. The role of context in double-detection is confined to lack of selectiveness: the distribution of responses to one of the stimuli is influenced by the state of the other stimulus.
2308.01402
Daisy Yi Ding
Daisy Yi Ding, Yuhui Zhang, Yuan Jia, Jiuzhi Sun
Machine Learning-guided Lipid Nanoparticle Design for mRNA Delivery
The 2023 ICML Workshop on Computational Biology
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While RNA technologies hold immense therapeutic potential in a range of applications from vaccination to gene editing, the broad implementation of these technologies is hindered by the challenge of delivering these agents effectively. Lipid nanoparticles (LNPs) have emerged as one of the most widely used delivery agents, but their design optimization relies on laborious and costly experimental methods. We propose to optimize LNP design in silico with machine learning models. On a curated dataset of 622 LNPs from published studies, we demonstrate the effectiveness of our model in predicting the transfection efficiency of unseen LNPs, with the multilayer perceptron achieving a classification accuracy of 98% on the test set. Our work represents a pioneering effort in combining ML and LNP design, offering significant potential for improving screening efficiency by computationally prioritizing LNP candidates for experimental validation and accelerating the development of effective mRNA delivery systems.
[ { "created": "Wed, 2 Aug 2023 19:53:04 GMT", "version": "v1" }, { "created": "Tue, 29 Aug 2023 01:48:31 GMT", "version": "v2" } ]
2023-08-30
[ [ "Ding", "Daisy Yi", "" ], [ "Zhang", "Yuhui", "" ], [ "Jia", "Yuan", "" ], [ "Sun", "Jiuzhi", "" ] ]
While RNA technologies hold immense therapeutic potential in a range of applications from vaccination to gene editing, the broad implementation of these technologies is hindered by the challenge of delivering these agents effectively. Lipid nanoparticles (LNPs) have emerged as one of the most widely used delivery agents, but their design optimization relies on laborious and costly experimental methods. We propose to optimize LNP design in silico with machine learning models. On a curated dataset of 622 LNPs from published studies, we demonstrate the effectiveness of our model in predicting the transfection efficiency of unseen LNPs, with the multilayer perceptron achieving a classification accuracy of 98% on the test set. Our work represents a pioneering effort in combining ML and LNP design, offering significant potential for improving screening efficiency by computationally prioritizing LNP candidates for experimental validation and accelerating the development of effective mRNA delivery systems.
q-bio/0407033
Ambarish Kunwar
Ambarish Kunwar
Evolution of Spatially Inhomogeneous Eco-Systems: An Unified Model Based Approach
Latex, 10 pages, 8 figures
International Journal of Modern Physics C, Vol. 15, 1449 (2004)
10.1142/S0129183104006856
null
q-bio.PE
null
Recently we have extended our "unified" model of evolutionary ecology to incorporate the {\it spatial inhomogeneities} of the eco-system and the {\it migration} of individual organisms from one patch to another within the same eco-system. In this paper an extension of our recent model is investigated so as to describe the {\it migration} and {\it speciation} in a more realistic way.
[ { "created": "Sun, 25 Jul 2004 09:25:48 GMT", "version": "v1" } ]
2009-11-10
[ [ "Kunwar", "Ambarish", "" ] ]
Recently we have extended our "unified" model of evolutionary ecology to incorporate the {\it spatial inhomogeneities} of the eco-system and the {\it migration} of individual organisms from one patch to another within the same eco-system. In this paper an extension of our recent model is investigated so as to describe the {\it migration} and {\it speciation} in a more realistic way.
2111.15424
Markus Pfeil
Markus Pfeil and Thomas Slawig
Unique steady annual cycle in marine ecosystem model simulations
null
null
null
null
q-bio.PE physics.ao-ph
http://creativecommons.org/licenses/by/4.0/
Marine ecosystem models are an important tool to assess the role of the ocean biota in climate change and to identify relevant biogeochemical processes by validating the model outputs against observational data. For the assessment of the marine ecosystem models, the existence and uniqueness of an annual periodic solution (i.e., a steady annual cycle) is desirable. To analyze the uniqueness of a steady annual cycle, we performed a large number of simulations starting from different initial concentrations for a hierarchy of biogeochemical models of increasing complexity. The numerical results suggested that the simulations finished always with the same steady annual cycle regardless of the initial concentration. Due to numerical instabilities, some inadmissible approximations of the steady annual cycle, however, occurred in some cases for the three most complex biogeochemical models. Our numerical results indicate a unique steady annual cycle for practical applications.
[ { "created": "Tue, 30 Nov 2021 14:15:37 GMT", "version": "v1" } ]
2021-12-01
[ [ "Pfeil", "Markus", "" ], [ "Slawig", "Thomas", "" ] ]
Marine ecosystem models are an important tool to assess the role of the ocean biota in climate change and to identify relevant biogeochemical processes by validating the model outputs against observational data. For the assessment of the marine ecosystem models, the existence and uniqueness of an annual periodic solution (i.e., a steady annual cycle) is desirable. To analyze the uniqueness of a steady annual cycle, we performed a large number of simulations starting from different initial concentrations for a hierarchy of biogeochemical models of increasing complexity. The numerical results suggested that the simulations finished always with the same steady annual cycle regardless of the initial concentration. Due to numerical instabilities, some inadmissible approximations of the steady annual cycle, however, occurred in some cases for the three most complex biogeochemical models. Our numerical results indicate a unique steady annual cycle for practical applications.
1802.05774
Michael Manhart
Michael Manhart and Eugene I. Shakhnovich
Growth tradeoffs produce complex microbial communities on a single limiting resource
null
null
10.1038/s41467-018-05703-6
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The relationship between the dynamics of a community and its constituent pairwise interactions is a fundamental problem in ecology. Higher-order ecological effects beyond pairwise interactions may be key to complex ecosystems, but mechanisms to produce these effects remain poorly understood. Here we show that higher-order effects can arise from variation in multiple microbial growth traits, such as lag times and growth rates, on a single limiting resource with no other interactions. These effects produce a range of ecological phenomena: an unlimited number of strains can exhibit multistability and neutral coexistence, potentially with a single keystone strain; strains that coexist in pairs do not coexist all together; and the champion of all pairwise competitions may not dominate in a mixed community. Since variation in multiple growth traits is ubiquitous in microbial populations due to pleiotropy and non-genetic variation, our results indicate these higher-order effects may also be widespread, especially in laboratory ecology and evolution experiments.
[ { "created": "Thu, 15 Feb 2018 21:51:32 GMT", "version": "v1" }, { "created": "Thu, 31 May 2018 15:19:44 GMT", "version": "v2" } ]
2018-09-05
[ [ "Manhart", "Michael", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
The relationship between the dynamics of a community and its constituent pairwise interactions is a fundamental problem in ecology. Higher-order ecological effects beyond pairwise interactions may be key to complex ecosystems, but mechanisms to produce these effects remain poorly understood. Here we show that higher-order effects can arise from variation in multiple microbial growth traits, such as lag times and growth rates, on a single limiting resource with no other interactions. These effects produce a range of ecological phenomena: an unlimited number of strains can exhibit multistability and neutral coexistence, potentially with a single keystone strain; strains that coexist in pairs do not coexist all together; and the champion of all pairwise competitions may not dominate in a mixed community. Since variation in multiple growth traits is ubiquitous in microbial populations due to pleiotropy and non-genetic variation, our results indicate these higher-order effects may also be widespread, especially in laboratory ecology and evolution experiments.
1212.3932
Li Xiaoguang
Cheng Lv, Xiaoguang Li, Fangting Li, and Tiejun Li
Transition Path, Quasi-potential Energy Landscape and Stability of Genetic Switches
5 pages, 6 figures
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by-nc-sa/3.0/
One of the fundamental cellular processes governed by genetic regulatory networks in cells is the transition among different states under the intrinsic and extrinsic noise. Based on a two-state genetic switching model with positive feedback, we develop a framework to understand the metastability in gene expressions. This framework is comprised of identifying the transition path, reconstructing the global quasi-potential energy landscape, analyzing the uphill and downhill transition paths, etc. It is successfully utilized to investigate the stability of genetic switching models and fluctuation properties in different regimes of gene expression with positive feedback. The quasi-potential energy landscape, which is the rationalized version of Waddington potential, provides a quantitative tool to understand the metastability in more general biological processes with intrinsic noise.
[ { "created": "Mon, 17 Dec 2012 08:32:58 GMT", "version": "v1" } ]
2012-12-18
[ [ "Lv", "Cheng", "" ], [ "Li", "Xiaoguang", "" ], [ "Li", "Fangting", "" ], [ "Li", "Tiejun", "" ] ]
One of the fundamental cellular processes governed by genetic regulatory networks in cells is the transition among different states under intrinsic and extrinsic noise. Based on a two-state genetic switching model with positive feedback, we develop a framework to understand the metastability in gene expression. This framework comprises identifying the transition path, reconstructing the global quasi-potential energy landscape, analyzing the uphill and downhill transition paths, etc. It is successfully utilized to investigate the stability of genetic switching models and fluctuation properties in different regimes of gene expression with positive feedback. The quasi-potential energy landscape, which is the rationalized version of the Waddington potential, provides a quantitative tool to understand the metastability in more general biological processes with intrinsic noise.
2011.13462
Andreas Reichmuth
Ilaria Incaviglia, Andreas Frutiger, Yves Blickenstorfer, Fridolin Treindl, Giulia Ammirati, Ines L\"uchtefeld, Birgit Dreier, Andreas Pl\"uckthun, Janos V\"or\"os, Andreas M Reichmuth
A promising approach for the real-time quantification of cytosolic protein-protein interactions in living cells
22 pages, 4 figures
null
10.1021/acssensors.0c02480
null
q-bio.QM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, cell-based assays have been frequently used in molecular interaction analysis. Cell-based assays complement traditional biochemical and biophysical methods, as they allow for molecular interaction analysis, mode of action studies and even drug screening processes to be performed under physiologically relevant conditions. In most cellular assays, biomolecules are usually labeled to achieve specificity. In order to overcome some of the drawbacks associated with label-based assays, we have recently introduced cell-based molography as a biosensor for the analysis of specific molecular interactions involving native membrane receptors in living cells. Here, we expand this assay to cytosolic protein-protein interactions. First, we created a biomimetic membrane receptor by tethering one cytosolic interaction partner to the plasma membrane. The artificial construct is then coherently arranged into a two-dimensional pattern within the cytosol of living cells. Thanks to the molographic sensor, the specific interactions between the coherently arranged protein and its endogenous interaction partners become visible in real-time without the use of a fluorescent label. This method turns out to be an important extension of cell-based molography because it expands the range of interactions that can be analyzed by molography to those in the cytosol of living cells.
[ { "created": "Thu, 26 Nov 2020 20:13:02 GMT", "version": "v1" } ]
2021-04-13
[ [ "Incaviglia", "Ilaria", "" ], [ "Frutiger", "Andreas", "" ], [ "Blickenstorfer", "Yves", "" ], [ "Treindl", "Fridolin", "" ], [ "Ammirati", "Giulia", "" ], [ "Lüchtefeld", "Ines", "" ], [ "Dreier", "Birgit", "" ], [ "Plückthun", "Andreas", "" ], [ "Vörös", "Janos", "" ], [ "Reichmuth", "Andreas M", "" ] ]
In recent years, cell-based assays have been frequently used in molecular interaction analysis. Cell-based assays complement traditional biochemical and biophysical methods, as they allow for molecular interaction analysis, mode of action studies and even drug screening processes to be performed under physiologically relevant conditions. In most cellular assays, biomolecules are usually labeled to achieve specificity. In order to overcome some of the drawbacks associated with label-based assays, we have recently introduced cell-based molography as a biosensor for the analysis of specific molecular interactions involving native membrane receptors in living cells. Here, we expand this assay to cytosolic protein-protein interactions. First, we created a biomimetic membrane receptor by tethering one cytosolic interaction partner to the plasma membrane. The artificial construct is then coherently arranged into a two-dimensional pattern within the cytosol of living cells. Thanks to the molographic sensor, the specific interactions between the coherently arranged protein and its endogenous interaction partners become visible in real-time without the use of a fluorescent label. This method turns out to be an important extension of cell-based molography because it expands the range of interactions that can be analyzed by molography to those in the cytosol of living cells.
2107.09056
Madiha Hameed Awan
Madiha Hameed, Abdul Majiid, Asifullah Khan
FANCA: In-Silico deleterious mutation analysis for early prediction of leukemia
22 pages, 9 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
As a novel biomarker from the Fanconi anemia complementation group (FANC) family, FANCA is antigens to Leukemia cancer. The overexpression of FANCA has predicted the second most common cancer in the world that is responsible for cancer-related deaths. Non-synonymous SNPs are an essential group of SNPs that lead to alterations in encoded polypeptides. Changes in the amino acid sequences of gene products lead to Leukemia. First, we study individual SNPs in the coding region of FANCA and computational tools like PROVEAN, PolyPhen2, MuPro, and PANTHER to compute deleterious mutation scores. The three-dimensional structural and functional prediction conducted using I-TASSER. Further, the predicted structure refined using the GlaxyWeb tool. In the study, the proteomic data has been retrieved from the UniProtKB. The coding region of the dataset contains 100 non-synonymous single nucleotide polymorphisms (nsSNPs), and 24 missense SNPs have been determined as deleterious by all analyses. In this work, six well-known computational tools were employed to study Leukemia-associated nsSNPs. It is inferred that these nsSNPs could play their role in the up-regulation of FANCA, which further leads to provoke leukemia advancement. The current research would benefit researchers and practitioners in handling cancer-associated diseases related to FANCA. The proposed study would also help to develop precision medicine in the field of drug discovery.
[ { "created": "Mon, 19 Jul 2021 05:54:08 GMT", "version": "v1" }, { "created": "Fri, 29 Oct 2021 05:16:53 GMT", "version": "v2" } ]
2021-11-01
[ [ "Hameed", "Madiha", "" ], [ "Majiid", "Abdul", "" ], [ "Khan", "Asifullah", "" ] ]
As a novel biomarker from the Fanconi anemia complementation group (FANC) family, FANCA is an antigen in leukemia. The overexpression of FANCA is implicated in the second most common cancer in the world responsible for cancer-related deaths. Non-synonymous SNPs are an essential group of SNPs that lead to alterations in encoded polypeptides, and changes in the amino acid sequences of gene products lead to leukemia. First, we study individual SNPs in the coding region of FANCA and use computational tools such as PROVEAN, PolyPhen2, MuPro, and PANTHER to compute deleterious mutation scores. The three-dimensional structural and functional prediction was conducted using I-TASSER, and the predicted structure was further refined using the GalaxyWeb tool. In this study, the proteomic data were retrieved from UniProtKB. The coding region of the dataset contains 100 non-synonymous single nucleotide polymorphisms (nsSNPs), and 24 missense SNPs were determined to be deleterious by all analyses. In this work, six well-known computational tools were employed to study leukemia-associated nsSNPs. It is inferred that these nsSNPs could play a role in the up-regulation of FANCA, which in turn provokes leukemia advancement. The current research would benefit researchers and practitioners in handling cancer-associated diseases related to FANCA. The proposed study would also help to develop precision medicine in the field of drug discovery.
2203.10179
Xavier Michalet
Donald Ferschweiler, Maya Segal, Shimon Weiss, Xavier Michalet
A user-friendly tool to convert photon counting data to the open-source Photon-HDF5 file format
null
Proceedings of SPIE Vol. 11967 (2022) art. 1196703
10.1117/12.2608487
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Photon-HDF5 is an open-source and open file format for storing photon-counting data from single molecule microscopy experiments, introduced to simplify data exchange and increase the reproducibility of data analysis. Part of the Photon-HDF5 ecosystem, is phconvert, an extensible python library that allows converting proprietary formats into Photon-HDF5 files. However, its use requires some proficiency with command line instructions, the python programming language, and the YAML markup format. This creates a significant barrier for potential users without that expertise, but who want to benefit from the advantages of releasing their files in an open format. In this work, we present a GUI that lowers this barrier, thus simplifying the use of Photon-HDF5. This tool uses the phconvert python library to convert data files originally saved in proprietary data formats to Photon-HDF5 files, without users having to write a single line of code. Because reproducible analyses depend on essential experimental information, such as laser power or sample description, the GUI also includes (currently limited) functionality to associate valid metadata with the converted file, without having to write any YAML. Finally, the GUI includes several productivity-enhancing features such as whole-directory batch conversion and the ability to re-run a failed batch, only converting the files that could not be converted in the previous run.
[ { "created": "Fri, 18 Mar 2022 22:24:14 GMT", "version": "v1" } ]
2022-03-22
[ [ "Ferschweiler", "Donald", "" ], [ "Segal", "Maya", "" ], [ "Weiss", "Shimon", "" ], [ "Michalet", "Xavier", "" ] ]
Photon-HDF5 is an open-source and open file format for storing photon-counting data from single molecule microscopy experiments, introduced to simplify data exchange and increase the reproducibility of data analysis. Part of the Photon-HDF5 ecosystem is phconvert, an extensible Python library for converting proprietary formats into Photon-HDF5 files. However, its use requires some proficiency with command line instructions, the Python programming language, and the YAML markup format. This creates a significant barrier for potential users without that expertise, but who want to benefit from the advantages of releasing their files in an open format. In this work, we present a GUI that lowers this barrier, thus simplifying the use of Photon-HDF5. This tool uses the phconvert Python library to convert data files originally saved in proprietary data formats to Photon-HDF5 files, without users having to write a single line of code. Because reproducible analyses depend on essential experimental information, such as laser power or sample description, the GUI also includes (currently limited) functionality to associate valid metadata with the converted file, without having to write any YAML. Finally, the GUI includes several productivity-enhancing features such as whole-directory batch conversion and the ability to re-run a failed batch, only converting the files that could not be converted in the previous run.
2008.03067
Antonia Mey
Antonia S. J. S. Mey, Bryce Allen, Hannah E. Bruce Macdonald, John D. Chodera, Maximilian Kuhn, Julien Michel, David L. Mobley, Levi N. Naden, Samarjeet Prasad, Andrea Rizzi, Jenke Scheen, Michael R. Shirts, Gary Tresadern, Huafeng Xu
Best Practices for Alchemical Free Energy Calculations
48 pages, 14 figures
null
10.33011/livecoms.2.1.18378
null
q-bio.BM stat.CO
http://creativecommons.org/licenses/by-sa/4.0/
Alchemical free energy calculations are a useful tool for predicting free energy differences associated with the transfer of molecules from one environment to another. The hallmark of these methods is the use of "bridging" potential energy functions representing \emph{alchemical} intermediate states that cannot exist as real chemical species. The data collected from these bridging alchemical thermodynamic states allows the efficient computation of transfer free energies (or differences in transfer free energies) with orders of magnitude less simulation time than simulating the transfer process directly. While these methods are highly flexible, care must be taken in avoiding common pitfalls to ensure that computed free energy differences can be robust and reproducible for the chosen force field, and that appropriate corrections are included to permit direct comparison with experimental data. In this paper, we review current best practices for several popular application domains of alchemical free energy calculations, including relative and absolute small molecule binding free energy calculations to biomolecular targets.
[ { "created": "Fri, 7 Aug 2020 10:01:31 GMT", "version": "v1" }, { "created": "Mon, 10 Aug 2020 14:27:19 GMT", "version": "v2" }, { "created": "Fri, 21 Aug 2020 07:41:34 GMT", "version": "v3" } ]
2021-04-02
[ [ "Mey", "Antonia S. J. S.", "" ], [ "Allen", "Bryce", "" ], [ "Macdonald", "Hannah E. Bruce", "" ], [ "Chodera", "John D.", "" ], [ "Kuhn", "Maximilian", "" ], [ "Michel", "Julien", "" ], [ "Mobley", "David L.", "" ], [ "Naden", "Levi N.", "" ], [ "Prasad", "Samarjeet", "" ], [ "Rizzi", "Andrea", "" ], [ "Scheen", "Jenke", "" ], [ "Shirts", "Michael R.", "" ], [ "Tresadern", "Gary", "" ], [ "Xu", "Huafeng", "" ] ]
Alchemical free energy calculations are a useful tool for predicting free energy differences associated with the transfer of molecules from one environment to another. The hallmark of these methods is the use of "bridging" potential energy functions representing \emph{alchemical} intermediate states that cannot exist as real chemical species. The data collected from these bridging alchemical thermodynamic states allows the efficient computation of transfer free energies (or differences in transfer free energies) with orders of magnitude less simulation time than simulating the transfer process directly. While these methods are highly flexible, care must be taken in avoiding common pitfalls to ensure that computed free energy differences can be robust and reproducible for the chosen force field, and that appropriate corrections are included to permit direct comparison with experimental data. In this paper, we review current best practices for several popular application domains of alchemical free energy calculations, including relative and absolute small molecule binding free energy calculations to biomolecular targets.
1602.00600
Steven Schiff
Steven J. Schiff, Julius Kiwanuka, Gina Riggio, Lan Nguyen, Kevin Mu, Emily Sproul, Joel Bazira, Juliet Mwanga, Dickson Tumusiime, Eunice Nyesigire, Nkangi Lwanga, Kaleb T. Bogale, Vivek Kapur, James Broach, Sarah Morton, Benjamin C. Warf, Mary Poss
Separating Putative Pathogens from Background Contamination with Principal Orthogonal Decomposition: Evidence for Leptospira in the Ugandan Neonatal Septisome
23 pages, 2 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neonatal sepsis (NS) is responsible for over a 1 million yearly deaths worldwide. In the developing world NS is often treated without an identified microbial pathogen. Amplicon sequencing of the bacterial 16S rRNA gene can be used to identify organisms that are difficult to detect by routine microbiological methods. However, contaminating bacteria are ubiquitous in both hospital settings and research reagents, and must be accounted for to make effective use of these data. In the present study, we sequenced the bacterial 16S rRNA gene obtained from blood and cerebrospinal fluid (CSF) of 80 neonates presenting with NS to the Mbarara Regional Hospital in Uganda. Assuming that patterns of background contamination would be independent of pathogenic microorganism DNA, we applied a novel quantitative approach using principal orthogonal decomposition to separate background contamination from potential pathogens in sequencing data. We designed our quantitative approach contrasting blood, CSF, and control specimens, and employed a variety of statistical random matrix bootstrap hypotheses to estimate statistical significance. These analyses demonstrate that Leptospira appears present in some infants presenting within 48 hr of birth, indicative of infection in utero, and up to 28 days of age, suggesting environmental exposure. This organism cannot be cultured in routine bacteriological settings, and is enzootic in the cattle that the rural peoples of western Uganda often live in close proximity. Our findings demonstrate that statistical approaches to remove background organisms common in 16S sequence data can reveal putative pathogens in small volume biological samples from newborns. This computational analysis thus reveals an important medical finding that has the potential to alter therapy and prevention efforts in a critically ill population.
[ { "created": "Mon, 1 Feb 2016 17:17:04 GMT", "version": "v1" } ]
2016-02-02
[ [ "Schiff", "Steven J.", "" ], [ "Kiwanuka", "Julius", "" ], [ "Riggio", "Gina", "" ], [ "Nguyen", "Lan", "" ], [ "Mu", "Kevin", "" ], [ "Sproul", "Emily", "" ], [ "Bazira", "Joel", "" ], [ "Mwanga", "Juliet", "" ], [ "Tumusiime", "Dickson", "" ], [ "Nyesigire", "Eunice", "" ], [ "Lwanga", "Nkangi", "" ], [ "Bogale", "Kaleb T.", "" ], [ "Kapur", "Vivek", "" ], [ "Broach", "James", "" ], [ "Morton", "Sarah", "" ], [ "Warf", "Benjamin C.", "" ], [ "Poss", "Mary", "" ] ]
Neonatal sepsis (NS) is responsible for over 1 million yearly deaths worldwide. In the developing world NS is often treated without an identified microbial pathogen. Amplicon sequencing of the bacterial 16S rRNA gene can be used to identify organisms that are difficult to detect by routine microbiological methods. However, contaminating bacteria are ubiquitous in both hospital settings and research reagents, and must be accounted for to make effective use of these data. In the present study, we sequenced the bacterial 16S rRNA gene obtained from blood and cerebrospinal fluid (CSF) of 80 neonates presenting with NS to the Mbarara Regional Hospital in Uganda. Assuming that patterns of background contamination would be independent of pathogenic microorganism DNA, we applied a novel quantitative approach using principal orthogonal decomposition to separate background contamination from potential pathogens in sequencing data. We designed our quantitative approach contrasting blood, CSF, and control specimens, and employed a variety of statistical random matrix bootstrap hypotheses to estimate statistical significance. These analyses demonstrate that Leptospira appears present in some infants presenting within 48 hr of birth, indicative of infection in utero, and up to 28 days of age, suggesting environmental exposure. This organism cannot be cultured in routine bacteriological settings, and is enzootic in the cattle that the rural peoples of western Uganda often live in close proximity to. Our findings demonstrate that statistical approaches to remove background organisms common in 16S sequence data can reveal putative pathogens in small volume biological samples from newborns. This computational analysis thus reveals an important medical finding that has the potential to alter therapy and prevention efforts in a critically ill population.
0808.2204
Michael Hagan
Michael F. Hagan
A theory for viral capsid assembly around electrostatic cores
This version has been updated from v1 as follows. The calculation accounts for curvature, explicitly represents the polymeric nature of surface functionalization molecules, and determines the dissociation equilibrium of the functionalized carboxylate groups. 13 pages main text, 4 pages appendix, 14 figures
Hagan, M.F., A theory for viral capsid assembly around electrostatic cores, J. Chem. Phys., 2009, v130, 114902
10.1063/1.3086041
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop equilibrium and kinetic theories that describe the assembly of viral capsid proteins on a charged central core, as seen in recent experiments in which brome mosaic virus (BMV) capsids assemble around nanoparticles functionalized with polyelectrolyte. We model interactions between capsid proteins and nanoparticle surfaces as the interaction of polyelectrolyte brushes with opposite charge, using the nonlinear Poisson Boltzmann equation. The models predict that there is a threshold density of functionalized charge, above which capsids efficiently assemble around nanoparticles, and that light scatter intensity increases rapidly at early times, without the lag phase characteristic of empty capsid assembly. These predictions are consistent with, and enable interpretation of, preliminary experimental data. However, the models predict a stronger dependence of nanoparticle incorporation efficiency on functionalized charge density than measured in experiments, and do not completely capture a logarithmic growth phase seen in experimental light scatter. These discrepancies may suggest the presence of metastable disordered states in the experimental system. In addition to discussing future experiments for nanoparticle-capsid systems, we discuss broader implications for understanding assembly around charged cores such as nucleic acids.
[ { "created": "Fri, 15 Aug 2008 21:29:48 GMT", "version": "v1" }, { "created": "Wed, 6 May 2009 18:38:20 GMT", "version": "v2" } ]
2009-05-06
[ [ "Hagan", "Michael F.", "" ] ]
We develop equilibrium and kinetic theories that describe the assembly of viral capsid proteins on a charged central core, as seen in recent experiments in which brome mosaic virus (BMV) capsids assemble around nanoparticles functionalized with polyelectrolyte. We model interactions between capsid proteins and nanoparticle surfaces as the interaction of polyelectrolyte brushes with opposite charge, using the nonlinear Poisson Boltzmann equation. The models predict that there is a threshold density of functionalized charge, above which capsids efficiently assemble around nanoparticles, and that light scatter intensity increases rapidly at early times, without the lag phase characteristic of empty capsid assembly. These predictions are consistent with, and enable interpretation of, preliminary experimental data. However, the models predict a stronger dependence of nanoparticle incorporation efficiency on functionalized charge density than measured in experiments, and do not completely capture a logarithmic growth phase seen in experimental light scatter. These discrepancies may suggest the presence of metastable disordered states in the experimental system. In addition to discussing future experiments for nanoparticle-capsid systems, we discuss broader implications for understanding assembly around charged cores such as nucleic acids.
1802.00864
Fatima Zohra Smaili
Fatima Zohra Smaili, Xin Gao, and Robert Hoehndorf
Onto2Vec: joint vector-based representation of biological entities and their ontology-based annotations
null
null
10.1093/bioinformatics/bty259
null
q-bio.QM cs.AI
http://creativecommons.org/licenses/by/4.0/
We propose the Onto2Vec method, an approach to learn feature vectors for biological entities based on their annotations to biomedical ontologies. Our method can be applied to a wide range of bioinformatics research problems such as similarity-based prediction of interactions between proteins, classification of interaction types using supervised learning, or clustering.
[ { "created": "Wed, 31 Jan 2018 08:23:45 GMT", "version": "v1" } ]
2018-07-13
[ [ "Smaili", "Fatima Zohra", "" ], [ "Gao", "Xin", "" ], [ "Hoehndorf", "Robert", "" ] ]
We propose the Onto2Vec method, an approach to learn feature vectors for biological entities based on their annotations to biomedical ontologies. Our method can be applied to a wide range of bioinformatics research problems such as similarity-based prediction of interactions between proteins, classification of interaction types using supervised learning, or clustering.
1906.11546
Matthias Fischer
Matthias M. Fischer and Matthias Bild
Gut microbiome composition: back to baseline?
4 pages, 2 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Nature Microbiology, Palleja and colleagues studied the changes in gut microbiome composition in twelve healthy men over a period of six months following an antibiotic intervention. The authors argued that the 'gut microbiota of the subjects recovered to near-baseline composition within 1.5 months' and only exhibited a 'mild yet long-lasting imprint following antibiotics exposure.' We here present a series of re-analyses of their original data which demonstrate a significant loss of microbial taxa even after the complete study period of 180 days. Additionally we show that the composition of the microbiomes after the complete study period only moderately correlates with the initial baseline states. Taken together with the lack of significant compositional differences between day 42 and day 180, we think that these findings suggest the convergence of the microbiomes to another stable composition, which is different from the pre-treatment states, instead of a recovery of the baseline state. Given the accumulating evidence of the role of microbiome perturbations in a variety of infectious and non-infectious diseases, as well as the crucial role antibiotics play in modern medicine, we consider these differences in compositional states worthy of further investigation.
[ { "created": "Thu, 27 Jun 2019 11:08:39 GMT", "version": "v1" } ]
2019-06-28
[ [ "Fischer", "Matthias M.", "" ], [ "Bild", "Matthias", "" ] ]
In Nature Microbiology, Palleja and colleagues studied the changes in gut microbiome composition in twelve healthy men over a period of six months following an antibiotic intervention. The authors argued that the 'gut microbiota of the subjects recovered to near-baseline composition within 1.5 months' and only exhibited a 'mild yet long-lasting imprint following antibiotics exposure.' We here present a series of re-analyses of their original data which demonstrate a significant loss of microbial taxa even after the complete study period of 180 days. Additionally we show that the composition of the microbiomes after the complete study period only moderately correlates with the initial baseline states. Taken together with the lack of significant compositional differences between day 42 and day 180, we think that these findings suggest the convergence of the microbiomes to another stable composition, which is different from the pre-treatment states, instead of a recovery of the baseline state. Given the accumulating evidence of the role of microbiome perturbations in a variety of infectious and non-infectious diseases, as well as the crucial role antibiotics play in modern medicine, we consider these differences in compositional states worthy of further investigation.
2006.10265
Joann Jasiak
Christian Gourieroux, Joann Jasiak
Analysis of Virus Propagation: A Transition Model Representation of Stochastic Epidemiological Models
38 pages, 9 figures
null
null
null
q-bio.PE physics.soc-ph stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growing literature on the propagation of COVID-19 relies on various dynamic SIR-type models (Susceptible-Infected-Recovered) which yield model-dependent results. For transparency and ease of comparing the results, we introduce a common representation of the SIR-type stochastic epidemiological models. This representation is a discrete time transition model, which allows us to classify the epidemiological models with respect to the number of states (compartments) and their interpretation. Additionally, the transition model eliminates several limitations of the deterministic continuous time epidemiological models which are pointed out in the paper. We also show that all SIR-type models have a nonlinear (pseudo) state space representation and are easily estimable from an extended Kalman filter.
[ { "created": "Thu, 18 Jun 2020 04:03:05 GMT", "version": "v1" } ]
2020-06-19
[ [ "Gourieroux", "Christian", "" ], [ "Jasiak", "Joann", "" ] ]
The growing literature on the propagation of COVID-19 relies on various dynamic SIR-type models (Susceptible-Infected-Recovered) which yield model-dependent results. For transparency and ease of comparing the results, we introduce a common representation of the SIR-type stochastic epidemiological models. This representation is a discrete time transition model, which allows us to classify the epidemiological models with respect to the number of states (compartments) and their interpretation. Additionally, the transition model eliminates several limitations of the deterministic continuous time epidemiological models which are pointed out in the paper. We also show that all SIR-type models have a nonlinear (pseudo) state space representation and are easily estimable from an extended Kalman filter.